Factors influencing access to specialised haematology units during acute myeloblastic leukaemia patient care: A population‐based study in France
INTRODUCTION Acute myeloblastic leukaemia (AML), although a rare disease that mainly affects the elderly, accounts for 80% of acute leukaemias in adults. With a 5-year net survival of 27%, AML has a very poor prognosis, except for patients with a t(15;17) translocation, who benefit from a specific treatment. Over the last few decades, cytogenetic and molecular profiling tools have significantly improved our understanding of the AML molecular landscape, which in turn has allowed better classification of the disease. These advances have also facilitated the development of new molecules targeting specific mutations, such as those in the FLT3 or IDH genes. They have likewise contributed to improved stratification of AML patients into prognostic groups, allowing treatments to be better adapted and a greater number of patients to be treated. Despite this, the therapeutic management scheme, particularly in the general population, remains similar for most subtypes and is based on a combination of anthracycline and cytarabine, except for AML subtypes with t(15;17). A slight increase in net survival has nevertheless been observed in AML patients (+14% at 1 year and +15% at 5 years for cases diagnosed between 1990 and 2015), but these patterns differ among patients, notably according to age. These differences could be explained by biological factors intrinsic to the disease and by patient clinical characteristics, such as the presence of comorbidities, which influence patient eligibility for treatment. Differences in survival have also been attributed, at least in part, to unequal access to curative treatments, which in turn is potentially influenced by preventable, non-biological factors associated with patient care pathways. As these treatments are mostly reserved for specialised care facilities, it is important to investigate the impact of the care pathway on treatment access and on patient survival. However, few data are available in the literature on the AML patient care pathway. A recent study concluded that patients treated in academic institutions or high-volume hospitals were better managed than those treated elsewhere. It was also found that patients treated in academic hospitals had better access to cytogenetic and molecular testing and to new drugs, were more likely to be included in clinical trials and had a greater probability of receiving a haematopoietic stem cell transplant. None of these studies, however, has assessed the real impact of access to a Specialised Haematology Unit (SHU) on the management of AML patients and potentially on their survival, since clinical trials, however widely available, do not optimally describe real-life care. Our study, part of the large French S-LAM (Survival of Acute Myeloblastic Leukaemia patient) project on the management of all AML patients, aimed to describe, in a real-life setting, the characteristics of the AML patient care pathway, including access to specialised haematology care facilities and treatment management.
METHODS 2.1 Study design The S-LAM (Survival of Acute Myeloblastic Leukaemia) project is a retrospective longitudinal study including all incident AML cases diagnosed from 1 January 2012 to 31 December 2016 in the three French population-based registries specialised in haematological malignancies (Côte-d'Or, Basse-Normandie and Gironde; around 3,625,400 inhabitants). For each patient, in addition to the core data (age, sex, place of residence, medical history, type of haematological cancer, medical follow-up, treatment, sources of information, last date of follow-up and vital status), we collected information on biological and molecular analyses, the dates of each event in the care pathway, including the various medical consultations, and patient clinical evolution. The end point of patient follow-up was set at 1 January 2021. The S-LAM database was registered with the Commission Nationale de l'Informatique et des Libertés (CNIL) under number 921294. All data were checked for integrity and quality. 2.2 Factors of interest 2.2.1 Care pathway We first defined seven care pathways (Emergency to SHU; Emergency to Non-haematological unit; General Medicine to SHU; General Medicine to Non-haematological unit; Specialised medical unit to SHU; Specialised medical unit to Non-haematological unit; and SHU only) by grouping patients according to their medical unit of admission and their diagnosis unit (Emergency, General Medicine, Specialised medical unit and Specialised Haematology Unit). Then, for each of these groups, we distinguished patients who completed their care pathway in a SHU from those who completed their follow-up elsewhere (Appendix - Table 5). We classified university hospitals and anti-cancer centres as academic facilities. Non-academic hospitals included peripheral hospitals, private health institutions of collective utility and medical practice offices. 2.2.2 Tumours and patient characteristics To describe our study population, we divided the patients into two groups according to age at diagnosis, under and over 80 years old (y-o), assuming that patients over 80 years of age are less likely to be treated. We then described patient characteristics according to the modalities of their access to haematological care facilities. For each modality, we report the distribution of cytogenetic and biomolecular prognostic markers, de novo or secondary AML profile, the Charlson Comorbidity Index (CCI) and the European Deprivation Index (EDI). We used the CCI as an indicator of patient comorbidities, subtracting the weight of age from the calculation, and grouped the CCI variable into three classes (0: no comorbidities; 1-2: low and mild comorbidities; ≥3: high comorbidities), as sketched below. To be consistent with the study recruitment period, the European LeukemiaNet (ELN) 2016 working group classification was used to classify patient prognosis according to cytogenetic status and molecular mutations. Based on treatment modalities, patients were grouped into three categories: untreated, non-curative treatment (supportive and palliative) and curative treatment (intensive chemotherapy).
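As an illustration of how these grouped covariables can be derived, the following minimal sketch applies the age and CCI groupings described above. It is not the authors' code; the column names (`age_at_diagnosis`, `cci_without_age`) are hypothetical:

```python
# Minimal sketch of the covariable grouping described in Section 2.2.2.
# Column names are illustrative assumptions, not the study's actual schema.
import pandas as pd

def prepare_covariables(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Two age groups at diagnosis: under 80 vs. 80 years old and over.
    out["age_group"] = pd.cut(
        out["age_at_diagnosis"], bins=[0, 79, 120], labels=["<80", ">=80"]
    )
    # Age-adjusted CCI grouped into the three classes used in the paper.
    out["cci_class"] = pd.cut(
        out["cci_without_age"], bins=[-1, 0, 2, 100],
        labels=["none (0)", "low-mild (1-2)", "high (>=3)"],
    )
    return out
```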
2.2.3 AML grouping AML cases were categorised into six subtypes: AML-RCA (AML with recurrent cytogenetic abnormalities: 9865-3, 9869-3, 9871-3, 9896-3, 9897-3, 9898-3, 9877-3); PML-RARA (9866-3); AML-MRC (AML with myelodysplasia-related changes: 9895-3, 9984-3); t-AML/MDS (therapy-related AML/myelodysplastic syndrome: 9920-3, 9987-3); AML-NOS (AML not otherwise specified: 9861-3); and AML-others (9931-3, 9805-3, 9806-3, 9808-3, 9809-3, 9807-3, 9872-3, 9873-3, 9874-3, 9867-3, 9891-3, 9840-3, 9910-3, 9870-3, 9930-3). 2.3 Statistical analysis We used the chi-squared or Fisher's exact test to compare categorical variables, and the Wilcoxon rank-sum test for continuous variables, according to patient accessibility to a specialised haematology unit. We then constructed a multivariate logistic regression model to determine the association between different covariables and access to a specialised haematology unit. For this modelling, we used a backward selection method, successively removing the variables whose p-value was greater than 20%, and used the Akaike Information Criterion (AIC) to choose the best-fitting model. We systematically included the gender variable in the models even when it was not significant. For modelling purposes, we excluded patients over 80 y-o who died within the first 5 days after diagnosis and younger patients who died on the day of diagnosis, assuming that these patients died of their age or comorbidities before they had time to be referred to a specialised haematology unit.
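The backward-selection strategy can be summarised in code. The sketch below is an illustration under assumptions (a prepared DataFrame with a binary `shu_access` outcome and the hypothetical covariable names from the previous sketch), not the authors' actual analysis script:

```python
# Illustrative backward selection for the logistic model of SHU access.
# Variable names are assumptions; the dropping rule follows the text:
# successively remove candidates with p > 0.20, keep gender throughout,
# and compare candidate models on AIC.
import statsmodels.formula.api as smf

def backward_select(df, outcome="shu_access",
                    candidates=("age_group", "cci_class", "aml_subtype",
                                "referral_unit", "edi_quintile"),
                    forced=("gender",), alpha=0.20):
    kept = list(candidates)
    while True:
        formula = outcome + " ~ " + " + ".join(list(forced) + kept)
        model = smf.logit(formula, data=df).fit(disp=False)
        # p-values of terms belonging to removable candidates only.
        pvals = {term: p for term, p in model.pvalues.items()
                 if any(term.startswith(c) for c in kept)}
        worst = max(pvals, key=pvals.get)
        if pvals[worst] <= alpha:
            return model  # model.aic can be compared across candidate models
        kept = [c for c in kept if not worst.startswith(c)]
        if not kept:
            return smf.logit(outcome + " ~ " + " + ".join(forced),
                             data=df).fit(disp=False)
```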
RESULTS 3.1 Patient characteristics according to their accessibility to a specialised haematology unit Of the 1039 incident AML cases, 529 were men (51%) and 510 women (49%), with a median age of 73 years. Patients came from Basse-Normandie (46%), Gironde (40%) and Côte-d'Or (14%) (no statistical differences in AML subtypes were seen across diagnostic departments, result not shown). A total of 713 patients (69%) consulted in a SHU during their disease course and 326 patients (31%) did not (Table ). Concerning the care pathway, the first medical contact was the general practitioner in 63% of cases ( n = 650), of whom 71% (459/650) accessed a specialised haematology unit, making this the most frequent care pathway. Similarly, 15% of patients started in an emergency unit (62%, or 96/155, referred to a specialised haematology unit), 15% in a specialised medical unit (53%, or 81/154, referred to a SHU) and 5% started directly in a SHU (2% missing data) (Table /Figure ). An age difference was observed among the patients accessing a specialised haematology unit (Figure ). During their care management, 86% of patients under 80 y-o had access to a SHU, compared to 38% of older patients, whether for AML diagnosis or the treatment decision (Figure ). More specifically, AML was diagnosed by a trained haematologist in 52% of patients under 80 y-o compared to 25% of those over 80 y-o. Similarly, 74% of patients under 80 y-o were treated in a SHU, compared to 24% of patients over 80 y-o (Appendix - Table 3/Figure ). Patients who consulted in a SHU were younger (median age 66 vs. 83 y-o) and 90% of them attended an academic hospital (vs. 38% of patients who did not consult in a SHU), but there was no statistical difference according to patient socio-economic status (EDI quintile). Similarly, among patients who consulted in a SHU, 92% had access to cytogenetic testing (vs. 54% of those consulting outside a SHU); the AML-MRC, t-AML/MDS and AML-NOS subtypes were less represented; and 77% had de novo AML (vs. 67%). Patients admitted to a SHU had a more favourable initial cytogenetic prognostic status (23% vs. 6%), fewer comorbidities (54% with no comorbidity vs. 32%) and more frequently received curative treatment (68% vs. 5%). Additionally, 14 (11%) of the over-80 y-o patients who consulted a trained haematologist received curative treatment (vs. <1% of those over 80 y-o who did not) (see details in the Appendix - Table 3). Among patients who consulted in a SHU, 58% ( n = 368) received one line of chemotherapy (vs. 88%, n = 114, of non-SHU patients), 27% ( n = 172) received two lines (vs. 11%, n = 14) and 12% ( n = 91) received more than two lines (vs. 0.8%, n = 1). Among patients who received curative treatment, the first-line complete remission rate was 59% for patients who consulted in a SHU (vs. 4.2%, p = 0.001). Patients admitted to a SHU also had greater access to treatments associated with chemotherapy (68% vs. 32%, n = 103). Access to haematopoietic stem cell transplantation (HSCT) and minimal residual disease (MRD) assessment was strictly reserved for patients treated in a SHU. Similarly, immunotherapy, radiotherapy and inclusion in clinical trials were almost exclusively seen among patients who had consulted a trained haematologist (Table ).
3.2 Factors associated with access to specialised haematology units In the univariate model, factors limiting access to a SHU were being in an age group above 50 years old, emergency referral (OR, 0.77; 95% CI, 0.58–1.01), referral from a specialised medical unit (OR, 0.11; 95% CI, 0.08–0.15), and low-mild (OR, 0.52; 95% CI, 0.38–0.71) or severe (OR, 0.27; 95% CI, 0.19–0.40) comorbidities. Similarly, being diagnosed with AML-MRC (OR, 0.09; 95% CI, 0.02–0.27), t-AML/MDS (OR, 0.08; 95% CI, 0.02–0.21), AML-NOS (OR, 0.04; 95% CI, 0.01–0.11) or AML-others (OR, 0.18; 95% CI, 0.04–0.50), or having an intermediate (OR, 0.20; 95% CI, 0.09–0.39) or adverse (OR, 0.20; 95% CI, 0.09–0.41) cytogenetic prognosis, also limited access to a SHU. In addition, based on EDI quintiles, patients with lower socio-economic status had less access to SHUs than the higher-income group (Table ). After adjustment, factors limiting access to a SHU were being aged over 80 years old (ORa, 0.14; 95% CI, 0.04–0.38), emergency referral (ORa, 0.28; 95% CI, 0.18–0.44) and referral from a specialised unit (ORa, 0.12; 95% CI, 0.07–0.18). Patients with severe comorbidities (ORa, 0.39; 95% CI, 0.21–0.69) and patients with the t-AML/MDS (ORa, 0.13; 95% CI, 0.02–0.62), AML-NOS (ORa, 0.10; 95% CI, 0.01–0.51) or AML-others (ORa, 0.15; 95% CI, 0.02–0.70) subtypes were also less likely to be referred to a SHU. By contrast, being admitted to an academic hospital increased the odds of SHU referral 8.87-fold (Table ).
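For readers less familiar with these measures, the odds ratios above follow the standard 2×2 construction; the formulas below are a generic reminder, in which the cell counts a, b, c and d are placeholders rather than the study's actual data:

```latex
% Generic odds-ratio calculation; a, b, c, d are placeholder cell counts:
% a/b = patients with/without SHU access in the factor group,
% c/d = patients with/without SHU access in the reference group.
\[
\mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc},
\qquad
\mathrm{SE}\bigl(\ln\mathrm{OR}\bigr)
  = \sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}},
\]
\[
95\%\ \mathrm{CI} = \exp\bigl(\ln\mathrm{OR} \pm 1.96\,\mathrm{SE}\bigr).
\]
```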
DISCUSSION Our population-based study investigated the impact of non-biological factors on AML patient care pathways, including those that could directly and/or indirectly influence treatment management. An added strength of our study is that this analysis was performed alongside an assessment of the impact of known prognostic parameters, including AML subtype and cytogenetic risk group. Using this combined approach, we were able to demonstrate the importance of consulting in a specialised haematology unit during the care pathway. This appears to affect access to the best diagnostic tools and curative treatments, which in turn are well described in the literature as factors improving the overall survival of AML patients. Several studies have investigated the impact of treatment facility type on survival in AML without evaluating the impact of access to specialised haematology units. The present work shows that this should be taken into consideration, since patients managed in academic hospitals had 8.87-fold higher odds of access to a specialised haematology unit (Figure ). Access to a specialised haematology unit does not seem to be related to patient socio-economic status but rather to biological or clinical factors and, potentially, to the accessibility of specialised AML treatment facilities in the patient's geographical area of residence. However, a trend for the most deprived patients to have less access to a specialised haematology unit was observed in the univariate analysis, although this was not confirmed in the multivariate model. In the absence of individual measures of deprivation, the ecological measure (EDI quintile) reflects both the contextual and individual deprivation of the patient and, as such, does not fully represent the patient's socio-economic status. During the period up to the formal diagnosis of AML, patients may consult several clinical units and undergo various additional examinations, leading to rather diverse care pathways. Several factors influence this, including clinical symptoms, age, patient geographical location and other socio-economic factors. Our data show that advanced age remains a limitation for access to the specialised haematology unit, as observed in patients with the AML-NOS subtype (median age = 84 vs. 73 years on average; 37% access to a SHU vs. 69% on average; OR = 0.10; 95% CI, 0.01–0.51). The lack of referral of these older patients to a specialised haematology unit resulted in less access to cytogenetic analysis (39% vs. 80% on average), potentially explaining their low access to curative treatment (18% vs. 48% on average per subtype, result not shown). Overall, this may negatively impact survival in this patient group. This is problematic because the incidence of AML has continued to increase in this age group since 1990. More generally, our work highlights the impact of the AML care pathway on access to cytogenetic testing, an essential examination for accurate AML diagnosis and prognostic classification according to ELN guidelines. Indeed, 45% of patients not referred to a specialised haematology unit did not receive cytogenetic testing (vs. 7.3% among SHU patients). Furthermore, of the AML patients who did not have access to cytogenetic testing, 91% were diagnosed with poor-prognosis AML subtypes ( n = 196) (57% AML-NOS, 22% t-AML/MDS and 12% AML-others). It is probable that cytogenetics would have allowed re-classification of at least some of these cases to other AML subtypes.
For these patients, it is possible that the lack of transfer to a specialised haematology unit, the limitation of diagnostic investigations and/or the lack of intensive therapy derives from a perceived limited benefit of these strategies for quality of life and vital prognosis. However, a treatment option might have been identified had the investigations been completed. The same reasoning applies to patients with severe comorbidities who were potentially monitored elsewhere for a previous pathology. Indeed, severe comorbidities, when combined with adverse cytogenetics in some AML subtypes, can negatively impact patient access to a specialised haematology unit because of the presumed limited benefit this might bring. Quite strikingly, we found that 74% (203/274) of AML patients who consulted at non-academic hospitals were subsequently managed in a non-haematology unit. This may simply reflect the absence of a SHU in non-academic hospitals. Similarly, it is possible that these patients died before they could be transferred to a hospital with a specialised haematology unit (death is a competing event for access to a SHU, whose impact we minimised in the logistic modelling). By contrast, admission to an academic hospital favours access to a specialised haematology unit (ORa = 8.87), and thus optimal AML diagnosis and prognostic stratification, with a consequently increased probability of receiving curative treatment. Such treatment decisions by expert haematologists are further supported by access to expert facilities for the management of adverse events in academic centres. It should be noted that specialised haematology units tend to admit the better-prognosis AML patients. More importantly, haematopoietic stem cell transplantation, immunotherapy, radiotherapy, MRD evaluation and access to clinical trials were strictly reserved for patients who were seen by a trained haematologist. Given the positive impact of transplantation on the survival of AML patients, and the innovative therapies proposed in clinical trials, working to improve patient access to specialised haematology units will be essential to improving AML patient survival in the general population. Finally, based on patient clinical characteristics, and among patients alive 5 days after diagnosis, we split patients into those eligible for treatment (age ≤75 years without severe comorbidities) and those non-eligible (over 75 years old or with severe comorbidities). Regarding the age boundary, we followed the age-related Ferrara unfitness criterion. By this method, we could show that 77% of non-eligible patients received treatment (28% curative and 49% palliative care) when they visited a specialised haematology unit, versus 42% (2.8% curative and 39% palliative care) when they did not ( p < 0.001) (Appendix - Table 4). These results show the importance of a trained haematologist for unfit AML patients. Indeed, with the advent of oral chemotherapy agents facilitating outpatient care, and of non-intensive chemotherapies (e.g. the azacitidine-venetoclax combination), it can be assumed that the trained haematologist attempts to use these new therapeutic tools to manage unfit patients. The fact that the seven patients over 80 years old who were enrolled in a clinical trial were all recruited by a trained haematologist tends to support this notion (Table ).
By contrast, unfit patients seen elsewhere do not have access to these new therapies, especially as an increasing number of studies suggest they should be treated with non-intensive chemotherapies. Our study does present a number of limitations, which need to be addressed. First, we categorised the EDI based on quintiles, and such class variables are potentially less informative. The EDI quintile may, however, reflect the level of access to adequate health care facilities, as determined by the geographical area of the patient's residence. Our results also showed that the presence of severe comorbidities can limit patient access to specialised haematology units. However, a higher prevalence of severe comorbidities is seen among the most deprived patients, as defined by the EDI. To uncover how socio-economic status affects access to specialised care facilities, and the role of comorbidities for AML patients, information on distance and travel times to specialised care facilities and on individual comorbidities would be required. These data were not available in our study, as is the case in other reports of similar design. A second limitation concerns our finding that consultation in non-haematological medical units is negatively correlated (ORa = 0.12; 95% CI, 0.07–0.18) with access to a specialised haematology unit. We hypothesised that this reflects more complex clinical situations that require transfer to non-haematological units, despite a diagnosis of AML. Again, in the absence of detailed information on the clinical signs justifying the lack of consultation in a specialised haematology unit, we cannot rule out the hypothesis that these patients were advised by a specialised haematologist (e.g. during a multidisciplinary consultation meeting) or that they wished not to be treated. Such information was not available in our study. These limitations, however, do not affect our main conclusions, and our findings raise the question of what therapeutic approach would have been taken if these patients had consulted in a specialised haematology unit during their course of care. To this end, in the next stage of our project, we will apply causal mediation techniques to quantify how accessing a specialised haematology unit causally contributes to the likelihood of receiving curative treatment and affects differential AML patient net survival.
CONCLUSION In this study, we show for the first time that well-known clinical and biological prognostic factors limit the access of AML patients to a specialised haematology unit, which in turn strongly impedes access to cytogenetic analyses and curative treatments. Our study highlights the importance of referral to a haematology unit, or of a consultation in an academic hospital, for AML patients to have the best chance of being optimally treated according to their individual disease risk factors and comorbidities.
Kueshivi Midodji ATSOU: Data curation (lead); formal analysis (lead); methodology (lead); writing – original draft (lead). Bernard Rachet: Formal analysis (supporting); methodology (supporting); supervision (supporting); validation (supporting); writing – review and editing (lead). Edouard Cornet: Data curation (supporting); writing – review and editing (equal). Marie‐Lorraine Chretien: Conceptualization (equal); resources (equal); writing – review and editing (equal). Cédric Rossi: Conceptualization (equal); resources (equal); writing – review and editing (equal). Laurent Remontet: Conceptualization (supporting); methodology (supporting); writing – review and editing (equal). Laurent Roche: Conceptualization (supporting); methodology (supporting); writing – review and editing (equal). Roch Giorgi: Conceptualization (supporting); formal analysis (supporting); methodology (supporting); writing – review and editing (supporting). Sophie Gauthier: Conceptualization (equal); data curation (equal). Stéphanie Girard: Conceptualization (equal); data curation (equal). Johann Bôckle: Conceptualization (equal); data curation (equal). Stéphane Kroudia Wasse: Resources (equal); writing – review and editing (equal). Hélène Rachou: Data curation (equal). Laïla Bouzid: Data curation (equal). Jean‐Marc Poncet: Data curation (equal). Sébastien Orazio: Methodology (supporting). Alain Monnereau: Resources (equal); supervision (supporting); writing – review and editing (supporting). Xavier Troussard: Resources (equal); supervision (supporting); writing – review and editing (supporting). Morgane Mounier: Conceptualization (lead); funding acquisition (lead); investigation (lead); methodology (supporting); project administration (lead); resources (lead); writing – review and editing (supporting). Marc Maynadié: Conceptualization (equal); funding acquisition (lead); investigation (supporting); project administration (supporting); resources (supporting); supervision (supporting); writing – original draft (supporting); writing – review and editing (supporting).
This study was supported by research funding from Fonds Européen de développement regional (FEDER: programme opérationnel FEDER‐FSE Bourgogne 2014–2020) and from Institut National du Cancer (Projet INCa‐SHS‐ESP, n°2018–124).
The authors declare no competing financial interests.
This study was authorised by the CNIL (Commission Nationale Informatique & Libertés) and received a favourable opinion from the ethics committee of the CESRESS (Comité d'Éthique et Scientifique pour les Recherches, les études et Évaluations dans le domaine de Santé) under the reference number MLD/CBO/AR2111097.
Appendix S1
Changing incentives to ACCELERATE drug development for paediatric cancer
INTRODUCTION Over 400,000 children and adolescents are diagnosed with cancer globally each year, 50,000 of them in Europe and North America. While survival has improved since the 1970s, the decrease in mortality has reached a plateau: in high-income countries, approximately 20% of patients will die of their disease or of disease-related causes, and paediatric cancer remains the leading non-accidental cause of death in children and adolescents. There is, therefore, an urgent need for new medicines to cure aggressive tumours and to reduce the toxicity and sequelae of treatment. Evaluating new anti-cancer drugs in paediatric patients is ethically critical, and enrolment in early-phase clinical trials is an option to be proposed to the patient and family. Children with relapsed/refractory disease ethically deserve the option of a clinical trial when no curative treatment is known. Moreover, these trials need to be scientifically robust and of the highest quality. The European Regulation on Orphan Medicinal Products for medicines for rare diseases and the Paediatric Medicines Regulation (PMR) were adopted in 2001 and 2007, respectively, aiming to improve treatment options for these patients. At the time, limited or no relevant data on medicinal products were available for either group (patients with rare diseases and paediatric patients), both of which include children and adolescents with cancer. The market size was mostly small, and developing medicines and conducting clinical trials was more complex. A combination of obligations, incentives and rewards was introduced with both regulations to address the apparent market failure. The objectives of the two regulations partly overlap, as many diseases that affect only children are rare, and rare diseases often also affect children, as is the case for paediatric cancers. In 2016, the European Parliament recognized that the PMR had been beneficial to children overall but not sufficiently effective in certain therapeutic areas, notably paediatric oncology, and called on the Commission to revise the Regulation. The revision of the two legislations is also one of the actions of the European Union (EU) Pharmaceutical Strategy. The evaluation carried out in 2020 by the Commission showed that both legislative instruments have stimulated the research and development of medicines to treat rare diseases and of medicines for children. However, it also showed shortcomings in the functioning of the existing legal framework. This is partly because the legislation has not been able to stimulate the development of medicines in areas of unmet need, such as childhood cancers and neonatology, that is, a failure of the existing incentives. In the United States (US), the Best Pharmaceuticals for Children Act (BPCA), the Paediatric Research Equity Act (PREA) and the Research to Accelerate Cures and Equity (RACE) for Children Act, as well as Rare and Orphan Drug Designations, aim to encourage those developing drugs to implement paediatric cancer programmes early in development; however, limited success has been achieved so far. The objective of the Creating Hope Act was to incentivise sponsors to develop new medicines specifically for children suffering from life-threatening diseases, rewarding such efforts with Priority Review Vouchers (PRVs), which reduce the FDA's review time of a specific product from ten to six months.
Between 2008 and 2022, with both major EU and US legislations in place (the PMR since 2007 and the PREA since 2003, respectively), only 29 new anti-cancer molecular medicines were approved with a paediatric indication (16 in the EU, 29 in the US). Conversely, 133 anti-cancer medicines were approved for adults in the EU over the same period. Furthermore, the paediatric development of many potentially relevant anti-cancer drugs for children has been waived on the grounds that the condition for which they are indicated in adults does not occur in children (for example, lung cancer). There is an urgent need to examine why the European and North-American legislation has fallen short of expectations in this disease area and to consider potential actions that will ensure children and adolescents with cancer can derive the intended benefits. ACCELERATE, an international paediatric oncology platform involving multiple stakeholders (academia, industry, regulatory bodies, and patients and families), was established to hasten paediatric oncology drug development within the current regulatory framework. A Working Group of ACCELERATE was convened to propose more effective incentives for paediatric-specific oncology drug development. This Working Group's conclusions are very timely in view of the ongoing revision of the European PMR and Orphan regulations. This article outlines the current framework of incentives for paediatric drug development, highlights the lack of impact of the current incentives framework on childhood cancers and proposes changes to the current EU and US legislative framework.
CURRENT FRAMEWORK OF INCENTIVES FOR PAEDIATRIC DRUG DEVELOPMENT The main European and North-American legislative instruments (the PMR, the Regulation on Orphan Medicinal Products, the US Creating Hope Act and the RACE Act) are summarized in Table . Further details are described in the Appendix . The current EU framework to promote drug development for children and adolescents includes both a regulatory obligation (with the possibility of waivers if the condition for which the product is intended does not occur in the paediatric age group, if the drug is likely to be either ineffective or unsafe, or if it does not have substantial therapeutic benefit over existing treatments) and financial rewards (6 months of extended Supplementary Protection Certificate (SPC), i.e. of market exclusivity) on delivery of a completed Paediatric Investigation Plan (PIP) and updated product labelling, regardless of the results of the paediatric trials. The reward is, therefore, not dependent on the PIP demonstrating benefit of the drug in a paediatric population (Figure ). According to the EU legislation, companies can file for a new marketing authorization in adults once they have an agreed PIP or a waiver. In practice, this implies that companies can submit applications for marketing authorization, or for variation of an existing one, as soon as the PIP is approved, because a deferral of the start of the PIP can be granted, again delaying paediatric evaluation. In the US, the FDA developed the BPCA to create the incentive of additional marketing exclusivity for sponsors who voluntarily complete paediatric clinical studies as outlined in a Written Request issued by the FDA. Sponsors can also ask the FDA to issue a Written Request for drugs under development. Meeting the requirements of the request grants sponsors six additional months of market exclusivity, but sponsors must adhere to the FDA Written Request and perform the studies in line with it. The RACE for Children Act, which took effect in 2020, requires paediatric evaluation (submission of an initial Paediatric Study Plan, iPSP) of new molecularly targeted drugs and biologics intended for the treatment of adult cancers and directed at a molecular target substantially relevant to the growth or progression of a paediatric cancer. The drug evaluation follows a mechanism-of-action approach, and therefore waivers cannot be obtained on the grounds that the disease only occurs in adults. The Creating Hope Act, enacted in 2012 and reauthorized by the US Congress in 2020 for an additional 4 years, aims to provide PRVs to sponsors who voluntarily prioritize paediatric drug development by labelling a drug to treat a rare paediatric disease. 'Rare' is defined pursuant to the Orphan Drug Act, that is, the disease affects fewer than 200,000 Americans; 'paediatric', pursuant to the FDA Guidance for Industry, that is, over 50% of the patients present with the disease before age 18. The disease itself must also qualify for priority review: it must be life-threatening, and the product must address an unmet medical need. The voucher entitles the marketing authorization holder to the priority review of another single human drug or biologics application, which has the potential to provide an economic and competitive advantage to medicines vying for first-to-market status by decreasing time to approval. Because the voucher is transferable, the recipient can sell it to another company.
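To make the eligibility logic concrete, the following minimal sketch encodes the Creating Hope Act voucher criteria as summarized above; the thresholds come from the text, while the function itself and its parameter names are purely illustrative:

```python
# Illustrative encoding of the rare paediatric disease PRV criteria
# summarized above; the function and parameter names are hypothetical.
def qualifies_for_prv(us_patient_count: int,
                      share_presenting_before_18: float,
                      life_threatening: bool,
                      addresses_unmet_need: bool) -> bool:
    rare = us_patient_count < 200_000               # Orphan Drug Act definition
    paediatric = share_presenting_before_18 > 0.50  # FDA Guidance for Industry
    return rare and paediatric and life_threatening and addresses_unmet_need

# Example: a life-threatening disease affecting 4,000 US patients,
# 90% of whom present before age 18, with no adequate existing therapy.
assert qualifies_for_prv(4_000, 0.90, True, True)
```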
LACK OF IMPACT OF THE CURRENT INCENTIVES FRAMEWORK ON CHILDHOOD CANCERS Whilst the regulatory imperatives have led the pharmaceutical industry to actively consider patients younger than 18 years of age in their drug development programmes, the balance between the level of investment needed to execute a PIP and the potential financial reward is not proving sufficiently attractive for rare indications such as childhood cancer, a waiver or deferral of the PIP being the common outcome. This was shown by a 2016 study on the economic impact of the PMR, which concluded that whilst the regulation is a commendable first step, there remain therapeutic areas, such as childhood cancer, where significant unmet needs persist. This study estimated the total cost of the PMR incurred by industry to be €2106m per year, or €16,848m for the years 2008–2015. It also analysed the economic value of the rewards provided under the PMR, focusing on eight medicinal products that received SPC extensions between 2007 and 2012 and lost their exclusivity before the third quarter of 2014. The economic value as a percentage of 6-month revenue varied between 11% and 94%. The combined economic value (or monopoly rent) of the eight products was calculated to amount to €517m, with an extrapolated economic value of €926m between 2007 and 2015. The authors therefore believe that 'the objectives of the reward scheme are deemed highly relevant when considering that the rewards provide a way for organizations to sponsor and support the development of paediatric medicines. Nevertheless, the rewards themselves cannot guarantee capital allocation decisions that maximize value for companies or result in positive return on investment in individual Research and Development programmes'. Current incentives facilitate pharmaceutical companies' investment in the development of paediatric drugs, but they do not guarantee that this investment will lead to the economic return that companies plan or hope for. In oncology, this means that industry investment in paediatric cancer trials is usually either absent or delayed, and cancer drug development programmes remain inextricably linked to the market potential for adult cancer indications. Nader et al. have recently shown that the median times from first-in-adult to first-in-paediatric trials for monotherapy and combination trials are 5.7 and 3.3 years, respectively. This supports our contention that there is inadequate motivation for the pharmaceutical industry to focus on cancer drug development for paediatric cancer-specific markets with no adult cancer marketing value. Vassal et al. have demonstrated that between 1995 and 2022, 186 medicines received a first marketing authorization for the treatment of cancer in Europe; however, only 29 had a paediatric indication. Most of these (23/29, 79.3%) were approved after the implementation of the PMR (2008–2022). Of the 23 drugs approved since 2007 with a paediatric indication, most were studied within a PIP (18/23, 78%). The first drug to be approved as part of a PIP was everolimus, in 2011, for the treatment of subependymal giant cell astrocytoma. Therefore, two main challenges are identified: first, to drive earlier initiation of paediatric studies according to a mechanism-of-action driven decision following drug discovery in adults, and second, to provide incentives for drug development specifically directed at cancers occurring only in children (paediatric-specific drug development).
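A back-of-envelope check of the figures quoted above from the 2016 economic-impact study illustrates the cost-reward imbalance. This is a crude juxtaposition, since the cost estimate covers the PMR across all therapeutic areas while the reward figure covers only eight products, but the order-of-magnitude gap is the point:

```python
# Arithmetic check of the 2016 PMR economic-impact figures quoted above.
annual_cost_m_eur = 2106                   # estimated yearly PMR cost to industry
total_cost_m_eur = annual_cost_m_eur * 8   # eight years, 2008-2015
assert total_cost_m_eur == 16_848          # matches the quoted EUR 16,848m

combined_reward_m_eur = 517      # monopoly rent of the eight SPC-extended products
extrapolated_reward_m_eur = 926  # extrapolated value, 2007-2015

# Even the extrapolated reward is only ~5.5% of the estimated total cost.
print(f"{extrapolated_reward_m_eur / total_cost_m_eur:.1%}")  # -> 5.5%
```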
Industry's decision not to pursue research and development in areas that will not be commercially viable is understandable in business terms. Therefore, if the current incentives are inadequate for a company to see any economic advantage in continuing paediatric cancer drug development of a paediatric-specific medicinal product, what is needed for industry to be motivated to continue such development for rare indications in the absence of an associated lucrative commercial market? There are four scenarios for cancer drug development since the PMR was implemented, which are portrayed in Table ; scenario 3 will not be discussed in this manuscript. In the US, while the BPCA encourages paediatric drug development and hundreds of Written Requests have been submitted, few have resulted in quantifiable changes in drug development for children with cancer, and rewards have been limited, resulting in only 17 paediatric label changes across the entire 20-year history of the programme as of February 2022. The Creating Hope Act enabled the use of PRVs and was seen as a very positive step. In practice, however, only three PRVs have been awarded for a paediatric cancer: dinutuximab and naxitamab for neuroblastoma, and tisagenlecleucel for B-cell acute lymphoblastic leukaemia. In part, this is because, in order to obtain a voucher, the product must be approved, and approved first, in a paediatric indication for a disease for which a rare paediatric disease designation has been granted for the product; supplementary approvals following adult indications do not qualify for the voucher reward (the converse is not true; once approved in paediatrics, the drug may be developed in adult diseases as well). It is also noteworthy that vouchers are non-discriminatory in terms of potential market size, that is, percentage of the population: frontline and narrow relapse/refractory disease indications afford the same reward. A 2020 study released by the US Government Accountability Office (GAO) investigated the effectiveness and overall impact of the PRV programme. Between 2009 and 2019, 31 PRVs were awarded, mostly for drugs to treat rare paediatric diseases, of which 17 were sold to another drug sponsor for prices ranging from $67 million to $350 million. In this report, the GAO found few studies that examined the PRV programmes, and those that did found the programmes had little or no effect on drug development. However, the participating drug sponsors stated that PRVs were a factor in drug development decisions. Some academic researchers and stakeholders expressed concerns about PRVs as incentives for drug development. As argued by Meyer, the GAO report 'shows weak evidence of PRVs truly incentivizing development'. This author recommends that critical appraisals 'must include how drug development and regulatory review have changed since 2007, as well as experience with drug pricing of products granted PRVs'. Other authors, like Hwang et al., find the impact of the PRV more positive, yet still recommend changes. They concluded that the voucher programme was not associated with a change in the rate of new paediatric drugs starting or completing clinical testing, but that there was a significant increase in the rate of progress from phase 1 to phase 2 clinical trials after the programme was implemented. Hence, new policies may be needed to expand the pipeline of therapies for rare paediatric diseases.
REVISING INCENTIVES TO ACCELERATE PAEDIATRIC CANCER DRUG DEVELOPMENT The PMR has not accelerated paediatric and adolescent cancer drug development to the degree needed; the pivotal question, therefore, is how a better reward or incentive framework could accelerate development. Whilst the PMR has successfully motivated the pharmaceutical industry to focus on paediatric drug development for many paediatric diseases, we need to consider whether a revision of the incentives framework could extend the benefit to children and adolescents with cancer. As discussed above, two challenges emerge, which we have translated into proposals 1 and 2: to accelerate paediatric drug development in a mechanism-of-action driven environment (Figure ) and to provide incentives for drug development specifically directed at cancers occurring only in children (Figure ). 4.1 Proposal 1: drive earlier initiation of paediatric studies (accelerate paediatric drug development) If a drug does have a potential adult market, and there is also a potential application of the drug in a paediatric condition not relevant to the adult market, it should be viable for the two development programmes to proceed in parallel, neither depending on the success of the other. This would be the case where a drug's mechanism of action is relevant to different cancers in the adult and paediatric age groups (scenario #2). The RACE legislation ensures this in the United States, and we propose that, in Europe, the PMR should be modified to mandate a mechanism-of-action driven PIP. This is the first modification necessary to enhance paediatric drug development. To further encourage and accelerate development, incentives need to be introduced at an earlier stage in the clinical development pathway, rather than only at the end of the SPC, with a staged and milestone-driven approach. Currently, the PMR mandates that the package of proposed studies constituting a PIP be submitted no later than upon completion of the human pharmacokinetic studies in adults (i.e. completion of the first phase 1 trials for oncology products) and include the study synopses for all the planned studies of paediatric relevance that would be required for the medicinal product's application for marketing authorization in the paediatric condition. This should include non-clinical studies, pharmacokinetic and dose-finding studies, as well as phase 3 efficacy studies that often require specifics about a proposed comparator or a randomized design. Whilst all this is obligatory, the reward for this investment is not realized until the PIP is completed and all conditions of the PMR are met. The delivery of PIPs for rare paediatric cancers can therefore be challenging, and there are several stages at which the full PIP could fail to be delivered, negating the potential reward despite the up-front investment. We propose that PIPs should be treated as a more iterative process with defined interim and final deliverables, each attracting rewards in its own right (Figure ). The aim would be to encourage industry to initiate the PIP earlier in the drug's development pathway, with the potential for an earlier reward based on key go/no-go milestones. This could reduce the current tendency to defer initiation of all the paediatric studies until completion of the adult development.
For example, the completion of the first clinical studies described in the PIP, up to and including the early-phase clinical trials, would provide crucial data on the age-relevant safety profile (including infants and younger patients where relevant and feasible), pharmacokinetics, pharmacodynamic endpoints and potentially an activity signal for the drug. These data could inform a go/no-go decision on further development in the paediatric age group and could, therefore, be defined as the first deliverable, within a given timeframe, within the PIP. This first reward would hence only be given if paediatric development is started early, and could consist of a tax credit or other form of economic gain that does not depend on eventual marketing authorization and exclusivity. It would be important to include a mandate to complete the PIP, with an incentive, if a positive 'go signal' is met at the first milestone. Subsequent efficacy studies would constitute a second/final deliverable, again within a given timeframe. This would lead to a final reward on submission for marketing authorization, which could again be stratified. If paediatric development was started early in the process, an additional 6 months of market exclusivity could be granted. If, however, paediatric studies were delayed, the extension of market exclusivity could be reduced to 2 months (or to no extension at all). In all instances, the processes to benefit from SPC rewards should be made easier. The issues with the current SPC legislation have been recognized by the Commission, which published an evaluation in 2020 concluding that the main shortcoming is the fact that SPCs are granted and administered nationally. SPC applications are currently filed at the national patent office of each EU Member State where protection is sought and in which a basic patent, to be extended, has been granted. This undermines their effectiveness and efficiency, leading, among other issues, to high cost and administrative burden for SPC users. The legislative proposal to address these issues is planned for adoption by the end of 2022. If the PIP deliverables are segmented, then the rewards described in the PMR can be proportionately awarded as the deliverables are achieved in the prescribed timeframe. The introduction of this 'segmented reward approach' for completion of an interim deliverable would be a significant change to the current PMR and could facilitate a more iterative approach to the design of studies within the PIP, with efficacy studies being informed by the preceding early-phase studies and the contemporaneous clinical trials landscape in the diseases being studied (Figure ). The proposed change to the submission requirements of PIPs would embed a reward for early initiation of paediatric studies. A company would be encouraged to submit its proposal for phase 1 paediatric study plans before commencement of the adult phase 2 trial; this would be reflected in the reward but would not be mandatory. This phase 1 study element of the PIP could include age-relevant data on safety, pharmacokinetic and (where relevant) pharmacodynamic endpoints in at least one paediatric condition. Based on the results of the phase 1 trial and the predefined go/no-go decision, the company would be required to submit the plans for the phase 2 and phase 3 study elements of the PIP before it is able to submit its application for marketing authorization in the adult indication.
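To summarise the staged logic of this proposal, the sketch below encodes the milestone-driven rewards as described; the class, the milestone names and the 6-month/2-month values mirror the text, but the code itself is a purely hypothetical illustration of the proposal, not any existing regulation:

```python
# Hypothetical sketch of the segmented, milestone-driven reward schedule
# proposed above. Names and values illustrate the proposal only.
from dataclasses import dataclass

@dataclass
class PIPStatus:
    started_before_adult_phase2: bool  # early initiation of paediatric studies
    early_phase_done_on_time: bool     # first deliverable: safety/PK/activity
    go_decision: bool                  # predefined go/no-go after early phase
    efficacy_done_on_time: bool        # second/final deliverable

def interim_reward_due(pip: PIPStatus) -> bool:
    """Interim reward (e.g. a tax credit) on completing the first deliverable."""
    return pip.started_before_adult_phase2 and pip.early_phase_done_on_time

def spc_extension_months(pip: PIPStatus) -> int:
    """Final reward at marketing-authorization submission, stratified by timing."""
    if not (pip.go_decision and pip.efficacy_done_on_time):
        return 0
    # Full 6-month SPC extension if paediatric development started early,
    # a reduced 2-month extension (or none) otherwise, per the proposal.
    return 6 if pip.started_before_adult_phase2 else 2
```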
This would be considered achievement of the first deliverable and would attract the interim reward. A company can opt not to have an interim deliverable, in which case the incentives would remain as described for the 'final reward' only and would be awarded on completion of the full PIP requirements. Rewards should, however, also be available for medicinal products that are not going to be advanced in adults but are beneficial in children.

4.2 Proposal 2: incentivise paediatric‐specific drug development

Whilst proposal 1 aims to proportionately reward accelerated paediatric cancer drug development for medicines that are generally following a development pathway for an adult indication, incentives are also needed that motivate and reward investment in paediatric‐specific cancer drug development, uncoupled from adult cancer indications. We propose that the Orphan Regulation be modified to incentivise paediatric‐specific drug development. Currently, the reward obtained for orphan‐designated medicines is a late reward, consisting of an extended 10 years of market exclusivity. This can once again discourage companies from making the vast investment needed to fully develop a new medicine. We propose the introduction of a new 'early reward' (Figure ) that would be granted right after a first marketing authorization for a paediatric cancer indication. This reward could consist of tax reductions, transferable vouchers or other measures that provide an immediate economic gain, as opposed to the classic late reward. The late reward would, however, remain as it is now.

Potential rewards for developing a drug for a paediatric cancer‐specific indication include accelerated reviews, which are already carried out in Europe within PRIME. PRIME (PRIority MEdicines) is a programme launched by the EMA in 2016 to enhance support for the development of medicines that target an unmet medical need. This voluntary scheme is based on enhanced interaction and early dialogue with developers of promising medicines, to optimize development plans and speed up evaluation so these medicines can reach patients earlier. This is achieved through scientific advice and accelerated assessment of medicines applications. The EMA recently published its 5‐year evaluation of PRIME, showing the feasibility and potential benefit of accelerated reviews. Between 2016 and 2021, 95 requests were granted, with 18 medicines eventually receiving marketing authorization; of these 18, seven were oncology drugs. Importantly, the average evaluation time for PRIME medicines was reduced by 6.7 months compared with non‐PRIME medicines.

The review process for marketing authorization applications within the EMA differs from that of the FDA; therefore, the concept of a Creating Hope‐like PRV would be difficult to implement in the EU, but the concept of a transferable voucher is nevertheless conceivable. Whatever the approach, the level of this new reward needs to be sufficiently attractive to motivate industry to develop a drug for a potentially non‐profitable market. One approach could be a transferable voucher for a 6‐month extension of the SPC of another drug. This would be a substantial change and would need careful evaluation of its potential socio‐economic impact.
The parameters of the drug to which the transferable voucher could be applied would need to be carefully defined; for example, the transfer could be restricted to drugs with a paediatric indication and/or applied only to other compounds in the drug development pipeline and not to products with an existing marketing authorization. The ability to sell the transferable reward to another company would particularly benefit small companies without an extensive drug development portfolio to which the reward could be applied. This proposal would provide a substantial increase in the incentive to drive specific research and development programmes for rare paediatric conditions, including paediatric and adolescent cancers, that would otherwise not attract investment.

In this context, small biotechnology companies can play an important role in the development of drugs for cancers that occur only in children. At ACCELERATE's 2021 annual conference, a dedicated session specifically addressed the needs of these companies. Generally defined as smaller companies with a primary focus on research and development, biotechnology (biotech) companies do not have the resources of larger pharmaceutical companies. They can be single‐asset companies whose existence depends upon the success of a single product or platform. As such, biotech companies are less likely to engage early in paediatric drug development, unless that is their sole purpose or their drug has been specifically designed to do so. Given that the cost of clinical development for an individual programme runs to hundreds of millions of euros, biotech companies often do not have the funds to spend on more than one development programme at a time. As a result, the current incentives are suboptimal for these companies and mainly benefit large pharmaceutical companies. Smaller biotechnology companies are, however, arguably major drivers of early innovation and have the potential to provide novel drugs to children with cancer, but because of their financial structure they cannot afford to wait for late rewards. This aligns with both our proposals (1 and 2) to change to a segmented reward approach, in which early rewards are offered as part of the PMR and of the Orphan Regulation. Providing rewards only late in the development of a drug, or post‐marketing, is of no benefit to a biotech company that does not survive to market because of its lack of resources or early clinical failures, and such companies cannot be expected to deliver paediatric programmes for each asset as a result. Rather, incentives need to be staged and milestone‐driven, with reviews available at each step of the development process instead of only at the end.

In addition to moving the timelines of incentives, we propose that novel incentives should be introduced, for example tax incentives for early investors. Each paediatric indication study should lead to its own incentive/reward (SPC extensions, tax incentives, accelerated reviews). Furthermore, incentives should be transferrable, following the PRV model. In conclusion, incentives should be implemented earlier rather than later in the drug development process, and be staged, milestone‐driven, novel, proportional to the work completed at each phase and transferrable.
CONCLUSIONS

Drug development for childhood cancers is limited by the imbalance between the resources needed to deliver a full paediatric cancer programme and the potential market reward for doing so successfully. Furthermore, because profits are coupled to a more lucrative adult drug development market, product innovation unique to paediatric cancers is rarely undertaken. At first glance, incentives to drive industry to invest in drug development for rare paediatric diseases, like childhood cancer, appear to be in place in Europe (Regulation on Orphan Medicinal Products, PMR) and in the US (RACE, Creating Hope Act), but they have not been as effective as was anticipated. The European pharmaceutical legislation is currently under revision, and we hope that our proposals (Box ) can be incorporated into the upcoming modifications. We believe the changes in the timing (segmented reward approach) and type (transferable exclusivity voucher) of rewards would be of significant benefit to children and adolescents with cancer, as well as to patients with other life‐threatening diseases with unmet medical needs. Of note, the segmented reward approach would not increase the overall financial incentives to pharma but is a mechanism to drive more rapid implementation of the goals of the regulation for the benefit of children with life‐threatening diseases.

BOX 1 Recommendations for the revision of the Paediatric Medicine Regulation (PMR) and the Regulation on Orphan Medicinal Products

- To change the timing and nature of the rewards, which would both drive earlier initiation of paediatric studies and provide incentives for drug development specifically for children.
- To modify the PMR to ensure mechanism‐of‐action driven, mandatory paediatric investigation plans.
- Incentives should be reorganized to a stepwise and incremental approach. Interim and final deliverables should be defined within a PIP, each attracting a reward on completion. An optional interim deliverable would require production of paediatric data that inform the go/no‐go decisions on whether to take a drug forward to paediatric efficacy trials.
- To promote paediatric‐specific cancer drug development with the introduction of early rewards in the frame of the Orphan Medicinal Products regulation, with a variant on the US Creating Hope Act and its priority review vouchers.
Teresa de Rojas: Conceptualization (lead); data curation (lead); formal analysis (lead); investigation (lead); methodology (lead); supervision (lead); validation (lead); visualization (lead); writing – original draft (lead); writing – review and editing (lead). Pamela Kearns: Conceptualization (lead); data curation (equal); formal analysis (equal); investigation (equal); methodology (equal); supervision (lead); validation (equal); visualization (supporting); writing – original draft (lead); writing – review and editing (equal). Patricia Blanc: Conceptualization (equal); data curation (supporting); investigation (supporting); methodology (supporting); supervision (equal); validation (supporting); visualization (supporting); writing – original draft (equal); writing – review and editing (equal). Jeffrey Skolnik: Conceptualization (supporting); data curation (supporting); investigation (supporting); supervision (equal); validation (equal); visualization (supporting); writing – original draft (supporting); writing – review and editing (equal). Elizabeth Fox: Conceptualization (supporting); investigation (supporting); supervision (equal); validation (equal); visualization (supporting); writing – original draft (supporting); writing – review and editing (equal). Leona Knox: Conceptualization (supporting); investigation (supporting); methodology (supporting); supervision (equal); validation (equal); visualization (supporting); writing – original draft (supporting); writing – review and editing (equal). Raphael Rousseau: Conceptualization (supporting); investigation (supporting); supervision (equal); validation (supporting); visualization (supporting); writing – original draft (supporting); writing – review and editing (supporting). François Doz: Conceptualization (supporting); investigation (supporting); supervision (equal); validation (supporting); visualization (supporting); writing – original draft (supporting); writing – review and editing (equal). Nick Bird: Conceptualization (equal); formal analysis (supporting); investigation (equal); supervision (equal); validation (equal); visualization (supporting); writing – original draft (supporting); writing – review and editing (equal). Andrew Pearson: Conceptualization (lead); data curation (equal); formal analysis (equal); investigation (equal); methodology (equal); supervision (lead); validation (lead); visualization (equal); writing – original draft (supporting); writing – review and editing (lead). Gilles Vassal: Conceptualization (lead); data curation (equal); formal analysis (equal); investigation (lead); methodology (equal); supervision (lead); validation (lead); visualization (equal); writing – original draft (supporting); writing – review and editing (lead).
Supported by the Andrew McDonough B+ Foundation.
ADJP has consulted for Lilly, Norgine and Developmental Therapeutics Consortium Limited and been an advisor for Amgen. FD has participated in advisory boards for Bayer, BMS, Roche, Celgene, LOXO Oncology, Servier and Tesaro; he has been rewarded for consultancy services by Roche and Servier; he has worked in scientific partnership with Onxeo and Synth‐Innove (all these payments were received in a research account, not a personal account). FD has also been refunded travel expenses by Bayer, BMS and Roche. JS is an employee of Inovio Pharmaceuticals. The rest of the authors have no conflicts of interest to declare.
Ethical approval was not sought from an institutional review board nor ethics committee as it is not needed/applicable for this kind of study (no inclusion of human subjects).
Appendix S1.
Implementation of microsatellite instability testing for the assessment of solid tumors in clinical practice
INTRODUCTION

Immune checkpoint inhibitors (ICIs) were introduced into clinical practice in the 2010s for treating various tumors. ICI‐based immuno‐oncology has shown durable responses with manageable toxicities and is now globally recognized as an established therapeutic strategy in clinical oncology. A solid tumor with high microsatellite instability (MSI) has deficient DNA mismatch repair (dMMR), which causes hypermutation and produces mutation‐generated neoantigens that elicit immune cell responses. Therefore, theoretically, patients with MSI‐high solid tumors are considered good candidates for ICI treatment. In the KEYNOTE (KN) 016 study, the effectiveness of pembrolizumab, an anti‐programmed cell death 1 (PD‐1) inhibitor, was first demonstrated in patients with dMMR tumors; however, it was ineffective in those with proficient MMR (pMMR). Subsequent clinical trials in patients with dMMR/MSI‐high tumors have improved the detection of ICI responders regardless of tumor origin. In November 2018, the Promega MSI Analysis System was approved as an in vitro diagnostic test to identify MSI‐high solid tumors and determine their suitability for pembrolizumab treatment in Japan. Thus, MSI testing is increasingly being conducted to determine the suitability of patients for ICI treatment, to screen for Lynch syndrome, and to guide the use of adjuvant chemotherapy for colorectal cancer (CRC) following curative resection. However, several issues associated with the implementation of MSI testing in clinical practice remain unclear, including the turnaround time (TAT), the availability of sufficient tissue from small specimens, DNA degradation due to prolonged storage, and collaboration with genetic counselors. Herein, we evaluated our real‐world experience with MSI testing to identify issues associated with its implementation in clinical practice, thus enabling further advancement of precision oncology.
MATERIALS AND METHODS

2.1 Study design and patients

We retrospectively reviewed the medical records of patients with solid tumors who underwent MSI testing between January 2019 and December 2020 at our institution. We enrolled patients who met the following inclusion criteria: patients with solid tumors, those who underwent polymerase chain reaction (PCR)‐based MSI testing at our institution, and those who provided written informed consent for MSI testing. We followed the treatment schedule as specified in previous pivotal clinical trials. This study was approved by the ethics committee of the Cancer Institute Hospital of the Japanese Foundation for Cancer Research (JFCR) in Tokyo, Japan (approval no. 2020–1229), and was conducted in accordance with the tenets of the Declaration of Helsinki (1964) and its later amendments. Considering the retrospective nature of this study and the option for patients to opt out, the need for informed consent was waived.

2.2 MSI testing procedure

The MSI testing itself was outsourced to an inspection company (LSI Medience Corporation). Pathologists selected optimal specimens with adequate tumor cells. At our institution, since January 2018, only 10% neutral buffered formalin (NBF) was used to fix tissue biopsy specimens, whereas 20% NBF was used in other cases. After extracting DNA from the formalin‐fixed paraffin‐embedded tissue specimens, MSI testing was conducted via a PCR‐based MSI analysis system (FALCO Biosystems Ltd.) using five quasimonomorphic mononucleotide repeat markers (NR‐21, BAT‐25, MONO‐27, NR‐24, and BAT‐26). These mononucleotide markers have few germline variant alleles; therefore, MSI status could usually be determined on the basis of the quasimonomorphic variation range (QMVR) without using normal controls, although some cases required normal controls to identify the MSI status. MSI status was classified as MSI‐high or microsatellite stable (MSS). MSI‐high was defined as the detection of a size shift in the PCR band outside the QMVR in two or more of the five markers, whereas MSS was defined as the detection of one or no unstable marker. Samples with weak fluorescence intensity after amplification indicated DNA degradation and were retested using a higher number of PCR cycles.

2.3 Indication for MSI testing and ICI

PCR‐based MSI testing, not MMR–immunohistochemistry (IHC) analysis, is the only diagnostic method for determining the ICI indication in MSI‐high cases in Japan. In addition, PCR‐based MSI testing has been approved as a screening tool for Lynch syndrome. As MSI testing is occasionally performed in patients with CRC before adjuvant chemotherapy, early‐stage patients with MSI‐high tumors may not receive ICIs.

2.4 Statistical analyses

Statistical analyses were performed using Fisher's exact test for categorical data and the Mann–Whitney test for continuous data. A p‐value of <0.05 was considered statistically significant for all analyses. We used Kaplan–Meier survival curves to calculate overall survival (OS) and progression‐free survival (PFS). OS was defined as the time from the start of chemotherapy to the latest follow‐up or death. PFS was defined as the time from the start of chemotherapy to the first day of disease progression or death. The cutoff date for survival and progression was October 30, 2021. For patients with target lesions, the objective response rate (ORR) and disease control rate (DCR) were calculated according to the Response Evaluation Criteria in Solid Tumors guidelines (version 1.1).
All statistical analyses were performed using the graphical user interface for R (The R Foundation for Statistical Computing) and GraphPad Prism v 9.0 for Windows (GraphPad Software Inc.).
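To make the marker‐counting rule of Section 2.2 concrete, the following is a minimal Python sketch. The function name and input format are hypothetical (not part of the commercial testing system); the thresholds follow the definitions above: a size shift outside the QMVR in two or more of the five markers is MSI‐high, and zero or one unstable marker is MSS.

```python
# Minimal sketch of the marker-counting rule in Section 2.2. The function name
# and input format are hypothetical; the thresholds follow the text: >= 2 of
# the 5 mononucleotide markers unstable -> MSI-high, 0 or 1 unstable -> MSS.
MARKERS = ("NR-21", "BAT-25", "MONO-27", "NR-24", "BAT-26")

def classify_msi(unstable_calls):
    """Return 'MSI-high' or 'MSS' given a dict of per-marker instability calls."""
    n_unstable = sum(bool(unstable_calls.get(m, False)) for m in MARKERS)
    return "MSI-high" if n_unstable >= 2 else "MSS"

print(classify_msi({"BAT-25": True, "BAT-26": True}))  # MSI-high
print(classify_msi({"NR-24": True}))                   # MSS
```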
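As an illustration of the two hypothesis tests named in Section 2.4, the sketch below applies Fisher's exact test to a 2x2 success/failure table and the Mann–Whitney U test to two TAT samples. All numbers are synthetic placeholders, not the study data.

```python
from scipy.stats import fisher_exact, mannwhitneyu

# Categorical comparison (Fisher's exact test): MSI-testing success vs.
# failure in two specimen groups. Counts below are synthetic placeholders.
table = [[124, 6],   # prolonged storage:    success, failure
         [905, 4]]   # nonprolonged storage: success, failure
odds_ratio, p_cat = fisher_exact(table, alternative="two-sided")

# Continuous comparison (Mann-Whitney U test): TAT in days for two groups,
# again synthetic placeholders.
tat_mss = [15, 16, 17, 17, 18, 19, 21]
tat_msi_high = [20, 22, 24, 25, 27, 30]
u_stat, p_cont = mannwhitneyu(tat_mss, tat_msi_high, alternative="two-sided")

print(f"Fisher's exact: OR={odds_ratio:.2f}, p={p_cat:.4f}")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={p_cont:.4f}")
```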
RESULTS

3.1 Feasibility of MSI testing

Between January 2019 and December 2020, 1052 consecutive MSI tests were conducted in 1047 patients with solid tumors at the Cancer Institute Hospital of JFCR. Among them, five patients underwent MSI testing twice, including two patients with two synchronous primary cancers, two with primary and metastatic cancers, and one whose specimen was unsuitable for initial testing. In total, we assessed 27 different types of solid tumors. Patients with CRC accounted for approximately 40% (n = 437) of the cohort, of which only 4.6% (n = 20) were MSI‐high. Endometrial cancer showed the highest proportion of MSI‐high cases (n = 17, 21.3%) (Figure ). None of the patients who underwent MSI testing twice had MSI‐high tumors.

Table presents the success rates of MSI testing. MSI status could be determined in 1041 of 1052 cases, and the overall success rate of MSI testing was 99.0% (95% confidence interval [CI]: 98.0–100.0). The detection rate of MSI‐high cases was 4.7% (n = 50) in the entire cohort. MSI testing was successful both in surgically or endoscopically resected specimens and in biopsy specimens, including those collected via fine‐needle aspiration (98.6% and 99.5%, respectively). However, success rates differed according to specimen condition. Specimens fixed with 20% NBF showed a lower success rate (98.4%; 95% CI: 97.2–99.2) than specimens fixed with 10% NBF (100.0%; 95% CI: 99.1–100.0). In addition, specimens with prolonged storage (>36 months) showed a significantly lower success rate (95.4%; 95% CI: 90.7–98.1) than specimens with nonprolonged storage (99.6%; 95% CI: 98.9–99.9) (Table ). Among specimens fixed with 20% formalin, a statistically significant difference in success rate was noted between prolonged and nonprolonged storage (95.4% [95% CI: 90.7–98.1] vs. 99.3% [95% CI: 98.2–99.8]; p = 0.0028). However, among specimens with nonprolonged storage, no significant difference was observed in the success rates between specimens fixed with 10% and 20% NBF.

3.2 Determination of MSI status using tumor samples alone

MSI status could be determined on the basis of the QMVR, without normal controls, in 994 (94.5%) of 1052 cases. Normal tissue and blood samples were used to determine the MSI status in 25 and 27 patients, respectively, and both normal tissue and blood samples were used in six patients. Notably, the proportion of cases in which tumor samples alone sufficed to determine MSI status was statistically significantly lower among MSI‐high cases (29 [42.0%] of 50) than among MSS cases (956 [96.5%] of 991) (p < 0.001).

3.3 TAT

The median TAT was 17 days in MSS cases and 24 days in MSI‐high cases, a statistically significant difference (p < 0.001) (Figure ). The TAT was within 7, 14, 21, and 28 days in 0 (0.0%), 346 (34.6%), 812 (81.2%), and 915 (91.5%) patients with MSS tumors and in 0 (0.0%), 8 (16.0%), 23 (46.0%), and 31 (62.0%) patients with MSI‐high tumors, respectively (Figure ). Further, 20% NBF and overfixation can lead to DNA degradation, and retesting because of DNA degradation can prolong the TAT.
The TAT of specimens fixed with 20% NBF and of specimens with prolonged storage was statistically significantly longer than that of specimens fixed with 10% NBF or with nonprolonged storage (p = 0.002 and 0.005, respectively) (Figure ). The proportion of specimens with degraded DNA was higher in MSI‐high cases (n = 10 [20.0%]) than in MSS cases (n = 41 [4.1%]). However, the statistically significant difference in TAT between MSS and MSI‐high cases persisted after excluding these 51 cases (p < 0.001).

3.4 Characteristics of patients with MSI‐high tumors treated with ICIs

Of the 50 patients with MSI‐high tumors, 24 received ICI monotherapy or combination therapy before the cutoff date. The remaining patients with MSI‐high tumors did not receive ICIs for the following reasons: early‐stage or resectable tumors (n = 12), ongoing therapy (n = 7), death (n = 5), patient refusal (n = 1), and autoimmune disease (n = 1) (Figure ). The median age of the treated patients was 56 (range: 35–84) years. The origin of the primary tumors was as follows: CRC (n = 8 [33.3%]), non‐CRC gastrointestinal malignancy (n = 4 [16.7%]), endometrial cancer (n = 7 [29.2%]), and others (n = 5 [20.8%]). Further, 20 (83.3%) patients received monotherapy (pembrolizumab, n = 19; nivolumab, n = 1), and four (16.7%) patients with metastatic CRC (mCRC) were treated with nivolumab plus ipilimumab. Only one patient had not undergone prior treatment; seven (29.2%) and 16 (66.7%) patients had received one or ≥2 previous treatment regimens, respectively (Table ).

3.5 OS, PFS, ORR, and DCR

On the day of analysis, 18 (75.0%) patients had disease progression, and seven (29.2%) had died. The median PFS and OS were 5.1 months (95% CI: 1.6–NA) and 23.8 months (95% CI: 6.1–NA), respectively (Figure A, B). The ORR and DCR in the 21 patients with target lesions were 38.1% (95% CI: 18.1–61.6) and 66.7% (95% CI: 43.0–85.4), respectively (Table ). Among them, 14 (66.7%) patients presented with a median tumor shrinkage rate of 18.0% from baseline (range: −157.9% to 100.0%) (Figure ).

3.6 Genetic consultation for patients with MSI‐high solid tumors

Of the 50 patients with MSI‐high tumors, 34 were referred to the Department of Clinical Genetic Oncology (Figure ). Death or loss to follow‐up (n = 6) was the most common reason for non‐referral. Of these 34 patients, 18 underwent genetic testing, and six were diagnosed with Lynch syndrome. Among them, one patient was already known to have Lynch syndrome; three were newly diagnosed after tumor MSI testing used as a screening tool for Lynch syndrome; and two were newly diagnosed after tumor MSI testing performed to determine the indication for ICI. Of the remaining 16 patients who did not undergo genetic testing, four presented with pMMR and one died after the initial consultation. The most common reason for not conducting genetic testing was patient refusal (n = 11).
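The PFS and OS figures reported in Section 3.5 are Kaplan–Meier estimates with right‐censoring at the cutoff date. A minimal sketch of how such estimates are produced, using the lifelines package on synthetic follow‐up data (not the study data), might look as follows.

```python
# Sketch of Kaplan-Meier estimation as used for the PFS/OS endpoints.
# Durations are months of follow-up; event=1 means progression/death was
# observed, 0 means the patient was censored at last follow-up. All values
# below are synthetic illustrations.
from lifelines import KaplanMeierFitter

durations = [1.2, 2.5, 3.8, 5.1, 5.1, 7.0, 9.4, 12.0, 18.3, 23.8]
events    = [1,   1,   1,   1,   0,   1,   0,   1,    0,    1]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="PFS (synthetic)")
print(f"Median survival: {kmf.median_survival_time_:.1f} months")
kmf.plot_survival_function()  # draws the KM curve with 95% CI bands
```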
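The reported ORR and DCR intervals are consistent with exact (Clopper–Pearson) binomial confidence intervals computed from the counts implied by the percentages (8/21 responders and 14/21 with disease control); the paper does not name the interval method, so this is an assumption. A short sketch reproducing them:

```python
# ORR = 38.1% (95% CI 18.1-61.6) and DCR = 66.7% (95% CI 43.0-85.4) in 21
# evaluable patients imply counts of 8/21 and 14/21. method='beta' in
# statsmodels gives the exact Clopper-Pearson interval, which matches the
# reported bounds; the interval method is an assumption, not stated in the text.
from statsmodels.stats.proportion import proportion_confint

for label, k, n in [("ORR", 8, 21), ("DCR", 14, 21)]:
    lo, hi = proportion_confint(k, n, alpha=0.05, method="beta")
    print(f"{label}: {k}/{n} = {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```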
DISCUSSION

MSI testing via a PCR‐based method with a mononucleotide panel has been successfully used in clinical practice for assessing different types of tumors, including specimens with prolonged storage and small biopsy specimens. The clinical outcomes of patients with MSI‐high tumors treated with ICIs were comparable to those reported in previous trials. Our real‐world study revealed that MSI testing had a lower success rate for overfixed specimens and a longer TAT in MSI‐high cases; moreover, it indicated inadequate awareness of MSI testing as a screening tool for Lynch syndrome.

The overall success rate of MSI testing in this study (99.0%) was similar to that reported in a previous large‐scale real‐world study in Japan (99.1%). The success rate for overfixed specimens, although still above 95%, was lower. Further, pH, formalin concentration, and fixation time can affect DNA degradation, and specimen quality was found to be associated with prolonged TAT. MSI testing was introduced in 2019 at our hospital, after the implementation of a standard protocol for genomic testing of pathological specimens; therefore, only overfixed and archived specimens were available at that time. Ideally, optimal specimens for MSI testing would improve the success rate and TAT. However, it is challenging to comply strictly with this recommendation in daily practice. Recently, in addition to the use of several biomarkers for patient selection, such as HER2, PD‐L1, EGFR, and KRAS, personalized therapy with next‐generation sequencing (NGS)‐based multigene panels is increasingly being used in several countries, and the demand for optimal specimens is increasing. DNA degradation can potentially lead to amplification failure; however, if the specimens are well amplified, MSI testing can be performed. In our study, most specimens fixed with 20% NBF and most specimens with prolonged storage could be successfully subjected to MSI testing. Furthermore, MSI‐high tumors belong to a low‐incidence tumor subset (<5%); thus, testing based on high‐quality specimens is less likely to be prioritized than other biomarker tests. NGS‐based testing can identify MSI status together with a comprehensive genomic profile and provides a useful alternative when the amount of specimen is low. The development of NGS‐based MSI testing can improve precision oncology, and plasma‐based detection of MSI status can also help overcome limitations in specimen amount. NGS‐based comprehensive genomic profiling including MSI status (Foundation One CDx®) was approved in Japan in June 2021, so NGS‐based methods could serve as an alternative. However, NGS‐based methods cannot yet replace PCR‐based MSI testing: the use of NGS‐based genomic testing in previously untreated patients has not been approved and is limited to patients with metastatic disease. Thus, we believe that the demand for PCR‐based methods will continue in the near future.

Shortening the TAT of MSI testing is an urgent issue that must be addressed to advance precision oncology. Recently, the KN‐177 trial revealed that pembrolizumab alone was superior to standard chemotherapy as first‐line treatment for patients with MSI‐high mCRC; a delayed diagnosis may therefore significantly affect treatment. Compared with MSS cases, a higher proportion of MSI‐high cases required matched normal samples (tissue or blood), and this was the main reason for the prolonged TAT.
Bando et al. showed that MSI testing based on the QMVR of tumor samples alone had a higher concordance rate than the conventional method with paired normal samples in a Japanese cohort. However, owing to the low prevalence of MSI‐high mCRC, only 11 such cases were included in their study; thus, the application of QMVR‐based MSI testing with mononucleotide markers in MSI‐high cases must be further investigated. MMR–IHC analysis is an effective and feasible alternative for shortening the TAT in MSI‐high cases. Patients with dMMR tumors diagnosed by local MMR–IHC analysis were enrolled in pivotal clinical trials such as the KN‐158, KN‐164, and KN‐177 studies. However, MMR–IHC analysis is not universally available and was approved in Japan only in September 2022. MMR–IHC analysis could address the limitations of PCR‐based MSI testing, particularly the TAT, although PCR‐based MSI testing might still be useful in cases that cannot be diagnosed via MMR–IHC analysis alone. Depending on demand, the combination of MSI testing and MMR–IHC analysis could be feasible in clinical practice.

MSI testing is most commonly conducted to determine ICI indications, but it can also be used as a screening tool for Lynch syndrome. However, of the 50 patients with MSI‐high tumors in our study, 16 did not consult hereditary tumor experts, most often because of death or loss to follow‐up after disease progression (n = 6). In addition, 11 of the 34 patients who were referred to the Clinical Genetic Department declined germline testing. This may be due to the cost of the procedure: the National Insurance System does not cover genetic testing for cancer prevention, the result has no effect on the therapeutic strategy itself, and some patients believed that genetic testing was not cost‐effective. To improve the diagnosis of Lynch syndrome via MSI testing, the cost of testing should be reduced. However, medical costs in the aging population of Japan are continually rising, making such a reduction unlikely, and the low uptake of genetic testing itself argues against a price cut; substantial increases in uptake are therefore unlikely in the near future. A steady effort to explain the need for such testing over time is important and is the most feasible option. Delayed MSI testing or a long TAT may also have affected the results, and the timing of MSI testing is key from the viewpoint of Lynch syndrome screening: for patients with severe cancer‐associated symptoms, attention to future cancer risk is difficult to prioritize, so early MSI testing could enhance access to genetic testing and counseling. Nevertheless, we could not determine the reason for non‐consultation from the medical records in five cases; moreover, we might have paid inadequate attention to familial or hereditary cancers in daily practice. Clinicians could more proactively provide patients with MSI‐high tumors the opportunity to undergo genetic testing and counseling. A better understanding of cancer prevention and health care among patients' relatives is also important to facilitate the successful growth of genomic medicine.

Our study has several limitations. First, the number of patients with MSI‐high tumors was relatively small.
Owing to the low incidence of MSI‐high tumors and the single‐institution design, most types of tumors for which ICIs are indicated regardless of MSI status, including gastric, urothelial, and renal cell lesions, were largely excluded from MSI testing. There were also some discrepancies in specimen quality between MSS and MSI‐high cases, and because of the small number of cases, we could not eliminate the effects of confounding factors on the TAT. Second, this study was conducted at a single institution in Japan; the fixation protocol was specific to our institution, and specimen selection was at the discretion of the attending pathologists. Third, MSI testing was outsourced, so the TAT included the shipping time from our laboratory to the inspection company as well as reporting delays. The TAT in this study was therefore slightly longer than would be expected for in‐house testing, although the TAT still differed according to MSI status.

In conclusion, this real‐world experience of more than 1000 tests demonstrates the versatility and reliability of MSI testing using different types of tumor samples in clinical practice. However, the TAT may be affected by specimen quality and MSI status, and a prolonged TAT can delay treatment in patients with MSI‐high tumors. Increasing the number of methods available for determining MSI status is a potential solution to issues such as the limited availability of optimal specimens and the TAT. Furthermore, awareness of the importance of hereditary tumors among clinicians is essential for the successful growth of precision oncology.
Izuma Nakayama: Conceptualization (lead); data curation (lead); formal analysis (lead); investigation (lead); methodology (lead); project administration (lead); writing – original draft (lead); writing – review and editing (lead). Eiji Shinozaki: Conceptualization (equal); project administration (supporting); supervision (supporting); writing – review and editing (supporting). Hiroshi Kawachi: Conceptualization (supporting); data curation (supporting); investigation (supporting); writing – review and editing (supporting). Takashi Sasaki: Investigation (supporting); writing – review and editing (supporting). Mayu Yunokawa: Investigation (supporting). Junichi Tomomatsu: Investigation (supporting). Takeshi Yuasa: Investigation (supporting); writing – review and editing (supporting). Satoru Kitazono: Investigation (supporting). Kokoro Kobayashi: Investigation (supporting). Keiko Hayakawa: Investigation (supporting). Arisa Ueki: Data curation (supporting); investigation (supporting); writing – review and editing (supporting). Shunji Takahashi: Project administration (supporting); supervision (supporting). Kensei Yamaguchi: Project administration (lead); supervision (lead).
The authors have no conflict of interest to declare.
This study was approved by the ethics committee of the Cancer Institute Hospital of the Japanese Foundation for Cancer Research (JFCR) in Tokyo, Japan (approval no. 2020–1229) and was conducted in accordance with the tenets of the Declaration of Helsinki (1964) and its later amendments.
Figure S1. Figure S2. Table S1.
|
Mixed reality navigation training system for liver surgery based on a high‐definition human cross‐sectional anatomy data set
|
95f20b09-f49e-4875-bd4e-58e3bfe35e05
|
10134360
|
Anatomy[mh]
|
INTRODUCTION Clinically, liver-related diseases are common, and primary liver cancer is the second most common cause of cancer-related death worldwide. Comprehensive treatment based on surgical resection of the tumor is the most effective approach for liver cancer. However, hepatic surgery is challenging because of the complexity and variation of the intrahepatic blood vessels and bile duct branches, and the close relationship between tumors and these vessels and ducts. Imaging is the routine method for diagnosing liver cancer and assessing treatment. Computer three-dimensional (3D) reconstruction technology is now gradually maturing, spanning definitive diagnosis, preoperative surgical planning, intraoperative navigation training, and even "last-minute simulation", , which helps doctors intuitively grasp fine anatomical details. Since the Visible Human Project (VHP) appeared in 1995, several studies on 3D reconstruction of the virtual liver based on tomographic anatomical datasets have been conducted. , , , , Advances in sectional anatomy are reflected in the maturity of specimen preprocessing techniques, the accuracy of milling, camera resolution, and improvements in the software and hardware for image registration, segmentation, and reconstruction. Thus, a 3D virtual liver system reconstructed from cross-sectional anatomical datasets can recreate a complete and meticulous model, providing a realistic anatomical and morphological basis for precise surgery. Moreover, 3D printing, which requires no mold and has short production cycles, is particularly suitable for the rapid delivery of complex, customized medical products. Owing to their high fidelity and consistency, printed models are well suited to helping surgeons understand the configuration of complex organs and structures. Solid 3D-printed models can be used as tools for preoperative communication, visualization of complex structures, and surgical rehearsal and planning. This may reduce surgical risks and improve surgical treatment outcomes. Although 3D printing can help simulate surgery and improve its accuracy, it cannot show the course of the intrahepatic vasculature and bile ducts or the location of the lesion in real time during the operation. Mixed reality (MR) is an emerging holographic imaging technology that enables interaction between the virtual world and reality. It blends the virtual and physical worlds to enhance the user's sense of reality; because of this blending, MR is also called hybrid reality or extended reality. In this form of virtual reality, users can interact with both the physical and virtual worlds simultaneously. , , Combining 3D printing with mixed reality technology helps hepatobiliary surgeons bring virtual data onto the operating table accurately and in real time. The aim of this study was to establish a novel mixed reality navigation training system and to investigate its feasibility and application value in hepatic surgery [Video ]. The purpose of training with our system is to make surgeons more proficient, in the intraoperative environment, at using mixed reality technology to support their judgments and to interpret and act upon any apparent error during surgery; such judgment requires a combination of factors, including prediction of tissue deformation and the use of advanced visualization algorithms to help surgeons identify overlay error.
MR technology can help physicians achieve precise positioning during surgery; however, it involves complex operations, and physicians' limited proficiency in operating MR glasses restricts the spread of this technology. Both 3D printing and MR technology can help physicians position structures accurately during surgery. Our system gives doctors an opportunity to validate imaging data directly on the patient's body in a simulated operation, which is of great interest to liver surgeons and is believed to support precise preoperative planning, accurate intraoperative identification, reduction of liver damage, and improvement of traditional liver surgery training methods (face validity, content validity, or construct validity). ,
MATERIALS AND METHODS 2.1 Acquisition of high-definition two-dimensional cross-sectional images The specimen used in this study was the fresh-frozen cadaver of an adult female (50 years old, height 160 cm, weight 62 kg, in good general condition, without underlying diseases). Magnetic resonance imaging confirmed the integrity of the liver and surrounding structures, without decomposition or liquefaction. After pretreatment, the blood vessels were exposed, and vascular intubation and irrigation of the large blood vessels were performed. Arterial and venous fillers were made with gelatin plus red dye or blue dye, respectively. With the specimen in the supine position, a special embedding box was filled with 5% gelatin solution, and the assembly was stored in a −25°C freezer for 2 weeks until completely frozen. With the refrigeration unit set to −25°C and four LED light sources kept continuously on, a computer-numerical-control milling machine (Hanchuan XK2408B, China) was used to mill the specimen layer by layer from foot to head. Using the closed-loop computer-numerical-control operating system, the milling thickness was set to 100 μm. During milling, the RENCAY scanning system (Rencay 16 k3 Scanback) was used to scan and photograph each layer at a resolution of 13,000 × 8000 pixels. Through this large-format mobile scanning system, the milled sections of the specimen were transformed into a two-dimensional (2D) dataset. This study was approved by the Ethics Committee of Basic Medical Sciences, Shandong University (IRB No. ECSBMSSDU2018-1-050 and Ethics Committee of Scientific Research of Shandong University Qilu Hospital IRB No. KYLL-2022(ZM)-749). 2.2 Image processing and 3D model reconstruction The images in the 2D dataset were preprocessed by the digital human specimen sectional sequence image processing system (Shandong Digihuman company), including color calibration, deicing, brightness registration, and spatial registration (Figure ). The liver and duct systems on each 2D image were then manually labeled. Each image was set up with five groups of paths. The liver shape group marked the main outline of the liver at each level. The duct system was divided into (1) the hepatic vein group (including the inferior vena cava and the intrahepatic branches of the hepatic veins), (2) the hepatic portal vein group and its branches, (3) the proper hepatic artery and its branches, and (4) the hepatic duct group (including the gallbladder, cystic duct, common bile duct, common hepatic duct, and their branches). We manually segmented the liver shape and intrahepatic structures one by one using Adobe Photoshop CC 2018 (version 19) (Adobe Inc.). After data segmentation was completed, the digital human specimen sectional 3D reconstruction system (Shandong Digihuman company) was used to extract the manually segmented structural paths according to their path components, and the 3D surface reconstruction was then generated. The reconstructed files were in PLY format. The model files were imported into MeshLab V2016.12 software for smoothing and simplification of the model surface, and the interactive 3D model was then constructed. Based on the results of the 3D reconstruction, the structural composition, 3D morphology, and adjacent relationships of the liver and its duct systems were observed and studied by sectional anatomy and 3D morphology. 2.3 3D printing of liver model The normal model data were converted into STL format readable by the 3D printer. A 3D printer (Sailner, model J401Pro, Zhuhai Sailner Company) was used to print a 1:1 simulation model.
After setting the structural color of each part, we printed the model with PolyJet photopolymer (RGD720). The material used for 3D printing was a photosensitive resin developed and supplied by the Medical Independent Research and Development Center of Shanghai Black Flame Medical Technology Co., Ltd. 2.4 Liver hybrid reality navigation training system The 3D-reconstructed liver data were imported into Microsoft HoloLens (Microsoft, USA) to construct a liver hybrid-reality surgical simulation system (Anhui Ziwei King Star Digtal S&T Co., Ltd./VisionTeke Medical Imaging Systems Co., Ltd). Microsoft's head-mounted holographic glasses have spatial positioning and motion-capture gesture-operation functions, which enable automatic contour-line tracking and attachment fusion with the solid printed model. Automatic tracking and fusion are computed from the contour lines, allowing the virtual model to be automatically recognized and snapped onto the physical model, while gesture manipulation allows the virtual model to be moved, rotated, and zoomed in and out. A volunteer stood with the model liver placed on the right upper abdominal quadrant. The volunteer only needed to hold the 3D-printed model in a fixed position; the person wearing the HoloLens headset used gestures to control the display and positioning of the virtual model within the glasses, either starting automatic virtual–real fusion or manually moving the virtual model to align and fuse it with the model held by the volunteer, with multi-directional, multi-angle adjustment. Subjects wore the HoloLens glasses and verified the positions of the intrahepatic structures on the liver model under the guidance of the HoloLens navigation display. Because the transparency of the PolyJet photopolymer was high, subjects could verify the overlap of important ducts in real time and maximize matching accuracy, thereby training for intraoperative navigation through the HoloLens glasses. 2.5 Evaluation of its application in clinical teaching and training Participants were recruited to take part in the "Mixed Reality Navigation in Liver Surgical Anatomy: Learning and Training System". After the training, a questionnaire was completed by each participant. The recruitment criteria were: (1) currently specializing in abdominal surgery, liver surgery, and/or the treatment of liver diseases; (2) currently a medical practitioner in this specialty; and (3) located in Jinan, Shandong Province. An open questionnaire was designed for the participants. Its three main sections covered: (1) basic information about the participants; (2) perceptions of clinical anatomy learning based on the 3D-printed liver model; and (3) perceptions of the mixed reality navigation training system for liver surgery. The completed questionnaires were used to assess the training effect and the acceptance of mixed reality technology for liver surgery. The participants recruited for the "Mixed Reality Navigation in Liver Surgical Anatomy Learning and Training System" were all clinically experienced liver surgeons, and the evaluation of the 3D-printed models drew on the same diverse group of liver surgeons; it did not differ from the hybrid-reality navigation training system. The training covered the basics of the HoloLens II, its setup and functions, and the use of HoloLens II navigation to verify the locations of intrahepatic structures on a 3D-printed liver model. It was conducted in hospital settings, with the questionnaire completed after the training.
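The segmentation and surface-reconstruction steps above were carried out with proprietary Digihuman software; for readers who wish to reproduce the principle with open-source tools, a minimal sketch is given below. It assumes the manual segmentation has already been exported as one binary mask image per milled section; the file names, in-plane pixel pitch, and smoothing settings are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: stack per-slice binary masks into a volume, extract the
# iso-surface with marching cubes, and export a PLY mesh as in Section 2.2.
# File names and in-plane spacing are illustrative assumptions.
import glob
import numpy as np
import trimesh
from skimage import io, measure

# Load the segmented slices (one binary mask per 100 um milled section).
mask_files = sorted(glob.glob("masks/liver_*.png"))
volume = np.stack([io.imread(f) > 0 for f in mask_files]).astype(np.uint8)

# Voxel spacing in mm: 0.1 mm slice thickness (z); the x/y pitch is assumed.
spacing = (0.1, 0.05, 0.05)

# Extract the surface at the mask boundary.
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5, spacing=spacing)

# Build the mesh, smooth it lightly (the study used MeshLab for this step),
# and export in the PLY format produced by the reconstruction system.
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
trimesh.smoothing.filter_laplacian(mesh, iterations=5)
mesh.export("liver_surface.ply")
```

The same script would be run once per path group (liver outline, hepatic veins, portal vein, proper hepatic artery, and bile ducts) to obtain the five separate model files described above.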
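The "automatic contour line tracking and attachment fusion" described in Section 2.4 is handled inside the HoloLens application, but its core step, snapping the virtual model onto the tracked physical print, amounts to estimating a rigid transform between corresponding 3D points. A minimal sketch of that step (the standard Kabsch/Procrustes solution) is shown below; the point sets are hypothetical stand-ins for the correspondences a real contour tracker would supply.

```python
# Minimal sketch: least-squares rigid alignment (Kabsch algorithm) between
# corresponding 3D points sampled on the virtual model and on the tracked
# physical model. The inputs are hypothetical; a real system would obtain
# them from the headset's contour tracker.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ src + t - dst||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy example: recover a known pose from noiseless correspondences.
rng = np.random.default_rng(0)
model_pts = rng.normal(size=(50, 3))             # points on the virtual liver
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
world_pts = model_pts @ R_true.T + np.array([0.1, -0.05, 0.3])

R, t = rigid_align(model_pts, world_pts)
print(np.allclose(R, R_true), np.allclose(t, [0.1, -0.05, 0.3]))  # True True
```

Because this alignment is rigid, it cannot account for the intraoperative deformation discussed later, which is why non-rigid registration is flagged as future work in the Discussion.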
RESULTS 3.1 2D images We successfully obtained high-resolution 2D image datasets of the liver sections (100 μm). Taking the highest and lowest points of the liver as boundaries, the distance between them was 15.4 cm, and a total of 1540 sections were obtained (Numbers 10480–12020). The magnified images were clear and undistorted, the structures were easy to identify, and the color restoration was accurate. The collection process was continuous, with no defective or missing cross-sections. The four types of ducts, in their respective colors, were clearly discernible (Figure ). 3.2 3D reconstruction After manual segmentation of the 1540 images, we used the Digihuman 3D Reconstruction System to process the five groups of paths and obtain five 3D model files: the liver, hepatic vein, hepatic portal vein, proper hepatic artery, and hepatic duct groups. After the five 3D model files were modified with MeshLab software, each group of models could be displayed separately or together in different colors, zoomed arbitrarily, and rotated and observed from different angles. The visual structure of the 3D reconstruction was distinct and clear. The courses and branches of the ducts faithfully reproduced the positional and spatial relationships between the ducts and the liver (Figure ). The left and right branches (lobar and segmental arteries), origin, and distribution of the hepatic artery were displayed clearly (Figure , Figure ). Observation of the reconstructed 3D structure revealed early branching of the proper hepatic artery at five sites. As shown in Figure , around the branch of the right superior segment of the hepatic portal vein and the caudate lobe branch, there were 2–3 proper hepatic artery branches around a single hepatic portal vein branch. The shape of the hepatic duct accorded with the common morphology (Figure , Figure ). In addition, the hepatic portal vein was consistent with the normal type in Couinaud's classification of the hepatic portal vein (Figure , Figure ). Moreover, the branches, confluence, distribution, and formation of the hepatic veins were also very distinct (Figure ). Six accessory right hepatic veins drained directly into the inferior vena cava (Figure , Figure ). 3.3 3D printed results After entering the data into the Sailner J401 Pro 3D printer, we printed a full-color transparent liver weighing 2743 g in 12 h and 42 min. The model measured 15.4 cm in height and 18.9 cm in width, in accordance with the original size of the specimen. The internal duct system was clearly discernible (Figure ). Data segmentation was performed manually by two researchers and took 15,400 min (256.67 h; 10 min per slice for a total of 1540 slices). The 3D printing material cost $0.50/g, and the model weighed 2743 g, for a total cost of US$1372. Model printing took 12 h and 42 min, and post-printing processing took 6 h. 3.4 Liver hybrid reality navigation training system The 3D-reconstructed liver data were imported into the Microsoft HoloLens II (Microsoft, USA) and combined with the 3D-printed model, and the intraoperative navigation simulation was performed. We invited clinicians to wear the HoloLens II glasses and use HoloLens II navigation to verify the locations of intrahepatic structures on the 3D-printed liver model. This improved their surgical navigation skills with the HoloLens II glasses (Figure ).
3.5 Application in clinical teaching and training Our mixed reality navigation training system combines 3D printing and MR to simulate surgical scenarios for training and to improve physicians' proficiency with MR technology. According to the recruitment criteria, 26 clinicians participated in the training of the "Mixed Reality Navigation in Liver Surgical Anatomy: Learning and Training System". After the training, we conducted a questionnaire evaluation. All participants were licensed liver surgeons (Table ). Among the questions on clinical anatomy learning based on the 3D-printed liver model, we first set a test question asking how many accessory hepatic veins were present in the model liver. The rate of correct answers was very low; only 2 senior doctors answered correctly. There were 6 accessory hepatic veins in our model, of which the two larger ones, draining the right posterior lobe, were obvious, while the remaining 4 were in the right anterior and left anterior regions. In the remaining questions, all participants agreed that understanding the hierarchical and spatial relationships of intrahepatic structures (including blood vessels and tumors) is very important when planning surgery for liver lesions. The complete outcome of the survey is given in Table . In the questionnaire on the liver surgery mixed reality navigation training system, 77% of the participants mastered the operation of the HoloLens glasses through the training component. However, 30.77% of the participants developed dizziness and disorientation, and 15.38% had a headache. All participants considered mixed reality technology useful for preoperative planning, simulation, and intraoperative navigation, and 84.62% considered it useful for postoperative evaluation. 84.62% of participants hoped to see mixed reality technology promoted in future liver surgery, 92.31% evaluated the hybrid reality navigation training system for liver surgery favorably, and 76.92% mastered the surgical application of this technology through our training system. All participants believed that the liver surgery hybrid reality navigation training system was a good tool for specialized training in complex surgical techniques, and they hoped to continue using our system for further training. In their view, it can clarify anatomical relationships (32%), improve the radical operation rate (12%), improve operative efficiency (23%), reduce operative risk (23%), and reduce the postoperative recurrence rate (10%) (Figure ). Participants also suggested areas for improvement: they wanted the system to be more stable and easier to use; the glasses were too heavy; changes in position during the operation shifted the image accordingly, prompting the suggestion to fix the image position; and they proposed designing a training system that integrates laparoscopy with the HoloLens glasses.
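As a quick consistency check on the questionnaire figures above, the reported percentages convert back to whole-number counts when computed over all 26 participants (an assumption, since the denominators are not stated explicitly):

```python
# Consistency check: convert the reported questionnaire percentages back to
# participant counts, assuming each was computed over all n = 26 respondents.
n = 26
reported = {
    "mastered HoloLens operation": 76.92,   # stated as 77% in the text
    "dizziness/disorientation":    30.77,
    "headache":                    15.38,
    "useful postoperatively":      84.62,
    "good overall evaluation":     92.31,
}
for item, pct in reported.items():
    count = pct * n / 100
    print(f"{item}: {pct}% -> {count:.2f} of {n} participants")
# Output: 20.00, 8.00, 4.00, 22.00, 24.00 (all whole numbers, as expected).
```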
DISCUSSION In 1997, Fasel et al. selected 148 cross-sectional images with a slice thickness of 1 mm from the VHP dataset to reconstruct the liver and parts of the trunks of the intrahepatic ducts. In 2005, Fang et al. selected 875 cross-sectional images with a thickness of 0.2 mm from the dataset of "Digital Virtual Chinese Female I". In 2009, Lou et al. selected 233 coronal images with a thickness of 0.6 mm from a coronal dataset. In 2009, Chen et al. selected 178 cross-sectional images with a thickness of 1 mm from the Chinese digital human dataset. In 2009, Shin et al. selected 277 cross-sectional images with a thickness of 0.2 mm from the Visible Korean Human dataset to reconstruct the liver and intrahepatic ducts. Over time, the accuracy and precision of these reconstructions have steadily increased. First, the datasets obtained have improved, as reflected in thinner layer thickness, higher resolution, and more accurate inter-layer registration. Second, with the progress of segmentation and reconstruction software, software functions have become more powerful, efficiency has improved, and new functions continue to appear, accompanied by the rapid development of computer hardware performance. We compared our results with those of the previous reconstructions; a tabular comparison between the data parameters obtained in this study and previous studies on liver dissection is provided (Table ). To ensure good results, gelatin with red or blue dye was used to make arterial or venous fillers for perfusion, so that the lumina of the hepatic veins were blue and the proper hepatic artery was red. The hepatic ducts were yellow because of the presence of bile, while the hepatic portal vein had no special color. In this way, all four kinds of intrahepatic ducts could be distinguished. Although perfusion could not guarantee that all small branches were reached, continuous image playback with real-time tracking was used to ensure the identification of the various ducts. The CNC milling machine used has the advantages of stable operation and high accuracy, ensuring a uniform dataset layer thickness of 0.1 mm. Milling proceeded from foot to head, so the cross-sections obtained represent the lower surface of the specimen, consistent with the orientation of clinical imaging examinations. During image segmentation, we first tried threshold-based segmentation, but its accuracy was insufficient: small ducts were difficult to identify automatically, the error rate was high, and the four systems were easily confused. To obtain better results and accuracy, we manually segmented all 1540 cross-sectional images, a heavy workload. Because the distance between adjacent layers was only 100 μm and the corresponding structures changed little, we copied the paths of the previous layer to the next layer and then made minor adjustments; this helped us identify small ducts and greatly reduced the workload. The Digihuman 3D Reconstruction System is dedicated digital anatomy software independently developed by the Shandong Digihuman company based on OpenCV. It optimizes sectional anatomical segmentation and ensures the speed and stability of the reconstruction process. The 3D model improves the teaching of anatomy and surgical residents' understanding of surgical anatomy. A better understanding of liver anatomy may contribute to laparoscopic or open hepatectomy.
, Virtual reality produces an interactive 3D environment that makes the experience realistic and immersive. Augmented reality provides enhanced virtual reality rendering, giving surgeons essential information to optimize navigation during complex operations and to reduce intraoperative and postoperative complications. , Mixed reality (MR) technology, a new technology built on 3D applications, overlays anatomical structures directly on the target organs and has high potential to improve the movement and perception of surgeons in open visceral surgery. , In abdominal surgery, the liver deforms, changing in form, position, and size with respiration, pneumoperitoneum, body position, tissue dissection, traction, and iatrogenic manipulation, which requires the surgeon to make judgments based on the actual intraoperative situation. , , , , The purpose of training with our system is to make surgeons more proficient at using mixed reality technology to support their judgments and to interpret and act upon any apparent error during surgery; such judgment requires a combination of factors, including prediction of tissue deformation and the use of advanced visualization algorithms to help surgeons identify overlay error. A previous study emphasized the advantages of using personalized 3D liver models to guide hepatectomy. We believe that the virtual effect of MR surgery is closely related to the image scanning parameters: to establish a high-precision model, high-precision imaging data are needed, otherwise the processed data are of little value for clinical guidance. Compared with 3D reconstructions of clinical imaging data, in which complex details are absent, our results provide a comprehensively detailed model of intrahepatic duct distribution. The reconstructed model can be displayed through 3D printing, virtual reality, augmented reality, and MR. MR exchanges information quickly through the connection between reality and the virtual world, enhancing the realism of the experience. , Three basic attributes of MR technology determine its unique advantages in clinical teaching: immersion, interaction, and imagination. In the field of abdominal surgery, mixed reality technology has gradually been applied and has achieved good clinical results. One study applied mixed reality technology in laparoscopic nephrectomy and achieved good results in surgical planning, intraoperative navigation, remote consultation, teaching, and doctor–patient communication. Our study displays the treatment process of liver tumors intuitively as a 3D spatial structure through MR technology. Our training system combines 3D printing technology and mixed reality technology to train surgeons in the operation of mixed reality systems. This greatly reduces the difficulty of identifying the complex spatial structure of a tumor, enhances students' understanding of the operative plan, and significantly shortens the learning cycle. When mixed reality technology was applied to the clinical teaching of liver surgery, students showed active interest and greater initiative. The mixed reality technique has been applied to liver tumor operations, and its value has been described in terms of preoperative planning, teaching and training, and the intraoperative protection of important anatomical structures (Figure ). , We used HoloLens glasses combined with the 3D model to build the training system.
The preoperative reconstruction image is projected directly onto the model, so that the operator can accurately predict the course of the hepatic ducts, the location of the tumor, and their relationships, and use this as the basis for resection. The main advantages of our approach are that it can accurately locate hepatic vessels and tumors before and during the operation, support preoperative simulation, and help determine the surgical incision and simulate the scope of resection, so as to achieve ideal surgical results. We are working to reconstruct the organs and blood vessels around the liver to form a virtual abdominal anatomy system rich in detail, and to combine 3D printing technology with mixed reality technology to improve the mixed reality navigation training system for liver surgery and provide better guidance for operations. This system is a training system based on the normal anatomy of the liver, and the training is intended to enable surgeons to achieve high levels of precision, become proficient in the operation of mixed reality technology, and avoid complications. Mixed reality training is a worthy alternative for providing 3D information to clinicians, with potential application in surgery. This conclusion was based on a questionnaire and its evaluation. Surgeons with extensive operative experience indicated in the questionnaire that this technology is useful in liver surgery and would help in precise preoperative planning, accurate intraoperative identification, and reduction of hepatic injury. , MR technology can help physicians with precise positioning during surgery; however, surgeons' limited proficiency in operating MR glasses has restricted the diffusion of this technology. In the future, we will reconstruct data and 3D print 1:1-scale models based on CT data of liver tumors for use in our training system. In actual surgery, it is difficult to render the finer vessels from CT data; our model, reconstructed from tomographic anatomical data, allows doctors to see more detail than actual CT, and we believe trainees will benefit in different ways. In future studies, we plan to improve registration accuracy, and non-rigid registration algorithms will be required to address intraoperative anatomical deformation. , , , , Our study has several limitations. The 3D reconstruction and modeling of the cross-sectional anatomical dataset were based on a single cadaver. The dataset was segmented manually, which is time consuming, and the 3D-printed model is costly. There are also potential inaccuracies at each stage of model fabrication. The training of the "MRNSALTS" system was evaluated by a group of only 26 clinicians, which needs to be enlarged.
CONCLUSION This study shows that a higher-quality cross-sectional anatomical dataset can support the reconstruction of detailed 3D models. A hybrid reality navigation training system for liver surgery was created by combining 3D printing with HoloLens holographic mixed reality technology. Mixed reality training is a worthy alternative for providing 3D information to clinicians, with possible application in surgery. This conclusion was based on a questionnaire and its evaluation. Surgeons with extensive operative experience indicated in the questionnaire that this technology might be useful in liver surgery and would help in precise preoperative planning, accurate intraoperative identification, and reduction of hepatic injury.
Muhammad Shahbaz: Conceptualization (lead); data curation (lead); formal analysis (equal); investigation (equal); methodology (lead); resources (equal); software (equal); validation (equal); visualization (equal); writing – original draft (lead); writing – review and editing (lead). Huachun Miao: Conceptualization (lead); data curation (equal); formal analysis (equal); investigation (equal); methodology (lead); software (lead); writing – original draft (equal); writing – review and editing (equal). Zeeshan Farhaj: Formal analysis (equal); investigation (equal); methodology (equal); software (equal); writing – original draft (equal); writing – review and editing (equal). Xin Gong: Conceptualization (equal); data curation (equal); formal analysis (equal); investigation (equal); methodology (equal); software (equal); writing – original draft (equal). Sun Weikai: Data curation (equal); formal analysis (equal); investigation (equal); methodology (equal); resources (equal); software (equal); writing – original draft (equal). Wenqing Dong: Data curation (equal); formal analysis (equal); investigation (equal); methodology (equal); resources (equal); software (equal); writing – original draft (equal). Niu Jun: Conceptualization (equal); methodology (equal); project administration (equal); resources (equal); software (equal); supervision (equal); validation (equal); writing – original draft (equal). Liu Shuwei: Conceptualization (equal); data curation (equal); funding acquisition (equal); methodology (equal); project administration (equal); resources (equal); software (equal); supervision (equal); validation (equal); writing – original draft (equal); writing – review and editing (equal). Dexin Yu: Conceptualization (equal); data curation (equal); formal analysis (equal); funding acquisition (equal); methodology (equal); project administration (equal); resources (equal); software (equal); supervision (equal); validation (equal); writing – original draft (equal); writing – review and editing (equal).
This work was supported by the Major Scientific and Technological Innovation Projects of Shandong Province (No. 2015ZDXX0201A02, No. 2019JZZY020106), the National Natural Science Foundation of China (No. 81771888), and the Shandong Provincial Natural Science Foundation of China (ZR2017MH006).
The authors declare no conflict of interest.
This study was approved by the Ethics Committee of Basic Medical Sciences, Shandong University (IRB No. ECSBMSSDU2018‐1‐050 and Ethics Committee of Scientific Research of Shandong University Qilu Hospital IRB No. KYLL‐2022(ZM)‐749).
Figure S1. Figure S2. Figure S3. Figure S4. Figure S5. Video S1.
|
A rare case of ocular and testicular T-cell lymphoma in a hermaphrodite koi carp (Cyprinus carpio)
|
fb7f6ecc-5f51-4b1c-a727-96f4c6a09d86
|
10134520
|
Anatomy[mh]
|
The aquaculture industry's most important sectors are food and ornamental fish cultivation . Although ornamental fish culture is a minor component of the global fish trade , it has evolved into one of the most important aspects of aquaculture and is recognized as one of the most profitable industries in many countries around the world . Cyprinids are one of the world's best-known and largest families of East Asian freshwater fish . The koi carp ( Cyprinus carpio Linnaeus 1758), a freshwater ornamental fish, is a coloured variety of the common carp ( C. carpio ) that originated in Japan and has been bred in Iran since 2002 . Neoplasia can occur in both lower and higher vertebrates . Fish neoplasms are classified according to the mammalian tumour classification system . Lymphatic neoplasia may occur in various forms, such as lymphoma, lymphosarcoma, lympholeukemia, and plasmacytoid leukemia . According to the veterinary literature, lymphomas are common in dogs, cats, and pigs, but comparatively rare in horses and other domestic species . Lymphoma, defined as a malignant tumour of lymphoid tissue , has been found in a number of fish species including northern pike ( Esox lucius ) , Japanese medaka ( Oryzias latipes ) , coho salmon ( Oncorhynchus kisutch ) , black bullhead ( Ameiurus melas ) , rainbow trout ( Oncorhynchus mykiss ) , flower horn (hybrid cichlid) , gold crossback arowana ( Scleropages formosus ) , Atlantic stingray ( Hypanus sabinus ) and captive white catfish ( Ameiurus catus Linnaeus) . In contrast to the epizootics of lymphoma in Esocidae and Salmonidae , lymphoma occurs only rarely in members of the Cyprinidae, and the current study presents the clinical, histological, and immunohistochemical characteristics of the first T-cell lymphoma from a hermaphrodite ornamental koi carp ( C. carpio ) in Iran.
In October 2020, a 2-year-old koi carp ( Cyprinus carpio ) was referred to the Ornamental Fish Clinic, Faculty of Veterinary Medicine, University of Tehran, because of a large ocular mass with an ulcerated surface, extreme exophthalmia, and right-eye hemorrhage (Fig. a). On clinical inspection, the mass was soft on palpation. Apart from the extreme exophthalmia, no abnormal behavioral changes were observed in the affected fish. The mass had been noticed one month before submission and showed progressive growth. On gross examination, the koi carp measured 25 cm in total body length and weighed 180 g. Although the affected fish had been kept with 4 other koi carp in the same aquarium, no mass had been observed in the tank mates. Wet mounts of skin, gills, and feces were prepared and examined by light microscopy (E600; Nikon). The fish was then anesthetized with the aquatic anesthetic "PI222" (100 ml/l; the main active ingredients of PI222 are eugenol, carvacrol, and eugenol acetate) (Pars Imen Daru, Iran), administered by immersing the animal in a PI222 solution. Enucleation was performed under anesthesia, given the condition of the fish and its need for surgical intervention. Hemostasis was achieved using cautery. The orbital socket was left open to heal after surgery . Tetracycline (5 mg/L) was added to the tank water postoperatively and was repeated on day 3 after a 50% water change. After the initial treatment, 50% of the tank water was changed on days 6 and 9 . Following the enucleation, the wound and the fish's behavior were monitored until day 221. The overall condition of the fish improved after surgery, the process of skin formation began slowly (Fig. b), and complete healing of the right eye was observed 142 days post-surgery (DPS) (Fig. c). Exophthalmia in the left eye (Fig. b) was noticed 57 days after enucleation of the right eye. Despite antibiotic treatment, the exophthalmia of the left eye progressed (Fig. c). After 210 days, clinical signs including lethargy, anorexia, and imbalance in swimming were observed. At 211 DPS, the fish was found moribund. Because of the poor prognosis, and with the owner's consent, the fish was euthanized by an overdose of PI222. The euthanasia procedures were in accordance with AVMA guidelines for animal euthanasia . Necropsy was performed under sterile conditions. Aerobic and anaerobic bacterial cultures from the liver, kidney, and masses were incubated at 25 °C. For histological examination, all masses and internal organs were dissected and fixed in 10% neutral buffered formalin before being dehydrated in an ethanol series and embedded in paraffin with a paraffin tissue processor and paraffin dispenser. Several 5 μm sections were cut and stained with haematoxylin and eosin (H&E). In addition, immunohistochemical studies of the mass sections were performed using primary antibodies against CD3 and CD20: rabbit polyclonal anti-CD3 (T lymphocyte; Biocare) and rabbit polyclonal anti-CD20 (B lymphocyte; Thermo Fisher Scientific). Slides were counterstained with hematoxylin. Sections of the masses were scanned with a Plustek OpticLab H850 slide scanner and examined by light microscopy (E600; Nikon), and representative images were taken using an IDS UI-2250 microscope camera (IDS Imaging). No bacterial growth was observed on blood agar.
Also, no external or internal parasites were found on microscopic examination of the internal organs, fins, gills, and skin scrapes. The fish was a hermaphrodite, with both right and left testes and one ovary. A large mass attached to the left testis and small whitish nodules on the surface of the liver were clearly visible (Fig. d). On histopathological examination, the right ocular mass was hypercellular, with scant connective tissue. It was composed of sheets and clusters of densely packed, uniformly basophilic lymphoid cells involving the three tunics (fibrous, vascular, and neuroepithelial), the anterior and posterior chambers, the extraocular muscles, and the adipose tissue. These cells extended from behind the bony orbit, through the extraocular muscle layer and adipose tissue, into the eye globe and onto the cornea (Fig. a-d). At higher magnification, the corneal stroma (Fig. d), scleral cartilage (Fig. a & b), and sclera (Fig. c) were also infiltrated by neoplastic lymphocytes. In addition, multifocal hemorrhages (Fig. d) and invasion of basophilic neoplastic cells between the muscle fibers and adipose tissue (Fig. a & b) were observed in the sections. On microscopic examination, the neoplastic cells were round to ovoid, bordered by a narrow rim of pale eosinophilic cytoplasm with an indistinct margin. The nuclei were round, with multiple nucleoli. Anisokaryosis and anisocytosis were mild to moderate. Mitotic figures (Fig. c) numbered one to four per high-power field (mitotic count assessed in 2.37 mm 2 ). The left eye showed similar changes: an infiltrative, densely cellular neoplastic mass composed of round cells, resembling those seen in the right eye, affected various parts of the eye (Fig. d). Histopathologically, samples of the left gonad showed testicular tissue and an area of ovarian tissue. The ovarian component contained some follicles, and seminiferous tubules were observed in the testicular component; the gonad was therefore recognized as an ovotestis. The majority of the left gonad was occupied by sheets and clusters of densely packed neoplastic lymphocytes (Fig. a). These neoplastic cells had a small amount of eosinophilic cytoplasm, round nuclei, and multiple nucleoli, similar to those seen in the left and right eyes (Fig. b). Basophilic neoplastic cells were detected in blood vessels within the testicular mass (Fig. c & d), raising suspicion of systemic spread. Microscopic metastases with morphologic features similar to those of the ocular and testicular tumors were observed in the liver (Fig. a & b). Despite the neoplastic alterations in the eye, gonad, and liver, no histopathological changes were observed in the hematopoietic tissues of the fish. Immunohistochemically, the neoplastic cells infiltrating the left and right eyes and the testicular mass were positive for CD3 (Fig. c & d) but negative for CD20. T-cell lymphoma was diagnosed based on the histopathological and immunohistochemical findings.
Cancer is a multistep process and a disease of the genome, arising from DNA alterations that disrupt gene structure or function . The tumor classification system in veterinary oncology is based on the WHO (World Health Organization) histological classification of tumors of domestic species . As in mammals, neoplasms of fish are classified clinicopathologically according to the histogenesis and the benign or malignant nature of the neoplasm . Lymphomas are the most common hematopoietic malignant neoplasms in humans and domestic animals . In human oncology, lymphomas of T-cell origin are considered more aggressive and to carry a poorer prognosis than B-cell lymphomas . Furthermore, as in dogs, the prevalence of T-cell lymphomas appears to be lower in fish than that of B-cell lymphomas. Generally, the thymus and kidney are described as the most common primary sites of neoplastic development . However, based on the location of lymphocytic cells in the early stages of the lesion, lymphomas may originate from other organs, such as the testis and eye, justifying the diagnosis of oculo-testicular lymphoma in the current study. The complementary diagnosis of lymphoma in this koi carp ( Cyprinus carpio ) was based on the light microscopic features and immunohistochemical characteristics of the neoplasm, as described in both mammals and poikilotherms. Histopathologically, the hypercellularity of the mass in scant connective tissue was consistent with the findings of Kieser et al. . The majority of the neoplastic cells were round to ovoid with eosinophilic cytoplasm, round nuclei, and multiple nucleoli, in agreement with the findings of Germann et al. . The mass consisted of sheets of densely packed basophilic lymphoid cells, as described by Corapi et al. . There was invasion of basophilic neoplastic lymphocytes into the corneal stroma, scleral cartilage, and sclera. The observation of several mitotic figures was consistent with the findings of Blazer and Schrank , Jung et al. , Corapi et al. , and Kasantikul et al. . In addition, there was mild-to-moderate anisocytosis and anisokaryosis, in agreement with the findings of Trope et al. . Contrary to the observations of Thompson and Bruno & Smail , metastases have previously been reported in fish lymphoma: liver metastasis of spontaneous stomach lymphoma has been reported in flowerhorn cichlids , and metastasis to different organs has been described in lymphoma of a Japanese medaka ( Oryzias latipes ) . The presence of morphologic features in the liver similar to those of the ocular and testicular masses confirmed metastasis in the current study. The presence of specific cytoplasmic immunolabelling for CD20 and CD79a suggested that the neoplastic cells of a lymphosarcoma in a captive bonnethead shark ( Sphyrna tiburo ) were most likely of B-lymphocyte origin; Manire et al. described the cross-reactivity of mammalian antibodies against the B-lymphocyte markers CD79a and CD20 in the bonnethead shark ( S. tiburo ). Immunohistochemically, the positive staining of the neoplastic cells infiltrating the left and right eyes and the testicular mass for CD3, and their negative staining for CD20, were consistent with the findings of Lakooraj et al. and Namazi et al. . These findings support the suggestion that immunohistochemical studies can be used as an additional method for diagnosing hematopoietic malignant neoplasms in some fish species.
Retroviruses are well established as a cause of lymphoma in many domesticated mammal species and are suspected of causing cutaneous lymphomas in pike and muskellunge . Although a viral etiology is suspected in a number of fish hematopoietic tumours (e.g., lymphoma and lymphosarcoma) , there have also been reports of chemically induced lymphosarcoma. Chen et al. revealed that a chemical carcinogen such as N-methyl-N'-nitro-N-nitrosoguanidine plays a key role in the progression of lymphosarcoma in channel catfish ( Ictalurus punctatus ). Also, Schultz and Schultz showed that 7,12-dimethylbenz(a)anthracene and diethylnitrosamine may be considered potential promoting factors for the development of lymphosarcoma in Poeciliopsis . Furthermore, Brown et al. established an association between the intensity of environmental contamination and the prevalence of lymphoma in pike ( Esox lucius ). Following observations on the ornamental fish farm, no further affected fish were discovered, and no infectious agent was detected in the current study. Therefore, further research is needed to determine the cause of the lymphoid proliferation and the precise promoting factors for the development of this tumor in fish. The final diagnosis of ocular and testicular T-cell lymphoma in the present study was based on the clinical signs and the morphology and texture of the tumour masses on macroscopic and microscopic examination. In addition, the histopathological and immunohistochemical findings corresponded to the characteristics of T-cell lymphoma.
|
Gynecological morbidity and treatment-seeking among older adult (aged 45–59) women in India
|
d23e727c-edad-473a-a90c-a2130f07d9a9
|
10134576
|
Gynaecology[mh]
|
Gynecological morbidity (GM) includes any condition, disease or dysfunction of the reproductive system that is not related to pregnancy, abortion or childbirth, but may be related to sexual behavior. Gynecological problems are major causes of illness and mortality worldwide, with women in Lower- and Middle-Income Countries (LMICs) bearing the majority of the disease load. Gynecological diseases make up 4.5% of the global disease burden, more than other global health concerns like malaria, TB, ischemic heart disease, and maternal diseases. Women's gynecological health needs are not limited to the reproductive years of their life. Women from LMICs experience GMs throughout their reproductive years and beyond, in part due to the limited medical care they receive during labor and delivery, combined with high parity. Moreover, the sexual health of older women is often considered taboo in many cultures, including India. Studies on sexuality at older ages in the Indian setting indicate that sexual activity remains prevalent among middle-aged heterosexual couples; nevertheless, public discourse on the subject is avoided to prevent unfavorable cultural attitudes. Women are at risk of hormonal changes, gynecological malignancies, and various genitourinary conditions as they move toward menopause and beyond. Changes in the vaginal lining can weaken innate protective mechanisms against infection among postmenopausal women, and older women with chronic pelvic infection and reduced immunity are vulnerable to different infectious diseases, including HIV. Evidence suggests women themselves may not seek care, often because they accept the physical discomforts associated with gynecological problems, menopause, and aging as natural. In India, the 2017 National Health Policy (NHP) envisages as its goal the attainment of the highest possible level of health and wellbeing for all at all ages, through a preventive and promotive health care orientation in all developmental policies and universal access to good quality health care services without anyone having to face financial hardship as a consequence. More specifically, the NHP targets enhanced provisions for reproductive morbidities and the health needs of women beyond the reproductive age group (40+). The India Strategy for Women, Adolescents and Child Health (I-WACH) builds on these policies to articulate a life-course approach to women's health. The life cycle approach to providing health services, including sexual and reproductive health (SRH) services, refers to offering services over the course of a client's life, making sure that women's SRH needs are met throughout their lives. Nevertheless, the SRH and rights of older people (a) continue to be a taboo topic and (b) attract little interest from researchers and healthcare professionals. Despite widespread agreement, the life-course approach to addressing SRH and rights has received minimal attention. Moreover, as India moves closer to Universal Health Coverage, it is important to assess whether policy initiatives to broaden women's health beyond maternal health and family planning have increased women's service utilization. In India, the prevalence of GM ranges between 43 and 92%.
Most of the literature on GM in the Indian context covers women of reproductive age (15–49 years); years of schooling, age, religion, caste, number of pregnancies, autonomy, mass-media exposure, economic status, and place of residence were found to be significant predictors of GM and treatment-seeking. However, the GMs of postmenopausal women have received minimal attention as far as policy focus and research are concerned, though there is some attention on their general health. The existing scanty evidence on GM and treatment-seeking of older adults in India is based on small-scale community-level studies. Against this backdrop, using a nationally representative sample, the present study estimates the prevalence and assesses the determinants and treatment-seeking of GM among older adult women (45–59 years) in India. The results will serve as a benchmark for assessing the reproductive health of women undergoing the premenopausal/menopausal transition in India.

Data
The study used data from the Longitudinal Ageing Study in India (LASI-Wave 1), 2016–2017. The International Institute for Population Sciences (IIPS), the Harvard T.H. Chan School of Public Health, and the University of Southern California collaborate to conduct the LASI, a multi-topic, nationally representative, large-scale survey. It offers crucial details on chronic illnesses, symptom-based illnesses, demography, functional and mental health, household economic status, healthcare utilization, health insurance, work, employment, and retirement, as well as life expectations, for participants 45 years of age and older and their spouses. The LASI is a nationally representative survey covering 72,250 individuals aged 45 and above and their spouses. The study adopted a multistage stratified area probability cluster sampling design to select the observation units, i.e., older adults aged 45 and above and their spouses, irrespective of age. Trained research investigators gathered the data using computer-assisted personal interviewing (CAPI). Only those respondents who gave oral/written consent were interviewed in the survey. The published national report provides detailed survey design, questionnaire, and quality control measures. The survey asked female respondents aged below 60 about GM and its treatment-seeking. Of the 18,717 surveyed women aged 45–59 years, information on GM was missing for 170, and thus data from 18,547 women were finally considered for analysis (Fig. ).

Outcome variables
The outcome variables used in this analysis were 'had any GM' and 'sought treatment for any GM.' In the survey, women below 60 were asked, 'in the last 12 months, have you had any of the following health problem(s)?'. Respondents reporting any of the enquired GMs, namely per vaginal bleeding, foul-smelling vaginal discharge, uterine prolapse, mood swings/irritability, fibroid/cyst, and dry vagina causing painful intercourse, were considered to have any GM. Women with any of the symptoms above were further asked, 'did you seek a doctor's consultation or treatment for any of these health problems?'. Women responding 'yes' to this question were considered to have sought treatment.
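To make the outcome construction concrete, here is a minimal sketch (not the authors' code, which was written in Stata) of how the two binary outcomes could be derived in Python. The column names are hypothetical placeholders, not actual LASI field names.

```python
import pandas as pd

# Hypothetical stand-in for the LASI women's file; column names are
# placeholders, not actual LASI field names.
df = pd.DataFrame({
    "pv_bleeding":      [0, 1, 0, 0],
    "foul_discharge":   [0, 0, 1, 0],
    "uterine_prolapse": [0, 0, 0, 0],
    "mood_swings":      [1, 0, 0, 0],
    "fibroid_cyst":     [0, 0, 0, 0],
    "dry_vagina":       [0, 0, 0, 0],
    "consulted_doctor": ["no", "yes", "no", "no"],
})

symptoms = ["pv_bleeding", "foul_discharge", "uterine_prolapse",
            "mood_swings", "fibroid_cyst", "dry_vagina"]

# 'Had any GM': 1 if any of the six enquired symptoms was reported
df["any_gm"] = df[symptoms].max(axis=1)

# 'Sought treatment': asked only of women reporting any GM
df["sought_treatment"] = (df["consulted_doctor"] == "yes").astype(int)
share_treated = df.loc[df["any_gm"] == 1, "sought_treatment"].mean()
print(share_treated)  # proportion of symptomatic women who sought care
```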
Predictor variables
Individual characteristics such as current age (45–49, 50–54, 55–59 years), marital status (currently married/others), years of schooling (no formal education, < 10 years, 10+ years), number of pregnancies (< 3, 3–4, 5+), hysterectomy (yes, no), mass media exposure (full, partial, never), and involvement in any household decision-making activities (yes, no) were included in the analysis to assess the role of the women's characteristics in the prevalence and treatment-seeking of GM. Health insurance (yes, no) was included as an additional predictor variable for treatment-seeking of GM. Additionally, household features such as social group (scheduled caste-SC, scheduled tribe-ST, other backward classes-OBC, Non-SC/ST/OBC), religion (Hindu, Muslim, others), and Monthly Per Capita Consumption Expenditure (MPCE) (poorest, poorer, middle, richer, richest), and community-level characteristics such as residence (urban, rural) and geographical region (north, central, east, northeast, west, south), were included in the analysis to assess their association with GM and treatment-seeking. MPCE was computed using data on consumption expenditure collected with the abridged version of the National Sample Survey (NSS) consumption schedule. Women reading a newspaper/watching television daily or several times a week were considered to have full mass media exposure, those doing so sometimes or rarely were considered to have partial exposure, and those who never read a newspaper/watched television were considered to have no mass media exposure. Women's involvement in household decision-making was assessed through their participation in paying bills and settling financial matters, advising the children, and settling disputes. In the survey, women were asked, "Are you usually involved in the following household activities, such as cooking, shopping for the household, payment of bills and settling of financial matters, taking care of household chores, giving advice to the children, settling disputes, and other decisions?". In this analysis, a woman's involvement in any of the three activities above, each assumed to be a crucial measure of autonomy, was considered.

Statistical analysis
Descriptive statistics of the study population, by selected socioeconomic and demographic characteristics of the women, are presented for the sample considered for analysis. Additionally, as the outcome variables were dichotomous, binary logistic regression was employed to examine the adjusted effect of socioeconomic and demographic predictors of GM and treatment-seeking among older adult women. The predictor variables included in the regression analysis were finalized after assessing their independent association with the outcome variable (any GM) and checking collinearity among the predictor variables. Multicollinearity was evaluated through the Variance Inflation Factor (VIF) method. The national individual sample weight was used in the analysis. The LASI sample weight accounts for selection probabilities and is adjusted for nonresponse and post-stratification to represent the population characteristics accurately. Stata (V 16) was used for statistical analyses with a 5% significance level.
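The authors ran these models in Stata 16; the sketch below shows, under assumed variable names and with simulated data, how the same two steps, a VIF screen for multicollinearity and a weighted binary logistic regression, could look in Python with statsmodels. Note that treating the survey weight as a frequency weight is a simplification of LASI's full design-based weighting.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 1000
# Hypothetical analysis frame: dummy-coded predictors, outcome, and weight
df = pd.DataFrame({
    "any_gm":        rng.integers(0, 2, n),
    "age_50_54":     rng.integers(0, 2, n),
    "educ_10plus":   rng.integers(0, 2, n),
    "hysterectomy":  rng.integers(0, 2, n),
    "rural":         rng.integers(0, 2, n),
    "sample_weight": rng.uniform(0.5, 2.0, n),
})

X = sm.add_constant(df[["age_50_54", "educ_10plus", "hysterectomy", "rural"]]
                    .astype(float))

# Multicollinearity screen: a VIF well above roughly 5-10 flags a problem
vif = pd.Series([variance_inflation_factor(X.values, i)
                 for i in range(X.shape[1])], index=X.columns)
print(vif)

# Weighted logistic regression; freq_weights approximates the survey weight
fit = sm.GLM(df["any_gm"], X, family=sm.families.Binomial(),
             freq_weights=df["sample_weight"]).fit()
print(np.exp(fit.params))  # exponentiated coefficients are odds ratios
```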
Socioeconomic and demographic profile of the older adult women
Table presents the socioeconomic and demographic characteristics of the surveyed women aged 45–59 years. Of the women, 39% were 45–49 years old, 31% were 50–54 years old, and the rest were aged 55–59 years. Nearly four out of every five women (79%) were currently married. About three-fifths (57%) of the women had no formal education. Of the women, 26% had been pregnant fewer than three times, 41% three to four times, and the rest five or more times. Thirteen percent of these women had undergone hysterectomy. More than half (53%) of the women had full mass media exposure, one-fifth had partial exposure, and the rest (26%) had no mass media exposure. Nearly four-fifths (79%) of the women were involved in household decision-making. About one-fifth (21%) of the women had health insurance. An almost equal proportion of these women belonged to each MPCE quintile. Of the total women, 46% were from OBC, 26% from non-SC/ST/OBC, 20% from SC, and 9% from the ST category. A majority (81%) of the women were Hindus. Two-thirds of these women reside in rural areas. Twenty-six percent of the women belonged to the southern region, 23% to the east region, 19% to the central region, 16% to the western region, 12% to the northern region, and 4% to the north-eastern part of the country.

Prevalence of GM & treatment-seeking
Fifteen percent of the women aged 45–59 had any GM (Fig. ).
Of them, 6% experienced mood swings/irritability, 4% experienced vaginal bleeding or foul-smelling vaginal discharge, 3% reported uterine prolapse, and 1% reported fibroid/cyst and dry vagina causing painful intercourse. Only 41% of older adult women (45–59 years) had sought treatment for any GM (Fig. ).

Determinants of GM & treatment-seeking
After adjusting for the effect of other predictors, women aged 50–54 years had 11% (OR 0.89, 95% CI 0.81–0.98), and those aged 55–59 years 29% (OR 0.71, 95% CI 0.64–0.79), lower odds of having any GM than women aged 45–49 years (Table ). The probability of any GM was higher among women with any education than among their counterparts without formal education. Women with five-plus pregnancies had a 36% (OR 1.36, 95% CI 1.20–1.54) higher likelihood of any GM than those with fewer than three pregnancies. Women with a hysterectomy were nearly three times more likely to report any GM than those without a hysterectomy (OR 2.96, 95% CI 2.65–3.31). The chances of any GM were higher among women involved in household decision-making (OR 1.30, 95% CI 1.15–1.46) than among their counterparts. The odds of any GM were higher among Muslim women (OR 1.22, 95% CI 1.07–1.39) and non-Hindu/Muslim women (OR 1.26, 95% CI 1.09–1.44) than among Hindu women. Compared with women from SC, the chances of any GM were higher among women from ST (OR 1.25, 95% CI 1.07–1.46). Women from middle-income households had 27%, those from richer households 33%, and those from the richest households 43% higher odds of any GM than those from the poorest households. Compared to the northern region, the likelihood of GM was significantly lower in the southern and western regions (OR 0.66, 95% CI 0.57–0.76) and higher in the central region (OR 1.22, 95% CI 1.05–1.42). The odds of treatment-seeking were higher among women with 10+ years of schooling (OR 1.66, 95% CI 1.23–2.23), with hysterectomy (OR 7.36, 95% CI 5.92–9.14), with five or more pregnancies (OR 1.25, 95% CI 0.96–1.64), and among those from richer (OR 1.45, 95% CI 1.06–1.98) or richest (OR 1.91, 95% CI 1.40–2.60) households than among their respective counterparts. Women with full mass media exposure had lower odds (OR 0.73, 95% CI 0.57–0.94) of treatment-seeking than those without any exposure. The chance of treatment-seeking was lower among STs (OR 0.63, 95% CI 0.44–0.90) than among SCs. Muslim women (OR 0.78, 95% CI 0.59–1.03) and non-Hindu/Muslim women (OR 0.71, 95% CI 0.51–1.00) had a lower likelihood of seeking treatment for GM than their Hindu counterparts. Women from the west (OR 2.21, 95% CI 1.60–3.06), central (OR 1.39, 95% CI 1.08–1.88), and south (OR 1.42, 95% CI 1.04–2.92) regions had a higher probability of availing treatment than those from the northern region.
A sizable number of older adult women had GM, and the prevalence varied considerably by their socioeconomic and demographic characteristics.
Age of the women, marital status, education, number of pregnancies, hysterectomy, involvement in household decision-making, social group, religion, wealth status, and region were significantly associated with GM among older adult women. The higher GM among women aged 45–49 than among older women may be associated with perimenopause/menopause, as evidence suggests that women experience gynecological concerns around menopause. Women experiencing menopausal symptoms had a significantly lower health-related quality of life and higher work impairment than women without menopausal symptoms. Increased GM among women with better education, economic status, and household decision-making suggests better awareness of GM and thus better reporting. Lower GM among currently unmarried women may be due to underreporting, as GMs are often perceived to be associated with women's sexual behavior. Sexual intercourse beyond the marital union continues to be a taboo in India, which might influence the reporting of GMs among those not in unions. Thus, there is a high likelihood that women not in marital unions underreport or do not report GMs to avoid stigmatization. However, given their exposure to sexual intercourse, older women in a union are likely to experience GMs such as a dry vagina causing painful intercourse. Women with autonomy in household decision-making are more likely to report GM, while their disadvantaged counterparts may either ignore GM or perceive it as natural for their age and hence not report it. An earlier study also reveals that sexual autonomy is a significant predictor of self-reported sexually transmitted infections (STI). Religious and cultural beliefs were barriers to accessing SRH services and information among Muslims. Perhaps that explains the higher prevalence of GM among women following Islam. In conformity with earlier community-based studies, this study also found a higher prevalence of GM among the STs, which is often attributed to inadequate knowledge about reproductive health and lower utilization of reproductive health care services. GM adversely affects women's health and wellbeing, urging program and policy attention. Gynecological morbidities affect women's physical and psychological life, social role, and religious life. Women with symptoms of GMs have shown an inability to complete their daily routine work. Gynecological problems further affect psychological health, and a longer symptom duration is significantly associated with psychiatric morbidity. The study found inadequate treatment-seeking for GM, which conforms with an earlier study revealing that services for reproductive tract infection (RTI) remain a challenge for women in India. Another possibility is that women perceive these problems as usual at older ages and therefore do not seek treatment. Ageism, which refers to age-based stereotypes, prejudice, and discrimination, is another issue that restricts older people's access to healthcare. Moreover, older people's SRH and rights continue to be a taboo topic, affecting treatment-seeking. Untreated RTIs can cause cervical cancer and pelvic inflammatory disease and affect psychological life. Evidence suggests thousands of women die from the sequelae of undiagnosed or untreated RTIs, including cervical cancer, ectopic pregnancy, acute and chronic infections of the uterus and fallopian tubes, and puerperal infections. RTIs/STIs also increase the risk of HIV transmission.
The study found that higher percentages of women with a hysterectomy sought treatment, possibly due to hysterectomy-induced GMs and the need for regular health check-ups. Evidence reveals several adverse effects of hysterectomy, such as urinary incontinence, sexual dysfunction, late medical problems such as backache and weakness, and earlier onset of menopause. Women with more pregnancies seek treatment for GM, indicating possible awareness of GM, as most of them were found to have any GM. The inverse association between mass media exposure and treatment-seeking may reflect insufficient or inadequate information about the GM of older adults in the mass media. Women's household decision-making autonomy did not significantly influence treatment-seeking. This may be because, besides autonomy in household decision-making, treatment-seeking requires resources like money, time, availability of services, and permission from husbands; autonomy alone will therefore not necessarily enhance treatment-seeking. An earlier study reveals that one-third of the women with GM who did not seek treatment conveyed their problems to their husbands. However, husbands often (a) do not perceive the GM as a problem, thus ruling out treatment-seeking, (b) do not feel the need to accompany their wife for treatment, while the wife cannot go alone to health care providers due to social constraints, and (c) absolve themselves of responsibility by only agreeing to pay for the treatment, thus affecting treatment-seeking for GMs. Contrary to past studies that found that health insurance leads to more medical check-ups and cervical screening among reproductive-age women, this study found no significant association between health insurance and treatment-seeking for GM among older adult women. This may be due to the perception that such problems are usual at older ages, so treatment is not required. Other research found that the most prevalent reason women gave for not seeking treatment for GM was their belief that they did not require treatment. In conformity with a past study, we also found higher treatment-seeking among women from the southern region. This may result from better health infrastructure and higher female literacy in the southern region compared to other regions. As found in an earlier study, we also noticed lower treatment-seeking among women with no or less education and lower economic status. Our results reveal lower treatment-seeking among socio-economically disadvantaged groups, which calls for the urgent development of strategies to address these vulnerabilities and inequities. This study has several strengths. First, to the best of our knowledge, it is the first study to analyze the prevalence and determinants of GM and treatment-seeking behavior of older adult women using nationally representative data. Second, it uses the recent large-scale LASI data with a robust sampling design; thus, the results are contemporary and relevant. Nevertheless, the results are based on cross-sectional data, so inferences about causal associations between the predictor and outcome variables should be drawn with caution. GMs are self-reported; thus, the possibility of under-reporting cannot be ruled out. Treatment-seeking for GM may also be influenced by several cultural and contextual factors, which this study could not include due to the unavailability of such data in the survey.
Implications for policy and practice
The study reemphasizes the need for a life-course approach in women's health in general and SRH in particular in the Indian context. The strategies under the NHP aimed at enhanced provisions for reproductive morbidities and the health needs of women beyond the reproductive age group should be rigorously implemented and monitored. Existing policies and programs should target the sections more vulnerable to GMs, such as women of higher parity and those who have undergone hysterectomies. Lower treatment-seeking suggests a need for more awareness of the adverse implications of GMs, which may be addressed by engaging the grassroots community and health workers in delivering health messages to older adults. An earlier study also suggests the engagement of community-based health workers to improve health-seeking for multi-morbidity among older adults beyond reproductive age in India. Efforts to sensitize women through community-based activities and awareness camps may reduce the stigma associated with GM among older adults and enhance their health-seeking for GM.

Many older adult women had GM, and treatment-seeking was inadequate. GM prevalence and treatment-seeking vary considerably by socioeconomic and demographic characteristics. The results suggest awareness generation and the inclusion of this otherwise ignored group in existing and future programs targeting the better health and wellbeing of women. Improved health of older adult women will contribute to achieving Goals 3 and 5 of the Sustainable Development Goals (SDGs).
|
Researchers in rheumatology should avoid categorization of continuous predictor variables
|
f83f02f2-fcc1-44d1-a78d-ccc7f2d5ac31
|
10134601
|
Internal Medicine[mh]
|
Epidemiological research can suggest potential risk factors and strategies to prevent, delay or reverse osteoarthritis and other rheumatic diseases. In epidemiological research in osteoarthritis and other rheumatic diseases, it is common practice to categorize a continuous variable that is a predictor of an outcome (the 'predictor variable'), as evident in studies published in the past two years (2020 to 2022). These studies categorized continuous predictor variables such as: change in weight; change in body mass index (BMI); risk score for mortality; age; appendicular lean mass index; fat mass index; disease activity score; years of use of analgesics; patient global visual analogue scale assessment of disease activity; the Stanford Health Assessment Questionnaire-Disability Index; glucocorticoid drug dosage; swollen joint count; tender joint count; and alcohol unit consumption per week. To further exemplify how common the categorization of continuous predictor variables is in rheumatology research, we surveyed articles published in 2021 (the year before commencing the current study) in the journal Arthritis Care & Research, an official journal of the American College of Rheumatology, a leading professional organization in rheumatology. In our survey, we only included articles reporting observational studies and randomized trials. Our survey revealed that 49% (101 of 208) of those articles categorized the continuous predictor variables used in their primary analysis. Categorizing continuous variables is not specific to research in osteoarthritis and other rheumatic diseases. Indeed, a review of 58 articles published in the two months of December 2007 and January 2008 in ten journals (five epidemiological and five general medicine) found that 86% of these articles categorized the primary predictor variable. A more recent review of 23 observational studies published between April and June 2015 found that 61% categorized continuous predictor variables. Although widely used, categorization of continuous predictor or outcome variables is not recommended in research because of several issues: distortion of associations; loss of power and precision; increased probability of biased estimates; type I errors (false positives); type II errors (false negatives); and inflated effect sizes (odds ratios). The persistence of this practice in epidemiological research in osteoarthritis and other rheumatic diseases, despite the drawbacks mentioned above, may be due to a lack of clarity about how categorization changes results and conclusions. Therefore, our primary aim in this study was to investigate the extent to which the categorization of continuous predictor variables changes findings in epidemiological rheumatology research. As our example, we use the percentage change in BMI as a predictor variable, treated either as 1) a categorical variable or 2) a continuous variable, with the two outcome variable domains of structure and pain in knee and hip osteoarthritis.
We revisited a study by Joseph et al. that investigated the association between percentage change in BMI over four years and the two outcome variable domains of structure and pain in knee and hip osteoarthritis, using data from the Osteoarthritis Initiative (OAI) study. The authors treated the percentage change in BMI as a categorical variable. From their results, they suggested that while a decrease of 5% or more in BMI may protect against overall structural changes in the knee (as assessed by radiography) and may decrease pain in the knee over four years, an increase of 5% or more in BMI may exacerbate medial joint space narrowing (JSN) in the knee and the development of pain in the knee over four years. There was no association of the percentage change in BMI, when treated as a categorical variable, with any outcomes of hip osteoarthritis.

Data
We used data from the OAI study. OAI data is openly available to researchers for scientific and educational purposes. The OAI is a multi-center longitudinal study that collected data over four years from a total of 4796 adults (45 to 79 years of age) with or at risk of clinically significant knee osteoarthritis. The local institutional review boards of the OAI centers reviewed and approved the informed consent documentation and granted ethics approval.

Exposures
Our predictor variable was the percentage change in BMI between baseline and four years, calculated as follows. We fitted a simple linear regression line for each participant to estimate their annual rate of change in BMI, based on their BMI at baseline and at the other available time points. We then multiplied the slope of this regression line by 4 to estimate the absolute change in BMI over four years. The percentage change in BMI for each individual was then calculated as the absolute change in BMI over four years divided by the baseline BMI of that individual. Fitting a simple linear regression line for each participant allowed us to estimate the change in BMI in cases of missing data, by using all available data points. For the 'categorical analysis', we created 3 weight change groups: ≥ 5% decrease in BMI, < 5% change in BMI (i.e., stable BMI, the reference category), and ≥ 5% increase in BMI between baseline and four years. Unlike the study by Joseph et al., we did not exclude participants who showed a modest change in BMI (3–5%), and we defined the "stable BMI" reference category as those individuals who exhibited a change in BMI of less than 5%, whereas Joseph et al. defined it as a change in BMI of less than 3%. By including participants who exhibited a modest change in BMI, we increased our sample size by 26.2% and therefore increased the statistical power of our study. We used a 5% weight change threshold because prior studies suggest that this degree of weight change is clinically relevant. For the 'continuous analysis', we treated the percentage change in BMI between baseline and four years as the continuous variable that it is.
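As a concrete illustration of this exposure construction, here is a minimal Python sketch. It assumes a hypothetical long-format table `long_bmi` with one row per participant-visit; the column names are placeholders rather than OAI variable names.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format data: one row per participant-visit
long_bmi = pd.DataFrame({
    "participant_id":       [1, 1, 1, 2, 2, 2],
    "years_since_baseline": [0, 2, 4, 0, 1, 4],
    "bmi":                  [30.0, 29.0, 28.2, 27.0, 27.5, 28.9],
})

def pct_change_bmi(group: pd.DataFrame) -> float:
    t = group["years_since_baseline"].to_numpy(float)
    bmi = group["bmi"].to_numpy(float)
    slope = np.polyfit(t, bmi, deg=1)[0]   # annual rate of change in BMI
    baseline = bmi[t.argmin()]             # BMI at the baseline visit
    return 100 * (slope * 4) / baseline    # % change over four years

pct = long_bmi.groupby("participant_id").apply(pct_change_bmi)

# Categorical analysis: three groups at the 5% threshold
group = np.select([pct <= -5, pct >= 5],
                  ["decrease_5plus", "increase_5plus"], default="stable")
print(dict(zip(pct.index, group)))
```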
Outcomes
Our two outcome variable domains of structure and pain in knee and hip osteoarthritis covered a total of 26 outcomes (18 in the structure and 8 in the pain outcome variable domains). The definitions of these 26 outcomes are detailed in the . These outcomes were defined based on the definitions in the study by Joseph et al. The 18 outcomes in the outcome variable domain of structure were as follows: eight outcomes related to the progression of knee osteoarthritis as assessed by radiography at four years' follow-up; eight outcomes related to the progression of hip osteoarthritis, also assessed by radiography at four years' follow-up; one outcome for the incidence of total knee replacement (TKR) over four years; and one outcome for the incidence of total hip replacement (THR) over four years. For our eight outcomes related to the progression of knee osteoarthritis, we separately investigated the overall structure of the knee joint and the following seven individual structural features (ISFs) of the knee joint: 1) joint space narrowing (JSN) in the medial or lateral compartment; 2) JSN in the medial compartment; 3) JSN in the lateral compartment; 4) osteophytes on the medial tibial surface; 5) osteophytes on the lateral tibial surface; 6) osteophytes on the medial femoral surface; and 7) osteophytes on the lateral femoral surface. For our eight outcomes for the progression of hip osteoarthritis, we also separately investigated the overall structure of the hip joint and the following seven ISFs of the hip joint: 1) JSN in the medial or lateral compartment; 2) JSN in the medial compartment; 3) JSN in the lateral compartment; 4) osteophytes on the superior acetabular surface; 5) osteophytes on the inferior acetabular surface; 6) osteophytes on the superior femoral surface; and 7) osteophytes on the inferior femoral surface. In the outcome variable domain of pain, two types of pain were investigated for the knee and hip: "frequent pain" and "any pain". For frequent pain in the knee and hip, we used the following 4 outcomes in the analyses: 1) development of frequent pain in the knee; 2) development of frequent pain in the hip; 3) resolution of frequent pain in the knee; and 4) resolution of frequent pain in the hip, by four years' follow-up. For any pain in the knee and hip, we used the following 4 outcomes in the analyses: 1) development of any pain in the knee; 2) development of any pain in the hip; 3) resolution of any pain in the knee; and 4) resolution of any pain in the hip, by four years' follow-up.

Participant selection
We applied exclusion criteria for participant selection as per the study by Joseph et al. Firstly, we excluded participants who had BMI data at fewer than three of the five available timepoints (Fig. ), because a minimum of three timepoints with BMI data was needed to distinguish weight cycling (explained below) from BMI fluctuation. Secondly, we excluded participants who had end-stage osteoarthritis of the knees or hips at baseline (Fig. ). End-stage osteoarthritis of the knees was defined as having a Kellgren-Lawrence (KL) grade of 4 (the highest possible KL grade) in both knees. End-stage osteoarthritis of the hips was defined as having JSN with an Osteoarthritis Research Society International (OARSI) grade of 3 (the highest possible OARSI grade) in both hips, on either side of the joint (i.e., lateral or medial). These participants were excluded to avoid any possible confounding effect of their data on the study results due to their potentially reduced mobility and/or reduced ability to exercise. Additionally, there is no way to assess further change in the structure of the knee or hip joints as assessed radiographically once a participant has reached end-stage osteoarthritis.
Thirdly, we also excluded participants with rheumatoid arthritis, cancer, or cardiac failure at baseline, as these conditions may cause pathological weight change, which in turn can affect the change in BMI (Fig. ). Fourthly, using BMI fluctuation information, we excluded participants who exhibited 'weight cycling' during follow-up. Weight cycling refers to a repetitive pattern of weight loss and regain. We excluded participants with weight cycling as they could not be cleanly classified into the weight loss or weight gain categories. Moreover, weight cycling is associated with increased progression of structural defects in osteoarthritis, regardless of whether there is net weight gain or net weight loss. Weight cycling was defined based on BMI fluctuation. BMI fluctuation was calculated as the root mean square error (RMSE) of the regression line of BMI over time that was fitted for each individual. Participants with an RMSE value in the top 10% of all RMSE values were classified as weight cycling and were thus excluded (Fig. ). With the application of these four selection criteria, the 'main cohort' was created, which was used for investigating the 18 outcomes in the outcome variable domain of structure, i.e., the progression of knee and hip osteoarthritis and the incidence of TKR and THR. Further, we created four additional sub-cohorts (the 'frequent knee pain cohort', 'frequent hip pain cohort', 'any knee pain cohort', and 'any hip pain cohort'), which were used for investigating the 8 outcomes in the outcome variable domain of pain (Fig. ). The 4 outcomes for frequent knee and hip pain were investigated in the 'frequent knee pain cohort' and 'frequent hip pain cohort', respectively. The 4 outcomes for any knee and hip pain were investigated in the 'any knee pain cohort' and 'any hip pain cohort', respectively.
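The weight-cycling screen described above can be expressed compactly. The sketch below reuses the hypothetical `long_bmi` table from the earlier exposure sketch and flags the top decile of per-participant regression RMSE values; it illustrates the rule rather than reproducing the authors' code.

```python
import numpy as np
import pandas as pd

# Same hypothetical long-format table as in the exposure sketch
long_bmi = pd.DataFrame({
    "participant_id":       [1, 1, 1, 2, 2, 2],
    "years_since_baseline": [0, 2, 4, 0, 1, 4],
    "bmi":                  [30.0, 29.0, 28.2, 27.0, 27.5, 28.9],
})

def bmi_rmse(group: pd.DataFrame) -> float:
    t = group["years_since_baseline"].to_numpy(float)
    bmi = group["bmi"].to_numpy(float)
    coef = np.polyfit(t, bmi, deg=1)            # per-participant line
    resid = bmi - np.polyval(coef, t)           # deviations from the line
    return float(np.sqrt(np.mean(resid ** 2)))  # RMSE = BMI fluctuation

rmse = long_bmi.groupby("participant_id").apply(bmi_rmse)

# Participants in the top 10% of RMSE values are treated as weight cyclers
cutoff = rmse.quantile(0.90)
weight_cyclers = rmse.index[rmse > cutoff]
main_cohort_ids = rmse.index.difference(weight_cyclers)
print(list(main_cohort_ids))
```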
Statistical analyses
We used STATA/BE 17.0 for our analyses. We set our threshold for statistical significance at a two-tailed P-value of less than 0.05, as in the study by Joseph et al. We did not adjust the significance level for multiple testing (e.g., Bonferroni adjustment). We investigated the association between the percentage change in BMI (treated categorically and continuously) and the outcomes described above using generalized estimating equations with a logistic link function, sometimes referred to as logistic regression with clustering within individuals. In this case, the clustering is of the left and right knee or hip. This approach takes into account the within-person correlation between the two knees or hips and allows for a more accurate estimation of any association between the exposure and outcome. All analyses were adjusted for the following variables: age, sex, and baseline BMI. For the continuous analysis, we first determined whether the percentage change in BMI had a linear relationship with each of our outcomes using the Box-Tidwell method. In this method, an interaction between the percentage change in BMI and its natural logarithmic value is added to the model; a significant interaction indicates nonlinearity between the percentage change in BMI and the outcome variable. While our statistical analysis suggested that 25 of the 26 outcomes had an apparent linear relationship with the percentage change in BMI, there may be some degree of uncertainty regarding these relationships, as the inference of linearity was based on the results of statistical tests. The remaining outcome, overall structural defects in knee osteoarthritis, did not show any apparent linear relationship with change in BMI. For the 25 outcomes that did have a linear relationship, we fitted a line over the available continuous range of BMI change where the relationship with the outcome variable is linear on the log odds ratio scale, then estimated the effect sizes (odds ratios) from that line. We reported the point estimates for a 5% decrease and a 5% increase in BMI. For the one outcome that did not show any apparent linear association with the percentage change in BMI (i.e., overall structural defects in knee osteoarthritis), we used piecewise linear spline regression. In this method, we divided the data into three separate segments: a decrease of ≥ 5% in BMI; a change of < 5% in BMI; and an increase of ≥ 5% in BMI. Within each segment the relationship with the change in BMI was linear, but each segment could have a different effect size. We calculated effect sizes from two of these three segments: one from the segment with a decrease of 5% or more in BMI, and the other from the segment with an increase of 5% or more in BMI. We used these two segments to calculate the point estimates of the effect sizes at a 5% decrease and a 5% increase in BMI.

Sensitivity analyses
In our primary analyses (where we investigated the association between the percentage change in BMI and 26 outcomes from the outcome variable domains of structure and pain in knee and hip osteoarthritis), the estimates were calculated using a 5% change in BMI in both the categorical and continuous analyses. We performed sensitivity analyses to assess whether the conclusions from our primary analyses would still hold for different percentage changes in BMI. For this, we repeated the primary analyses using, instead of 5%, a 3% change in BMI for the categories (i.e., ≥ 3% decrease in BMI, < 3% change in BMI, and ≥ 3% increase in BMI) and a 10% change in BMI for the categories (i.e., ≥ 10% decrease in BMI, < 10% change in BMI, and ≥ 10% increase in BMI).
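To make the modelling steps concrete, here is a hedged Python sketch of the two key pieces: a Box-Tidwell style linearity check and the GEE logistic model clustering knees within participants. The authors worked in Stata; the statsmodels translation below uses simulated data and assumed column names. Because the percentage change in BMI can be negative, the sketch shifts it to be strictly positive before taking logarithms, one common workaround for the Box-Tidwell transformation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_people, knees = 300, 2
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n_people), knees),
    "pct_bmi_change": np.repeat(rng.normal(0, 6, n_people), knees),
    "age":            np.repeat(rng.uniform(45, 79, n_people), knees),
    "sex":            np.repeat(rng.integers(0, 2, n_people), knees),
    "bmi_baseline":   np.repeat(rng.normal(29, 4, n_people), knees),
    "jsn_medial":     rng.integers(0, 2, n_people * knees),  # toy outcome
})

# Shift the exposure to be strictly positive so log(x) is defined
df["x"] = df["pct_bmi_change"] - df["pct_bmi_change"].min() + 1
df["x_logx"] = df["x"] * np.log(df["x"])

fam = sm.families.Binomial()
dep = sm.cov_struct.Exchangeable()  # correlation between a person's two knees

# Box-Tidwell style check: a significant x:log(x) term suggests nonlinearity
bt = smf.gee("jsn_medial ~ x + x_logx + age + sex + bmi_baseline",
             groups="participant_id", data=df,
             family=fam, cov_struct=dep).fit()
print("Box-Tidwell p-value:", bt.pvalues["x_logx"])

# Continuous analysis on the original scale when linearity is plausible
gee = smf.gee("jsn_medial ~ pct_bmi_change + age + sex + bmi_baseline",
              groups="participant_id", data=df,
              family=fam, cov_struct=dep).fit()
print("OR for a 5% increase in BMI:",
      np.exp(5 * gee.params["pct_bmi_change"]))
```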
We used data from the OAI study . OAI data is openly available to researchers for scientific and educational purposes. The OAI is a multi-center longitudinal study that collected data over four years from a total of 4796 adults (45 to 79 years of age) with or at risk of clinically significant knee osteoarthritis. The local institutional review boards of the OAI centers reviewed and approved the informed consent documentation and ethics approval.
Our predictor variable was the percentage change in BMI between baseline and four years, calculated as follows . We fitted a simple linear regression line for each participant to estimate their annual rate of change in BMI, based on their data for BMI at baseline and other available time points. We then multiplied the slope of this regression line by 4 to estimate the absolute change in BMI over four years. The percentage change in BMI for each individual was then calculated as the absolute change in BMI over four years divided by the baseline BMI of that individual . Fitting a simple linear regression line for each participant allowed us to estimate the change in BMI in cases of missing data, by using all available data points. For the ‘categorical analysis’, we created 3 weight change groups: ≥ 5% decrease in BMI, < 5% change in BMI (i.e., stable BMI, the reference category), and ≥ 5% increase in BMI between baseline and four years. As opposed to the study by Joseph et al. , we did not exclude participants who showed a modest change in BMI (3–5%); and we defined the “stable BMI” category (which was the reference category) as those individuals who exhibited a change in BMI of less than 5%, whereas it was defined by Joseph et al. as a change in BMI of less than 3%. By including participants that exhibited modest change in BMI, we have increased our sample size by 26.2%, and have therefore increased statistical power in our study . We used a 5% weight change threshold because prior studies suggest that this degree of weight change is clinically relevant . For the ‘continuous analysis’, we treated the percentage change in BMI between baseline and four years as the continuous variable that it is.
Our two outcome variable domains of structure and pain of knee and hip osteoarthritis covered a total of 26 outcomes (18 in the structure and 8 in the pain outcome variable domains). The definitions of these 26 outcomes are detailed in the . These outcomes were defined based on the definitions in the study by Joseph et al. . The 18 outcomes that were in the outcome variable domains of structure were as follows: eight outcomes related to the progression of knee osteoarthritis as assessed by radiography at four years’ follow up; eight outcomes related to the progression of hip osteoarthritis, also assessed by radiography at four years’ follow up; one outcome for the incidence of total knee replacement (TKR) over four years; and one outcome for the incidence of total hip replacement (THR) over four years. For our eight outcomes related to the progression of knee osteoarthritis, we separately investigated the overall structure of the knee joint, and the following seven individual structural features (ISFs) of the knee joint: 1) joint space narrowing (JSN) in the medial or lateral compartment; 2) JSN in the medial compartment; 3) JSN in the lateral compartment; 4) osteophytes on the medial tibial surface; 5) osteophytes on the lateral tibial surface; 6) osteophytes on the medial femoral surface; and 7) osteophytes on the lateral femoral surface. For our eight outcomes for the progression of hip osteoarthritis, we also separately investigated the overall structure of the hip joint, and the following seven ISFs of the hip joint: 1) JSN in the medial or lateral compartment; 2) JSN in the medial compartment; 3) JSN in the lateral compartment; 4) osteophytes on the superior acetabular surface; 5) osteophytes on the superior inferior surface; 6) osteophytes on the superior femoral surface; and 7) osteophytes on the inferior femoral surface. In the outcome variable domain of pain, two types of pain were investigated for the knee and hip: “frequent pain” and “any pain”. For frequent pain in the knee and hip, we used the following 4 outcomes in the analyses: 1) development of frequent pain in the knee 2) development of frequent pain in the hip; 3) resolution of frequent pain in the knee; and 4) resolution of frequent pain in the hip, by four years’ follow up. For any pain in the knee and hip, we used the following 4 outcomes in the analyses: 1) development of any pain in the knee 2) development of any pain in the hip; 3) resolution of any pain in the knee; and 4) resolution of any pain in the hip, by four years’ follow up.
We applied exclusion criteria for participant selection as per the study by Joseph et al. . Firstly, we excluded participants that had BMI data at less than three of the five available timepoints (Fig. ). This was due to needing a minimum of three timepoints with BMI data to determine weight cycling (to be explained below) from BMI fluctuation. Secondly, we excluded participants who had end stage osteoarthritis of knees or hips at baseline (Fig. ). End stage osteoarthritis of knees was defined as having a Kellgren Lawrence (KL) grade of 4 (the highest possible KL grade) in both knees. End stage osteoarthritis of hips was defined as having JSN that had an Osteoarthritis Research Society International (OARSI) grade of 3 (the highest possible OARSI grade) in both hips, in any of the two sides of the hip (i.e., lateral or medial). Exclusion of these participants was done to avoid any possible confounding effect of their data on the study results due to their potentially reduced mobility and / or reduced ability to exercise. Additionally, there is no way to assess further change in the structure of the knee or hip joints as assessed radiographically once a participant has reached end-stage osteoarthritis. Thirdly, we also excluded participants with rheumatoid arthritis, cancer, or cardiac failure at baseline, as these conditions may cause pathological weight change, which in turn can impact change in BMI (Fig. ). Fourthly, using BMI fluctuation information, we excluded participants who had ‘weight cycling’ during follow up. Weight cycling refers to a repetitive pattern of weight loss and regain . We excluded participants with weight cycling as they would not completely be classified in the weight loss or weight gain categories. Moreover, weight cycling is associated with increased progression of structural defects in osteoarthritis, regardless of whether there is net weight gain or net weight loss . Weight cycling was defined based on BMI fluctuation. BMI fluctuation was calculated as the root mean square error (RMSE) of the regression line of BMI over time that was calculated for each individual . The participants with a RMSE value in the top 10% of all RMSE values were determined as having weight cycling and were thus excluded (Fig. ). With the application of these four selection criteria, the ‘main cohort’ was created, which was used for investigating the 18 outcomes in the outcome variable domain of structure for the progression of knee and hip osteoarthritis and the incidence of TKR and THR. Further, we created four additional sub-cohorts (the ‘frequent knee pain cohort’, ‘frequent hip pain cohort’, ‘any knee pain cohort’, and ‘any hip pain cohort’) which was used for investigating the 8 outcomes in the outcome variable domain of pain (Fig. ). The 4 outcomes for frequent knee and hip pain were investigated in the ‘frequent knee pain cohort’ and ‘frequent hip pain cohort’, respectively. The 4 outcomes for any knee and hip pain were investigated in the ‘any knee pain cohort’ and ‘any hip pain cohort’, respectively.
We used STATA/BE 17.0 for our analyses. We set the threshold for statistical significance at a two-tailed P-value of less than 0.05, as in the study by Joseph et al., and did not adjust the significance level for multiple testing (e.g., Bonferroni adjustment). We investigated the association between the percentage change in BMI (treated categorically and continuously) and the outcomes described above using generalized estimating equations (GEE) with a logistic link function, sometimes referred to as logistic regression with clustering within individuals; here, the clustering is of the left and right knee or hip. This approach accounts for the within-person correlation between the two knees or hips and allows a more accurate estimation of any association between the exposure and outcome. All analyses were adjusted for age, sex, and baseline BMI. For the continuous analysis, we first determined whether the percentage change in BMI had a linear relationship with each of our outcomes using the Box-Tidwell method, in which an interaction between the percentage change in BMI and its natural logarithm is added to the model; a significant interaction indicates non-linearity between the percentage change in BMI and the outcome variable. Our statistical analysis suggested that 25 of the 26 outcomes had an apparently linear relationship with the percentage change in BMI, although some uncertainty remains because the inference of linearity rests on the results of statistical tests. The remaining outcome, overall structural defects in knee osteoarthritis, did not show an apparent linear relationship with change in BMI. For the 25 outcomes with a linear relationship, we fitted a line over the continuous range of BMI change in which the relationship with the outcome variable is linear on the log odds ratio scale, and estimated the effect sizes (odds ratios) from that line; we reported the point estimates for a 5% decrease and a 5% increase in BMI. For the one outcome without an apparent linear association with the percentage change in BMI (overall structural defects in knee osteoarthritis), we used piecewise linear spline regression. In this method, we divided the data into three segments: a decrease of ≥ 5% in BMI; a change of < 5% in BMI; and an increase of ≥ 5% in BMI. Within each segment, the change in BMI was modelled linearly, with each segment allowed a different effect size. We calculated effect sizes from two of these three segments (the segment of a decrease of 5% or more in BMI, and the segment of an increase of 5% or more in BMI) and used them to obtain the point estimates at a 5% decrease and a 5% increase in BMI.
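As a concrete illustration of the modelling approach, the sketch below uses Python/statsmodels rather than the authors' STATA code; column names such as `pct_bmi_change` and the input file are hypothetical. It fits a logistic GEE with an exchangeable correlation structure to cluster the two joints of each participant, and builds the three-segment piecewise linear basis with knots at −5% and +5% used for the one non-linear outcome:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per knee (or hip); `subject_id` links the two joints of a person.
df = pd.read_csv("joint_level_data.csv")   # hypothetical input file

# Logistic GEE ("logistic regression with clustering within individuals"),
# adjusted for age, sex, and baseline BMI as in the paper.
fit = smf.gee(
    "progression ~ pct_bmi_change + age + sex + baseline_bmi",
    groups="subject_id",
    data=df,
    family=sm.families.Binomial(),            # logistic link
    cov_struct=sm.cov_struct.Exchangeable(),  # within-person correlation
).fit()
print(fit.summary())

# Piecewise linear spline basis with knots at -5% and +5% BMI change,
# giving each segment its own slope (the three terms sum to pct_bmi_change):
df["seg_decrease"] = (df["pct_bmi_change"] + 5).clip(upper=0)    # <= -5%
df["seg_stable"] = df["pct_bmi_change"].clip(lower=-5, upper=5)  # -5%..+5%
df["seg_increase"] = (df["pct_bmi_change"] - 5).clip(lower=0)    # >= +5%
```

Under this parameterization, the odds ratio at a 5% decrease or increase follows from the fitted slope of the corresponding segment (e.g., exp(5 × slope)).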
In our primary analyses (where we investigated the association between the percentage change in BMI and the 26 outcomes from the structure and pain domains in knee and hip osteoarthritis), the estimates were calculated using a 5% change in BMI in both the categorical and continuous analyses. To assess whether the conclusions from the primary analyses would still hold for different magnitudes of change in BMI, we performed sensitivity analyses repeating the primary analyses with, instead of 5%, a 3% change in BMI categories (i.e., ≥ 3% decrease in BMI, < 3% change in BMI, and ≥ 3% increase in BMI) and a 10% change in BMI categories (i.e., ≥ 10% decrease in BMI, < 10% change in BMI, and ≥ 10% increase in BMI).
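The threshold dependence being probed here amounts to re-coding the same continuous variable at different cut-offs. A minimal sketch (names are ours, and the values are toy data):

```python
def categorize_bmi_change(pct_change, threshold=5.0):
    """Three-level coding used in the primary (5%) and the sensitivity
    (3% and 10%) analyses."""
    if pct_change <= -threshold:
        return "decrease"
    if pct_change >= threshold:
        return "increase"
    return "stable"

changes = [-12.0, -4.0, 0.5, 3.5, 11.0]   # toy values, % change in BMI
for t in (3.0, 5.0, 10.0):
    print(t, [categorize_bmi_change(c, threshold=t) for c in changes])
```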
Participant characteristics
There were 3378 participants with 6756 knees and 6756 hips in the main cohort (the cohort in which we investigated the 18 outcomes in the structure domain). There were 3108 participants with 5728 knees in the frequent knee pain cohort, 3312 participants with 6644 hips in the frequent hip pain cohort, 2065 participants with 3128 knees in the any knee pain cohort, and 3022 participants with 5364 hips in the any hip pain cohort (Fig. ). Table shows the characteristics of the participants included in each of the five cohorts in this study. The mean age of participants in each cohort was similar, ranging from 61.1 (standard deviation [SD] 9.2) to 61.9 (SD 9.3) years. The percentage of female participants was higher than that of male participants in each cohort, ranging from 55.9 to 57.4%. The mean BMI of participants in each cohort was also similar, ranging from 27.7 (SD 4.4) to 28.1 (SD 4.6) kg/m². Figure shows the distribution of participants by the percentage change in BMI from baseline to four years' follow up in the main cohort. Of the 3378 participants in this cohort, 469 (13.9%) had a decrease in BMI of 5% or more, 2223 (65.8%) had a stable BMI (change of less than 5%), and 686 (20.3%) had an increase in BMI of 5% or more. The distribution of the percentage change in BMI was similar in the other four sub-cohorts (data not shown).

Incidence of outcomes
The incidence counts of the outcomes in the five cohorts can be found in Tables S1 to S5 in Additional file 1. Compared with the knee, the incidence of outcomes for the hip was generally lower, with the exception of the development and resolution of any hip pain (12.7% versus 14.2% for the development of pain, and 13.6% versus 15.4% for the resolution of pain, for the knee and hip respectively).

Association between percentage change in BMI and outcomes
Table shows the results from our two analyses (categorical and continuous) for the associations of a 5% change in BMI with the 26 outcomes. Of the 26 outcomes investigated, 18 (69%) showed the same result in both the categorical and continuous analyses. Of these 18 outcomes, 17 showed no association with the percentage change in BMI whether treated categorically or continuously. These 17 outcomes were: (for the knee) progression in lateral JSN; progression in medial tibial osteophytes; progression in lateral tibial osteophytes; progression in lateral femoral osteophytes; and incidence of TKR; (and for the hip) progression in overall structural defects in hip osteoarthritis; progression in medial or lateral JSN; progression in medial JSN; progression in lateral JSN; progression in superior acetabular osteophytes; progression in superior femoral osteophytes; progression in inferior femoral osteophytes; development of frequent pain in hip; resolution of frequent pain in hip; development of any pain in hip; resolution of any pain in hip; and incidence of THR (Table ). The remaining one of these 18 outcomes (progression in overall structural defects in knee osteoarthritis) showed an association with a decrease in BMI but not with an increase in BMI in both the continuous and the categorical analysis. The remaining eight of the 26 outcomes (31%) showed an association with a 5% change in BMI in either the categorical or the continuous analysis, but not in both analyses (Table ).
These eight outcomes were: (for the knee) 1) progression in medial or lateral JSN; 2) progression in medial JSN; 3) progression in medial femoral osteophytes; 4) development of frequent pain in the knee; 5) resolution of frequent pain in the knee; 6) development of any pain in the knee; 7) resolution of any pain in the knee; and (for the hip) 8) progression in inferior acetabular osteophytes (Table ). Although these eight outcomes were associated with the percentage change in BMI when treated categorically or continuously, there were three types of differences between the associations in the two analyses, explained below in points a, b, and c.

a) Outcomes showed associations with percentage change in BMI in only one direction in the categorical analyses but in both directions in the continuous analyses
Six of these eight outcomes were positively associated with the percentage change in BMI (i.e., with both an increase and a decrease in BMI) when BMI was treated as a continuous variable. In the categorical analysis, however, these six outcomes showed an association with either an increase or a decrease in BMI, but not both. Five of these six outcomes were associated only with a decrease in BMI and not with an increase. These five outcomes (all for the knee) were: 1) progression in medial or lateral JSN; 2) progression in medial JSN; 3) progression in medial femoral osteophytes; 4) resolution of frequent pain in the knee; and 5) resolution of any pain in the knee. The remaining outcome (development of frequent pain in the knee) showed an association with an increase in BMI but not with a decrease in BMI in the categorical analysis (Table ).

b) Outcomes showed associations with percentage change in BMI in the categorical analysis but not in the continuous analysis (possible false positive)
When BMI was treated as a categorical variable, one of these eight outcomes, namely progression in inferior acetabular osteophytes in the hip, showed an association with a decrease in BMI (but not with an increase in BMI). This may be a false positive, because 1) the outcome showed no significant association with the percentage change in BMI when treated as a continuous variable (Table ); 2) there was no other significant association for any of the 8 outcomes for radiographically assessed progression of hip osteoarthritis in the categorical analysis (Table ); and 3) acetabular osteophytes are not a reliable measure of the progression of hip osteoarthritis, as they are difficult to distinguish from normal anatomy.

c) Outcomes showed associations with percentage change in BMI in the continuous analysis but not in the categorical analysis (possible false negative)
When BMI was treated as a categorical variable, one of these eight outcomes, namely the development of any pain in the knee, showed no association with the percentage change in BMI (neither a decrease nor an increase). This may be a false negative, because 1) the outcome showed an association with the percentage change in BMI when treated as a continuous variable (Table ); 2) the other 3 of the 4 outcomes for knee pain showed an association with either a decrease or an increase in BMI in the categorical analysis, suggesting that an association is likely; and 3) other studies have shown an association of change in BMI with the development of knee pain due to osteoarthritis.
Sensitivity analyses
The results from the sensitivity analyses using a 3% and a 10% change in BMI (decrease or increase) showed that eight and seven, respectively, of the 26 outcomes investigated differed in the categorical compared with the continuous analysis, exhibiting all three types of differences observed in our primary analyses (i.e., using a 5% change in BMI) (Table S6).
This study in osteoarthritis showed that categorizing the continuous predictor variable in the analysis (here, the percentage change in BMI) could influence the results in three ways. First, statistically significant associations were found in only one direction when the percentage change in BMI was treated categorically, whereas they were found in both directions when it was left as a continuous variable. Second, categorization showed statistically significant associations that were non-existent when the variable was left continuous (possible false positives): in our categorical analysis, the outcome of progression in inferior acetabular osteophytes in the hip was associated with a decrease in BMI, whereas it was associated with neither a decrease nor an increase in BMI in the continuous analysis (Table ). Third, analyses with categorized continuous variables may mask statistically significant associations that are present when the variable is left continuous (possible false negatives): in our categorical analysis, the outcome of the development of any pain in the knee was associated with neither a decrease nor an increase in BMI, whereas it was associated with both in the continuous analysis (Table ). Further, our sensitivity analyses using 3% and 10% changes in BMI delivered the same conclusions as our primary analyses (which used a 5% change in BMI), showing that these three issues with the categorization of continuous variables are independent of the chosen threshold of the continuous variable (percentage change in BMI). Of these three ways in which the results differed depending on whether the predictor variable was treated as continuous or categorical, the first is a major problem, as the conclusions drawn from the continuous and categorical analyses would differ. From the continuous analyses in this study, we would conclude that a decrease in BMI has a beneficial association, and an increase in BMI a harmful association, with structural changes and pain in knee osteoarthritis over four years, as the effect of the percentage change in BMI was shown in both directions (decrease and increase). From the categorical analysis, however, we would have concluded that a decrease in BMI is associated with beneficial effects on knee structure and pain in osteoarthritis but that an increase in BMI is not associated with harmful effects. The conclusion of no association with an increase in BMI from the categorical analyses conflicts with the conclusion from the continuous analyses, as well as with previous research showing that weight gain is associated with harmful effects, and weight loss with beneficial effects, on structural changes and pain in knee osteoarthritis. It is difficult to reconcile that a decrease in BMI is associated with one effect whereas an increase in BMI is not associated with the opposite effect. This difficulty in reconciling the results from analyses using categorized continuous variables can also be seen in the study by Joseph et al., which we revisited: that study, which used the categorized percentage change in BMI, showed associations of either a decrease or an increase in BMI with outcomes of structural changes and pain in knee osteoarthritis, but never both a decrease and an increase for any outcome.
We acknowledge the limitations of our study. Of the 4,796 participants, 276 (5.8%) had missing data that prevented us from estimating their change in BMI; we therefore cannot exclude the possibility that missing data biased our estimates. Additionally, our study only investigated the impact of categorizing one predictor variable (percentage change in BMI) on several outcomes in one specific population (OAI), so our results cannot be generalized to all situations in which researchers categorize variables in rheumatology. Categorization of continuous predictor variables can be useful in rheumatology research when there is strong prior knowledge or established cut-offs for a particular variable, such as disease activity scores, antibody titers (e.g., anti-citrullinated protein antibody (ACPA) positive/negative), or remission status (yes/no). In such cases, categorization can simplify the analysis and the interpretation of the results.
In conclusion, our study demonstrated that categorizing continuous predictor variables in rheumatology research may result in associations being shown in only one direction, and can also lead to possible false positive and false negative associations, which in turn may lead to erroneous conclusions. We suggest that researchers in rheumatology, including clinicians and peer reviewers, consider the potential drawbacks of categorizing continuous predictor variables and prioritize the use of continuous variables.
Additional file 1: Definitions of outcomes. Table S1. Incidence of outcomes in the main cohort (the cohort for investigating the progression of knee and hip osteoarthritis and the incidence of total joint replacement). Table S2. Incidence of outcomes in the frequent knee pain cohort. Table S3. Incidence of outcomes in the frequent hip pain cohort. Table S4. Incidence of outcomes in the any knee pain cohort. Table S5. Incidence of outcomes in the any hip pain cohort. Table S6. Associations of change in BMI with outcomes, treating change in BMI as a categorical (using 3, 5, and 10% weight change categories) or continuous variable.
|
Critical value in surgical pathology: evaluating the current status in a multicenter study
|
57f47afb-a524-42ec-a486-c35df844a247
|
10134675
|
Pathology[mh]
|
In 1972, Lundberg first proposed the concept of the critical value. A critical value refers to a laboratory finding outside the normal range that might constitute an immediate health risk that would otherwise be difficult to detect. It is also known as a "critical diagnosis," "urgent diagnosis," or "treatable, immediately life-threatening diagnosis." Regardless of the attributed term, an immediate report to a healthcare provider is necessary so that the required medical actions can be taken. In surgical pathology, turnaround time varies from 2 to 14 days, encompassing tissue processing, slide preparation, microscopic evaluation, and the typing and signing of reports. Among these reports are some critical results that demand reporting ahead of the routine schedule so that intervention can occur promptly. Therefore, clear cutoff points must be developed to differentiate between life-threatening conditions and those that can be managed in routine practice. In addition to critical diagnoses, there are a few diagnoses in surgical pathology that are unusual or unexpected and should be addressed during treatment, although not as immediately as the critical ones; these results are referred to as "significant, unexpected diagnoses." The concept of the critical value is quite evident when dealing with numerical data in clinical pathology; however, surgical pathology is information-sensitive, and surgical pathologists are involved in the interpretation of findings rather than numerical data. Non-pathologists and pathologists have quite different expectations, and these differing expectations might result in miscommunication and, as a result, patient harm. Moreover, there are no clear guidelines, and research on this topic is limited. In the absence of such guidelines, the surgical pathologist's expertise and judgment determine when immediate physician contact is warranted. Furthermore, whereas the critical value in clinical pathology is well-known, respected, and documented, many laboratories do not have proper estimation and documentation plans for critical values in surgical pathology. With this background in mind, the present study aimed to reach an agreement regarding the determination, documentation, and reporting of critical or unexpected surgical pathology results in centers affiliated with Shiraz University of Medical Sciences (SUMS), Shiraz, Iran, to calculate the annual frequency of these findings, and to evaluate the necessity of policy implementation. This study was conducted in five surgical laboratories of the SUMS Pathology Department, which conduct pathological assessments in various medical fields. Centers 1 and 2 served as general centers, whereas Centers 3, 4, and 5 were specialist centers for otorhinolaryngology, gynecology (GYN), and transplantation, respectively. This study was carried out according to the tenets of the Declaration of Helsinki after obtaining approval from the Ethics Committee of SUMS (IR.SUMS.MED.REC.1400.081). As there were no established criteria for critical or unexpected results in the studied centers, a multiple-choice questionnaire was developed. The list of diagnoses was chosen by the authors according to previous surveys and the authors' experience to represent diagnoses that might be critical (Table ). All pathologists and some clinicians with various subspecialties in the five centers were asked to participate in this study using an invitation link.
The most frequently endorsed items for each question were selected to establish a standard operating procedure (SOP) for the determination, documentation, and reporting of critical or unexpected pathology results. Afterward, all pathologists in the study centers were asked to follow this SOP. The statistics were compiled from all centers at the end of the year, and Microsoft Excel (version 2016) was used to analyze the recorded data. Among 340 invited physicians (60 pathologists and 280 non-pathologists), a total of 87 physicians, including 43 pathologists and 44 non-pathologists, with subspecialties in general surgery (n = 15), GYN (n = 6), dermatology (n = 4), otorhinolaryngology (n = 3), urology (n = 2), neurosurgery (n = 2), internal medicine (n = 6), pediatric medicine (n = 3), and general medicine (n = 3), participated in this investigation. Overall, 37%, 5%, 33%, and 5% of the participants were residents, fellows, non-attending specialists, and general practitioners, respectively; nearly 20% were attending physicians. The acceptable critical items are ranked from the most popular to the least popular in Fig. . Most participants (64%) agreed that the optimal time to announce critical or unexpected reports is within 24 h of establishing the final diagnosis. In addition, phone calls (36%) and in-person meetings (31%) were considered the most dependable communication options. The most qualified recipients of critical or unexpected results were the attending physicians, followed by residents and fellows. Moreover, most participants (94%) believed that all critical or unexpected cases needed to be documented. An unexpected diagnosis was defined as a finding inconsistent with the clinical information. Finally, most participants believed that in unpredictable situations, the individual pathologist should decide whether communication with the clinical team is necessary. Based on these responses, a written policy was selected and implemented for one year while the documented data were collected and analyzed. Among 33,934 pathology reports from the five hospitals, 177 critical or unexpected cases (0.5%) were detected (Table ), all of which were communicated and documented. One of the most critical functions of pathology reports and laboratory services is to facilitate the clear, accurate, and rapid communication of critical test results (critical values) to care providers. Critical diagnoses are those that might have an immediate impact on patient care; an example is finding a serious infection (e.g., CMV) in an immunocompromised individual. Significant, unexpected diagnoses should be both significant and unexpected, relying heavily on the pathologist's experience and judgment for identification; an example is finding a carcinoma in a uterus removed for leiomyoma. According to Rosai and Ackerman's Surgical Pathology book, "when an urgent decision needs to be made based on pathological findings, the clinician should not wait for the information to reach him/her in a routinely typewritten report!" In the literature, the number of studies evaluating the necessity of determining and reporting critical or unexpected pathology results is quite limited.
At the annual meeting of the Iranian Society of Pathology in Tehran, Iran, Mireskadari reported on a study of 147 pathologists to determine which findings should be considered critical in surgical pathology; nearly 90 different conditions were extracted from that survey. In 2004, Pereira et al. conducted a retrospective review and survey of 2,659 surgical pathology reports based on the perceptions of five clinicians and 11 pathologists regarding critical values in surgical pathology. They identified 13 critical cases (0.49%); 4 of these 13 reports documented phone calls to clinicians (in most cases, at least one day before the final sign-out). Pathologists should reach an agreement with their clinician colleagues on which diagnoses are regarded as critical. Moreover, effective communication and proper documentation in pathology reports are the key components of establishing a critical diagnosis policy. The consequences of a delay or failure in communicating critical diagnoses might be devastating. The main reason for establishing a policy for critical or unexpected diagnoses in surgical pathology is to ensure that the written report is not overlooked. Verbal communication hastens the reporting process; however, since communication between pathologists and clinicians is frequently suboptimal and might result in misunderstandings, all reports should also be conveyed in writing. Furthermore, semi-automated reporting via special codes can improve the quality of patient care. According to the Association of Directors of Anatomic and Surgical Pathology (ADASP), the establishment of critical diagnosis guidelines for anatomic pathology represents a practice improvement and patient safety initiative. The ADASP also recognized that a generic critical diagnosis guideline in anatomic pathology should only be used as a template, because the list needs to be customized in each laboratory following consultation with the relevant clinical service providers. In the meantime, the College of American Pathologists (CAP) added checklist items GEN.41320 and GEN.41330 to its Laboratory General Checklist, requiring laboratories to have written procedures for immediate physician notification when results fall outside specific critical ranges and to document these notifications. The Joint Commission on Accreditation of Healthcare Organizations and the CAP surveyed 1,130 laboratories to determine the current policies and practices for critical diagnoses in anatomic pathology. The survey showed that 75% of laboratories had a written policy for critical diagnoses in anatomic pathology, but only 30% had a list of specific examples of critical diagnoses. Additionally, the effective communication and documentation of critical diagnoses in anatomic pathology are not well addressed in the literature. A study by Coffin in 2007 showed that 9.4% of pediatric surgical pathology accessions were critical and that nearly 80% had been reported and documented before policy implementation; after policy implementation, 97.3% (402/413) of the cases were verbally reported and documented. In the present study, after policy implementation, 0.5% of cases on average were critical or unexpected (individual center rates ranged from 0.02% to 3%), comparable to Pereira's survey results.
Opportunistic infections, namely mucormycosis and CMV infection, were the most frequent critical or unexpected cases in the centers investigated in this study; however, the prevailing infectious organisms may vary between centers with different geographic and health conditions. Although the otorhinolaryngology center and one general center reported the majority of detected cases, other centers also reported significant critical cases, such as severe rejection and necrosis of transplanted organs in the transplantation center. Additionally, the most prevalent critical or unexpected cases differed among centers, highlighting the importance of forming a list and implementing a policy for each center separately. The fact that this newly implemented policy for determining, documenting, and reporting critical or unexpected pathology results was developed based on input from both pathologists and non-pathologists (clinicians) was advantageous; however, it is still impossible to guarantee that all critical or unexpected results will be addressed. Generally, developing a dynamic list and policy that incorporates both pathologists' and physicians' views, together with an ongoing evaluation of success, appears to be an ideal approach for dealing with critical or unexpected cases. Although the concept of the critical value in surgical pathology has recently been accepted by most laboratories, there is no standardization of critical items. More uniform norms for the determination, reporting, and documentation of these cases might be developed by expanding the relevant research and recruiting more pathologists and physicians. Additionally, each medical facility is recommended to compile its own critical or unexpected diagnosis list and SOP for dealing with surgical pathology findings, as these cases vary among facilities.
|
Development of a Novel Phagomagnetic-Assisted Isothermal DNA Amplification System for Endpoint Electrochemical Detection of Listeria monocytogenes
|
d1f9e7a6-e7ef-4cd4-9dcd-34d85ae558d9
|
10136355
|
Microbiology[mh]
|
Listeria monocytogenes is the etiological agent of invasive listeriosis, a severe, albeit sporadic, infectious disease. In 2021, listeriosis was the fifth most reported zoonosis under European Union (EU) surveillance, with 96.5% of cases requiring hospitalization and an associated case-fatality rate of 13.7%. This bacterium may occur and efficiently persist in food-processing facilities, owing to the complex adaptation mechanisms underlying its remarkable capability to cope with the industrially inflicted sublethal hurdles, challenging pathogen eradication. The long-term persistence and potential post-processing cross-contamination pose a serious food safety concern, which is particularly worrisome in ready-to-eat (RTE) foods, which support growth of the bacterium and are intended for consumption without thermal processing. The history of gradually evolving listeriosis outbreaks has propelled the introduction of stringent regulatory policies, rendering compliance with legislation a challenge. Industry commitment to the stipulated policies, particularly the zero-tolerance limit, has triggered the quest for expeditious on-site detection systems to minimize the economic burden of a costly food recall. In fact, the conventional microbiological methods for detecting L. monocytogenes in food matrices, although reliable and accurate, are laborious and time-consuming (five to seven days). Hence, the drawbacks of these standard methods provide a compelling argument for the exploitation of rapid approaches, namely nucleic acid amplification-based methods such as PCR (considered the "gold standard" technique) and real-time PCR. Nonetheless, these techniques are operationally complex and cumbersome, requiring non-portable equipment and specialized technicians to perform the analysis. To circumvent these constraints, loop-mediated isothermal amplification (LAMP) has emerged as a valuable on-site nucleic acid amplification procedure owing to the simplicity of the reaction scheme, swiftness, and cost-efficiency. This molecular technique accomplishes the target DNA amplification at a single reaction temperature, in an expeditious format, resorting to portable and affordable instrumentation. The high specificity of this method relies on four to six core oligonucleotides which hybridize with six to eight distinct regions on the DNA template. Moreover, LAMP has proven superior analytical sensitivity, outperforming PCR-based systems. Beyond the well-documented outstanding LAMP performance, the versatility of the endpoint readout (e.g., turbidimetry, colourimetry, electrochemistry, fluorescence) is an appealing trait, contributing to its practical implementation in resource-scarce laboratories/facilities, or for field purposes. Noteworthy, LAMP assays may require lengthy pre-treatment procedures to isolate and concentrate the target bacterium from the complex food matrix. These methods are of paramount importance to cope efficiently with matrix interferents and/or inhibitors of the LAMP technique, and concomitantly to improve the analytical sensitivity, thereby lowering the limit of detection. Hence, the quest for rapid pre-analytical concentration approaches has swiftly evolved towards the design of novel bioreceptor-based systems, paving the way for the exploitation of aptamers, nucleic acids, antibodies, antimicrobial peptides, and bacteriophages.
Amongst these, (bacterio)phages (viruses that specifically infect bacteria) have emerged as auspicious biorecognition elements owing to their remarkable selectivity, sensitivity, and cost-efficient production, also evidencing notable stability to withstand harsh physicochemical conditions. Notwithstanding its valuable robustness, LAMP cannot discriminate viable virulent cells from non-viable, harmless analogues, which may lead to an overestimation of the bacterial concentration, impairing its implementation for routine monitoring purposes. To address this challenge, complementary procedures have been coupled to LAMP, namely those relying on the DNA-intercalating dye propidium monoazide (PMA), which specifically diffuses through damaged cell membranes, hampering the DNA amplification of dead cells. Nevertheless, matrix interferents may compromise PMA performance, and hence additional pre-processing strategies may be entailed to assure detection reliability. Moreover, this technique depends on the energy-demanding photoactivation of PMA performed with grid electricity, thus hindering the portability of the integrated system for on-field purposes; this complex procedure precludes the application of an expeditious surveillance system. Hence, the most promising approach relies on the integration of bacteriophages which, among the quoted traits, possess the unique ability to discriminate the physiological state of the cell. More precisely, phages have been documented to hold great potential in specifically recognizing the viable but non-culturable (VBNC) state. Listeria monocytogenes cells may persist in this dormant state in food-processing environments, thereby evading detection by conventional methods; phages may therefore constitute a feasible approach to tackle this challenge. Hitherto, a phagomagnetic-assisted LAMP detection scheme targeting L. monocytogenes has remained unexplored. In this sense, in the current work, we envisaged the application of the broad lytic spectrum phage Listex™ P100, a member of the Herelleviridae family, to propose a novel phagomagnetic separation protocol. This strictly virulent listeriaphage was exploited to selectively pre-concentrate viable cells and elicit the ensuing bacterial DNA leakage, owing to host lysis at the last stage of the phage infection cycle. Analogous approaches were formerly documented for the confirmatory identification of Escherichia coli using coliphages. The purpose of the present work was to develop a novel all-in-one integrated system comprising a targeted LAMP assay, assisted by a P100–magnetic platform and coupled with an endpoint electrochemical technique, towards the rapid and accurate screening of L. monocytogenes along the food chain (farm-to-fork). The analytical performance and applicability of the approach were validated in pasteurized milk, a matrix formerly associated with listeriosis outbreaks.
2.1. Reagents and Solutions
Magnesium sulfate heptahydrate, Tris hydrochloride, Tween 20, gelatin from porcine skin, bis(sulfosuccinimidyl)suberate (BS3), and glycerol were purchased from Sigma-Aldrich (St. Louis, MO, USA). Disodium hydrogen phosphate dihydrate and sodium dihydrogen phosphate were acquired from Riedel-de Haën (Seelze, Germany). Sodium hydroxide and sodium molybdate dihydrate were obtained from VWR Chemicals (Maia, Portugal). Sodium chloride was purchased from Panreac Quimica S.A (Barcelona, Spain), and methylene blue (MBlue) was acquired from Thermo Scientific (Waltham, MA, USA). All the chemicals were analytical grade or equivalent and were used as received, without further purification. LISTEX™ P100 bacteriophage was purchased from Micreos Food Safety (Wageningen, The Netherlands). The Micromer®-M magnetic particles (Ø, 2 µm), with a magnetite core coated by a styrene-maleic acid copolymer and a surface functionalized with PEG-NH2 groups (PEG–MBs), were purchased from Micromod® Partikeltechnologie GmbH (Rostock, Germany). Brain heart infusion (BHI) broth and BHI agar were acquired from Biokar Diagnostics (Beauvais, France). The DNA polymerases (Bst LF and Bst 2.0) and deoxyribonucleotide triphosphates (dNTPs) were obtained from New England Biolabs (Ipswich, MA, USA). Agarose was purchased from GRiSP Research Solutions (Porto, Portugal). Further information about the pH buffer solutions and culture medium preparation is detailed in the Supplementary Materials. Ultrapure DNase- and RNase-free water was used for the DNA amplification experiments.

2.2. Microorganisms and Inoculum Preparation
2.2.1. Bacterial Strains and Culture Conditions
Listeria monocytogenes EGD-e (ATCC BAA-679) (phage P100 susceptible) was used as the reference strain to develop and evaluate the phagomagnetic-assisted LAMP procedure and to perform the downstream applications. A cohort of L. monocytogenes and Listeria spp. strains, detailed in the Supplementary Materials, was comprehensively selected to evaluate the LAMP system performance. Stock cultures were preserved at −80 °C in BHI broth supplemented with 20% (v/v) glycerol. Prior to each experiment, the bacterial strains were routinely streaked onto BHI agar and incubated overnight at 37 °C. Afterwards, a single colony was inoculated into BHI broth, grown at 37 °C until the late exponential phase, and sub-cultured (1%, v/v) onto fresh medium under the indicated growth conditions.

2.2.2. Bacteriophage Titration by the Double-Layer Method
The phage Listex™ P100 stock solution presented an initial titre of 10^11 plaque forming units (PFU) mL−1 and was stored in the original saline buffer at 4 °C. The phage titration was performed according to the double-layer method (plaque assay), as formerly described by Kropinski et al. Briefly, phage samples (MB-immobilized or in their free form) were serially 10-fold diluted in SM buffer. The host culture (100 μL of overnight-grown L. monocytogenes ATCC 19116) and aliquots of 100 μL of each decimal phage dilution were mixed with 3 mL of molten LC soft agar. The suspension was poured onto BHI agar plates and incubated at 30 °C. Plaque forming units were enumerated 24 h post-infection.

2.3. Preparation of P100 Modified Magnetic Particles
Commercial PEG–MBs (20 μL, 10 mg mL−1 in H2O) were used as a magnetic platform for P100 bacteriophage loading.
P100 was physically immobilized resorting to an optimized three-step protocol comprising: (i) PEG–MBs sterilization (ethanol 97%, 30 min); (ii) bacteriophage immobilization (140 μL of 1 × 10^9 PFU mL−1 of P100 in 0.01 M citrate buffer pH 5, or citrate buffer only in blank assays, incubated overnight at 350 rpm, 4 °C); and (iii) blockage of the microparticle active binding sites (bovine serum albumin (BSA) 1% (w/v) in 0.1 M PBS pH 7.4, for 8 h, at 350 rpm, 4 °C); all interspersed with several washing steps (3× with 0.01 M PBST after step (i), 2× with 0.01 M PBS after step (ii), and 1× after step (iii)). In the end, the separated bacteriophage-functionalized magnetic particles (P100–MBs) were resuspended in 500 μL of SM buffer (particle concentration of 0.4 mg mL−1). All samples were stored at 4 °C until use. A PCMT Thermoshaker from Grant Instruments (Shepreth, UK) was used for all incubation steps requiring temperature control, and a MagJET separation rack from Thermo Scientific™ was used to perform the magnetic separation between the incubation and washing steps. The effect of the immobilization method (physical or chemical) was also evaluated by applying a covalent immobilization protocol, using BS3 to crosslink the amine surface groups of the PEG–MBs with surface amine moieties of the P100 particles. Briefly, 10 mM of BS3 in 0.01 M PBS was added to the sterilized PEG–MBs suspension and allowed to react under orbital shaking (350 rpm) for 30 min. Afterwards, the particles were washed three times with PBST and incubated with P100 phage particles (10^9 PFU mL−1, in 0.01 M PBS) overnight at 4 °C using the thermo-shaker. The titre of non-immobilized P100 phages in the supernatant (PFU_supernatant) and the initial titre (PFU_initial) were determined by the double-layer method, and the immobilization efficiency (IE) was calculated following Equation (1):

$$IE\ (\%) = \left(1 - \frac{PFU_{\text{supernatant}}}{PFU_{\text{initial}}}\right) \times 100 \quad (1)$$

2.4. Phagomagnetic Separation Protocol: Capture Efficiency
A stationary-phase culture of L. monocytogenes EGD-e was 100-fold diluted in BHI and grown at 37 °C to the exponential phase (optical density at 600 nm (OD600) equal to 0.6). The cells were then harvested by centrifugation (4000× g, for 10 min, at room temperature) and washed thrice with SM buffer. To ascertain the colony-forming units (CFU) of the initial inoculum, the obtained bacterial cell suspension was 10-fold serially diluted in PBS and plated onto BHI agar. Afterwards, for magnetic separation and pre-enrichment, 500 μL of the L. monocytogenes suspension (10^3 CFU mL−1) were added to P100-modified magnetic particles (P100–MBs) previously washed twice with 0.1 M Tris pH 7.2. The optimization of the magnetic separation protocol (depicted in ) was performed using the molybdate assay procedure. Briefly, the L. monocytogenes cells were incubated with the P100–MBs under orbital shaking (250 rpm, 25 °C) for 15 and 30 min. Subsequently, the P100–MB probes with captured cells (Lm-P100–MBs) were magnetically attracted, and the supernatant was exposed to 90 °C for 10 min to accomplish thermal lysis of the remaining bacterial cells. Afterwards, the lysate suspension was mixed 1:1 with Na2MoO4 (20 mM) on a screen-printed carbon working electrode (DRP-110, from DropSens, Oviedo, Spain) for 15 min before the electrochemical measurements. All the results were correlated with a calibration plot (L. monocytogenes cells in CFU mL−1 against molybdate peak current intensity) performed daily under the same experimental conditions. Square wave voltammetry (SWV) scans were obtained with a potentiostat/galvanostat PalmSens 4 (Houten, The Netherlands), using 0.02 V of amplitude and a frequency of 5 Hz. Capture efficiency (%) and specific capture (%) were calculated using Equations (2) and (3), respectively:

$$\text{Capture efficiency}\ (\%) = \frac{N_{\text{initial}} - N_{\text{supernatant}}}{N_{\text{initial}}} \times 100 \quad (2)$$

$$\text{Specific capture}\ (\%) = \text{capture}\ (\%)\ \text{with P100–MBs} - \text{capture}\ (\%)\ \text{with blank MBs} \quad (3)$$

where N_initial and N_supernatant denote the L. monocytogenes cell numbers initially added and remaining in the supernatant, respectively. The effect of P100–MBs mass (16, 32, 64, and 160 µg), pH (5, 6, 7, and 9), and temperature (4, 11, 25, and 37 °C) on capture efficiency and specific capture was studied following this protocol.
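Equations (1)-(3) are simple ratios, and a short Python sketch (our own helper names; the numbers in the example are hypothetical, not measured values) makes the bookkeeping explicit:

```python
def immobilization_efficiency(pfu_initial, pfu_supernatant):
    """Equation (1): % of input phage retained on the beads."""
    return (1 - pfu_supernatant / pfu_initial) * 100

def capture_efficiency(cells_initial, cells_supernatant):
    """Equation (2): % of input L. monocytogenes cells captured."""
    return (cells_initial - cells_supernatant) / cells_initial * 100

def specific_capture(pct_p100_mbs, pct_blank_mbs):
    """Equation (3): capture attributable to the phage rather than
    to non-specific adsorption onto the bare beads."""
    return pct_p100_mbs - pct_blank_mbs

print(immobilization_efficiency(1e9, 2e7))            # 98.0 (%)
print(specific_capture(capture_efficiency(1e3, 150),  # 85% with P100-MBs
                       capture_efficiency(1e3, 900))) # 10% with blank MBs -> 75.0
```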
2.5. Development of a Novel LAMP Assay Targeting prfA
2.5.1. Preparation of Genomic DNA
The bacterial genomic DNA was extracted with a commercial genomic DNA extraction kit (GRS Genomic DNA Kit Bacteria, from GRiSP Research Solutions) according to the manufacturer's instructions. The DNA concentration was estimated utilizing the Qubit™ 1X dsDNA High Sensitivity Assay Kit and the respective fluorometer (Invitrogen, Waltham, MA, USA), and the corresponding DNA integrity was evaluated by agarose gel (0.8%, w/v) electrophoresis. The DNA quality was assessed with the NanoDrop™ One Spectrophotometer (Thermo Scientific).

2.5.2. Design of the L. monocytogenes-Specific LAMP Primers
The positive regulatory factor A (PrfA)-encoding gene was selected as the specific target for L. monocytogenes detection. The sequences of this pleiotropic regulatory gene were retrieved from the National Center for Biotechnology Information database. Afterwards, ClustalW sequence alignment was performed to identify the conserved regions. Three sets of primers targeting the consensus prfA sequence were generated using Primer Explorer software. Each primer set comprised four core primers recognizing six distinct regions of the DNA template, namely two outer displacement primers, F3 and B3 (forward and backward outer primers), and two inner primers, FIP and BIP (forward and backward inner primers, designed to hybridize with the complementary and reverse complementary target sequences, respectively). The melting temperature of each oligonucleotide set was determined theoretically, and the corresponding GC content, secondary structure formation (hairpin structures), and dimerization were evaluated in silico using the OligoEvaluator software. The specificity of the designed primers was examined in silico utilizing the Basic Local Alignment Search Tool to guarantee that the selected oligonucleotides were unique to the desired target sequence and that probe efficiency was not negatively impacted by off-target interactions. The DNA oligo primers utilized for LAMP amplification were synthesized (high-performance liquid chromatography (HPLC) purified) by Stab Vida (Caparica, Portugal) and are listed in the Supplementary Materials.

2.5.3. Optimization of LAMP Reaction System
Optimization of the LAMP reaction conditions comprised the evaluation of different concentrations of deoxyribonucleotide triphosphates (dNTPs; 0.2 to 1.4 mM) and magnesium sulphate (MgSO4; 0.5 to 4 mM), along with the determination of the optimum primer ratio (1:1 to 1:10). The performance of the Bst LF and Bst 2.0 DNA polymerases was also compared. The optimal reaction system consisted of 0.2 μM F3/B3 primers, 0.8 μM FIP/BIP, Bst 2.0 polymerase (8 U), 0.3 mM dNTPs, 2 mM MgSO4, 20 mM Tris–HCl, 10 mM (NH4)2SO4, 50 mM KCl, and 0.1% (v/v) Tween 20 (pH 8.8). In each assay, nuclease-free water was included as a negative control. The LAMP amplification was conducted in a heating block at 62 °C for 50 min and terminated by heat inactivation at 80 °C for 5 min. LAMP amplicons were resolved on a 2% (w/v) agarose gel.

2.5.4. Evaluation of LAMP Assay Specificity
The inclusivity of the newly developed LAMP method was evaluated by resorting to a cohort of 61 L. monocytogenes strains, comprising representatives of the most relevant serotypes (1/2a, 1/2b, 1/2c, and 4b). In line with the proposed food-focused application of the assay, strains isolated from dairy specimens were also included in this cohort. The potential cross-reactivity (exclusivity) of the LAMP technique was further investigated. A panel of genomic DNA samples from Gram-positive and Gram-negative bacteria was analysed using the novel assay system. This cohort ranged from closely related Listeria species to distantly related species, including competitive microbiota (particularly those prevailing in pasteurized milk, namely mesophilic bacteria), with a focus on a dairy-related application.

2.5.5. Determination of Analytical Sensitivity
The LOD95 is defined as the concentration of target DNA at which an amplicon is detected with a probability of 0.95, estimated by probit regression analysis. For this purpose, genomic DNA from an overnight culture of L. monocytogenes EGD-e was isolated and fluorometrically quantified on the Qubit. The bacterium genome is 2.9 Mb long and harbours a single copy of prfA. The corresponding copy number of the gene was determined from the molecular weight of the double-stranded DNA template, following Equation (4), as formerly described:

$$\text{Copies of template} = \frac{\text{ng of double-stranded DNA} \times \text{Avogadro's constant}}{\text{length in base pairs} \times 10^{9} \times 650\ \text{Da}} \quad (4)$$

Experiments were performed on 10-fold serially diluted genomic DNA ranging from 39 ng μL−1 to 0.39 fg μL−1 to determine the limit of detection (LOD95). These DNA concentrations corresponded to 1.25 × 10^7 and 0.125 copies of the genome per LAMP reaction, respectively.
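Equation (4) can be checked numerically. The following Python sketch is our illustration, assuming 1 µL of template per reaction so that 39 ng µL−1 corresponds to 39 ng of input DNA (consistent with the figures quoted above):

```python
AVOGADRO = 6.022e23   # molecules per mole
BP_MASS = 650.0       # average molar mass of one dsDNA base pair, g/mol (Da)

def template_copies(ng_dsdna, length_bp):
    """Equation (4): copies of a dsDNA template contained in `ng_dsdna`."""
    return (ng_dsdna * AVOGADRO) / (length_bp * 1e9 * BP_MASS)

# L. monocytogenes EGD-e genome: ~2.9 Mb with a single prfA copy, so
# genome copies equal prfA copies per reaction.
print(f"{template_copies(39, 2.9e6):.3g}")       # 39 ng      -> ~1.25e7 copies
print(f"{template_copies(3.9e-7, 2.9e6):.3g}")   # 0.39 fg in ng -> ~0.125 copies
```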
2.5.6. PCR Targeting prfA
A formerly developed PCR procedure targeting prfA of L. monocytogenes was used as a standard for comparison with the novel LAMP assay. The master mix comprised 3 mM MgCl2, 150 mM dNTPs, 0.25 μM of each primer (LIP1 and LIP2), 1 U Taq DNA Polymerase, 1× Taq Buffer (Thermo Fisher Scientific), and 0.8 ng of DNA. The amplification was performed in a T100 thermal cycler (Bio-Rad Laboratories, Hercules, CA, USA), and the PCR products were resolved by 0.8% (w/v) agarose gel electrophoresis.

2.6. Validation of the Applicability of the Developed Method in Pasteurized Milk
2.6.1. Application of the Phagomagnetic-Assisted LAMP Method in Milk Samples
The proof-of-concept of the optimized combined system for the rapid and accurate detection of L. monocytogenes was performed in whole pasteurized milk (depicted in A). This food product was selected as a representative model of dairy matrices, which are often associated with listeriosis outbreaks. Pasteurized whole milk was purchased from a local supermarket (Porto, Portugal) and confirmed to be free of culturable L. monocytogenes cells according to the ISO 11290-1:2017 standard. Afterwards, 25 mL of pasteurized milk were aseptically transferred to sterile Falcon tubes, spiked with decimally diluted L. monocytogenes cell suspensions to obtain bacterial loads in the range of 10 to 10^3 CFU mL−1, and thoroughly homogenized. The Listeria inoculum volume represented 1% of the total sample volume. In parallel, for each challenge experiment, an un-spiked milk sample was used as a negative control. Three independent replicates were prepared for each sample. The previously optimized magnetic separation protocol was performed to capture and pre-concentrate bacterial cells from those artificially contaminated samples. In brief, 10 mL of the spiked and un-spiked samples were divided into 1 mL aliquots, 64 µg of P100–MBs were added to each aliquot, and the mixture was incubated at 25 °C under orbital shaking for 30 min. The same protocol was also conducted with magnetic particles devoid of P100 (negative control). After the magnetic separation procedure, the Lm-P100–MB complex from each aliquot was resuspended in PBS, pooled, and spread-plated onto BHI and PALCAM Listeria selective agar for enumeration of the adsorbed L. monocytogenes cells. Four independent experiments were performed in triplicate. Concomitantly, L. monocytogenes suspensions with theoretically 10 or 50 cells per 10 mL were prepared from the 10^2 CFU mL−1 bacterial cell suspension and submitted to the magnetic separation protocol detailed above. The collected Lm-P100–MB complex was rinsed thrice with PBS, resuspended in BHI, and incubated at 30 °C for an additional 30 min to elicit the lytic cycle (total incubation time of 60 min). Afterwards, the lysed samples were centrifuged at 12,000× g for 5 min, the supernatant was collected, and the released genomic DNA was LAMP amplified. These experiments included the target sample, a blank (Lm–MB) control, a positive control (DNA extracted from an L. monocytogenes pure culture), and a negative control (nuclease-free water). The amplicons and the corresponding control samples were resolved by conventional gel electrophoresis. The LOD95 of the P100–MB-assisted LAMP method was determined using a logistic regression model.
2.6.2. Endpoint Electrochemical Detection
Methylene blue was used as an intercalating redox probe, and a reduction of the MBlue peak current intensity was observed in the presence of the LAMP amplification products. The LAMP product was mixed 1:1 with the methylene blue solution and left to react for 15 min. Afterwards, 20 µL of the mixture were dropped onto a screen-printed carbon electrode (AC1-W4-R2, from BVT Technologies, Strázek, Czech Republic), and the square wave voltammetry (SWV) scans were obtained with a portable potentiostat/galvanostat from PalmSens 4, using an amplitude of 0.025 V, a potential step of 0.004 V, and a frequency of 100 Hz (as depicted in B). Various MBlue concentrations (5–25 mM) were investigated to determine the suitable amount of the redox probe.

2.7. Statistical Analysis
Statistical analysis was performed using SPSS Statistics software version 28 (IBM®, Chicago, IL, USA). Analysis of variance (ANOVA) was used to determine differences between groups (with Tukey's post hoc test for pairwise comparisons) when all the necessary assumptions were validated, namely the normality and homoscedasticity of the data. Normality was assessed by the Shapiro-Wilk or Kolmogorov-Smirnov tests, and homogeneity of variances by Levene's test. When these assumptions were not verified, non-parametric alternatives were used, namely the Kruskal-Wallis test (in place of ANOVA) and the Mann-Whitney test (for pairwise comparisons). The significance level assumed in all tests performed was 5%.
2.1. Chemicals and Biological Materials
Magnesium sulfate heptahydrate, Tris hydrochloride, Tween 20, gelatin from porcine skin, bis(sulfosuccinimidyl)suberate (BS3), and glycerol were purchased from Sigma-Aldrich (St. Louis, MO, USA). Disodium hydrogen phosphate dihydrate and sodium dihydrogen phosphate were acquired from Riedel-de Haën (Seelze, Germany). Sodium hydroxide and sodium molybdate dihydrate were obtained from VWR Chemicals (Maia, Portugal). Sodium chloride was purchased from Panreac Química S.A. (Barcelona, Spain), and methylene blue (MBlue) was acquired from Thermo Scientific (Waltham, MA, USA). All chemicals were analytical grade or equivalent and were used as received, without further purification. LISTEX™ P100 bacteriophage was purchased from Micreos Food Safety (Wageningen, The Netherlands). Micromer®-M magnetic particles (Ø 2 µm), with a magnetite core coated with a styrene–maleic acid copolymer and a surface functionalized with PEG-NH₂ groups (PEG–MBs), were purchased from Micromod® Partikeltechnologie GmbH (Rostock, Germany). Brain heart infusion (BHI) broth and BHI agar were acquired from Biokar Diagnostics (Beauvais, France). The DNA polymerases (Bst LF and Bst 2.0) and deoxyribonucleotide triphosphates (dNTPs) were obtained from New England Biolabs (Ipswich, MA, USA). Agarose was purchased from GRiSP Research Solutions (Porto, Portugal). Further information on pH buffer solutions and culture medium preparation is detailed in the Supplementary Material. Ultrapure DNase- and RNase-free water was used for DNA amplification experiments.
2.2.1. Bacterial Strains and Culture Conditions
Listeria monocytogenes EGD-e (ATCC BAA-679), a phage P100-susceptible strain, was used as the reference strain to develop and evaluate the phagomagnetic-assisted LAMP procedure and to perform downstream applications. A cohort of L. monocytogenes and Listeria spp. strains, detailed in the Supplementary Material, was comprehensively selected to evaluate the performance of the LAMP system. Stock cultures were preserved at −80 °C in BHI broth supplemented with 20% (v/v) glycerol. Prior to each experiment, the bacterial strains were routinely streaked onto BHI agar and incubated overnight at 37 °C. Afterwards, a single colony was inoculated into BHI broth, grown at 37 °C until the late exponential phase, and sub-cultured (1%, v/v) into fresh medium under the indicated growth conditions.
2.2.2. Bacteriophage Titration by the Double-Layer Method
The phage Listex™ P100 stock solution presented an initial titre of 10¹¹ plaque-forming units (PFU) mL⁻¹ and was stored in the original saline buffer at 4 °C. Phage titration was performed according to the double-layer method (plaque assay), as formerly described by Kropinski et al. Briefly, phage samples (MB-immobilized or in free form) were serially 10-fold diluted in SM buffer. The host culture (100 µL of overnight-grown L. monocytogenes ATCC 19116) and 100 µL aliquots of each decimal phage dilution were mixed with 3 mL of molten LC soft agar. The suspension was poured onto BHI agar plates and incubated at 30 °C. Plaque-forming units were enumerated 24 h post-infection.
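For routine work, the titre arithmetic behind the plaque assay is simple enough to script. A minimal sketch in Python follows; the function name and the example counts are illustrative, not part of the original protocol:

```python
def titre_pfu_per_ml(plaque_count: int, dilution_exponent: int,
                     plated_volume_ml: float = 0.1) -> float:
    """Back-calculate a phage titre from one countable double-layer plate.

    plaque_count      -- plaques counted on the plate (ideally 30-300)
    dilution_exponent -- n of the 10^-n decimal dilution that was plated
    plated_volume_ml  -- volume of diluted phage mixed into the soft agar
                         (100 uL in the protocol above)
    """
    return plaque_count * (10 ** dilution_exponent) / plated_volume_ml


# Hypothetical example: 87 plaques on the 10^-8 plate of the stock
print(f"{titre_pfu_per_ml(87, 8):.2e} PFU/mL")  # ~8.70e+10, close to the 10^11 stock titre
```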
2.3. P100 Bacteriophage Immobilization onto Magnetic Particles
Commercial PEG–MBs (20 µL, 10 mg mL⁻¹ in H₂O) were used as the magnetic platform for P100 bacteriophage loading. P100 was physically immobilized using an optimized three-step protocol comprising: (i) PEG–MB sterilization (97% ethanol, 30 min); (ii) bacteriophage immobilization (140 µL of 1 × 10⁹ PFU mL⁻¹ P100 in 0.01 M citrate buffer pH 5, or citrate buffer alone in blank assays, incubated overnight at 350 rpm, 4 °C); and (iii) blockage of the remaining active binding sites of the microparticles (1% (w/v) bovine serum albumin (BSA) in 0.1 M PBS pH 7.4, for 8 h, at 350 rpm, 4 °C); all interspersed with several washing steps (3× with 0.01 M PBST after step (i), 2× with 0.01 M PBS after step (ii), and 1× after step (iii)). Finally, the separated bacteriophage-functionalized magnetic particles (P100–MBs) were resuspended in 500 µL of SM buffer (particle concentration of 0.4 mg mL⁻¹). All samples were stored at 4 °C until use. A PCMT Thermoshaker from Grant Instruments (Shepreth, UK) was used for all temperature-controlled incubation steps. A MagJET separation rack from Thermo Scientific™ was used to perform the magnetic separations between the incubation and washing steps. The effect of the immobilization method (physical or chemical) was also evaluated by applying a covalent immobilization protocol, using bis(sulfosuccinimidyl)suberate (BS3) to crosslink the amine surface groups of the PEG–MBs with surface amine moieties of the P100 particles. Briefly, 10 mM BS3 in 0.01 M PBS was added to the sterilized PEG–MB suspension and allowed to react under orbital shaking (350 rpm) for 30 min. Afterwards, the particles were washed three times with PBST and incubated with P100 phage particles (10⁹ PFU mL⁻¹, 0.01 M PBS) overnight at 4 °C in the thermo-shaker. The titre of non-immobilized P100 phages in the supernatant (PFU_supernatant) and the initial titre (PFU_initial) were determined by the double-layer method. The immobilization efficiency (IE) was calculated following Equation (1):
\[ \mathrm{IE}\,(\%) = \left(1 - \frac{\mathrm{PFU}_{\mathrm{supernatant}}}{\mathrm{PFU}_{\mathrm{initial}}}\right) \times 100 \tag{1} \]
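Equation (1) reduces to a one-line computation; the sketch below (with illustrative titres only) shows how the two double-layer counts translate into an immobilization efficiency:

```python
def immobilization_efficiency(pfu_initial: float, pfu_supernatant: float) -> float:
    """Equation (1): percentage of the input phage retained on the beads."""
    return (1.0 - pfu_supernatant / pfu_initial) * 100.0


# Hypothetical titres for the loading suspension and the post-incubation
# supernatant of a pH 7 assay:
print(immobilization_efficiency(1.0e9, 2.3e8))  # -> 77.0 (%)
```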
2.4. Phagomagnetic Separation of L. monocytogenes
A stationary-phase culture of L. monocytogenes EGD-e was 100-fold diluted in BHI and grown at 37 °C to the exponential phase (optical density at 600 nm (OD₆₀₀) of 0.6). The cells were harvested by centrifugation (4000× g, 10 min, room temperature) and washed three times with SM buffer. To ascertain the colony-forming units (CFU) of the initial inoculum, the bacterial cell suspension was 10-fold serially diluted in PBS and plated onto BHI agar. Afterwards, for magnetic separation and pre-enrichment, 500 µL of the L. monocytogenes suspension (10³ CFU mL⁻¹) were added to P100-modified magnetic particles (P100–MBs) previously washed twice with 0.1 M Tris pH 7.2. The magnetic separation protocol was optimized using the molybdate assay procedure. Briefly, the L. monocytogenes cells were incubated with the P100–MBs under orbital shaking (250 rpm, 25 °C) for 15 or 30 min. Subsequently, the P100–MB probes with captured cells (Lm-P100–MBs) were magnetically attracted, and the supernatant was exposed to 90 °C for 10 min to thermally lyse the bacterial cells remaining in it. Afterwards, the lysate suspension was mixed 1:1 with Na₂MoO₄ (20 mM) on a screen-printed carbon working electrode (DRP-110, DropSens, Oviedo, Spain) for 15 min before the electrochemical measurements. All results were correlated with a calibration plot (L. monocytogenes cells in CFU mL⁻¹ against molybdate peak current intensity) performed daily under the same experimental conditions. Square wave voltammetry (SWV) scans were obtained with a PalmSens 4 potentiostat/galvanostat (Houten, The Netherlands), using an amplitude of 0.02 V and a frequency of 5 Hz. Capture efficiency (%) and specific capture (%) were calculated using Equations (2) and (3), respectively:
\[ \text{Capture efficiency}\,(\%) = \frac{Lm_{\mathrm{initial}} - Lm_{\mathrm{supernatant}}}{Lm_{\mathrm{initial}}} \times 100 \tag{2} \]
\[ \text{Specific capture} = \%\,\text{capture with P100–MBs} - \%\,\text{capture with blank MBs} \tag{3} \]
where Lm_initial and Lm_supernatant denote the L. monocytogenes cell numbers in the initial inoculum and in the post-capture supernatant, respectively. The effects of P100–MB mass (16, 32, 64, and 160 µg), pH (5, 6, 7, and 9), and temperature (4, 11, 25, and 37 °C) on the capture efficiency and specific capture were studied following this protocol.
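Equations (2) and (3), together with the daily molybdate calibration line, map onto a few lines of Python. In the sketch below the calibration points are placeholders standing in for real SWV peak currents, so the numbers are illustrative only:

```python
import numpy as np

def capture_efficiency(cells_initial: float, cells_supernatant: float) -> float:
    """Equation (2): % of L. monocytogenes cells removed from the suspension."""
    return (cells_initial - cells_supernatant) / cells_initial * 100.0

def specific_capture(pct_p100_mbs: float, pct_blank_mbs: float) -> float:
    """Equation (3): capture attributable to the immobilized phage alone."""
    return pct_p100_mbs - pct_blank_mbs

# Daily calibration: molybdate SWV peak current against log10(CFU/mL).
log_cfu = np.log10([1e1, 1e2, 1e3, 1e4])            # placeholder standards
peak_ua = np.array([0.8, 1.9, 3.1, 4.2])            # placeholder currents (uA)
slope, intercept = np.polyfit(log_cfu, peak_ua, 1)  # linear fit

def cells_from_current(i_peak_ua: float) -> float:
    """Invert the calibration line to estimate CFU/mL left in a supernatant."""
    return 10 ** ((i_peak_ua - intercept) / slope)
```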
2.5.1. Preparation of Genomic DNA
The bacterial genomic DNA was extracted with a commercial kit (GRS Genomic DNA Kit Bacteria, GRiSP Research Solutions) according to the manufacturer's instructions. The DNA concentration was estimated with the Qubit™ 1X dsDNA High Sensitivity Assay Kit and the respective fluorometer (Invitrogen, Waltham, MA, USA), and DNA integrity was evaluated by agarose gel (0.8%, w/v) electrophoresis. DNA quality was assessed with a NanoDrop™ One spectrophotometer (Thermo Scientific).
2.5.2. Design of the L. monocytogenes-Specific LAMP Primers
The positive regulatory factor A (PrfA)-encoding gene was selected as the specific target for L. monocytogenes detection. The sequences of this pleiotropic regulatory gene were retrieved from the National Center for Biotechnology Information database, and a ClustalW alignment was performed to identify the conserved regions. Three sets of primers targeting the consensus prfA sequence were generated using the Primer Explorer software. Each primer set comprised four core primers recognizing six distinct regions of the DNA template, namely two outer displacement primers, F3 and B3 (forward and backward outer primers), and two inner primers, FIP and BIP (forward and backward inner primers, designed to hybridize with the complementary and reverse-complementary target sequences, respectively). The melting temperature of each oligonucleotide set was determined theoretically. The corresponding GC content, secondary structure formation (hairpins), and dimerization were evaluated in silico using the OligoEvaluator software. The specificity of the designed primers was examined in silico with the Basic Local Alignment Search Tool to guarantee that the selected oligonucleotides were unique to the desired target sequence and that probe efficiency would not be negatively impacted by off-target interactions. The DNA oligo primers used for LAMP amplification were synthesized (high-performance liquid chromatography (HPLC) purified) by Stab Vida (Caparica, Portugal) and are listed in the Supplementary Material.
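The in-silico checks described above (GC content, melting temperature) are straightforward to reproduce with Biopython, assuming a recent release (≥1.80, for gc_fraction). The sequences below are hypothetical stand-ins, not the published prfA set, which is given in the Supplementary Material:

```python
from Bio.Seq import Seq
from Bio.SeqUtils import MeltingTemp as mt, gc_fraction

# Hypothetical candidate primers -- NOT the actual prfA primer set.
candidates = {
    "F3":  "GGTGCAACCTATCTTAAGCA",
    "B3":  "CCATACGTGGACAACTTGTT",
    "FIP": "TGGGATTGCTGTTGCGTTACACAGCTTGGAGATACTGGT",
    "BIP": "ACCGTGGTTTACTGCCAGTTGATCCATGTGACCTTGGATG",
}

for name, seq in candidates.items():
    s = Seq(seq)
    tm = mt.Tm_NN(s)           # nearest-neighbour melting temperature (deg C)
    gc = 100 * gc_fraction(s)  # GC content in percent
    print(f"{name}: length={len(s)} GC={gc:.1f}% Tm={tm:.1f} C")
```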
2.5.3. Optimization of LAMP Reaction System
Optimization of the LAMP reaction conditions comprised the evaluation of different concentrations of deoxyribonucleotide triphosphates (dNTPs; 0.2 to 1.4 mM) and magnesium sulphate (MgSO₄; 0.5 to 4 mM), along with the determination of the optimal primer ratio (1:1 to 1:10). The activities of the Bst LF and Bst 2.0 DNA polymerases were also compared. The optimal reaction system consisted of 0.2 µM F3/B3 primers, 0.8 µM FIP/BIP primers, Bst 2.0 polymerase (8 U), 0.3 mM dNTPs, 2 mM MgSO₄, 20 mM Tris–HCl, 10 mM (NH₄)₂SO₄, 50 mM KCl, and 0.1% (v/v) Tween 20 (pH 8.8). In each assay, nuclease-free water was included as a negative control. LAMP amplification was conducted in a heating block at 62 °C for 50 min and terminated by heat inactivation at 80 °C for 5 min. LAMP amplicons were resolved on 2% (w/v) agarose gels.
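When assembling the optimized mix, the C₁V₁ = C₂V₂ pipetting volumes can be tabulated automatically. In the sketch below, only the final concentrations come from the optimization above; the stock concentrations and reaction volume are assumptions for illustration:

```python
REACTION_UL = 25.0  # assumed final reaction volume (uL)

# reagent: (final concentration, assumed stock concentration), same unit per row
recipe = {
    "F3/B3 primers (uM)":   (0.2, 10.0),
    "FIP/BIP primers (uM)": (0.8, 40.0),
    "dNTPs (mM)":           (0.3, 10.0),
    "MgSO4 (mM)":           (2.0, 100.0),
}

for reagent, (c_final, c_stock) in recipe.items():
    volume_ul = c_final * REACTION_UL / c_stock  # C1*V1 = C2*V2
    print(f"{reagent}: {volume_ul:.2f} uL per reaction")
```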
2.5.4. Evaluation of LAMP Assay Specificity
The inclusivity of the newly developed LAMP method was evaluated using a cohort of 61 L. monocytogenes strains (listed in the Supplementary Material), comprising representatives of the most relevant serotypes (1/2a, 1/2b, 1/2c, and 4b). In line with the proposed food-focused application of the assay, strains isolated from dairy specimens were also included in this cohort. The potential cross-reactivity (exclusivity) of the LAMP technique was further investigated. A panel of genomic DNA samples from Gram-positive and Gram-negative bacteria was analysed with the novel assay. This panel ranged from closely related Listeria species to distantly related species, including competitive microbiota (particularly those prevailing in pasteurized milk, namely mesophilic bacteria), with a focus on a dairy-related application.
2.5.5. Determination of Analytical Sensitivity
The LOD₉₅ is defined as the concentration of target DNA at which an amplicon is detected with a probability of 0.95, estimated here by probit regression analysis. For this purpose, genomic DNA from an overnight culture of L. monocytogenes EGD-e was isolated and fluorometrically quantified on the Qubit. The bacterial genome is 2.9 Mb long and harbours a single copy of prfA. The copy number of the gene was determined from the molecular weight of the double-stranded DNA template, following Equation (4), as formerly described:
\[ \text{Copies of template} = \frac{\text{ng of double-stranded DNA} \times \text{Avogadro's constant}}{\text{length in base pairs} \times 10^{9} \times 650\,\text{Da}} \tag{4} \]
Experiments were performed on 10-fold serially diluted genomic DNA ranging from 39 ng µL⁻¹ to 0.39 fg µL⁻¹ to determine the limit of detection (LOD₉₅). These DNA concentrations corresponded to 1.25 × 10⁷ and 0.125 copies of the genome per LAMP reaction, respectively.
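Equation (4) and the dilution series translate directly into a few lines of Python; a sketch, assuming 1 µL of template per reaction so that copies per µL equal copies per reaction:

```python
AVOGADRO = 6.022e23  # mol^-1

def template_copies(ng_dsdna: float, genome_bp: float = 2.9e6) -> float:
    """Equation (4): copies of a single-copy gene in `ng_dsdna` of genomic DNA,
    taking 650 Da as the average mass of one double-stranded base pair."""
    return ng_dsdna * AVOGADRO / (genome_bp * 1e9 * 650)

# Ten-fold series from 39 ng/uL down to 0.39 fg/uL (9 levels):
for exp in range(9):
    ng_per_ul = 39 / 10 ** exp
    print(f"{ng_per_ul:.2e} ng/uL -> {template_copies(ng_per_ul):.3g} copies/rxn")
# The first and last levels reproduce the 1.25e7 and 0.125 copies quoted above.
```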
2.5.6. PCR Targeting prfA
A formerly developed PCR procedure targeting prfA of L. monocytogenes was used as a standard for comparison with the novel LAMP assay. The master mix comprised 3 mM MgCl₂, 150 µM dNTPs, 0.25 µM of each primer (LIP1 and LIP2), 1 U Taq DNA polymerase, 1× Taq buffer (Thermo Fisher Scientific), and 0.8 ng of DNA. Amplification was performed in a T100 thermal cycler (Bio-Rad Laboratories, Hercules, CA, USA), and the PCR products were resolved by 0.8% (w/v) agarose gel electrophoresis.
2.6.1. Application of Phagomagnetic-Assisted LAMP Method in Milk Samples
The proof-of-concept of the optimized combined system for the rapid and accurate detection of L. monocytogenes was performed in whole pasteurized milk. This food product was selected as a representative model of dairy matrices, which are often associated with listeriosis outbreaks. Pasteurized whole milk was purchased from a local supermarket (Porto, Portugal) and confirmed to be free of culturable L. monocytogenes cells according to the ISO 11290-1:2017 standard. Afterwards, 25 mL of pasteurized milk were aseptically transferred to sterile Falcon tubes, spiked with decimally diluted L. monocytogenes cell suspensions to obtain bacterial loads in the range of 10 to 10³ CFU mL⁻¹, and thoroughly homogenized. The Listeria inoculum volume represented 1% of the total sample volume. In parallel, for each challenge experiment, an un-spiked milk sample was used as a negative control. Three independent replicates were prepared for each sample. The previously optimized magnetic separation protocol was performed to capture and pre-concentrate bacterial cells from the artificially contaminated samples. In brief, 10 mL of the spiked and un-spiked samples were divided into 1 mL aliquots, 64 µg of P100–MBs were added to each aliquot, and the mixtures were incubated at 25 °C under orbital shaking for 30 min. The same protocol was also conducted with magnetic particles devoid of P100 (negative control). After magnetic separation, the Lm-P100–MB complexes from each aliquot were resuspended in PBS, pooled, and spread-plated onto BHI and PALCAM Listeria selective agar for enumeration of the adsorbed L. monocytogenes cells. Four independent experiments were performed in triplicate. Concomitantly, L. monocytogenes suspensions with a theoretical 10 or 50 cells per 10 mL were prepared from the 10² CFU mL⁻¹ bacterial cell suspension and submitted to the magnetic separation protocol detailed above. The collected Lm-P100–MB complex was rinsed three times with PBS, resuspended in BHI, and incubated at 30 °C for an additional 30 min to elicit the lytic cycle (total incubation time of 60 min). Afterwards, the lysed samples were centrifuged at 12,000× g for 5 min, the supernatant was collected, and the released genomic DNA was LAMP-amplified. These experiments included the target sample, a blank (Lm–MB) control, a positive control (DNA extracted from an L. monocytogenes pure culture), and a negative control (nuclease-free water). The amplicons and the corresponding control samples were resolved by conventional gel electrophoresis. The LOD₉₅ of the P100–MB-assisted LAMP method was determined using a logistic regression model.
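The logistic LOD₉₅ fit at the end of this protocol can be reproduced with statsmodels; the sketch below uses hypothetical 0/1 detection outcomes per replicate, purely to illustrate the computation:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical design: five spike levels (log10 CFU/mL), six replicates each,
# with 0/1 LAMP outcomes per replicate -- placeholders, not measured data.
log_conc = np.repeat([0.0, 0.5, 1.0, 1.5, 2.0], 6)
detected = np.array([0, 0, 1, 0, 0, 1,
                     0, 1, 1, 0, 1, 1,
                     1, 1, 0, 1, 1, 1,
                     1, 1, 1, 1, 1, 1,
                     1, 1, 1, 1, 1, 1])

fit = sm.Logit(detected, sm.add_constant(log_conc)).fit(disp=0)
b0, b1 = fit.params

# Solve logit(0.95) = b0 + b1 * log10(c) for the 95%-detection concentration.
lod95 = 10 ** ((np.log(0.95 / 0.05) - b0) / b1)
print(f"LOD95 ~ {lod95:.1f} CFU/mL")
```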
2.6.2. Endpoint Electrochemical Detection
Methylene blue was used as an intercalating redox probe, and a reduction of the MBlue peak current intensity was observed in the presence of the LAMP amplification products. The LAMP product was mixed 1:1 with the methylene blue solution and left to react for 15 min. Afterwards, 20 µL of the mixture were dropped onto a screen-printed carbon electrode (AC1-W4-R2, BVT Technologies, Strázek, Czech Republic), and square wave voltammetry (SWV) scans were obtained with a portable PalmSens 4 potentiostat/galvanostat, using an amplitude of 0.025 V, a potential step of 0.004 V, and a frequency of 100 Hz. Various MBlue concentrations (5–25 mM) were investigated to determine the suitable amount of redox probe.
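Because a positive reaction suppresses the MBlue peak, a convenient way to score the SWV readout is the relative current drop against the no-template blank; a minimal helper, with illustrative names and example currents:

```python
def mblue_suppression(i_blank_ua: float, i_sample_ua: float) -> float:
    """Percentage drop of the MBlue SWV peak current versus the blank;
    larger values indicate more amplicon for the probe to intercalate into."""
    return (i_blank_ua - i_sample_ua) / i_blank_ua * 100.0


# Hypothetical peaks: 12.4 uA for the blank, 3.1 uA for a positive reaction
print(f"{mblue_suppression(12.4, 3.1):.1f} % suppression")  # -> 75.0 %
```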
2.7. Statistical Analysis
Statistical analyses were performed with SPSS Statistics version 28 (IBM®, Chicago, IL, USA). Analysis of variance (ANOVA) was used to determine differences between groups (with Tukey's post hoc test for pairwise comparisons) when the necessary assumptions, namely normality and homoscedasticity of the data, were validated. Normality was assessed by the Shapiro–Wilk or Kolmogorov–Smirnov tests and homogeneity of variances by Levene's test. When these assumptions were not verified, the non-parametric alternatives, namely the Kruskal–Wallis and Mann–Whitney tests, were used. The significance level assumed in all tests was 5%.
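The same decision flow can be scripted outside SPSS; a sketch with SciPy (≥1.8, for stats.tukey_hsd), where `groups` is a list of replicate arrays, one per experimental condition:

```python
from scipy import stats

ALPHA = 0.05

def compare_groups(groups):
    """ANOVA with Tukey post hoc when the assumptions hold, otherwise the
    non-parametric fallbacks described above."""
    normal = all(stats.shapiro(g).pvalue > ALPHA for g in groups)
    equal_var = stats.levene(*groups).pvalue > ALPHA
    if normal and equal_var:
        p = stats.f_oneway(*groups).pvalue       # one-way ANOVA
        if p < ALPHA:
            print(stats.tukey_hsd(*groups))      # pairwise comparisons
        return "ANOVA", p
    if len(groups) == 2:
        return "Mann-Whitney", stats.mannwhitneyu(*groups).pvalue
    return "Kruskal-Wallis", stats.kruskal(*groups).pvalue
```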
3.1. Effect of Incubation Solution pH and Immobilization Method
Virions such as phage P100 are permanent dipoles, with a negatively charged head and a positively charged tail, and can orient their capsids or tails on a charged surface through electrostatic interactions. Hence, determining the surface charge of the PEG-NH₂–MBs and of P100 is a prerequisite for the proper characterization of the MBs and indispensable for the development of physisorption immobilization protocols. The isoelectric point of the PEG-NH₂–MB particles was estimated to be approximately 5.3; therefore, a positive net surface charge is expected below pH 5.3 and a negative charge at higher pH values. Dynamic light scattering (DLS) experiments on P100 suggested aggregation or changes in the phage configuration at pH ≤ 4.4, judged against the expected P100 size of about 90 × 198 nm (head × tail). These results indicate that aggregation does not depend on the isoelectric point of the whole phage P100, previously predicted at 5.67, but rather reflects the individual amino acid composition of the capsid and tail. Hereupon, to avoid confounding effects of virus aggregation, the optimization of the immobilization protocol proceeded within a pH range of 4 to 9, in which the phages were preferentially monodispersed. The immobilization of phage particles on the surface of the PEG-NH₂–MBs guarantees neither successful bacterial capture nor the antibacterial lytic activity of the functionalized magnetic material. Listex P100 is a tailed, net-charged, asymmetric phage capable of recognizing and adsorbing to L. monocytogenes through the phage receptor-binding proteins present in its tail. Thus, the proper orientation of the immobilized phage on the magnetic particle is a key point in achieving optimum capture efficiency. Accordingly, besides the estimation of the immobilization efficiency (%, Equation (1)), the infectivity retention of the P100–MBs was also considered in the optimization of the P100 immobilization protocol. Herein, the influence of the immobilization method (physical adsorption/electrostatic or covalent) and of the incubation solution pH on the orientation of the immobilized phage was studied. The overall results evidenced that physical immobilization has the potential to place a greater number of active (properly oriented) phages on the PEG–MBs than the non-oriented covalent binding protocol. The best P100 immobilization efficiency was achieved at pH 7 (77%) and the poorest at pH 4 (38%), close to the covalent protocol result (42%). In contrast, the best lytic activity was observed at pH 4, suggesting a preferential charge-related "tail-upward" orientation of the immobilized P100, which improved the probability of bacterial recognition despite the lower phage concentration on the surface. The DLS studies and the P100 electric dipole moment, together with the MB zeta potentials (+6.42 and +1.83 mV at pH 4 and 5, respectively), may support these results. Nonetheless, the interaction of bacteriophages at solid–water interfaces is very complex and cannot be explained solely on the basis of the sorbent surface and the phage isoelectric point, since hydrophobic effects and other minor interactions (e.g., hydrogen bonding, steric hindrance) can also favour P100 adhesion, albeit more weakly and reversibly than under electrostatic conditions.
A favourable contribution of these phenomena was empirically observed in the experiments performed at pH 7 and 9, which achieved high phage immobilization efficiency despite the predicted weak electrostatic forces. Considering the maintenance of P100 stability (aggregate formation at pH 4) and the likely irreversible electrostatic physisorption, a pH 5 immobilization solution was selected for the optimized P100 immobilization protocol used in the subsequent capture experiments. Stability studies of the optimized P100–MBs were also conducted and disclosed remarkable stability to changes in ionic strength and pH, both immediately after physisorption (see the immobilization protocol) and upon long-term storage, with 90% of the initial lytic activity maintained after 8 weeks.
3.2. Influence of Non-Specific Adsorption: Surface Blocking Step Optimization
The P100–MB surface blocking step was optimized with respect to critical variables such as the concentration and incubation time of the blocking solution. This study evaluated and compared two standard blocking agents (BSA and casein) with distinct sizes and adsorption strengths on hydrophilic surfaces, in order to maximize the blocking of the unmodified sites of the PEG–MBs while ensuring reduced steric hindrance of the attached P100. The effects of BSA and casein concentration (%, w/v) on P100–MB specific capture and non-specific adsorption were evaluated for blocking times of 1 h and 8 h. The 1% BSA solution had a positive effect in reducing non-specific adsorption (lower capture efficiency in blank assays with PEG–MBs) in the 1 h assays, despite exhibiting a relatively low specific capture compared with the casein blocking protocols, even after the BSA concentration was increased to 2%. Contrastingly, casein demonstrated a weaker influence in reducing non-specific adsorption, compromising the selectivity of the separation method under development, which is intended to be applied to complex food matrices. Henceforth, a 1% BSA concentration was selected for the blocking time optimization. Increasing the blocking time from 1 to 8 h also had a positive effect on P100–MB specific capture, yielding a non-specific adsorption six-fold lower than that of unblocked blank PEG–MBs.
3.3. Phagomagnetic Separation Protocol Optimization
The experimental conditions of the phagomagnetic separation protocol (P100–MB mass, pH, and temperature) were optimized using the culture-independent molybdophosphate procedure. Briefly, this method is based on the reaction of the phosphate moieties of the bacterial DNA backbone with sodium molybdate to form an insoluble redox-active molybdophosphate precipitate, which can be electrochemically quantified and indirectly correlated to the bacterial load of the initial inoculum to calculate the specific capture efficiency (%). To test the practicability and viability of this approach, live and thermally lysed L. monocytogenes cells (10³ CFU mL⁻¹) were electrochemically quantified using a disposable screen-printed carbon electrode (SPCE). The thermally lysed cell solution showed two redox peaks, at 0.19 V and 0.30 V, characteristic of the different valence states of the molybdate present in the precipitate formed on the SPCE. Contrastingly, only residual redox peaks were displayed by the living cell solution, demonstrating the feasibility of the method for quantifying the bacterial DNA released from the cells into the supernatant immediately after the phagomagnetic capture.
The results of the phagomagnetic capture optimization unveiled a general increase in specific capture rates with longer capture times when comparing the 15 and 30 min protocols. Longer times were not tested, to avoid interference from the lytic effect of P100 on the study. Regarding the pH and MB mass variables, alkaline incubation solutions (pH 9) and high concentrations of P100–MBs were shown to impair the capture efficiency at low bacterial loads; thus, a mass between 16 and 32 µg is adequate for the contamination level evaluated (10³ CFU mL⁻¹). Moreover, higher temperatures (25 and 37 °C) appeared to promote a more effective capture. Therefore, an incubation solution of pH 7, a temperature of 25 °C, and a magnetic probe mass of 32 µg were selected as the reference conditions, yielding a capture efficiency of 80% and a specific capture of 60.6% (30 min capture time). Zhou et al. also evaluated the performance of magnetic particles biofunctionalized with phage P100 (physically immobilized) and documented a significantly lower capture efficiency (40–50%) than the value obtained in the current work.
3.4. LAMP Assay Targeting prfA
We then sought to develop a novel LAMP system to be coupled with the previously optimized phagomagnetic separation platform towards the specific detection of viable L. monocytogenes. Following LAMP optimization, the performance of the method was validated through specificity evaluation (inclusivity and exclusivity) and determination of the analytical sensitivity.
3.4.1. Primers Efficiency Evaluation
An experiment was conducted to ascertain the most efficient LAMP primer set amongst the three sets of four oligonucleotides designed. Sets 2 and 3 were not capable of consistently hybridizing to the two on-target sequences assessed, irrespective of the temperature, and were hence excluded. Contrastingly, primer set 1 demonstrated systematic, specific recognition and hybridization to the cognate DNA templates of the L. monocytogenes strains evaluated (representatives of serotypes 1/2a and 4b), presenting the best compromise between non-specific background and amplification efficiency. This performance was inferred from the electrophoretic profile of the corresponding LAMP amplicons, in which the characteristic ladder-like pattern was observed, validating the selection of this newly designed primer set for the subsequent experiments. Moreover, since no spurious amplicon formation was visualized in the negative control, no primer dimerization or heterodimerization occurred, corroborating the in silico prediction.
3.4.2. Assessment of LAMP Specificity—Inclusivity
The optimized LAMP procedure proved capable of robustly identifying a cohort of 61 L. monocytogenes strains belonging to each of the three most common invasive listeriosis-associated serotypes (1/2a, 1/2b, 4b), along with serotype 1/2c. The bacterial cohort comprised strains belonging to genetic lineage I (harbouring serotypes 1/2b and 4b) and lineage II (comprising serotypes 1/2a and 1/2c). The electrophoretic analysis disclosed the remarkable inclusivity (100%) of the developed assay, since a conspicuous amplicon profile (owing to the formation of the stem-loop DNA structures) was visualized, highlighting the positive on-target DNA amplification of the four serotypes considered.
The specific detection of different strains belonging to the same serotype was highly consistent and supports that this conserved gene is an appropriate target for the broad-spectrum identification of L. monocytogenes. Cho et al. also investigated the feasibility of an isothermal method targeting prfA for L. monocytogenes detection, and their results, in agreement with those documented herein, underlined the high specificity (100% inclusivity) of the assay, since amplicons were systematically generated for all 23 L. monocytogenes strains assessed. The LAMP results were in accordance with the positive signals obtained with conventional PCR. D'Agostino et al., utilizing the same PCR oligonucleotides, evaluated the assay performance against a panel of 38 L. monocytogenes strains and documented the notable efficiency (100% inclusivity) of the method. Consistent with our findings, Cooray et al. also reported the suitability of prfA as a highly species-specific gene.
3.4.3. Evaluation of LAMP Specificity—Exclusivity
The potential cross-reactivity of the proposed LAMP assay was further examined. The electrophoretic pattern obtained demonstrated that the LAMP reaction system is highly species-specific: cross-hybridization was not observed for the other, closely related Listeria species, namely the Listeria sensu stricto species Listeria ivanovii NCTC 11846 and Listeria innocua 2030c, and the Listeria sensu lato species Listeria aquatica. Noteworthy, the occurrence of the prfA gene is not constrained to virulent L. monocytogenes strains, since L. ivanovii NCTC 11846 (the animal pathogen) harbours an orthologous gene (albeit with low nucleotide sequence similarity to the query DNA conserved region), whilst the non-pathogenic environmental saprophyte L. aquatica is devoid of the whole prfA gene cluster. The LAMP results corroborated the previous in silico prediction of the absence of hybridization of the designed oligonucleotides with the heterologous L. ivanovii NCTC 11846 DNA sequence. Moreover, no apparent cross-amplification was noticed for any of the 39 non-Listeria strains (20 Gram-positive and 19 Gram-negative bacteria) tested. These observations evince the non-formation of the characteristic "dumbbell" structures and indicate no non-specific complementarity of the oligonucleotides with the reference non-target DNA sequences. Concerning the experiments performed at a higher temperature (63 °C), the non-template DNA displayed a faint electrophoretic profile, indicative of spurious hybridization; therefore, 62 °C was deemed the optimum temperature, since improved stringency was accomplished. Overall, this LAMP method was found to be 100% exclusive towards 42 non-target Gram-positive and Gram-negative bacterial strains. Pertaining to the PCR specificity, the results were in close agreement with the proposed LAMP assay, corroborating those formerly documented by Simon et al. and D'Agostino et al. According to the latter, amongst the 52 non-L. monocytogenes strains evaluated, the prfA-based PCR method proved 100% exclusive. In opposition to the PCR- and RT–PCR-based approaches targeting prfA, prfA-based LAMP assays have hitherto been scarcely exploited. Cho et al. also assessed LAMP specificity towards 16 non-L. monocytogenes strains, and the method demonstrated 100% exclusivity, in close agreement with that reported herein.
Considering our findings and those formerly documented, one may conclude that prfA is, as aforementioned, an appropriate target gene for the specific LAMP detection of L. monocytogenes.
3.4.4. Evaluation of LAMP Analytical Performance (LOD₉₅)
The analytical sensitivity (limit of detection) of the newly developed LAMP assay was also investigated. Probit analysis was conducted to estimate the LOD of the designed LAMP assay with 95% confidence (LOD₉₅), yielding a value of 1.98 fg µL⁻¹ (95% confidence interval: 1.1 to 15 fg µL⁻¹), theoretically equivalent to 0.5 CFU mL⁻¹. Hence, the current method was demonstrated to be highly sensitive, since it proved efficient in consistently detecting as few as 0.063 copies of the genome per reaction (1.98 fg µL⁻¹ of L. monocytogenes genomic DNA). For comparative purposes, the sensitivity of conventional PCR was also assessed using the same ten-fold standard dilutions of template DNA. In opposition to the LAMP assay, an order-of-magnitude higher LOD₉₅ was obtained, indicating that the latter was 20-fold more sensitive. Moreover, LAMP DNA amplification was accomplished 42 min faster than standard PCR. Important features of LAMP protocols pertaining to the highly specific detection of L. monocytogenes were compiled through a systematic literature review. The probit-estimated LOD₉₅ (0.5 CFU mL⁻¹) was within the same order of magnitude as the values documented by Wachiralurpan et al. and Lee et al., who developed LAMP protocols capable of detecting as little as 0.3–3 and 1 CFU mL⁻¹, respectively. Comparison with other detection thresholds gathered from the available literature highlighted the superior analytical performance of the current method, which displayed a LOD₉₅ value 6- to 20,000-fold lower. Concerning the paramount importance of primer length, Wachiralurpan et al. hypothesized that the 1000-fold difference between the LOD values of LAMP assays targeting plcB (2.8 CFU mL⁻¹) and hly (2.8 × 10³ CFU mL⁻¹) might be attributed to the longer sequence of the latter, which possessed a lower annealing efficiency owing to the putative formation of secondary structures. In another study (in which actA was targeted), a pre-concentration approach was proposed in an attempt to accomplish a higher detection performance, attaining a 10-fold lower cut-off value when aptamer-based magnetic capture was associated with LAMP. Moreover, in most works, the LAMP technique has proven to surpass the PCR analytical sensitivity (by at least ten-fold), corroborating the results obtained herein. The detection limit and selectivity of these molecular approaches are considered pivotal parameters for evaluating the accuracy of a method. These features highlighted the superior performance of the herein-proposed LAMP technique, substantiating its suitability as an affordable routine screening procedure for the presumptive presence of L. monocytogenes. With the development and optimization of the current LAMP procedure, the groundwork was laid for the validation of its applicability in food matrices.
3.5. Applicability of the Combined Detection System in Pasteurized Milk
3.5.1. Phagomagnetic Particles Performance in Pasteurized Milk
The inherent complexity of milk composition poses a challenge to the development of effective foodborne pathogen detection protocols, since some components (the protein and lipid content) may constitute critical interferents in magnetic separation procedures. Hence, the previously optimized phagomagnetic separation protocol was exploited to evaluate the L. monocytogenes capture performance in pasteurized whole milk. The capture efficiency of the P100–MBs in milk samples spiked with 10³ CFU mL⁻¹ (58%) was lower than the value obtained after 30 min in Tris buffer pH 7.2 (85%). The results disclosed a 2.5-fold enhancement in L. monocytogenes capture for the P100–MBs in comparison with the corresponding phage-devoid MB counterpart, demonstrating the ability of the immobilized phage to capture bacterial cells. Moreover, the P100–MBs were found to be highly sensitive, presenting a separation limit below 10 CFU mL⁻¹, with a significant increase (p < 0.05) in capture efficiency, concomitantly with the specific capture efficiency, up to 10³ CFU mL⁻¹. According to the aforementioned results, the interaction between L. monocytogenes and the P100–MBs was not pH-dependent; therefore, the lower performance of the P100–MBs in milk may be attributed to the protein and lipid content of the matrix. These components may constitute pivotal interferents in phagomagnetic particle diffusion, hence influencing P100–MB adsorption to the target bacterium: the milk proteins (insoluble casein and soluble whey proteins) may hinder the contact between the immobilized phages and L. monocytogenes, and electrostatic and hydrophobic interactions between the PEG-immobilized virion particles and lipid molecules may also hamper bacterial attachment. Zhou et al. also exploited phage P100 as a biorecognition element in a phagomagnetic protocol for L. monocytogenes isolation in whole milk; the authors documented a significantly lower capture efficiency (46%) than the value obtained herein (58%). In a distinct approach, Shan et al. proposed an immunomagnetic method to isolate the same bacterium in whole milk and reported a higher separation performance (85%); notwithstanding, a lower separation limit was attained in the current work (10 CFU mL⁻¹) than that determined therein (10³ CFU mL⁻¹). Accordingly, Yang et al., also employing immunomagnetic nanoparticles in semi-skimmed milk, documented a low sensitivity of the separation method (10² CFU mL⁻¹) with a low capture efficiency (4.6% for a bacterial load of 10² CFU mL⁻¹). A novel pre-concentration platform relying on ampicillin-biofunctionalized magnetic nanoparticles was recently described by Bai et al., with a higher limit of detection reported in spiked milk (10² CFU mL⁻¹), even in combination with qPCR; noteworthy, owing to the broad-spectrum activity of the bioreceptor, low specificity was observed. The phagomagnetic method proposed herein demonstrated to be a promising bio-approach for the selective capture and pre-concentration of L. monocytogenes in pasteurized whole milk. In particular, the results convey the utility of this platform, which holds remarkable potential to isolate VBNC L. monocytogenes cells from a complex food matrix for accurate downstream detection.
3.5.2. Phagomagnetic-Assisted LAMP Assay
The efficiency of the optimized LAMP assay was assessed for the detection of the Lm-P100–MB complexes previously isolated from spiked pasteurized milk. Despite the notable specificity of the LAMP technique, one of its claimed drawbacks is the inability to distinguish viable virulent cells from dead, harmless analogues. Owing to this lack of discriminatory potential of the DNA amplification technique, biased L. monocytogenes detection results might occur, leading to false positives. Coupling LAMP with a prior phagomagnetic capture addressed this issue. Moreover, we sought to exploit an alternative method (phage P100-mediated lysis) to the classic DNA extraction procedures. The rationale underlying this dual-purpose (capture and lysis) phage-based approach relied on the notable potential of this virus as a biorecognition element to specifically adsorb to viable L. monocytogenes cells, along with its intrinsic strictly lytic trait, triggering the ensuing leakage of bacterial chromosomal DNA. The P100–MB-mediated lysis of L. monocytogenes isolated from pasteurized milk or culture medium was assessed electrophoretically. The analysis disclosed effective P100-induced lysis of the magnetically captured L. monocytogenes, since leaked genomic DNA was observed for the distinct bacterial loads analysed (5 to 10² CFU mL⁻¹). Moreover, the efficiency of detecting the host DNA in phage lysates of cells captured from milk was comparable to that of cells isolated from culture medium. Additionally, the absence of DNA in the Lm-blank–MB sample (phage-negative control) supported that the nucleic acid was released owing to the specific phage infection, highlighting the outstanding lytic performance of phage P100. Therefore, the proposed method proved suitable to preclude the use of a nucleic acid isolation kit, which is a prominent advantage. Noteworthy, as aforementioned, this phage-based approach warrants the detection of viable L. monocytogenes and hence provides high confidence in the confirmation of contamination. The results obtained herein are in close accordance with former studies. Tlili et al. were the first to exploit a phage-mediated lysis protocol to extract genomic DNA from bacterial host cells: phage T4, covalently immobilized onto the surface of a gold electrode, elicited the irreversible intracellular DNA delivery of the T4-captured E. coli into the lysate milieu, and the target gene tuf was subsequently LAMP-amplified and detected via linear sweep voltammetry. Wang et al. proposed an experimental scheme analogous to the one presented herein, in which a coliphage covalently conjugated with magnetic beads was utilized as a bioreceptor and lysing agent for viable E. coli O157:H7; the extracellularly leaked bacterial DNA was amplified by qPCR to quantify and identify the target bacterium in water samples. Swift et al. developed a mycobacteriophage D29-triggered lysis (Actiphage®) procedure to efficiently extract genomic DNA from viable, low-cell-number mycobacteria; the released bacterial DNA was subsequently utilized as the template for PCR amplification, providing a sensitive detection tool for viable, pathogenic mycobacteria collected from blood specimens.
The method proposed herein circumvents the use of laborious commercial DNA extraction kits, which is of the utmost importance to accomplish the straightforwardness and cost-efficiency required for on-field application. Furthermore, the inclusion of the magnetic capture step proved appropriate to efficiently cope with the inhibitors/interferents expected to be present in the pasteurized milk sample.
3.5.3. Detection Limit of the Phagomagnetic-Assisted LAMP Assay in Milk
The performance of the novel LAMP assay in pasteurized milk was evaluated by electrophoretic analysis, and the analytical sensitivity (LOD₉₅) was determined by probit regression. The current method was determined to be highly sensitive, since it proved efficient in consistently detecting as few as 5 CFU mL⁻¹ (LOD₉₅ of 4.1 CFU mL⁻¹). In comparison with the LAMP amplification performed with high-purity genomic DNA extracted (commercial kit) from L. monocytogenes pure cultures, devoid of prior magnetic isolation and subsequent phage-mediated lysis, a 10-fold lower sensitivity was obtained; one may theorize that this discrepancy is attributable to the magnetic platform's inability to capture the totality of the bacterial load. Beyond the well-documented high specificity of the developed isothermal amplification technique, the versatility of the reaction readout was explored, which may be performed (alternatively to the electrophoretic analysis) via an endpoint electrochemical technique resorting to MBlue intercalation. The voltammetric analysis disclosed that this technique has the potential to detect L. monocytogenes with high analytical sensitivity (1 CFU mL⁻¹) in 20 min, a superior detection performance compared with the electrophoretic analysis (5 CFU mL⁻¹). Accordingly, Lau et al. and Azek et al. also documented an improved analytical sensitivity of electrochemical detection techniques over conventional gel electrophoresis. The improved efficiency (combined with the swiftness and convenience) of electrochemical readouts may contribute to the suitable implementation of this rapid detection method in resource-scarce industries. Such an on-time surveillance system would be highly valuable, with remarkable potential for practical application in the dairy industry. Amongst the detection thresholds gathered from the available literature on the LAMP-based detection of L. monocytogenes in milk, only Roumani et al. accomplished a significantly lower value (0.11 CFU g⁻¹) than that documented herein. One may surmise that this superior sensitivity is attributable to the 24 h selective enrichment of the spiked milk before the analytical phase, which enhanced the initial bacterial load of the matrix and may therefore warrant the biased (improved) LAMP detection of such a low number of L. monocytogenes cells. The value obtained in this work is in close accordance with the LOD₉₅ of formerly developed LAMP procedures, with values of the same order of magnitude (1–3.2 CFU mL⁻¹). Contrastingly, the evaluation of the analytical performance of the current assay indicated considerable superiority over the LAMP methods proposed by Wang et al., Teixeira et al., and Wang et al., displaying a limit of detection 45-, 90-, and 6000-fold lower, respectively. Accordingly, a 2000-fold sensitivity improvement was achieved in comparison with the commercially available LAMP kit for L. monocytogenes detection (Eiken), which is constrained by a limit of detection of 10⁴–10⁵ CFU mL⁻¹.
Virions, such as phage P100 are permanent dipoles, with a negatively charged head and positively charged tail, being able to orient their capsids or tails on a charged surface due to electrostatic interactions . Hence, the determination of the surface charge of PEG-NH 2 –MBs and P100 is a requisite for the proper characterization of MBs and indispensable for the development of physisorption immobilization protocols . The isoelectric point of PEG-NH 2 –MBs particles was estimated to be approximately 5.3 ( in ). Therefore, it is expected a positive surface net charge below pH 5.3 and a negative charge at higher pH values. P100 Dynamic Light Scattering (DLS) experiments ( in ) suggest aggregation or changes in the phage configuration at pH ≤ 4.4, according to the expected P100 size of about 90 × 198 nm (head × tail) . These results indicate that aggregation is not dependent on the isoelectric point of the whole phage P100 , previously predicted at 5.67 , reflecting the individual amino acid composition of their capsid/tail . Hereupon, to avoid conflicting effects of virus aggregation, the optimization of immobilization protocol proceeded within a pH range of 4 to 9 in which phages were preferentially monodispersed. The immobilization of phage particles on the surface of the PEG-NH 2 –MBs does not guarantee successful bacterial capture nor the antibacterial lytic activity of the functionalized magnetic material . Listex P100 is a tailed, net-charged asymmetric phage, capable to recognize and adsorb to L. monocytogenes by phage receptor-binding proteins present in the tail . Thus, the proper orientation of the immobilized phage on the magnetic particle is a key point in achieving optimum capture efficiency . Accordingly, besides the estimation of immobilization efficiency (%, by Equation (1)), the infectivity retention of P100–MBs was also considered in the optimization of the P100 immobilization protocol. Herein, the influence of the immobilization method (physical adsorption/electrostatic or covalent) and incubation solution pH effect on immobilized phage orientation was studied ( ). The overall results evidenced that physical immobilization has the potential to achieve a greater number of active phages (properly oriented) on PEG–MBs, rather than the non-oriented covalent bound protocol. The best P100 immobilization efficiency was achieved at pH 7 (77%) and the poorest results were obtained at pH 4 (38%), closer to the covalent protocol results (42%). In contrast, the best lytic activity level was observed at pH 4 suggesting a preferential charge-related “tail-upward” orientation of immobilized P100, which improved the probability of bacteria recognition, despite the lower phage concentration on its surface. The DLS studies and P100 electric dipole moment conjugated with the MBs zeta-potentials (+ 6.42 and + 1.83 mV, at pH 4 and 5; data presented in ) may support the obtained results. Nonetheless, the interaction of bacteriophages at solid–water interfaces is very complex and cannot be explained solely based on sorbent surface and phage isoelectric point, since hydrophobic effects and other minor interactions (e.g., hydrogen bonding, steric hindrance) can also favour P100 adhesion, albeit more weakly and reversibly than under electrostatic forces conditions . A favourable contribution of this phenomenon was empirically observed in experiments performed at pH 7 and 9, which achieved high efficiency of phage immobilization despite the predicted low electrostatic forces. 
To preserve P100 stability (aggregates form at pH 4) while retaining the likely irreversible electrostatic physisorption, a pH 5 immobilization solution was selected for the optimized P100 immobilization protocol used in the subsequent capture experiments. Stability studies of the optimized P100–MBs were also conducted and disclosed remarkable robustness to changes in ionic strength and pH, both immediately after physisorption (see immobilization protocol) and over long-term storage, with 90% of the initial lytic activity maintained after 8 weeks.
The P100–MBs surface blocking step was optimized with respect to critical variables, namely the concentration and incubation time of the blocking solution. Two standard blocking agents (BSA and casein), with distinct sizes and adsorption strengths on hydrophilic surfaces, were evaluated and compared so as to maximize the blocking of the unmodified sites of the PEG–MBs while keeping the steric hindrance of the attached P100 low. The effects of BSA and casein concentration (%, w/v) on P100–MBs specific capture and non-specific adsorption were assessed for blocking times of 1 h and 8 h. The 1% BSA solution reduced non-specific adsorption (lower capture efficiency in blank assays with PEG–MBs) in the 1 h assays, although it yielded a relatively low specific capture compared with the casein blocking protocols, even after the BSA concentration was increased to 2%. By contrast, casein had little effect in reducing non-specific adsorption, compromising the selectivity of a separation method intended for complex food matrices. A 1% BSA concentration was therefore selected for the blocking time optimization. Increasing the blocking time (from 1 to 8 h) also had a positive effect on P100–MBs specific capture, with non-specific adsorption six-fold lower than for unblocked blank PEG–MBs.
The experimental conditions of the phagomagnetic separation protocol (P100–MBs mass, pH, and temperature) were optimized using the molybdophosphate culture-independent procedure. Briefly, this method is based on the reaction between the phosphate moieties of the bacterial DNA backbone and sodium molybdate to form an insoluble redox molybdophosphate precipitate, which can be quantified electrochemically and indirectly correlated with the bacterial load of the initial inoculum to calculate the specific capture efficiency (%). To test the practicability and viability of this approach, live and thermally lysed L. monocytogenes cells (10³ CFU mL⁻¹) were electrochemically quantified using a disposable screen-printed carbon electrode (SPCE). The thermally lysed cell solution showed two redox peaks, at 0.19 V and 0.30 V, characteristic of the different valence states of the molybdate present in the precipitate formed on the SPCE. By contrast, only residual redox peaks were displayed by the live-cell solution, demonstrating the feasibility of the method for quantifying the bacterial DNA released into the supernatant immediately after phagomagnetic capture. The phagomagnetic capture optimization unveiled a general increase in specific capture rates at longer capture times when the 15 min and 30 min protocols were compared; longer times were not tested, to avoid interference from the lytic effect of P100. Regarding the pH and MB mass variables, alkaline incubation solutions (pH 9) and a high concentration of P100–MBs were shown to impair capture efficiency at low bacterial load, so a mass between 16 and 32 µg is adequate for the contamination level evaluated (10³ CFU mL⁻¹). Moreover, higher temperatures (25 and 37 °C) appeared to promote more effective capture. Therefore, an incubation solution of pH 7, a temperature of 25 °C, and a magnetic probe mass of 32 µg were selected as reference conditions, yielding a capture efficiency of 80% and a specific capture of 60.6% (30 min capture time). Zhou et al. also evaluated the performance of magnetic particles biofunctionalized with phage P100 (physically immobilized) and documented a significantly lower capture efficiency (40–50%) than the value obtained in the current work.
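For orientation, the two figures of merit used above can be computed from plate counts (or from electrochemically estimated loads) as in the sketch below. The definitions are assumptions based on common usage, capture efficiency as the fraction of the inoculum bound by P100–MBs and specific capture as that value corrected for the non-specific binding of bare beads, since the exact formulas are not reproduced in this excerpt; the numbers are illustrative.

```python
def capture_efficiency(inoculum_cfu_ml: float, captured_cfu_ml: float) -> float:
    # Fraction of the initial bacterial load bound by the magnetic probes
    return 100.0 * captured_cfu_ml / inoculum_cfu_ml

def specific_capture(inoculum_cfu_ml: float,
                     captured_by_p100_mbs: float,
                     captured_by_bare_mbs: float) -> float:
    # Capture attributable to the immobilized phage only:
    # total capture minus the non-specific capture of phage-devoid beads
    return (capture_efficiency(inoculum_cfu_ml, captured_by_p100_mbs)
            - capture_efficiency(inoculum_cfu_ml, captured_by_bare_mbs))

# Illustrative counts consistent with the 80% / 60.6% reported at pH 7, 25 °C
print(capture_efficiency(1000, 800))        # -> 80.0
print(specific_capture(1000, 800, 194))     # -> 60.6
```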
We then sought to develop a novel LAMP system to be coupled with the previously optimized phagomagnetic separation platform for the specific detection of viable L. monocytogenes. Following LAMP optimization, the performance of the method was validated through specificity evaluation (inclusivity and exclusivity) and determination of the analytical sensitivity.

3.4.1. Primers Efficiency Evaluation

An experiment was conducted to ascertain the most efficient LAMP primer set amongst the three sets of four oligonucleotides designed. Sets 2 and 3 were not capable of consistently hybridizing to the two on-target sequences assessed, irrespective of temperature, and were hence excluded. By contrast, primer set 1 demonstrated systematic, specific recognition and hybridization to the cognate DNA templates of the L. monocytogenes strains evaluated (representatives of serotypes 1/2a and 4b), presenting the best compromise between non-specific background and amplification efficiency. This performance was inferred from the electrophoretic profile of the corresponding LAMP amplicons, in which the characteristic ladder-like pattern was observed, validating the selection of the newly designed primer set for the following experiments. Moreover, since no spurious amplicon formation was visualized, the negative control proved that no primer dimerization or heterodimerization occurred, corroborating the in silico prediction.

3.4.2. Assessment of LAMP Specificity—Inclusivity

The optimized LAMP procedure proved capable of robustly identifying a cohort of 61 L. monocytogenes strains belonging to each of the three serotypes most commonly associated with invasive listeriosis (1/2a, 1/2b, 4b), along with serotype 1/2c. The bacterial cohort comprised strains belonging to genetic lineage I (harbouring serotypes 1/2b and 4b) and lineage II (comprising serotypes 1/2a and 1/2c). The electrophoretic analysis disclosed the remarkable inclusivity (100%) of the developed assay, since a conspicuous amplicon profile (owing to the formation of the stem-loop DNA structures) was visualized, highlighting the positive on-target DNA amplification of the four serotypes considered. The specific detection of different strains belonging to the same serotype was highly consistent and supports that this conserved gene is an appropriate target for the broad-spectrum identification of L. monocytogenes. Cho et al. also investigated the feasibility of an isothermal method targeting prfA for L. monocytogenes detection and, in agreement with the results documented herein, underlined the high specificity (100% inclusivity) of the assay, since amplicons were systematically generated for all 23 L. monocytogenes strains assessed. LAMP results were in accordance with the positive signal obtained with conventional PCR. D'Agostino et al., utilizing the same PCR oligonucleotides, evaluated the assay performance against a panel of 38 L. monocytogenes strains and documented the notable efficiency (100% inclusivity) of the method. Consistent with our findings, Cooray et al. also reported the suitability of prfA as a highly species-specific gene.

3.4.3. Evaluation of LAMP Specificity—Exclusivity

The potential cross-reactivity of the proposed LAMP assay was further examined.
The electrophoretic pattern obtained demonstrated that the LAMP reaction system is highly species-specific: no cross-hybridization was observed for the other closely related Listeria species, namely the Listeria sensu stricto species Listeria ivanovii NCTC 11846 and Listeria innocua 2030c, and the Listeria sensu lato species Listeria aquatica. Noteworthy, the occurrence of the prfA gene is not restricted to virulent L. monocytogenes strains, since L. ivanovii NCTC 11846 (an animal pathogen) harbours an orthologous gene (albeit with low nucleotide sequence similarity to the query conserved DNA region), whilst the non-pathogenic environmental saprophyte L. aquatica is devoid of the whole prfA gene cluster. The LAMP results corroborated the previous in silico prediction that the designed oligonucleotides would not hybridize with the heterologous L. ivanovii NCTC 11846 DNA sequence. Moreover, no apparent cross-amplification was noticed for any of the 39 non-Listeria strains (20 Gram-positive and 19 Gram-negative bacteria) tested. These observations evince the non-formation of the characteristic "dumbbell" structures, indicative of no non-specific complementarity of the oligonucleotides with the reference non-target DNA sequences. In the experiments performed at a higher temperature (63 °C), the non-template DNA displayed a faint electrophoretic profile indicative of spurious hybridization; 62 °C was therefore deemed the optimum temperature, since improved stringency was accomplished. This LAMP method was found to be 100% exclusive towards 42 non-target Gram-positive and Gram-negative bacterial strains. Regarding PCR specificity, the results were in close agreement with the proposed LAMP assay, corroborating those formerly documented by Simon et al. and D'Agostino et al.; according to the latter, amongst the 52 non-L. monocytogenes strains evaluated, the prfA-based PCR method proved 100% exclusive. In contrast to PCR- and RT-PCR-based approaches targeting prfA, prfA-based LAMP assays have hitherto been scarcely exploited. Cho et al. also assessed LAMP specificity towards 16 non-L. monocytogenes strains, and the method demonstrated 100% exclusivity, in close agreement with the results reported herein. Considering our findings and those formerly documented, one may conclude that prfA is, as aforementioned, an appropriate target gene for the specific LAMP detection of L. monocytogenes.

3.4.4. Evaluation of LAMP Analytical Performance (LOD₉₅)

The analytical sensitivity (limit of detection) of the newly developed LAMP assay was also investigated. Probit analysis was conducted to estimate the LOD of the designed LAMP assay with 95% confidence (LOD₉₅), yielding a value of 1.98 fg µL⁻¹ (95% confidence interval: 1.1 to 15 fg µL⁻¹), theoretically equivalent to 0.5 CFU mL⁻¹. Hence, the current method was demonstrated to be highly sensitive, consistently detecting as few as 0.063 genome copies per reaction (1.98 fg µL⁻¹ of L. monocytogenes genomic DNA). For comparison, the sensitivity of conventional PCR was also assessed using the same tenfold standard dilutions of template DNA. An order-of-magnitude higher LOD₉₅ was obtained, indicating that the LAMP assay was 20-fold more sensitive. Moreover, LAMP DNA amplification was accomplished 42 min faster than standard PCR.
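The conversion between DNA mass and genome copies underlying the 0.063-copies-per-reaction figure above can be reproduced with simple arithmetic, as in the Python sketch below. The sketch is illustrative only: the genome size (~3.0 Mb for L. monocytogenes), the average base-pair molar mass (650 g mol⁻¹), and the template volume are assumptions, since the exact reaction setup is not restated in this excerpt.

```python
AVOGADRO = 6.022e23          # molecules per mole
BP_MASS_G_PER_MOL = 650.0    # average molar mass of one double-stranded base pair
GENOME_BP = 3.0e6            # approx. L. monocytogenes genome size (assumption)

def genome_copies(dna_fg: float) -> float:
    """Number of genome copies contained in a given DNA mass (fg)."""
    genome_mass_g = GENOME_BP * BP_MASS_G_PER_MOL / AVOGADRO  # ~3.2e-15 g (~3.2 fg)
    return dna_fg * 1e-15 / genome_mass_g

# 1.98 fg/µL with an assumed 0.1 µL of template ≈ 0.06 copies per reaction
print(genome_copies(1.98 * 0.1))
```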
A compilation of LAMP protocols for the highly specific detection of L. monocytogenes, gathered through a systematic literature review, puts these figures in context. The probit-estimated LOD₉₅ (0.5 CFU mL⁻¹) was within the same order of magnitude as the values documented by Wachiralurpan et al. and Lee et al., whose LAMP protocols were capable of detecting as few as 0.3–3 and 1 CFU mL⁻¹, respectively. Comparison with other detection thresholds gathered from the available literature highlighted the superior analytical performance of the current method, which displayed an LOD₉₅ value 6- to 20,000-fold lower. Concerning the paramount importance of primer length, Wachiralurpan et al. hypothesized that the 1000-fold difference between the LOD values of LAMP assays targeting plcB (2.8 CFU mL⁻¹) and hly (2.8 × 10³ CFU mL⁻¹) might be attributed to the longer sequence of the latter, which possessed a lower annealing efficiency owing to the putative formation of secondary structures. In another study (targeting actA), a pre-concentration approach was proposed in an attempt to achieve higher detection performance, attaining a 10-fold lower cut-off value when aptamer-based magnetic capture was combined with LAMP. Moreover, in most works the LAMP technique has proven to surpass PCR analytical sensitivity (by at least ten-fold), corroborating the results obtained herein. The detection limit and selectivity of these molecular approaches are considered pivotal parameters for evaluating the accuracy of a method. These features highlighted the superior performance of the proposed LAMP technique, substantiating its suitability as an affordable routine screening procedure for the presumptive presence of L. monocytogenes. With the development and optimization of the current LAMP procedure, the groundwork was laid for validating its applicability in food matrices.
3.5.1. Phagomagnetic Particle Performance in Pasteurized Milk

The inherent complexity of milk composition poses a challenge for the development of effective foodborne pathogen detection protocols, since some components (protein and lipid content) may constitute critical interferents in magnetic separation procedures. Hence, the previously optimized phagomagnetic separation protocol was used to evaluate L. monocytogenes capture performance in pasteurized whole milk. The capture efficiency of P100–MBs in milk samples spiked with 10³ CFU mL⁻¹ (58%) was lower than the value obtained after 30 min in Tris buffer, pH 7.2 (85%). The results disclosed a 2.5-fold enhancement in L. monocytogenes capture by P100–MBs compared with the phage-devoid MB counterpart, demonstrating the ability of the immobilized phage to capture bacterial cells. Moreover, the P100–MBs were highly sensitive, presenting a separation limit below 10 CFU mL⁻¹, with a significant increase (p < 0.05) in capture efficiency, and concomitantly in specific capture efficiency, up to 10³ CFU mL⁻¹. According to the aforementioned results, the interaction of L. monocytogenes with P100–MBs was not pH-dependent; the lower performance of P100–MBs in milk may therefore be attributed to the protein and lipid content of the matrix, which may critically interfere with phagomagnetic particle diffusion and hence influence P100–MBs adsorption to the target bacterium. The milk proteins (insoluble casein and soluble whey proteins) may hinder contact between the immobilized phages and L. monocytogenes, while electrostatic and hydrophobic interactions between PEG-immobilized virion particles and milk lipid molecules may also hamper bacterial attachment. Zhou et al. also exploited phage P100 as a biorecognition element in a phagomagnetic protocol for L. monocytogenes isolation from whole milk; the authors documented a significantly lower capture efficiency (46%) than the value obtained herein (58%). In a distinct approach, Shan et al. proposed an immunomagnetic method to isolate the same bacterium from whole milk and reported a higher separation performance (85%); notwithstanding, a lower separation limit was attained in the current work (10 CFU mL⁻¹) than that determined therein (10³ CFU mL⁻¹). Accordingly, Yang et al., also employing immunomagnetic nanoparticles in semi-skimmed milk, documented a low sensitivity of the separation method (10² CFU mL⁻¹) with a low capture efficiency (4.6% for a bacterial load of 10² CFU mL⁻¹). A novel pre-concentration platform relying on ampicillin-biofunctionalized magnetic nanoparticles was recently described by Bai et al., and a higher limit of detection in spiked milk was reported (10² CFU mL⁻¹), even in combination with qPCR; noteworthy, owing to the broad-spectrum activity of the bioreceptor, low specificity was observed. The phagomagnetic method proposed herein proved to be a promising bio-approach for the selective capture and pre-concentration of L. monocytogenes in pasteurized whole milk. In particular, the results convey the utility of this platform, which holds remarkable potential for isolating VBNC L. monocytogenes cells from a complex food matrix for accurate downstream detection.
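Where a significant increase (p < 0.05) in capture efficiency is reported above, a two-sample comparison of replicate measurements is the natural reading. The SciPy sketch below shows one way to run such a check; the replicate values are invented for illustration, and the use of Welch's t-test is an assumption, as the original statistical procedure is not specified in this excerpt.

```python
from scipy import stats

# Hypothetical replicate capture efficiencies (%) at two spiking levels
eff_10_cfu = [41.2, 39.8, 43.5]    # 10 CFU/mL (illustrative)
eff_1e3_cfu = [57.1, 58.9, 58.0]   # 10^3 CFU/mL (illustrative)

# Welch's t-test (no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(eff_10_cfu, eff_1e3_cfu, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant increase
```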
3.5.2. Phagomagnetic-Assisted LAMP Assay

The efficiency of the optimized LAMP assay was assessed for detection of the Lm-P100–MBs complex previously isolated from spiked pasteurized milk. Despite the notable specificity of the LAMP technique, one of its claimed drawbacks is the inability to distinguish viable virulent cells from dead, harmless analogues. Owing to this lack of discriminatory potential in the DNA amplification step, biased L. monocytogenes detection results might occur, leading to false positives. Coupling LAMP with a prior phagomagnetic capture addressed this issue. Moreover, we sought to exploit an alternative (phage P100-mediated lysis) to classic DNA extraction procedures. The rationale underlying this dual-purpose (capture and lysis) phage-based approach relied on the notable potential of the virus as a biorecognition element that specifically adsorbs to viable L. monocytogenes cells, together with its intrinsic strictly lytic trait, which triggers the ensuing leakage of bacterial chromosomal DNA. The P100–MBs-mediated lysis of L. monocytogenes isolated from pasteurized milk or culture medium was assessed electrophoretically. The analysis disclosed effective P100-induced lysis of the magnetically captured L. monocytogenes, since leaked genomic DNA was observed for the distinct bacterial loads analysed (5 to 10² CFU mL⁻¹). Moreover, the efficiency of detecting host DNA in phage lysates of cells captured from milk was comparable to that of cells isolated from culture medium. Additionally, the absence of DNA in the Lm-blank–MBs sample (phage-negative control) supported that the nucleic acid was released owing to specific phage infection, highlighting the outstanding lytic performance of phage P100. The proposed method therefore proved suitable for precluding the use of a nucleic acid isolation kit, which is a prominent advantage. Noteworthy, as aforementioned, this phage-based approach warrants the detection of viable L. monocytogenes and hence provides high confidence in the confirmation of contamination. The results obtained herein are in close accordance with former studies. Tlili et al. were the first to exploit a phage-mediated lysis protocol to extract genomic DNA from bacterial host cells; they reported that phage T4, covalently immobilized on the surface of a gold electrode, elicited the irreversible delivery of intracellular DNA from the T4-captured E. coli into the lysate milieu, after which the target gene tuf was LAMP-amplified and detected via linear sweep voltammetry. Wang et al. proposed an experimental scheme analogous to the one presented herein, in which a coliphage covalently conjugated with magnetic beads was utilized as bioreceptor and lysing agent for viable E. coli O157:H7; the extracellularly leaked bacterial DNA was amplified by qPCR to quantify and identify the target bacterium in water samples. Swift et al. developed a mycobacteriophage D29-triggered lysis (Actiphage®) procedure to efficiently extract genomic DNA from viable mycobacteria at low cell numbers; the released bacterial DNA was subsequently utilized as the template for PCR amplification, providing a sensitive detection tool for viable, pathogenic mycobacteria collected from blood specimens.
The method proposed herein circumvents the use of laborious commercial DNA extraction kits, which is of utmost importance for achieving the straightforwardness and cost-efficiency required for on-field application. Furthermore, the inclusion of the magnetic capture step proved appropriate for coping efficiently with the inhibitors/interferents expected to be present in the pasteurized milk sample.

3.5.3. Detection Limit of the Phagomagnetic-Assisted LAMP Assay in Milk

The performance of the novel LAMP assay in pasteurized milk was evaluated by electrophoretic analysis, and the analytical sensitivity (LOD₉₅) was determined by probit regression. The current method was determined to be highly sensitive, since it consistently detected as few as 5 CFU mL⁻¹ (LOD₉₅ of 4.1 CFU mL⁻¹). Compared with LAMP amplification performed on high-purity genomic DNA extracted (commercial kit) from L. monocytogenes pure cultures, without prior magnetic isolation and subsequent phage-mediated lysis, a 10-fold lower sensitivity was obtained; one may theorize that this discrepancy is attributable to the inefficiency of the magnetic platform in capturing the totality of the bacterial load. Beyond the well-documented high specificity of the developed isothermal amplification technique, the versatility of the reaction readout was explored: as an alternative to electrophoretic analysis, an endpoint electrochemical readout resorting to MBlue intercalation may be performed. The voltammetric analysis disclosed that this technique has the potential to detect L. monocytogenes with high analytical sensitivity (1 CFU mL⁻¹) in 20 min, a superior detection performance compared with the electrophoretic analysis (5 CFU mL⁻¹). Accordingly, Lau et al. and Azek et al. also documented improved analytical sensitivity of electrochemical detection techniques over conventional gel electrophoresis. The improved efficiency (combined with the swiftness and convenience) of electrochemical readouts may contribute to the implementation of this rapid detection method in resource-scarce industries. Such an on-time surveillance system would be highly valuable, with remarkable potential for practical application in the dairy industry. Amongst the detection thresholds gathered from the available literature on LAMP-based detection of L. monocytogenes in milk, only Roumani et al. accomplished a significantly lower value (0.11 CFU g⁻¹) than that documented herein. One may surmise that this superior sensitivity is attributable to the 24 h selective enrichment of the spiked milk (before the analytical phase), which raises the initial bacterial load of the matrix and may therefore warrant the biased (improved) LAMP detection of such a low number of L. monocytogenes cells. The value obtained in this work is in close accordance with the LOD₉₅ of formerly developed LAMP procedures, with values of the same order of magnitude (1–3.2 CFU mL⁻¹). By contrast, evaluation of the analytical performance of the current assay indicated considerable superiority over the LAMP methods proposed by Wang et al., Teixeira et al., and Wang et al., displaying a limit of detection 45-, 90- and 6000-fold lower, respectively. Accordingly, a 2000-fold sensitivity improvement was achieved in comparison with the commercially available LAMP kit for L. monocytogenes detection (Eiken), which is constrained by a limit of detection of 10⁴–10⁵ CFU mL⁻¹.
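Probit-based LOD₉₅ estimation of the kind used above can be reproduced from replicate positive/negative outcomes at each spiking level. The sketch below, using statsmodels, fits a probit regression of detection probability against log10 concentration and inverts the fitted curve at 95% detection probability; the fractional-positive data are invented for illustration, and the exact probit software used by the authors is not stated in this excerpt.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical fractional-positive data: (CFU/mL, n positives, n replicates)
data = [(1, 2, 12), (2, 6, 12), (5, 10, 12), (10, 12, 12)]

x = sm.add_constant(np.log10([d[0] for d in data]))
y = np.array([[d[1], d[2] - d[1]] for d in data])  # successes, failures

fit = sm.GLM(y, x,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()

# Invert the fitted model at P(detection) = 0.95
b0, b1 = fit.params
lod95 = 10 ** ((norm.ppf(0.95) - b0) / b1)
print(f"LOD95 ≈ {lod95:.1f} CFU/mL")
```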
In the current work, we demonstrated the feasibility of coupling a novel targeted LAMP assay (assisted by a P100–MB platform) with an endpoint electrochemical readout system for swift and accurate L. monocytogenes detection in food matrices. This system met the requirements considered pivotal for the implementation of an on-site detection method, namely no need for robust, sophisticated laboratory apparatus, no lengthy culture-based selective enrichment protocol, an expeditious procedure (2.5 h), and notable sensitivity (1 CFU mL⁻¹). Moreover, the proposed combined approach may provide a reliable molecular surveillance tool for food safety analytical services and public health authorities. The developed detection scheme could be of utmost importance for demonstrating and validating compliance with food safety standards.
Global trends and performances in diabetic retinopathy studies: A bibliometric analysis
Introduction

Diabetic retinopathy (DR) is the primary cause of visual impairment worldwide and a common complication of diabetes mellitus. According to the 74th World Health Assembly of the World Health Organization (WHO), more than 420 million people suffered from diabetes in 2021, and this number is expected to increase to 578 million by 2030. The global incidence of diabetes continues to rise, leading to a corresponding increase in the number of people affected by DR. It is estimated that from 2020 to 2045 the number of DR patients globally will increase from 103.12 million to 160.5 million, with 44.82 million people experiencing vision problems. This has become a significant global public health and economic issue. Diabetic retinopathy is the subject of extensive research, and the vast literature makes it difficult to identify the research emphases and frontiers. A comprehensive retrospective analysis is therefore crucial to understanding the state of development, the research hotspots, and the future trends of DR research. To achieve this, we used VOSviewer, a program that excels at constructing text-based maps, for literature-based cooperative network analysis, co-occurrence analysis, citation analysis, literature coupling analysis, and co-citation analysis. We also employed CiteSpace's timeline view, which depicts the progress of scientific research, and its burst detection, to identify the frontiers of scientific study. Using the literature metrology approach and these two bibliometric tools, we analyzed DR-related literature from the WOS to provide pertinent information.

Method

2.1. Data sources and search strategy

We obtained all literature from the Web of Science (WOS), a leading global database of scholarly information founded in 1985. WOS includes authoritative and influential journals across a wide range of disciplines. We used the WOS Core Collection to ensure that high-quality academic journals were selected, and searched the database from its establishment until 1 November 2022. To maximize precision while maintaining search sensitivity, we combined the title and abstract fields and excluded interference from the "early treatment of diabetic retinopathy study." Our retrieval formula was as follows: {[TI = ("diabetic retinopathy") OR AB = ("diabetic retinopathy")] NOT [AB = ("early treatment of diabetic retinopathy study") OR AB = ("early treatment diabetic retinopathy study")]} OR {[TI = (diabet*) OR TI = (mellitu*)] AND [AB = ("early treatment of diabetic retinopathy study") OR AB = ("early treatment diabetic retinopathy study")]}. We limited the document type to articles and the language to English, and excluded retracted publications.
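For readers who assemble such advanced queries programmatically, the boolean structure above can be composed from reusable parts, as in the minimal Python sketch below. The helper names are ours and purely illustrative; the string reproduces the retrieval formula used in this study, not any official WOS API syntax.

```python
# Illustrative assembly of the WOS advanced-search string used above
ETDRS = ['"early treatment of diabetic retinopathy study"',
         '"early treatment diabetic retinopathy study"']

def clause(field: str, terms: list[str]) -> str:
    """Join field-tagged terms with OR, e.g. [AB = (t1) OR AB = (t2)]."""
    return "[" + " OR ".join(f"{field} = ({t})" for t in terms) + "]"

dr_in_ti_or_ab = '[TI = ("diabetic retinopathy") OR AB = ("diabetic retinopathy")]'
etdrs_in_ab = clause("AB", ETDRS)
diabetes_in_ti = "[TI = (diabet*) OR TI = (mellitu*)]"

query = (f"{{{dr_in_ti_or_ab} NOT {etdrs_in_ab}}}"
         f" OR {{{diabetes_in_ti} AND {etdrs_in_ab}}}")
print(query)
```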
2.2. Data collection

We downloaded the literature information in batches, recording "Full Records and References Cited." To integrate the downloaded bibliographic data into CiteSpace 5.5.R2 and VOSviewer 1.6.17, we named the downloaded files "download X." From the WOS analysis results and citation reports we collected publication counts, citation frequency, and h-index for countries, institutions, and journals, along with primary information on the top 10 most cited articles. We also evaluated the impact factors (IFs) of important journals, derived from the 2022 Journal Citation Reports (JCRs).

2.3. Data analysis

We imported the findings of the WOS analysis results and citation reports into Microsoft Excel to chart annual publication counts and the major nations, institutions, and journals. We condensed the most significant data from frequently referenced articles and presented the findings in tabular format. We used VOSviewer to display the collaboration networks of countries and high-frequency keywords, setting the node type successively to countries and to all keywords, with thresholds chosen to display only the top 37 countries and the top 60 keywords. National node weights were based on total link strength, and keyword node weights on document counts; both figures used network visualization as the image type. We visualized keyword timelines and burst keywords using CiteSpace, setting the analysis period from 1996 to 2022, since the first relevant literature was published in 1996. Time-slicing was set to 1 year, the threshold to N = 50, and the node type to keyword. We used Pathfinder, pruning sliced networks, and pruning the merged network to simplify minor links, while the remaining parameters were kept at their default settings.
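The same tallies (publications per year, citation totals, h-index) can be reproduced directly from a WOS tab-delimited export instead of Excel. A minimal pandas sketch is shown below; the file name is hypothetical, and PY (publication year) and TC (times cited) are the standard WOS field tags found in a typical "Full Records" export.

```python
import pandas as pd

# WOS "Full Records" exports are tab-delimited; PY = pub. year, TC = times cited
df = pd.read_csv("download_1.txt", sep="\t", usecols=["PY", "TC"])  # hypothetical file

per_year = df.groupby("PY").agg(publications=("TC", "size"),
                                citations=("TC", "sum"))
print(per_year.tail())

def h_index(citations) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

print("h-index:", h_index(df["TC"]))
```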
Results

3.1. Growth trends of publications

A total of 10,709 records met the search criteria. Jirousek MR et al. released the first record on the subject of DR in 1996, proposing the idea of inhibiting overactive PKC isoenzymes to treat DR. At that time, however, few researchers paid attention to this field, and from 1996 to 2010 there were only three scattered pieces of drug-research literature. In 2011, the number of publications on this topic skyrocketed. Since 2011, the number of publications regarding DR has generally increased, reaching a peak of 1,508 articles in 2021, and it may continue to increase this year. The top three years by citation frequency were 2012, 2017, and 2016.

3.2. Distribution of countries

According to the articles retrieved, relevant material was published in 127 countries. Among the 10 nations with the most publications, the top three, China, America, and India, accounted for more than half of the total. Articles from America were cited 86,263 times, ranking first among all countries, followed by China and England. America and Germany had the greatest h-index and ACI, respectively. In addition, we analyzed country distribution and cooperation to assess the degree of international collaboration. Collaborations between nations were widespread: the cooperation map showed that America had the strongest total link strength (1,821) and the largest national cooperation network, followed by England and China.

3.3. Distribution of institutions

The top 10 institutions in terms of the number of publications were from four nations: England, America, China, and Singapore. The University of London published the most articles, followed by Harvard University and Shanghai Jiao Tong University. According to the citation frequency analysis, the University of London in England had 13,035 citations, placing it at the top. The highest ACI and h-index were recorded by the Singapore National Eye Center and Harvard University, respectively.

3.4. Distribution of journals

Half of the top 10 journals most frequently publishing DR articles were from America, indicating the country's significant impact. The British Journal of Ophthalmology had the highest IF (2022). Investigative Ophthalmology Visual Science published the most literature and also had the highest citation frequency and ACI.

3.5. Top cited references

The 10 most cited sources have been referenced over 11,000 times in total, with the first article cited 2,897 times and the 10th cited 541 times. The majority of these articles focused on deep learning and incidence.

3.6. Main keywords

Keywords distill the literature and help identify the field's hotspots based on their frequency. The VOSviewer node type was set to all keywords, and synonyms such as "diabetes" and "mellitus," or "vegf" and "vascular endothelial growth factor," were merged. We eventually identified the top 20 keywords and examined the top 60. Apart from "DR," "diabetes," and "retinopathy," the keywords "prevalence" and "risk factors" occurred most frequently.
In the keyword map, four colors illustrate the four primary research directions of DR: green for the therapy cluster, red for the pathogenesis cluster, yellow for the AI cluster, and blue for the epidemiology cluster.

3.7. The evolutionary path of keywords

The timeline view depicts the distribution of keywords over time; by following each keyword transformation, the migration route of the research emphasis can be seen intuitively. Prior to 2013, DR research focused on macular edema, risk factors, and neovascularization, among others. Between 2013 and 2016, biomarkers and neurodegeneration received significant attention. From 2016 to 2019, the focus shifted to OCTA, meta-analysis, and the foveal avascular zone (FAZ). During 2019–2022, AI and deep learning emerged as new research priorities.

3.8. Research frontier analysis

Keyword bursts, which identify terms that emerge abruptly and frequently within a certain period, can be used to forecast hotspot changes. Among the top 10 keywords with the strongest bursts, deep learning had the greatest strength and was a crucial research focus from 2020 to 2022. Pathogenesis had the longest burst duration and was a research hotspot from 2013 to 2017. Deep learning, biomarkers, OCTA, and models were still in their burst phase, indicating that they are the present research hotspots and may become the focus over the next few years.
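Keyword co-occurrence of the kind mapped in VOSviewer can be tallied with a few lines of code: each pair of keywords appearing in the same record increments one link, and a node's total link strength is the sum of the strengths of its links. The Python sketch below is a simplified illustration of that counting, not VOSviewer's implementation; the example records are invented.

```python
from itertools import combinations
from collections import Counter

# Hypothetical author-keyword lists from three records
records = [
    ["diabetic retinopathy", "deep learning", "prevalence"],
    ["diabetic retinopathy", "risk factors", "prevalence"],
    ["deep learning", "diabetic retinopathy"],
]

links = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):
        links[(a, b)] += 1  # one co-occurrence link per record and pair

def total_link_strength(keyword: str) -> int:
    # Sum of link strengths over every link touching this node
    return sum(w for (a, b), w in links.items() if keyword in (a, b))

print(total_link_strength("diabetic retinopathy"))  # -> 5
```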
Discussion

Bibliometric analysis is a prominent method for identifying active research hotspots and future trends. It can visualize the relationships between publications as a scientific knowledge map and is widely recognized as a valuable tool for mining useful information from the complex network structure of literature data. In this study, we conducted a bibliometric analysis of DR-related literature from the WOS, identified publication trends and global contributions, and determined four clusters and keyword bursts within the visual network.
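For readers unfamiliar with how such cluster maps arise, the sketch below shows the usual first step, assuming a simple record format of one author-keyword set per article (the records and the small synonym thesaurus are illustrative, not this study's data): keyword pairs co-occurring in an article increment an edge weight, and the resulting weighted network is what tools like VOSviewer then cluster and color.

```python
from collections import Counter
from itertools import combinations

# Tiny synonym thesaurus, mirroring the merging described in Section 3.6.
THESAURUS = {"vegf": "vascular endothelial growth factor",
             "diabetes mellitus": "diabetes"}

def normalize(keyword: str) -> str:
    k = keyword.lower().strip()
    return THESAURUS.get(k, k)

# Hypothetical author-keyword sets, one per article.
records = [
    {"diabetic retinopathy", "deep learning", "screening"},
    {"diabetic retinopathy", "VEGF", "pathogenesis"},
    {"diabetic retinopathy", "deep learning", "fundus image"},
    {"diabetic retinopathy", "vascular endothelial growth factor", "therapy"},
]

edges = Counter()
for keywords in records:
    merged = {normalize(k) for k in keywords}
    # Each unordered keyword pair within an article adds 1 to that edge.
    for a, b in combinations(sorted(merged), 2):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```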
4.1. Publication trend in DR research
Changes in the number of publications reflect the updating of knowledge about a subject. Before 2011, DR had not received sufficient attention, as few articles were published. However, the growing number of people with diabetes worldwide, and with it the rising incidence of DR, together with the advancement and wider availability of fundus examination equipment, has driven the expansion of DR research. Since 2011, research on DR has been in a stage of rapid development, as reflected by the number of published articles, and the publication trend suggests that output will reach a new peak this year. In addition, citation frequency was highest for work published in 2017, indicating that those research results have gained widespread attention.

4.2. International cooperation
According to the h-index and citation frequency, America is the leading country in DR research and the most influential, accounting for 23.795% of the total publications. China ranks second in impact despite having the largest number of publications. As research deepens, global cooperation is becoming closer. Based on the connections between the various nodes, America places great importance on communication and collaboration in academia, which explains why it combines high productivity with high-quality research. The top three prolific journals were all from America, with Investigative Ophthalmology and Visual Science (IOVS) being the most productive and most cited journal in DR research. Among the top 10 prolific institutions, six were located in Europe and America, which is consistent with the increasing prevalence of DR in developed countries.

4.3. Research directions of DR
The keyword analysis results from VOSviewer revealed four clusters in the DR field. Based on the findings of the bibliometric analysis, we discuss these four main research fields below.

4.3.1. Prevalence and risk factors of DR
In recent years, the prevalence of diabetes has been steadily increasing, leading to a rise in the incidence of DR, a serious worldwide public health concern. DR is the leading cause of blindness among individuals of working age (20–74 years old) in many countries. According to the International Diabetes Federation (IDF), in 2021, 537 million people aged 20–79 years had diabetes, accounting for 10.5% of the global population in this age group. Studies show that 22.27% of diabetes patients suffer from DR and that the prevalence of DR may reach 60% after 10 years of diabetes. The prevalence of any DR is higher in individuals with type 1 diabetes (77.3%) than in those with type 2 diabetes (25.2%). Earlier data suggested that 93 million people had DR in 2010 and that 28 million were at risk of visual impairment, while more recent studies estimate that, in 2020, 860,000 people over the age of 50 worldwide were blind because of DR. By 2045, an estimated 160.5 million people will have DR, of whom 44.82 million may be at risk of visual impairment. Multiple factors influence the occurrence and progression of DR. Incidence rates vary depending on the region and on the type and duration of diabetes. In addition, reviews suggest that the most significant risk factors for the initiation and progression of DR include smoking, a high body mass index, insufficient glycemic control, hypertension, a long history of diabetes, dyslipidemia, and microalbuminuria. Arterial stiffness has also been proposed as a DR-related risk factor in recent years. Although the exact role of these factors in the pathogenesis of DR is not well defined, they are important in guiding DR screening and the development of public healthcare strategies.

4.3.2. Pathogenesis of DR
Advances in research have improved our understanding of the pathophysiological processes that lead to the development of DR. Hyperglycemia, along with other pathological risk factors such as hypertension, sets off a cascade of metabolic pathways that eventually cause microvascular damage and retinal malfunction. Modification of the retinal microvasculature is the primary pathogenic feature of DR.
The blood–retinal barrier, formed by tight junctions between endothelial cells and pericytes, is crucial for intravascular homeostasis. In a high-glucose environment, abnormal expression of the circRNAs cZNF532 and circEhmt1 and of miRNA-138-5p induces pericyte degeneration and vascular dysfunction, while aberrant expression of miRNA-34a, miRNA-126, and miRNA-221 upregulates pro-inflammatory molecules, leading to the accumulation of leukocytes around the retinal capillary wall and apoptosis of vascular endothelial cells. Hyperglycemia also causes oxidative stress, which further leads to apoptosis, inflammation, and structural and functional changes in the retina. Microangiopathy may damage pericytes and endothelial cells, resulting in ischemic changes in the retina. The ischemic condition triggers compensatory mechanisms, promoting the proliferation of vascular endothelial cells and angiogenesis. Activation of miRNA-21 and downregulation of miRNA-200b signal the eventual development of proliferative diabetic retinopathy (PDR) through neovascularization. Furthermore, DR pathogenesis involves retinal neurodegeneration. Studies have shown that diabetic individuals have abnormal retinal neurons and glial cells and that retinal neurodegeneration occurs before retinal microvascular damage. Retinal ganglion cells, photoreceptors, amacrine cells, and bipolar cells all undergo changes during retinal neurodegeneration, and microvascular dysfunction and retinal neurodegeneration occur simultaneously. Alongside research into the retinal vascular and neurological causes of DR, targeted drugs have been developed.

4.3.3. Treatment of DR
Effective control of blood glucose and other risk factors is paramount in forestalling the onset of DR. Sodium-dependent glucose transporter 2 inhibitors (SGLT2i) effectively regulate blood glucose levels and inhibit the expression of SGLT2 within the retina, thus reducing the risk of DR. In addition, traditional Chinese medicine (TCM) monomers such as Lycium barbarum polysaccharide, Astragalus polysaccharide, curcumin, and crocin are natural extracts that can help protect the retina against apoptosis, inflammation, and oxidative stress. Combining TCM monomers with nanotechnology can also address their inadequate bioavailability, making them a promising option for the early prevention and treatment of DR. For individuals with a confirmed diagnosis of PDR, the main therapeutic options are retinal laser photocoagulation and intravitreal anti-vascular endothelial growth factor (VEGF) injection. Laser photocoagulation reduces VEGF production and the risk of vision loss by destroying the ischemic retinal area. Anti-VEGF treatment, on the other hand, improves retinal edema and patients' vision by suppressing vascular permeability. Common anti-VEGF drugs include conbercept, aflibercept, ranibizumab, and bevacizumab. Intravitreal anti-VEGF injection significantly reduces peripheral vision impairment and decreases the incidence of macular edema in DR patients, making it the preferred option for treating diabetic macular edema. A meta-analysis has also shown that anti-VEGF pretreatment 6–14 days prior to vitrectomy in DR patients can reduce operation time, improve postoperative best-corrected visual acuity (BCVA), and decrease the recurrence rate of vitreous hemorrhage.
When anti-VEGF therapy is not successful, corticosteroids may be considered, given the involvement of multiple inflammatory mediators in the pathological processes of the retina. Common intravitreal corticosteroids include dexamethasone, fluocinolone acetonide, and triamcinolone acetonide. The dexamethasone intravitreal implant not only improves patients' symptoms but also reduces the psychological burden of frequent intravitreal injections. However, the risks of raised intraocular pressure and cataract progression must be kept in mind when using steroid drugs. Persistent vitreous hemorrhage may require vitrectomy to preserve functional vision. Blocking the expression of DR-related genes can help prevent neurovascular abnormalities in DR, while inducing stem cell differentiation can reshape retinal function; however, further study is needed before such technologies can be applied in the clinic.

4.3.4. Fundus image for deep learning
Diabetic retinopathy has different fundus manifestations at different stages. In the non-proliferative diabetic retinopathy (NPDR) stage, fundus manifestations include microaneurysms, punctate retinal hemorrhages, hard exudates, cotton-wool spots, retinal edema, and venous beading. Angiogenesis is the main fundus feature of the PDR stage. Fundus images can be used to detect and diagnose macular edema and vitreous hemorrhage, the main complications of DR, and if a fundus image shows proliferative vitreous membranes with retinal elevation, it suggests that DR has progressed to tractional retinal detachment. Currently, fundus images of DR are used not only for clinical diagnosis but also for deep learning, to improve the detection rate and accuracy of DR diagnosis.

4.4. The research frontier of DR
Keyword burst analysis provides a useful tool for forecasting research frontiers and predicting fundamental and clinical research trends. The current research frontiers of DR can be summarized as follows.

4.4.1. Deep learning and AI models
The most frequently cited article in the field concerns the development and validation of a deep learning algorithm for DR. Improvements in algorithmic DR research have led to significant breakthroughs in the early diagnosis of DR, and a new model has been developed to predict the progression of diabetes to DR. Through deep learning on vast numbers of fundus images, AI has achieved early identification and severity grading of DR, with algorithms continually optimized to improve detection rate and accuracy. In addition, AI can be used to evaluate prognosis and to develop personalized treatment plans based on the personal electronic records of DR patients. The rapid development of the technology has led to the incorporation of AI-based early detection of DR into the American Diabetes Association's guidelines. Combining AI and telemedicine can reduce public health costs and improve the efficiency of diagnosis and treatment, addressing the numerical imbalance between ophthalmologists and patients, and is an important strategy for dealing with the high global incidence of DR. However, AI-algorithm research requires ethical review of large volumes of original data, and clinical applications must be properly standardized and deployed. Furthermore, there is an urgent need to establish a liability system for critical medical negligence that may be caused by AI misdiagnosis or missed diagnosis.
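As a hedged illustration of the kind of model this literature describes (not a reconstruction of any specific published system), the following transfer-learning sketch fine-tunes a standard CNN for five-grade DR classification; the dataset path, folder layout, and hyperparameters are placeholders:

```python
# Minimal transfer-learning sketch for fundus-image DR grading. The five
# output classes stand for the common clinical grades (no DR, mild/moderate/
# severe NPDR, PDR). "fundus/train" is a placeholder directory with one
# subfolder per grade; requires torchvision >= 0.13 for the weights API.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("fundus/train", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Pretrained backbone; replace the classification head with 5 DR grades.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass shown; real training needs more
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```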
4.4.2. Biomarkers and OCTA
Biomarkers can be objectively measured and used as indicators of normal biological status, pathogenic processes, or response to interventions. Abnormal fundus findings, blood markers, cytokines, and similar measures can all serve as biomarkers of DR. Identifying these biomarkers can assist doctors in diagnosis and timely intervention to avoid aggravation of the patient's condition. Furthermore, DR biomarkers, abnormal gene expression, and metabolomics may explain the pathophysiology, forecast the prognosis, and suggest directions for novel therapeutic development. In addition to biomarkers, OCTA, a non-invasive fundus imaging technique, can be used to classify DR objectively. OCTA is commonly used in conjunction with DR biomarker identification because it provides a layered view of the retinal and choroidal vasculature in living tissue. In OCTA images, biomarkers of DR appear as enlargement of the non-perfused macular area, a reduction in vessel density, and structural alterations of the fundus vasculature. The development of wide-angle OCTA has increased the sensitivity for spotting non-perfused regions and neovascular vessels in the retina. Moreover, associated fluorescein angiography (FA) images may be used to align OCTA scans, and automatic registration methods facilitate quantitative comparison of microvascular characteristics in DR. In conclusion, OCTA is extensively utilized in clinical and research applications: it may be used for clinical assessment, to track changes in fundus conditions, and to evaluate the efficacy of therapy. Developing DR screening programs and discovering more specific and sensitive biomarkers is therefore vital to assist the early identification of DR and to minimize the incidence of visual impairment and blindness.
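As an illustration of how one such quantitative OCTA biomarker can be operationalized, the sketch below computes vessel density as the foreground fraction of a thresholded en-face angiogram; the image is simulated, and the single global threshold is a stand-in for the calibrated, layer-specific pipelines of clinical devices:

```python
# Naive vessel-density sketch for an OCTA en-face image: the fraction of
# pixels classified as vessel after global thresholding. Real analysis uses
# device-calibrated, layer-specific segmentation; this is illustrative only.

import numpy as np

def vessel_density(enface: np.ndarray, threshold: float) -> float:
    """Fraction of pixels above `threshold`, treated as vessel signal."""
    return (enface > threshold).mean()

rng = np.random.default_rng(0)
fake_octa = rng.random((304, 304))   # placeholder for a real en-face scan
print(f"vessel density: {vessel_density(fake_octa, 0.7):.3f}")  # ~0.300
```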
4.5. Strengths and limitations
This study used bibliometrics to evaluate publishing trends and the leading research countries, institutions, and journals in the area of DR. VOSviewer allowed the identification of national collaborations and high-frequency keywords, while CiteSpace illustrated changes in research hotspots and predicted future research trends. The main limitation of this study is that it included literature from only a single database, which may have omitted relevant DR-related publications and introduced bias.
Conclusion

The results of this bibliometric analysis demonstrate that DR is a critical research field that has been expanding rapidly. The United States has had a significant academic impact on DR research, and international collaborations are increasingly important for the field's development. OCTA screening and the identification of specific biomarkers are crucial for early DR detection and the prevention of visual impairment and blindness. With the increasing prevalence of DR, the development of AI technologies and telemedicine to address the shortage of ophthalmologists is a potential research hotspot and an urgent issue. These findings can provide valuable references for future DR research.

Data availability: The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions: HX and JT conducted the bibliometric analysis and drafted the manuscript. JD and FZ reviewed the manuscript. LL developed the search strategy. ML, MC, JZ, XW, and YN contributed to the data analysis. All authors contributed to the article and approved the submitted version.

Conflict of interest: The authors declare that the research was conducted without any commercial or financial relationships that could potentially create a conflict of interest.
|
Evidence-Based Practice in Psychosocial Oncology from the Perspective of Canadian Service Directors
|
387e0630-a6dc-444d-9e49-70b64754dfce
|
10136815
|
Internal Medicine[mh]
|
1.1. Introduction to Evidence-Based Practice
Regulatory bodies in psychology require professional training and clinical practice to be based on scientific evidence. In addition, third-party payers are increasingly demanding outcome-driven, cost-effective care models (p. 204). Science-informed practice refers to using the latest, best-available scientific evidence to inform all aspects of clinical service delivery. Evidence-based practice (EBP) as defined by the American and Canadian Psychological Associations goes a step further by prescribing how science is to be used to guide clinical practice, i.e., through the conscientious integration of three fundamental components: (1) the best scientific evidence, (2) clinical expertise, and (3) the characteristics, treatment preferences, and cultural background of the patient. EBP further entails the monitoring of treatment outcomes from intake to termination to inform treatment planning, modification, and discontinuation. Implied in the definition of EBP is a commitment to being continuously informed by the best scientific evidence. However, clinicians frequently report insufficient time or resources for developing and maintaining a working knowledge of the latest scientific evidence, a problem exacerbated by the ever-increasing demand for mental health services.

1.2. Evidence-Based Practice in Psychosocial Oncology
Patients and their social support systems have mental health needs that require attention across the cancer trajectory. A wealth of research has shown the effectiveness and value of addressing these needs through psychosocial oncology (PSO) interventions in routine cancer care. Yet, despite increased awareness and the demonstrated value of PSO care, patients report that their psychosocial struggles and symptoms are often not sufficiently addressed and sometimes not even acknowledged. Some individual and systemic barriers to EBP in PSO have been noted in the literature, such as high patient-to-clinician ratios and an overwhelming amount of relevant scientific literature. Over the past decade, evidence summaries and clinical decision-making resources, such as clinical practice guidelines, have been created to facilitate EBP in PSO throughout the cancer trajectory. Clinical practice guidelines systematically evaluate and summarize the latest scientific evidence for specific symptoms or conditions and synthesize the findings into actionable recommendations for clinical practice. Given the growing knowledge base on psychological interventions specific to PSO, guidelines have been developed that provide clinicians with evidence-based recommendations, e.g., for the screening, assessment, and treatment of cancer-related distress, depression, and fear of cancer recurrence, and for the psychosocial support of cancer survivors. Researchers have noted extensive variations in the usage of clinical practice guidelines, pointing to challenges such as insufficient research on the complex processes of guideline implementation, difficulty in keeping guidelines up to date with the newest evidence, knowledge and attitude barriers at the clinician level, patient concerns about the stigma of needing counselling, and resource barriers at the systemic level.

1.3. Research Objectives
To our knowledge, there is no empirical evidence detailing the use of EBP in PSO services provided to adult patients. Wiener et al.
(2015) published the only recent study to examine the state of EBP in pediatric PSO, focusing specifically on children with cancer and their families. The current study examines the perspective of Canadian directors, coordinators, and managers of PSO services regarding evidence-based psychosocial care for adults (18+) diagnosed with cancer and their families. Our research questions are: (1) how are evidence-based practices in psychosocial oncology implemented in clinical care, and how is service quality monitored; and (2) what are the barriers to and facilitators of evidence-based practices in Canadian psychosocial oncology services?
2.1. Research Design Overview
This study presents primary data based on semi-structured phone interviews with directors, managers, and coordinators of PSO services in Canada. Reflexive notetaking was used as a secondary data collection strategy. The qualitative data-analytic procedure incorporated both inductive and deductive generation of themes in an emergent process. The approach to inquiry and the philosophical assumptions underpinning this research design are rooted in a social constructivist paradigm: knowledge was understood to be co-constructed through dynamic interactions between participants and researchers. As primary investigator of this project, S.M. complemented their graduate training in qualitative inquiry by consulting two qualitative experts and participating in monthly meetings of McGill's Qualitative Health Research group as part of their commitment to broadening their qualitative methods knowledge base and expertise. Their understanding of the phenomena under study was informed by an in-depth literature review that influenced the structure of data collection and the lens through which data analysis occurred. To manage this influence, S.M. used a bottom-up approach to thematic analysis, and two additional independent coders helped develop the coding manual and analyze the data. S.M.'s interest in conducting this study was motivated by understanding the complexities of psychology as a discipline and how Canadian healthcare standards and contextual factors continuously shape program structures. Finally, S.M. had no prior relationship with any of the study participants.

2.2. Procedures and Participants
The Research Ethics Board of McGill University granted ethical approval for this project (#104-0719). Purposeful snowball sampling, whereby participants may refer additional respondents, was the primary recruitment method due to its practical advantage in accessing a network of professionals who are few in number. The distribution of participants resulted from both objective probability and deliberate selection. An online advertisement was posted in the monthly newsletter of the Canadian Association of Psychosocial Oncology. The researchers also shared the advertisement with PSO services identified through online searches to ensure the inclusion of service providers from various provinces and to reach potential interviewees from geographical regions not yet represented in the study sample. Healthcare professionals were eligible to participate if they were clinical directors, managers, or coordinators of PSO services provided within hospitals, cancer centres, or community-based institutions, but not private practices, in Canada. The interview guide was co-constructed in consultation with PSO experts (see the supplementary material for the interview guide). Participants were asked open-ended questions about their program's structure; their current policies and standards of care as they relate to EBP; barriers and facilitators for the delivery of EBP; and details concerning research access and ongoing education opportunities for clinicians. Common participant questions as well as the evolving findings led to minor alterations in the interview guide. Individual interviews were audio-recorded, transcribed verbatim, and anonymized. Transcripts, field notes, and debrief discussion notes were used for the analyses. Initial data analyses were conducted alongside data collection. The interviews ranged from 45 to 82 min (mean = 62.3 min) and were completed between November 2019 and June 2020.
Recruitment was terminated when the authors agreed that thematic saturation had been achieved.

2.3. Data Analyses
Thematic data analysis was conducted following Braun and Clarke's (2006) guidelines: immersion in the data, generation of initial codes, and review and definition of the smallest sufficient number of content themes. S.M. used NVivo software 12.0 and noted initial codes and emerging content themes. The coding team read the transcripts focusing on participant descriptions of the implementation of EBP in PSO, the monitoring of PSO service quality, and the barriers and facilitators to both. S.M. and V.T. independently coded two transcripts and compared their respective coded text segments against a coding manual. Discrepancies in coding and in the definitions of codes were discussed until consensus was reached, with assistance from A.K. when necessary. Consensus discussions led to final modifications of the coding scheme and consolidated a common understanding of the codes between coders. Intercoder reliability (ICR) was calculated using the Mezzich procedure by applying 44 non-mutually exclusive codes to 4 interview transcripts coded independently by S.M. and V.T. (see the supplementary material for the intercoder reliability data table). Two selection criteria were used in the assessment of ICR: codes had to address substantive issues related to the research questions and needed to appear with reasonable frequency, at least three times in the text. Concordances and discordances were listed in a cross-table and regarded as concordant if both coders assigned the main statement of the text to the same code. The overall Mezzich's kappa coefficient for the 32 retained codes was 0.64, which indicates significant agreement at t(31) = 5.66, p < 0.001, i.e., a moderate level of agreement. Intercoder agreement (ICA) on any discordances was 99.6%. S.M. coded the remaining 9 transcripts independently.
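For readers unfamiliar with chance-corrected agreement statistics, the sketch below computes plain Cohen's kappa for a single code's presence or absence across text segments; this is not the Mezzich procedure the authors used for their non-mutually exclusive codes, but it conveys the same idea of correcting raw agreement for chance (the 0/1 assignments are invented):

```python
# Hedged sketch of chance-corrected intercoder agreement: Cohen's kappa for
# one code, given each coder's 0/1 assignment per segment. Illustrative only.

def cohens_kappa(coder_a, coder_b):
    """coder_a, coder_b: equal-length lists of 0/1 code assignments."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    p_a1 = sum(coder_a) / n                     # coder A's rate of applying the code
    p_b1 = sum(coder_b) / n                     # coder B's rate
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

# Hypothetical: did each of 20 segments receive code X from each coder?
a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0]
b = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1]
print(f"kappa = {cohens_kappa(a, b):.2f}")      # -> kappa = 0.70
```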
This study presents primary data based on semi-structured phone interviews with directors, managers, and coordinators of PSO services in Canada. Reflexive notetaking was used as a secondary data collection strategy. The qualitative data-analytic procedure incorporated inductive and deductive generation of themes in an emerging process. The approach to inquiry and philosophical assumptions underpinning this research design are rooted in a social constructivist paradigm. Knowledge was understood to be co-constructed through dynamic interactions between participants and researchers . As primary investigator of this project, S.M. complemented their graduate training in qualitative inquiry by consulting two qualitative experts and participating in monthly meetings of McGill’s Qualitative Health Research group as part of their commitment to broadening the scope of their qualitative method knowledge base and expertise . Their understanding of the phenomena under study has been informed by an in-depth literature review that influenced the structure of data collection and the lens through which data analysis occurred. To manage this influence, S.M. used a bottom-up approach to thematic analysis and two additional independent coders helped develop the coding manual and analyze data. S.M.’s interest in conducting this study was motivated by understanding the complexities of psychology as a discipline and how Canadian healthcare standards and contextual factors continuously shape program structures. Finally, S.M. had no prior relationship with any of the study participants.
3. Results
In total, sixteen directors from thirteen unique clinical sites participated in this study (five hospitals, six cancer centres, two community centres), the majority of which were university affiliated (69%). Hospital sites were the most likely to be affiliated with a university (100%), followed by cancer centres (67%), while no community centres were affiliated with a university. A detailed overview of the sociodemographic characteristics of the study sample is provided in the accompanying table. The sample was diverse in terms of gender (62% women), age (44% between 45–54 years), educational attainment, and years working in the field of PSO (range: 1–38 years, mean = 10.4 years, SD = 9.8). The sample was more homogeneous in terms of field of training (38% social work, 31% psychology) and geographical location (31% Ontario, 25% Quebec, 44% other Canadian provinces). The professions of study participants, in addition to social work and psychology, also included nursing, occupational therapy, kinesiology, and psychiatry. Interview results are presented according to the two research questions (a brief overview of the themes identified through qualitative analysis is provided in the accompanying table). In the interest of brevity, the researchers tell the story of the participants, keeping quotes to a minimum; quotes have been lightly edited for conciseness and clarity while retaining their original meaning.
Findings regarding research question 1: How are evidence-based practices in psychosocial oncology implemented in clinical care and how is the service quality monitored?
3.1. Screening for Distress and Referral to PSO Services
3.1.1. Initial and Repeated Distress Screening
Few topics received as much attention as the implementation of distress screening because of its importance in accurately identifying patients in need of PSO care. One participant noted that patients differ in their expression of distress, saying: “they’re looking stoic and brave but aren’t referred because they’re not crying in the office”. All participants except for one described some variation of initial distress screening during patients’ first visit, with high scores prompting further exploration. The use of diverse screening instruments was mentioned, such as the Patient Health Questionnaire (PHQ-9), the Generalized Anxiety Disorder Scale (GAD-7), the Canadian Problem Checklist from the Canadian Partnership Against Cancer (CPAC), and the Distress Thermometer. Participants expressed that repeated distress screening, which would trigger a PSO referral whenever indicated across the cancer trajectory, was inconsistently and rarely implemented. Some participants detailed discrepancies between their intended operating procedures and the reality of initial distress screening. The one participant who disclosed that their program had no formalized distress screening indicated that patients in need were missed while other patients were seen by more than one clinician. Another participant shared that frontline healthcare workers omitted the initial distress screening when they knew that sufficient follow-up PSO services would not be available. Notably, one participant commented that patients reporting distress higher than two on a ten-point scale should receive follow-up assessment, but that follow-up occurred only for patients with scores above seven.
They added that, according to certain stakeholders, initial oncology appointments are often so loaded with information that psychosocial needs are the last to be addressed; by that time, patients feel too overwhelmed to engage in psychosocial assessment.
3.1.2. Triage and Referral Systems
Participants also highlighted the importance of triage and referral systems to connect cancer patients in distress with appropriate PSO services. Some programs had only the most basic referral system, while others reported high levels of multidisciplinary, departmental, and external cooperation. One participant described triaging according to the stepped care model, where symptom management and group programming are offered first, followed by more individualized care for those for whom this is not effective in managing their distress. It was expressed that programs have a responsibility to (a) have a working knowledge of internal and external resources available to patients as well as their acceptance criteria, and (b) visibly promote available services and ensure patients are informed about them in a timely manner to maximize service uptake. Some sites emphasized the importance of patients being able to express their treatment preferences. Several challenges to these perceived responsibilities were described. Participants expressed that it is difficult for clinicians to maintain a working knowledge of resources due to the frequently changing availability and acceptance criteria of PSO services. One participant explained that their province-wide fatigue management group, which might otherwise have been beneficial since cancer patients commonly suffer from fatigue, was poorly attended due to low awareness of its availability among healthcare providers and patients. Moreover, when discussing recent survey findings indicating that patients did not know about services soon enough, one participant questioned: “whether or not patients were told; if they recalled; or if the timing was right”.
3.1.3. Administration and Technology
Some participants explained that, for efficiency reasons, their PSO programs created designated positions, such as administrative assistants and resource counsellors. This decreased the burden on each individual clinician and on frontline staff of staying up to date with all available internal and external resources, including the respective eligibility criteria. Staff in these roles either helped with or completely managed initial and repeated distress screening as well as the triage and referral processes. These roles were also supportive in that they could provide patient orientation, update and organize service guides with programming options, and offer patients opportunities to express treatment preferences before starting a specific service. One participant exemplified the usefulness of such administrative support: “Our program has upwards of about 2500 h of programming we offer a month.
So participants are assigned a wellness guide with all the different programs that they can take which will follow them… So, we periodically call in, see how they’re doing, see what they like, what they don’t like, what their needs are… Every month a new calendar comes out and they select their wish list, and a team actually assigns each participant based on their wishes and based on our availability for that month.” An electronic referral system was also emphasized as a key facilitative technology for the implementation of distress screening, whereby patients complete screening tools on a tablet or kiosk. In this PSO program, staff were then automatically notified to follow up with patients based on predetermined cut-off scores. This reportedly led to higher rates of patients receiving PSO care than when screening was conducted manually by oncologists and nurses. Other facilitative technologies included digitizing case management and using a secure online platform to make files accessible to clinicians and patients.
3.2. Delivery of Evidence-Based PSO Services
3.2.1. Therapeutic Interventions
Some participants described using the stepped care model to guide the implementation of their PSO services, where symptom management and group programming are offered first, followed by more individualized care for those for whom this is not effective in managing their distress. Some also noted that, prior to implementing new group programming, scientific evidence for its efficacy had to be provided to healthcare decision makers external to the PSO program. Finally, participants also remarked that offering group programming, and training others to offer it, is a specialized skillset that is not easily replaced: “We have a clinical nurse specialist who resigned—she was the one who did all the training around the physical symptoms and side effects of cancer with women in particular”. Participants described that clinicians used approaches specifically developed and validated in the cancer context, including CALM therapy, dignity therapy, and meaning-centered therapy. They also mentioned other evidence-based approaches commonly used in mental health disciplines, such as cognitive behavioural therapy, mindfulness-based interventions, motivational interviewing, and narrative therapy. Most participants indicated that these services are offered in an integrative fashion, i.e., adapted to the unique needs of each patient.
3.2.2. Credentials and Prior Professional Experience
Participants specified requirements in PSO programs’ hiring processes, such as the expectation for clinicians to have pre-existing skillsets related to EBP in PSO. Except for administrative assistants or resource counsellors, clinicians were expected to have a relevant master’s degree and discipline-specific licensure. Moreover, participants commented that clinicians with experience in the healthcare system were viewed more favorably in the hiring process due to likely having transferrable knowledge and skills. They further expressed that experience in oncology-related areas, such as chronic illness or palliative care, helped clinicians to better understand medical treatments in oncology and the unique physical and psychological side effects associated with cancer.
As one participant put it: “Offering evidence-based treatment requires that a clinician understands the cancer trajectory and related issues like treatments, side effects, and fatigue.” If clinicians did not already arrive with oncology-specific expertise and experience, it was expected that they would gradually develop it.
3.2.3. Onboarding Training Protocols
Onboarding training protocols can be understood as a transition point between clinicians’ prior professional experiences and the need to acquire and maintain knowledge regarding EBP in PSO. Most participants described that their PSO program had at least an informal training protocol that was intended to guide new staff members and student clinicians in the delivery of services. Informal training sometimes included a shadowing or mentorship period, orientation to PSO care, administrative orientation, and recommended readings. Fewer participants described that their PSO program had a comprehensive and periodically reviewed training protocol to facilitate EBP. These protocols included, for example, ongoing supervisory support, access to online training courses, and opportunities for specialist training. One participant emphasized that their PSO program’s training procedures are informed by synthesized scientific evidence, whereas another participant shared that, as a program director, it is challenging to create training protocols incorporating the latest research evidence: “From a manager’s perspective, I find that there’s no help with developing those protocols”.
3.2.4. Ongoing Access to Research and Training
Participants described that ongoing access to primary research and research summaries was important for the maintenance of EBP in PSO services. Participants remarked that clinicians relied more on merged evidence resources than on original research studies when deciding how to implement interventions. Examples of sources at the national level included the clinical practice guidelines from the Canadian Association of Psychosocial Oncology and distance education opportunities from the Interprofessional Psychosocial Oncology Distance Education, as well as provincial guidelines from organizations such as Cancer Care Ontario. Participants noted two main barriers to remaining science-informed. The first was a lack of access to scientific databases such as those available at university libraries, which one participant circumvented by relying on others’ library access while expressing considerable concern: “One of the largest barriers for us is that individual clinicians, including me as clinical lead and our manager, no longer have direct access to the university library system—they deleted our library. So, if we want to do searches in the library, you have to by hook or by crook, find somebody who has access. I used to do it with my daughter, but she doesn’t go to university anymore. So, I can’t look up or follow up on something unless I jump through hoops […] Well, you know that’s not good enough. So that’s a big deal. That’s a real barrier”. Other participants highlighted that the maintenance of PSO knowledge requires a considerable amount of time and effort on the part of clinicians, as one participant noted: “You can go to a seminar, but the real challenge is that when you go back to your normal life, where you work—how much time do you have for implementation, follow-up, and support to become more proficient with whatever it is that you’ve learned?
Sometimes I think that getting training is half the battle: the other half is maintaining the resolve and protecting the time, finding your own internal motivation and external support to continue to use it”. Participants reported cooperative sharing, which refers to sharing knowledge of EBP within the PSO community through collegial interactions. This fostered clinicians’ maintenance of their EBP knowledge in PSO. Activities exemplifying cooperative sharing included supervision and mentorship, journal clubs and communities of practice, peer and case consultations, specialist presentations via visiting speakers, conferences or symposiums, and research collaboration. While participants felt that advanced scientific literacy was associated with doctoral degrees, it was also expressed that scientific literacy among clinicians could be facilitated through discussions of the merits and limitations of PSO research. Along these lines, the second barrier to consuming research was a universally voiced concern about having too little time for all the activities that would enable clinicians to remain science-informed. This is elaborated further under Political Context.
3.2.5. Clinician Attitude and Specialization
Participants reported that facilitators of EBP included clinicians’ commitment to ongoing learning and self-reflection as well as having chosen to specialize further within PSO, e.g., by disease/tumor site, population attributes, or therapeutic approach. Ongoing learning was said to improve patient care but also to foster clinician wellbeing. As one participant put it: “I don’t think, if you’re just seeing patients hour after hour that sense of openness to new ideas is there. I’ve done best when I see cancer patients and when I’m actively involved in research—I feel it keeps me interested, fresh”. Specialization was universally described to be of benefit to patients, whose unique needs might be better addressed through tailored care, as well as to clinicians, and more broadly to the field of PSO. Narrowing one’s clinical focus led to more relevant patient referrals for the clinician, made it easier for the clinician to keep up with research in a specific area, increased their confidence, and also helped to inform research agendas.
3.3. Monitoring of PSO Services
Participants described three areas where monitoring was needed: at the patient level, whether patients report that PSO services are effective in managing and reducing distress; at the clinician level, whether procedures are in place to ensure services are evidence-based; and at the program level, what the overall outcomes of the services provided are at a given PSO site.
3.3.1. Patient Feedback
Participants expressed that patient feedback could be acquired through repeated distress screening, patient satisfaction surveys, verbal feedback, and by welcoming patient advisory groups. One participant noted the importance of explicitly soliciting patient feedback, explaining that, while patients rarely complained about their care, they are not necessarily given the opportunity to share their concerns about what is or is not working. Outcome monitoring via patient feedback was described as instrumental for improving the quality of PSO care as it informed the science sought out by clinicians to gain additional knowledge. Moreover, when PSO programs were able to gather these data, it also helped justify resource acquisition from health authorities and expansions in programming to increase the quantity of care offered.
However, some participants expressed concerns that, even though patient feedback was solicited, little was done with these data, either because the data were not specific to their PSO program and/or because they had too little time to act on it. One participant shared: “[Provincial Cancer Care Organization] requires that patients complete their ESAS [Edmonton Symptom Assessment System] every once and a while. We do get those results back but they’re not specific to the program that you’re in or just for the psychosocial oncology program—we’d be getting results for the [oncology] program as a whole. So, it’s hard to use that data within our program or to evaluate what we’re actually doing”.
3.3.2. Requirements of Licensing Bodies and Performance Reviews
Participants explained that, at the individual clinician level, they needed to trust regulatory colleges to monitor that their members adhere to their respective standards of practice and work within their scope in an evidence-based manner. Colleges handle this largely through continuing education requirements. Certain participants expressed that monitoring within PSO programs occurred through annual audits or performance reviews, which allowed them to substantiate whether clinicians were offering evidence-based services. Participants added that these were useful opportunities to provide constructive feedback to clinicians and to collaboratively set professional goals for upcoming periods.
3.3.3. Program Evaluation
Many participants reported that their PSO program had some kind of program-wide monitoring, even if this was completed informally and infrequently. Most programs collected productivity statistics requested by health authorities, such as patient wait times, the number of patients served, and each clinician’s caseload. Participants universally felt there was a distinct emphasis on quantity of care and explained that the primary reason for not collecting program-wide patient outcome data was that health authorities almost exclusively allocated resources for documenting the number of patients seen. Participants had recommendations for important data on service quality to be gathered (besides the patient-reported outcomes already mentioned under Patient Feedback), e.g., tracking the number of patients who accepted referrals to external resources when internal ones could not be offered and monitoring the uptake of group programming relative to the spaces available in a group.
3.3.4. Research and Quality Assurance Projects
Participants described research and quality assurance projects that included obtaining and reviewing patient feedback and completing program evaluations. They stated that such projects held the potential to maximize the use of existing resources and helped advocate for the expansion of PSO services. Other participants described how being involved in research or quality assurance projects helped improve the effectiveness of services: “We were able to carry out clinical research, very interesting projects, with young adults with cancer with specific symptom management projects around brachytherapy for colorectal cancer. There were papers published based on this data. So, we had research as a way of keeping us at the edge. The science practitioner model is the best way for clinicians to hone their skills because they have to review the literature, they have to know what’s going on—you’re examining interventions and programs and help service delivery models that bring better care to patients”.
However, participants explained that, if clinicians were permitted to be involved in research projects, they tended to be collaborators. Some participants explained this was due to time constraints and to not having research mandates in their job descriptions. Participants expressed concern that, even at institutes where research is mandated, the support for such projects was inconsistent: “We’re in a teaching hospital of [a large university] so part of our mandate was to do clinical research. When the ministry changed, research became less of a priority [and stopped being given] real support”. As such, research was most often led by external primary investigators while PSO clinicians were involved as collaborators or as research participants.
Findings regarding research question 2: What are the barriers and facilitators of evidence-based practice in psychosocial oncology services?
Barriers and facilitators were described along four different but overlapping contexts: political, social, economic, and geographical. These factors contributed to the unique situation of each PSO program and, by extension, to the way the programs implemented and monitored EBP.
3.4. Political Context
The term health authority was brought up when alluding to or specifically discussing power differentials that occurred between organizations and programs (political context) as well as between directors and clinicians (social context) positioned within a vertical hierarchy. External health authorities included bodies such as the federal Public Health Agency of Canada, provincial Ministries of Health, as well as provincial cancer care organizations. Internal health authorities included PSO program leaders who oversee clinical services (and research activities, if any) within each PSO program as well as healthcare decision makers within the organization but external to the PSO program, e.g., directors of the department of oncology and institutional managers. These political and social contexts had profound implications for the economic context of each individual PSO program, where health authorities largely had control over the resources distributed to the PSO program as well as how those resources were used. Participants explained that funds were passed through vertical hierarchies from Ministries of Health to cancer care organizations, which then distributed funds with budgeting mandates or certain directives to PSO programs. Some program directors were granted an extension of this control over resources, which is discussed further under Social Context. Participants overwhelmingly shared the belief that health authorities overvalued quantity of care—that is, were more concerned with minimizing costs and maximizing volume—which created barriers for PSO programs and clinicians implementing and monitoring services. Participants reported a relative devaluation of PSO services when compared to medical treatments, noting that the latter generated more revenue and extended patient life. Participants implied that the perceptions health authorities have regarding the value of mental health services and their role in fostering patients’ quality of life were a political issue. While this may be stating the obvious, one participant expressed that: “For a PSO program to exist, the people, the health authorities, and the government—they have to believe in what we do. If there’s no belief, it’s hard to sustain our services”.
Participants said that the devaluation, or even systematic disincentivization, of PSO services has historically caused high patient-to-clinician ratios and a lack of support for activities related to quality improvement, such as continuing education and service monitoring projects. Given that a clinician’s caseload was also a primary performance metric measured by health authorities, participants stated or implied that taking time for any task other than direct service delivery reduced the time available to see patients and therefore impacted their perceived productivity. As one participant put it, if clinicians are “invested in something other than seeing patients then it affects the numbers. And the [provincial health authority] evaluate us by the number of patients we see”. A theme among participants was that significant responsibility is placed on individual clinicians to be evidence-based, rather than health authorities playing a larger role in helping realize this. Consequently, PSO programs varied widely in the extent to which they were able to support clinicians’ continuing education, with those receiving less support expressing the sentiment that it is “almost impossible to get training in any meaningful way”. One director argued that chronically high workloads make it unrealistic or even impossible for staff to remain current with science during their working hours, let alone contemplate “what should we be doing” at the program level to increase service quality and quantity. When commenting on their program evaluations, another participant said: “The barriers are mostly structural. It would be great to have our own budget, our own team, and some control over these. That would have helped us to do the program evaluations, which we couldn’t do”. Ultimately, while inherent resource limitations do necessitate a focus on efficiency, the push to “do more with less” was echoed by all participants, who described being underfunded and understaffed despite the enormous and growing need for PSO services. Participants highlighted the health authorities’ potential to play a greater role in monitoring PSO programs through program evaluations as well as in providing directors and clinicians with greater autonomy in monitoring their own services. One participant described that their health authority asked their PSO program to choose and apply recommendations from a list, then annually report back on their progress: “[Provincial Cancer Center] came out last year with thirty recommendations considered to be standards of care. We were tasked to take three of those recommendations and apply them.” Another participant spoke of a provincial pilot to fund services based on scientific evidence for services that respond to patient needs: “Part of our health system reform has been using a new type of funding developed by our provincial health authority. It says, “Okay, how do we implement the quality-based procedures model for radiation therapy?” and then, “What is the appropriate amount of funding for each radiation patient to ensure their psychosocial needs are met?” So, this shifts away from a fee-for-service model and instead says, “We’re going to fund your cancer centre based on what the evidence says about needs and the care that should be delivered”. So, this new model is trying to tie funding to the provision of quality services”. Lastly, some community sites described that the reduced presence of a political hierarchy helped them serve patients more quickly.
They reported having more flexibility in managing time and finances compared to tertiary care services. This autonomy reportedly reduced bureaucracy and allowed PSO programs and clinicians to respond to patient needs in a timely manner, as explained by one participant: “The fact that we don’t have the red tape of bureaucracy helps us respond a little bit faster. There aren’t many people at our organization—so we can, for example, offer an additional cancer support group when we notice big waitlists”.
3.5. Social Context
3.5.1. Directorial Vision
Participants described that they, as directors, are uniquely positioned to shape the implementation of PSO services, particularly when holding a leadership position over long, uninterrupted periods of time. Consequently, hiring directors specialized in PSO was emphasized for reasons having to do with a strong working knowledge of PSO-specific EBP. However, participants also noted that some PSO leadership positions lacked compensation commensurate with the level of training required for the role. One participant noted that they did not have an educational background in mental health or PSO, which made accessing relevant funding and clinical practice guidelines more challenging. Another participant, who had PSO experience but was new to their leadership position, explained that they had to complete significant foundational work to address program weaknesses in order to elevate the program to standards they deemed appropriate. Directors who were given more control by health authorities recounted acting as an intermediary or buffer in the face of high patient-to-clinician ratios and inconsistent support for activities that facilitate quality improvement, e.g., continuing education or quality assurance projects. Participants described that, depending on the extent to which health authorities granted directors control, PSO service directors could address some of the implementation and monitoring barriers commonly arising from the political context. Some directors reported facilitating EBP by protecting time during work hours or offering financial coverage for continuing education and by creating tailored reading recommendations in line with their PSO programming. Participants justified this in saying that these activities not only facilitated evidence-based services but also sparked the curiosity and engagement necessary to sustain the long-term emotional demands of providing PSO services. Some participants explained that, although they wanted to create these opportunities for their clinicians, job descriptions were often too narrow to allow for this, and their autonomy as directors was ultimately limited. Participants who were given greater autonomy by health authorities assumed that it was due to the high performance metrics of their program relative to other programs, whereas those who perceived restricted autonomy thought it was related to the relatively poorer quantitative performance of their program. Certain participants believed that the only way for their PSO programs to be in a position to monitor their own services would be to create new positions whose job descriptions include quality assurance projects, such as program evaluations monitoring the quantity as well as the quality of services, including patient-reported outcomes—which reportedly was not possible in most cases.
3.5.2. Internal Communication
Participants stressed the importance of inter- and multidisciplinary connections to communicate about the implementation of PSO services. This was fostered through protected time for regular meetings and events, such as lunch-and-learns. While the extent of such connection seemed to vary widely, participants explained that this communication helped ensure that patients received the appropriate services, increased the cooperative sharing of evidence-based PSO knowledge, and fostered collaboration on various forms of quality assurance projects and other service monitoring tasks. Participants reported that protected time for internal communication facilitated distress screening, triage and referral, and knowledge of available programming. Several participants mentioned that multidisciplinary meetings helped staff understand each other’s areas of oncology specialization and assign tasks in alignment with the stepped care model. One PSO program had nurses who assisted with psychosocial symptom management. Participants explained that all staff involved in triage and referrals should be aware of available services, of when these services are indicated for a specific patient according to the stepped care model, and of which clinician would be best for a patient’s specific PSO needs. Poor internal communication was associated with issues in implementing distress screening as well as triage and referral. Participants also described being able to tailor therapeutic interventions and provide more holistic PSO care via internal communication, which was reported to have improved the social atmosphere within PSO programs. Some participants said that, when clinicians shared what worked with a particular patient, oncology teams could develop a clearer direction for a patient’s cancer care from a multidisciplinary perspective. One participant expressed: “When you have a multidisciplinary team, you can see and evaluate and get input from different perspectives so that your lens isn’t myopic—it’s broad”. Several participants described that, without this team-based approach, PSO clinicians experienced low morale and diminished sensitivity to patient needs, partly due to a general feeling of disconnection and isolation. Reluctance of individual clinicians regarding collaboration and cooperation was perceived as detrimental to the effective implementation of evidence-based PSO services.
3.5.3. External Affiliations
Participants described that affiliations with teaching hospitals or universities helped improve PSO services, as their program attracted student talent and graduates, was more likely to have formal training protocols in place, and had greater access to research; e.g., the university affiliation ensured clinicians had access to scientific databases and even assistance from librarians for literature searches. While university affiliations could not entirely compensate for the limitations that the political context imposed on cooperative sharing, they did stimulate internal communication about EBP by connecting PSO service providers with specialists higher in scientific literacy who had accessed and interpreted more original research in PSO. Collectively, this was reported to have heightened interest, motivation, and passion for EBP in clinicians. It also increased the likelihood of stakeholder buy-in due to the greater capacity to delineate and make the case for evidence-based PSO care.
Moreover, these affiliations increased research collaboration and quality assurance projects involving client feedback and program evaluation. While most PSO clinicians were reportedly not provided with time to act as primary investigators, they sometimes collaborated on study design or project proposals, or participated in data collection as key stakeholders. Participants shared that this was more likely to be permitted if such collaboration was included as part of clinicians’ job descriptions. Ultimately, it was remarked that protected time for internal cooperation and external collaboration was mutually beneficial in building the PSO scientific database and increasing opportunities for PSO programs and clinicians to maintain an EBP.
3.6. Economic Context
Participants outlined two main funding models: core funding and “per patient” funding. Core-funded programs had consistent budgets that did not vary annually and were generally allocated to create permanent full-time equivalent positions. Nevertheless, participants shared concerns that frequent turnover among health authority figures created barriers: it delayed funding for PSO services, caused funding cuts for quality assurance projects, even for PSO programs embedded in teaching hospitals with research mandates, and led to positions remaining unstaffed for extended durations. In contrast to core-funded programs, PSO programs funded on a “per patient” basis provided annual reports on the number of patients served and the type of medical treatments patients received, which informed funding decisions for the following year. One concern regarding both funding models was rigid allocation conditions restricting the autonomy of PSO program directors in growing their PSO program services. For example, given that funding is allocated almost exclusively for clinicians’ full-time salaries, a director might not be able to fund administrative assistance or quality assurance projects—even if this would facilitate the implementation and monitoring of evidence-based PSO services. Thus, services or activities of PSO programs that were less directly tied to patient volume were difficult to put in place without funding from alternative sources, which tended to lead to temporary solutions based on “soft money”. Such rigidity in funding allocation presented an obstacle for programs trying to expand or refine their services. Participants expressed strong concerns that “per patient” funding models lead to inequitable PSO service access depending on the type of medical treatment patients received and the number of patients serviced by each PSO program. PSO programs with fewer patients were more likely to have funding cuts in subsequent years, which was a detriment to rural providers given that the majority of provincial resources were reportedly aggregated in urban regions. Regarding medical treatments, participants reported that more funding is provided for patients receiving systemic and chemotherapy treatment, less funding is provided for radiation therapy, and virtually no funding is provided for surgical patients, patients receiving hormonal treatments, and patients in the survivorship phase. This is described by one participant, who stated: “[Provincial Cancer Care Organization] give us funds for each patient receiving radiation or systemic therapy. Our program also has patients that only go the surgical route and are not eligible for our services because we don’t get any funding from that activity.
Patients are also only eligible for support up to one year after the end of their treatment. So, once it comes to the survivorship or even the bereavement phases, we don’t tend to get involved with those patients. We just don’t have the capacity”. Another participant shared that funding was unavailable for cancer patients in the survivorship phase: “Our mandate here is to only see patients being treated. So as soon as the patient is done [their medical] treatment—we’re supposed to end our care. But she [lead clinician] finds that that’s really when they need the most help, when they’re supposed to go back to normal, they’re just kind of left to their own, they’re not followed by anyone”. Participants reported that family members and caregivers are allies in offering support to patients and also have PSO needs of their own, which were excluded from “per patient” models of funding. Based on a holistic approach to PSO, participants expressed that family members and caregivers should also have access to PSO services—yet they often do not receive services no matter the funding model. Participants further shared that patients may benefit from support for tangible needs, such as food, accommodation, and transportation. One example of such support was the provision of “comfort funds” for housing family members when long-term hospitalization was required far from home. Participants highlighted that two variable financial resources could be procured: donations and funding accessed through specific requests. Donations from individual donors were universally accepted by PSO programs. Not-for-profit community sites tended to rely entirely on these donations. Participants also described being able to apply for funding through competitive grant applications for programming expansion or for contract positions for specialized services, such as helping underserved populations. Participants were concerned by the potential discontinuation of services that rely solely on this type of funding. PSO programs with more external affiliations reported having more opportunities to apply for grant funding.
3.7. Geographic Context
An additional challenge for EBP, and generally for the delivery of PSO services in rural areas, was the distance between patients and PSO services as well as between clinicians and training opportunities. Higher travel costs and travel times as well as limited public transit options presented challenges to in-person care, while poor internet connection and low digital literacy were described as barriers for telehealth services. Due to having fewer patients, participants reported running fewer group services, which are less costly than individual sessions. Living in rural areas made it difficult for clinicians to remain science-informed and to pursue specialization in PSO, which negatively impacted the availability of specialized services in rural areas. Similarly, participants expressed having to hire applicants who did not meet minimum educational or experience criteria. In addition to these challenges related to sheer physical distance, participants expressed that “per patient” funding models resulted in diminished financial resources. More positions were part-time, which led to dual reporting, where clinicians worked at multiple sites and/or had multiple managers. Participants noted that few resources were available for continuing education, and leadership positions covered vast geographic areas and professional disciplines.
One participant disclosed that being the only oncology director for their entire province made it difficult to adequately support their staff in discipline-specific areas. Collectively, these challenges were described as barriers to offering equitable PSO services in rural areas. Participants also reported efforts to minimize the impact of these barriers to rural PSO services. One participant described that their oncology team consistently connected to coordinate patients’ oncology appointments to occur on the same day. Another participant remarked that the COVID-19 pandemic accelerated their transition to virtual service delivery, which reportedly increased service visibility and the likelihood that patients would access services when the timing was right for them. Telehealth was mentioned as a key solution, enabling patients to access more specialized PSO services not typically available in their area. Similarly, another participant with a rural PSO program asked their clinicians to participate in online grand rounds and co-facilitate virtual groups, which increased access to peer consultation and provided opportunities to further specialize in PSO. In summary, considerate scheduling and facilitative technologies made PSO care more accessible in rural areas.
3.1.1. Initial and Repeated Distress Screening Few topics received as much attention as the implementation of distress screening because of its importance in accurately identifying patients in need of PSO care. One participant noted that patients differ in their expression of distress in saying: “they’re looking stoic and brave but aren’t referred because they’re not crying in the office”. All participants except for one described some variation in initial distress screening during patients’ first visit, with high scores prompting further exploration. The use of diverse screening instruments was mentioned, such as the Patient Health Questionnaire (PHQ-9), the Generalized Anxiety Disorder Scale (GAD-7), the Canadian Problem Checklist from the Canadian Partnership Against Cancer (CPAC), and the Distress Thermometer. Participants expressed that repeated distress screening, which would trigger a PSO referral whenever indicated across the cancer trajectory, was inconsistently and rarely implemented. Some participants detailed discrepancies between their intended operating procedures and the reality of initial distress screening. The participant, who disclosed their program had no formalized distress screening, indicated they missed patients in need while other patients were viewed by more than one clinician. Another participant shared that frontline healthcare workers omitted the initial distress screening when they knew that sufficient follow-up PSO services would not be available. Notably, one participant commented that patients reporting distress higher than two on a ten-point scale should receive follow-up assessment, but follow-up was only happening for patients with scores above seven. They added that, according to certain stakeholders, initial oncology appointments are often so loaded with information that psychosocial needs are the last to be addressed; by that time, patients feel too overwhelmed to engage in psychosocial assessment. 3.1.2. Triage and Referral Systems Participants also highlighted the importance of triage and referral systems to connect cancer patients in distress with appropriate PSO services. Some programs had only the most basic referral system, while others reported high levels of multidisciplinary, departmental, and external cooperation. One participant described triaging according to the stepped care model, where symptom management and group programming are offered first, followed by more individualized care for those for whom this is not effective in managing their distress. It was expressed that programs have a responsibility to (a) have a working knowledge of internal and external resources available to patients as well as their acceptance criteria, and (b) visibly promote available services and ensure patients are informed about available services in a timely manner to maximize service uptake. Some sites emphasized the importance of patients being able to express their treatment preferences. Several challenges to these perceived responsibilities were described. Participants expressed that it is difficult for clinicians to maintain a working knowledge of resources due to the frequently changing availability and acceptance criteria of PSO services. One participant explained that their province-wide fatigue management group was poorly attended due to low awareness of its availability among healthcare providers and patients where it might have otherwise been beneficial since cancer patients commonly suffer from fatigue. 
Moreover, when discussing recent survey findings indicating that patients did not know about services soon enough, one participant questioned: “whether or not patients were told; if they recalled; or if the timing was right”. 3.1.3. Administration and Technology Some participants explained that, for efficiency reasons, their PSO programs created designated positions, such as administrative assistants and resource counsellors. This decreases the burden for each individual clinician and frontline staff to stay up to date with all available internal and external resources, including the respective eligibility criteria. Staff in these roles either helped or completely managed initial and repeated distress screening as well as the triage and referral processes. These roles were also supportive in that they could provide patient orientation, update and organizing service guides with programming options, and opportunities for patients to express treatment preferences before starting a specific service. One participant exemplified the usefulness of such administrative support: “Our program has upwards of about 2500 h of programming we offer a month. So participants are assigned a wellness guide with all the different programs that they can take which will follow them… So, we periodically call in, see how they’re doing, see what they like, what they don’t like, what their needs are… Every month a new calendar comes out and they select their wish list, and a team actually assigns each participant based on their wishes and based on our availability for that month.” An electronic referral system was also emphasized as a key facilitative technology for the implementation of distress screening, whereby patients complete screening tools on a tablet or kiosk. In this PSO program, staff were then automatically notified to follow up with patients based on predetermined cut-off scores. This reportedly led to higher rates of patients receiving PSO care than when screening was conducted manually by oncologists and nurses. Other facilitative technologies included digitizing case management and using a secure online platform to make files accessible to clinicians and patients.
Few topics received as much attention as the implementation of distress screening because of its importance in accurately identifying patients in need of PSO care. One participant noted that patients differ in their expression of distress in saying: “they’re looking stoic and brave but aren’t referred because they’re not crying in the office”. All participants except for one described some variation in initial distress screening during patients’ first visit, with high scores prompting further exploration. The use of diverse screening instruments was mentioned, such as the Patient Health Questionnaire (PHQ-9), the Generalized Anxiety Disorder Scale (GAD-7), the Canadian Problem Checklist from the Canadian Partnership Against Cancer (CPAC), and the Distress Thermometer. Participants expressed that repeated distress screening, which would trigger a PSO referral whenever indicated across the cancer trajectory, was inconsistently and rarely implemented. Some participants detailed discrepancies between their intended operating procedures and the reality of initial distress screening. The participant, who disclosed their program had no formalized distress screening, indicated they missed patients in need while other patients were viewed by more than one clinician. Another participant shared that frontline healthcare workers omitted the initial distress screening when they knew that sufficient follow-up PSO services would not be available. Notably, one participant commented that patients reporting distress higher than two on a ten-point scale should receive follow-up assessment, but follow-up was only happening for patients with scores above seven. They added that, according to certain stakeholders, initial oncology appointments are often so loaded with information that psychosocial needs are the last to be addressed; by that time, patients feel too overwhelmed to engage in psychosocial assessment.
Participants also highlighted the importance of triage and referral systems to connect cancer patients in distress with appropriate PSO services. Some programs had only the most basic referral system, while others reported high levels of multidisciplinary, departmental, and external cooperation. One participant described triaging according to the stepped care model, where symptom management and group programming are offered first, followed by more individualized care for those for whom this is not effective in managing their distress. It was expressed that programs have a responsibility to (a) have a working knowledge of internal and external resources available to patients as well as their acceptance criteria, and (b) visibly promote available services and ensure patients are informed about available services in a timely manner to maximize service uptake. Some sites emphasized the importance of patients being able to express their treatment preferences. Several challenges to these perceived responsibilities were described. Participants expressed that it is difficult for clinicians to maintain a working knowledge of resources due to the frequently changing availability and acceptance criteria of PSO services. One participant explained that their province-wide fatigue management group was poorly attended due to low awareness of its availability among healthcare providers and patients where it might have otherwise been beneficial since cancer patients commonly suffer from fatigue. Moreover, when discussing recent survey findings indicating that patients did not know about services soon enough, one participant questioned: “whether or not patients were told; if they recalled; or if the timing was right”.
Some participants explained that, for efficiency reasons, their PSO programs created designated positions, such as administrative assistants and resource counsellors. This decreases the burden for each individual clinician and frontline staff to stay up to date with all available internal and external resources, including the respective eligibility criteria. Staff in these roles either helped or completely managed initial and repeated distress screening as well as the triage and referral processes. These roles were also supportive in that they could provide patient orientation, update and organizing service guides with programming options, and opportunities for patients to express treatment preferences before starting a specific service. One participant exemplified the usefulness of such administrative support: “Our program has upwards of about 2500 h of programming we offer a month. So participants are assigned a wellness guide with all the different programs that they can take which will follow them… So, we periodically call in, see how they’re doing, see what they like, what they don’t like, what their needs are… Every month a new calendar comes out and they select their wish list, and a team actually assigns each participant based on their wishes and based on our availability for that month.” An electronic referral system was also emphasized as a key facilitative technology for the implementation of distress screening, whereby patients complete screening tools on a tablet or kiosk. In this PSO program, staff were then automatically notified to follow up with patients based on predetermined cut-off scores. This reportedly led to higher rates of patients receiving PSO care than when screening was conducted manually by oncologists and nurses. Other facilitative technologies included digitizing case management and using a secure online platform to make files accessible to clinicians and patients.
3.2.1. Therapeutic Interventions

Some participants described using the stepped care model to guide the implementation of their PSO services, where symptom management and group programming are offered first, followed by more individualized care for those whose distress these steps do not effectively manage. Some also noted that, prior to implementing new group programming, scientific evidence for its efficacy had to be provided to healthcare decision makers external to the PSO program. Finally, participants remarked that offering group programming, and training others to offer it, is a specialized skillset that is not easily replaced: “We have a clinical nurse specialist who resigned—she was the one who did all the training around the physical symptoms and side effects of cancer with women in particular”. Participants described that clinicians used approaches specifically developed and validated in the cancer context, including CALM therapy, dignity therapy, and meaning-centered therapy. They also mentioned other evidence-based approaches commonly used in mental health disciplines, such as cognitive behavioural therapy, mindfulness-based interventions, motivational interviewing, and narrative therapy. Most participants indicated that these services are offered in an integrative fashion, i.e., adapted to the unique needs of each patient.

3.2.2. Credentials and Prior Professional Experience

Participants specified requirements in PSO programs’ hiring processes, such as the expectation that clinicians have pre-existing skillsets related to EBP in PSO. Except for administrative assistants or resource counsellors, clinicians were expected to have a relevant master’s degree and discipline-specific licensure. Moreover, participants commented that clinicians with experience in the healthcare system were viewed more favorably in the hiring process because they likely had transferrable knowledge and skills. They further expressed that experience in oncology-related areas, such as chronic illness or palliative care, helped clinicians to better understand medical treatments in oncology and the unique physical and psychological side effects associated with cancer. As one participant put it: “Offering evidence-based treatment requires that a clinician understands the cancer trajectory and related issues like treatments, side effects, and fatigue.” If clinicians did not arrive with oncology-specific expertise and experience, it was expected that they would develop it gradually.

3.2.3. Onboarding Training Protocols

Onboarding training protocols can be understood as a transition point between clinicians’ prior professional experiences and the need to acquire and maintain knowledge regarding EBP in PSO. Most participants described that their PSO program had at least an informal training protocol intended to guide new staff members and student clinicians in the delivery of services. Informal training sometimes included a shadowing or mentorship period, orientation to PSO care, administrative orientation, and recommended readings. Fewer participants described that their PSO program had a comprehensive and periodically reviewed training protocol to facilitate EBP. These protocols included, for example, ongoing supervisory support, access to online training courses, and opportunities for specialist training. One participant emphasized that their PSO program’s training procedures are informed by synthesized scientific evidence, whereas another participant shared that, as a program director, it is challenging to create training protocols incorporating the latest research evidence: “From a manager’s perspective, I find that there’s no help with developing those protocols”.

3.2.4. Ongoing Access to Research and Training

Participants described that ongoing access to primary research and research summaries was important for the maintenance of EBP in PSO services. Participants remarked that clinicians relied more on synthesized evidence resources than on original research studies when deciding how to implement interventions. Examples of sources at the national level included the clinical practice guidelines from the Canadian Association of Psychosocial Oncology and distance education opportunities from the Interprofessional Psychosocial Oncology Distance Education program, as well as provincial guidelines from organizations such as Cancer Care Ontario. Participants noted two main barriers to remaining science-informed. The first was a lack of access to scientific databases such as those available at university libraries, which one participant circumvented by relying on others’ library access, while expressing considerable concern: “One of the largest barriers for us is that individual clinicians, including me as clinical lead and our manager, no longer have direct access to the university library system—they deleted our library. So, if we want to do searches in the library, you have to by hook or by crook, find somebody who has access. I used to do it with my daughter, but she doesn’t go to university anymore. So, I can’t look up or follow up on something unless I jump through hoops […] Well, you know that’s not good enough. So that’s a big deal. That’s a real barrier”. Other participants highlighted that the maintenance of PSO knowledge requires a considerable amount of time and effort on the part of clinicians, as one participant noted: “You can go to a seminar, but the real challenge is that when you go back to your normal life, where you work—how much time do you have for implementation, follow-up, and support to become more proficient with whatever it is that you’ve learned? Sometimes I think that getting training is half the battle: the other half is maintaining the resolve and protecting the time, finding your own internal motivation and external support to continue to use it”. Participants reported cooperative sharing, which refers to sharing knowledge of EBP within the PSO community through collegial interactions. This fostered clinicians’ maintenance of their EBP knowledge in PSO. Activities exemplifying cooperative sharing included supervision and mentorship, journal clubs and communities of practice, peer and case consultations, specialist presentations via visiting speakers, conferences or symposiums, and research collaboration. While participants felt that advanced scientific literacy was associated with doctoral degrees, it was also expressed that scientific literacy among clinicians could be facilitated through discussions of the merits and limitations of PSO research. Along these lines, the second barrier to consuming research was a universally voiced concern about having too little time for all the activities that would enable clinicians to remain science-informed. This is elaborated further under Political Context.

3.2.5. Clinician Attitude and Specialization

Participants reported that facilitators of EBP included clinicians’ commitment to ongoing learning and self-reflection as well as having chosen to specialize further within PSO, e.g., by disease/tumor site, population attributes, or therapeutic approach. Ongoing learning was said to improve patient care and also to foster clinician wellbeing. As one participant put it: “I don’t think, if you’re just seeing patients hour after hour that sense of openness to new ideas is there. I’ve done best when I see cancer patients and when I’m actively involved in research—I feel it keeps me interested, fresh”. Specialization was universally described as benefiting patients, whose unique needs might be better addressed through tailored care, as well as clinicians, and more broadly the field of PSO. Narrowing one’s clinical focus led to more relevant patient referrals for the clinician, made it easier to keep up with research in a specific area, increased clinicians’ confidence, and also helped to inform research agendas.
Participants described three areas of needed monitoring: at the patient level, whether patients report that PSO services are effective in managing and reducing distress; at the clinician level, which procedures are in place to ensure services are evidence-based; and at the program level, what the overall outcomes of the services provided are at a given PSO site.

3.3.1. Patient Feedback

Participants expressed that patient feedback could be acquired through repeated distress screening, patient satisfaction surveys, verbal feedback, and by welcoming patient advisory groups. One participant noted the importance of explicitly soliciting patient feedback, explaining that, while patients rarely complained about their care, they are not necessarily given the opportunity to share their concerns about what is or is not working. Outcome monitoring via patient feedback was described as instrumental for improving the quality of PSO care, as it informed the science sought out by clinicians to gain additional knowledge. Moreover, when PSO programs were able to gather these data, it also helped justify resource acquisition from health authorities and expansions in programming to increase the quantity of care offered. However, some participants expressed concerns that, even though patient feedback was solicited, little was done with these data, either because the data were not specific to their PSO program and/or because they had too little time to act on them. One participant shared: “[Provincial Cancer Care Organization], requires that patients complete their ESAS [Edmonton Symptom Assessment System] every once and a while. We do get those results back but they’re not specific to the program that you’re in or just for the psychosocial oncology program—we’d be getting results for the [oncology] program as a whole. So, it’s hard to use that data within our program or to evaluate what we’re actually doing”.

3.3.2. Requirements of Licensing Bodies and Performance Reviews

Participants explained that, at the individual clinician level, they needed to trust regulatory colleges to monitor that their members adhere to their respective standards of practice and work within their scope in an evidence-based manner. Colleges handle this largely through continuing education requirements. Certain participants expressed that monitoring at PSO programs occurred through annual audits or performance reviews, which allowed them to substantiate whether clinicians were offering evidence-based services. Participants added that these were useful opportunities to provide constructive feedback to clinicians and to collaboratively set professional goals for upcoming periods.

3.3.3. Program Evaluation

Many participants reported that their PSO program had some kind of program-wide monitoring, even if this was completed informally and infrequently. Most programs collected productivity statistics requested by health authorities, such as patient wait times, the number of patients served, and each clinician’s caseload. Participants universally felt there was a distinct emphasis on quantity of care and explained that the primary reason for not collecting program-wide patient outcome data was that health authorities almost exclusively allocated resources for documenting the number of patients seen. Participants had recommendations for important data on service quality to be gathered (besides the patient-reported outcomes already mentioned under Patient Feedback), e.g., tracking the number of patients who accepted referrals to external resources when internal ones could not be offered and monitoring the uptake of group programming relative to the spaces available in a group.

3.3.4. Research and Quality Assurance Projects

Participants described research and quality assurance projects that included obtaining and reviewing patient feedback and completing program evaluations. They stated that such projects held the potential to maximize the use of existing resources and helped advocate for the expansion of PSO services. Other participants described how being involved in research or quality assurance projects helped improve the effectiveness of services: “We were able to carry out clinical research, very interesting projects, with young adults with cancer with specific symptom management projects around brachytherapy for colorectal cancer. There were papers published based on this data. So, we had research as a way of keeping us at the edge. The science practitioner model is the best way for clinicians to hone their skills because they have to review the literature, they have to know what’s going on—you’re examining interventions and programs and help service delivery models that bring better care to patients”. However, participants explained that, when clinicians were permitted to be involved in research projects, they tended to be collaborators. Some participants explained this was due to time constraints and to not having research mandates in their job descriptions. Participants expressed concern that, even at institutes where research is mandated, the support for such projects was inconsistent: “We’re in a teaching hospital of [a large university] so part of our mandate was to do clinical research. When the ministry changed, research became less of a priority [and stopped being given] real support”. As such, research was most often led by external primary investigators while PSO clinicians were involved as collaborators or as research participants.

Findings regarding research question 2: What are the barriers and facilitators of evidence-based practice in psychosocial oncology services? Barriers and facilitators were described along four different but overlapping contexts: political, social, economic, and geographical. These factors contributed to the unique situation of each PSO program and, by extension, to the way the programs implemented and monitored EBP.
3.4. Political Context

The term health authority was brought up when alluding to or specifically discussing power differentials that occurred between organizations and programs (political context) as well as between directors and clinicians (social context) positioned within a vertical hierarchy. External health authorities included bodies such as the federal Public Health Agency of Canada, provincial Ministries of Health, and provincial cancer care organizations. Internal health authorities included PSO program leaders who oversee clinical services (and research activities, if any) within each PSO program as well as healthcare decision makers within the organization but external to the PSO program, e.g., directors of the department of oncology or institutional managers. These political and social contexts had profound implications for the economic context of each individual PSO program, where health authorities largely controlled both the resources distributed to the PSO program and how those resources were used. Participants explained that funds were passed through vertical hierarchies from Ministries of Health to cancer care organizations, which then distributed funds with budgeting mandates or certain directives to PSO programs. Some program directors were granted an extension of this control over resources, which is discussed further under Social Context.

Participants overwhelmingly shared the belief that health authorities overvalued quantity of care—that is, were more concerned with minimizing costs and maximizing volume—which created barriers for PSO programs and clinicians implementing and monitoring services. Participants reported a relative devaluation of PSO services when compared with medical treatments, noting that the latter generated more revenue and extended patient life. Participants implied that health authorities’ perceptions of the value of mental health services, and of their role in fostering patients’ quality of life, were a political issue. While this may be stating the obvious, one participant expressed that: “For a PSO program to exist, the people, the health authorities, and the government— they have to believe in what we do. If there’s no belief, it’s hard to sustain our services”. Participants said that the devaluation or even systematic disincentivization of PSO services has historically caused high patient-to-clinician ratios and a lack of support for activities related to quality improvement, such as continuing education and service monitoring projects. Given that a clinician’s caseload was also a primary performance metric measured by health authorities, participants stated or implied that taking time for any task other than direct service delivery reduced the time available to see patients and therefore harmed their perceived productivity. As one participant put it, if clinicians are “invested in something other than seeing patients then it affects the numbers. And the [provincial health authority] evaluate us by the number of patients we see”. A theme among participants was that significant responsibility is placed on individual clinicians to be evidence-based, rather than health authorities playing a larger role in making this possible. Consequently, PSO programs varied widely in the extent to which they were able to support clinicians’ continuing education, with those receiving less support expressing the sentiment that it is “almost impossible to get training in any meaningful way”.
One director argued that chronically high workloads make it unrealistic or even impossible for staff to remain current with science during their working hours, let alone to contemplate “what should we be doing” at the program level to increase service quality and quantity. When commenting on their program evaluations, another participant said: “The barriers are mostly structural. It would be great to have our own budget, our own team, and some control over these. That would have helped us to do the program evaluations, which we couldn’t do”. Ultimately, while inherent resource limitations do necessitate a focus on efficiency, the push to “do more with less” was echoed by all participants, who described being underfunded and understaffed despite the enormous and growing need for PSO services.

Participants highlighted the health authorities’ potential to play a greater role in monitoring PSO programs through program evaluations, as well as to provide directors and clinicians with greater autonomy in monitoring their own services. One participant described that their health authority asked their PSO program to choose and apply recommendations from a list, then annually report back on their progress: “[Provincial Cancer Center] came out last year with thirty recommendations considered to be standards of care. We were tasked to take three of those recommendations and apply them.” Another participant spoke of a provincial pilot to fund services based on scientific evidence about patient needs: “Part of our health system reform has been using a new type of funding developed by our provincial health authority. It says, “Okay, how do we implement the quality-based procedures model for radiation therapy?” and then, “What is the appropriate amount of funding for each radiation patient to ensure their psychosocial needs are met?” So, this shifts away from a fee-for-service model and instead says, “We’re going to fund your cancer centre based on what the evidence says about needs and the care that should be delivered”. So, this new model is trying to tie funding to the provision of quality services”. Lastly, some community sites described that a reduced political hierarchy helped them serve patients more quickly. They reported having more flexibility in managing time and finances compared with tertiary care services. This autonomy reportedly reduced bureaucracy and allowed PSO programs and clinicians to respond to patient needs in a timely manner, as explained by one participant: “The fact that we don’t have the red tape of bureaucracy helps us respond a little bit faster. There aren’t many people at our organization—so we can, for example, offer an additional cancer support group when we notice big waitlists”.
3.5. Social Context

3.5.1. Directorial Vision

Participants described that they, as directors, are uniquely positioned to shape the implementation of PSO services, particularly when holding a leadership position over long, uninterrupted periods of time. Consequently, hiring directors specialized in PSO was emphasized because of their strong working knowledge of PSO-specific EBP. However, participants also noted that some PSO leadership positions lacked compensation commensurate with the level of training required for the role. One participant noted that they did not have an educational background in mental health or PSO, which made accessing relevant funding and clinical practice guidelines more challenging. Another participant, who had PSO experience but was new to their leadership position, explained that they had to complete significant foundational work to address program weaknesses in order to elevate the program to standards they deemed appropriate.

Directors who were given more control by health authorities recounted acting as an intermediary or buffer in the face of high patient-to-clinician ratios and inconsistent support for activities that facilitate quality improvement, e.g., continuing education or quality assurance projects. Participants described that, depending on the extent to which health authorities granted directors control, PSO service directors could address some of the implementation and monitoring barriers commonly arising from the political context. Some directors reported facilitating EBP by protecting time during work hours or offering financial coverage for continuing education and by creating tailored reading recommendations in line with their PSO programming. Participants justified this by saying that these activities not only facilitate evidence-based services but also spark the curiosity and engagement necessary to sustain the long-term emotional demands of providing PSO care. Some participants explained that, although they wanted to create these opportunities for their clinicians, job descriptions were often too narrow to allow for this, and their autonomy as directors was ultimately limited. Participants who were given greater autonomy by health authorities assumed that it was due to the high performance metrics of their program relative to other programs, whereas those who perceived restricted autonomy attributed it to the comparatively poorer quantitative performance of their program. Certain participants believed that the only way for their PSO programs to be in a position to monitor their own services would be to create new positions whose job descriptions include quality assurance projects, covering program evaluation in terms of quantity as well as quality of services, including patient-reported outcomes, which reportedly was not possible in most cases.

3.5.2. Internal Communication

Participants stressed the importance of inter- and multidisciplinary connections to communicate about the implementation of PSO services. This was fostered through protected time for regular meetings and events, such as lunch-and-learns. While the extent of such connection seemed to vary widely, participants explained that this communication helped ensure patients received the appropriate services, increased the cooperative sharing of evidence-based PSO knowledge, and fostered collaboration on various forms of quality assurance projects and other service monitoring tasks. Participants reported that protected time for internal communication facilitated distress screening, triage and referral, and knowledge of available programming. Several participants mentioned that multidisciplinary meetings helped staff understand each other’s areas of oncology specialization and assign tasks in alignment with the stepped care model. One PSO program had nurses who assisted with psychosocial symptom management. Participants explained that all staff involved in triage and referrals should be aware of available services, of when these services are indicated for a specific patient according to the stepped care model, and of which clinician would be best suited to a patient’s specific PSO needs. Poor internal communication was associated with problems in implementing distress screening as well as triage and referral. Participants also described being able to tailor therapeutic interventions and provide more holistic PSO care via internal communication, which was reported to have improved the social atmosphere within PSO programs. Some participants said that, when clinicians shared what worked with a particular patient, oncology teams could develop a clearer direction for a patient’s cancer care from a multidisciplinary perspective. One participant expressed: “When you have a multidisciplinary team, you can see and evaluate and get input from different perspectives so that your lens isn’t myopic—it’s broad”. Several participants described that, without this team-based approach, PSO clinicians experienced low morale and diminished sensitivity to patient needs, partly due to a general feeling of disconnection and isolation. Reluctance of individual clinicians regarding collaboration and cooperation was perceived as detrimental to the effective implementation of evidence-based PSO services.

3.5.3. External Affiliations

Participants described that affiliations with teaching hospitals or universities helped improve PSO services: affiliated programs attracted student talent and graduates, were more likely to have formal training protocols in place, and had greater access to research, e.g., the university affiliation ensured clinicians had access to scientific databases and even assistance from librarians for literature searches. While university affiliations could not entirely compensate for the limitations that the political context imposed on cooperative sharing, they did stimulate internal communication about EBP by connecting PSO service providers with specialists higher in scientific literacy who had accessed and interpreted more original research in PSO. Collectively, this was reported to have heightened clinicians’ interest, motivation, and passion for EBP. It also increased the likelihood of stakeholder buy-in, given the greater capacity to delineate and make the case for evidence-based PSO care. Moreover, these affiliations increased research collaboration and quality assurance projects involving client feedback and program evaluation. While most PSO clinicians were reportedly not provided with time to act as primary investigators, they sometimes collaborated on study design or project proposals, or participated in data collection as key stakeholders. Participants shared that this was more likely to be permitted if such collaboration was included in clinicians’ job descriptions. In conclusion, it was remarked that protected time for internal cooperation and external collaboration was mutually beneficial, building the PSO evidence base and increasing opportunities for PSO programs and clinicians to maintain an EBP.
3.6. Economic Context

Participants outlined two main funding models: core funding and “per patient” funding. Core-funded programs had consistent budgets that did not vary annually and were generally allocated to create permanent full-time equivalent positions. Nevertheless, participants shared concerns that frequent turnover among health authority figures created barriers: it delayed funding for PSO services, caused funding cuts for quality assurance projects (even for PSO programs embedded in teaching hospitals with research mandates), and led to positions remaining unstaffed for extended durations. In contrast to core-funded programs, PSO programs funded on a “per patient” basis provided annual reports on the number of patients served and the type of medical treatments patients received, which informed funding decisions for the following year.

One concern regarding both funding models was that rigid allocation conditions restricted the autonomy of PSO program directors in growing their services. For example, given that funding is allocated almost exclusively for clinicians’ full-time salaries, a director might not be able to fund administrative assistance or quality assurance projects, even if this would facilitate the implementation and monitoring of evidence-based PSO services. Thus, services or activities of PSO programs that are more tangential to addressing patient volume were difficult to put in place without funding from alternative sources, which tended to produce temporary solutions based on “soft money”. Such rigidity in funding allocation presents an obstacle for programs trying to expand or refine their services.

Participants expressed strong concerns that “per patient” funding models lead to inequitable access to PSO services depending on the type of medical treatment patients received and the number of patients served by each PSO program. PSO programs with fewer patients were more likely to face funding cuts in subsequent years, to the detriment of rural providers, given that the majority of provincial resources were reportedly aggregated in urban regions. Regarding medical treatments, participants reported that more funding is provided for patients receiving systemic and chemotherapy treatment, less for radiation therapy, and virtually none for surgical patients, patients receiving hormonal treatments, and patients in the survivorship phase. As one participant stated: “[Provincial Cancer Care Organization] give us funds for each patient receiving radiation or systemic therapy. Our program also has patients that only go the surgical route and are not eligible for our services because we don’t get any funding from that activity. Patients are also only eligible for support up to one year after the end of their treatment. So, once it comes to the survivorship or even the bereavement phases, we don’t tend to get involved with those patients. We just don’t have the capacity”. Another participant shared that funding was unavailable for cancer patients in the survivorship phase: “Our mandate here is to only see patients being treated. So as soon as the patient is done [their medical] treatment—we’re supposed to end our care. But she [lead clinician] finds that that’s really when they need the most help, when they’re supposed to go back to normal, they’re just kind of left to their own, they’re not followed by anyone”.
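The following is a minimal sketch of the funding mechanism participants described, assuming invented per-patient dollar rates (the study reports only the qualitative pattern, not actual amounts): two programs with identical total caseloads receive very different budgets once treatment mix is taken into account.

```python
# Hypothetical illustration of why "per patient" funding tied to treatment
# modality produces inequitable PSO budgets. The dollar figures are invented;
# participants described only the pattern (systemic/chemo funded most,
# radiation less, surgery/hormonal/survivorship not at all).

FUNDING_PER_PATIENT = {
    "systemic_or_chemo": 300.0,
    "radiation": 150.0,
    "surgery_only": 0.0,
    "hormonal": 0.0,
    "survivorship": 0.0,
}

def annual_pso_budget(caseload: dict[str, int]) -> float:
    """Sum modality-based allocations over a program's annual caseload."""
    return sum(FUNDING_PER_PATIENT[m] * n for m, n in caseload.items())

# Two programs with the same total caseload (1,000) but different mixes:
urban = {"systemic_or_chemo": 600, "radiation": 200, "surgery_only": 200}
rural = {"systemic_or_chemo": 200, "radiation": 200, "surgery_only": 300,
         "hormonal": 100, "survivorship": 200}

print(annual_pso_budget(urban))  # 210000.0 — despite identical totals,
print(annual_pso_budget(rural))  # 90000.0  — the rural budget is far smaller
```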
Participants reported that family members and caregivers are allies in offering support to patients and also have PSO needs of their own, which were excluded from “per patient” models of funding. Based on a holistic approach to PSO, participants expressed that family members and caregivers should also have access to PSO services, yet they often do not receive services under either funding model. Participants further shared that patients may benefit from support for tangible needs, such as food, accommodation, and transportation. One example of such support was the provision of “comfort funds” for housing family members when long-term hospitalization was required far from home. Participants highlighted two variable sources of funding that could be procured: donations and funding accessed through specific requests. Donations from individual donors were universally accepted by PSO programs. Not-for-profit community sites tended to rely entirely on these donations. Participants also described being able to apply for competitive grants to expand programming or to create contract positions for specialized services, such as helping underserved populations. Participants were concerned about the potential discontinuation of services that rely solely on this type of funding. PSO programs with more external affiliations reported having more opportunities to apply for grant funding.
3.7. Geographical Context

An additional challenge for EBP, and for the delivery of PSO services in rural areas generally, was the distance between patients and PSO services as well as between clinicians and training opportunities. Higher travel costs and travel times, as well as limited public transit options, presented challenges to in-person care, while poor internet connections and low digital literacy were described as barriers to telehealth services. Because they served fewer patients, rural programs reported running fewer group services, which are less costly than individual sessions. Living in rural areas made it difficult for clinicians to remain science-informed and to pursue specialization in PSO, which negatively impacted the availability of specialized services in rural areas. Participants also reported having to hire applicants who did not meet minimum educational or experience criteria. In addition to these challenges related to sheer physical distance, participants expressed that “per patient” funding models resulted in diminished financial resources. More positions were part-time, which led to dual reporting, where clinicians worked at multiple sites and/or had multiple managers. Participants noted that few resources were available for continuing education, and leadership positions covered vast geographic areas and professional disciplines. One participant disclosed that being the only oncology director for their entire province made it difficult to adequately support their staff in discipline-specific areas. Collectively, these challenges were described as barriers to offering equitable PSO services in rural areas.

Participants also reported efforts to minimize the impact of these barriers to rural PSO services. One participant described that their oncology team consistently coordinated patients’ oncology appointments to occur on the same day. Another participant remarked that the COVID-19 pandemic accelerated their transition to virtual service delivery, which reportedly increased service visibility and the likelihood that patients would access services when the timing was right for them. Telehealth was mentioned as a key solution, enabling patients to access more specialized PSO services not typically available in their area. Similarly, another participant with a rural PSO program asked their clinicians to participate in online grand rounds and co-facilitate virtual groups, which increased access to peer consultation and provided opportunities to further specialize in PSO. In summary, considerate scheduling and facilitative technologies made PSO care more accessible in rural areas.
4. Discussion

The goal of the current study was to examine the perspective of Canadian PSO service directors regarding (a) how EBP in PSO is being implemented in supportive cancer care for adults and their families and how service quality is monitored, and (b) the barriers and facilitators of evidence-based PSO services. Service directors identified some evidence-based practices in all areas of their PSO programs, including the screening of patients, intervention delivery, and the monitoring of care. Each participant identified challenges to implementing EBP in PSO services, with insufficient funding and protected time consistently described as major detriments to ensuring that clinicians remain science-informed and that patient needs are met. Each reported various approaches to overcoming these challenges. The major themes identified in this study speak to structural barriers creating a feedback loop that constrains PSO programs’ efforts to provide evidence-based services. For example, program evaluation based solely on the number of clients and wait times results in limited funding. This disincentivizes non-direct service provision activities, such as internal communication and the sharing of EBP knowledge among PSO colleagues, which could maximize the effective use of resources by facilitating the stepped care model. At the same time, barriers such as limited data collection mandates, lack of affiliation with universities, and high patient loads hinder the initiation of research projects and engagement in the generation of knowledge that could transform the terms by which the program is evaluated. The following three sections discuss the evidence-based practices and respective barriers and facilitators for each of the three major themes identified in response to research question 1.

4.1. Screening for Distress and Referral to PSO Services

Participants emphasized that the delivery of evidence-based PSO services depends on the accurate identification of patients in distress and timely referral to appropriate services, which is in line with the existing scientific evidence. The initial and repeated use of distress screening tools is essential for triaging and referring patients to appropriate levels of care at diagnosis, throughout treatment, and during follow-up care. Directors expressed that PSO programs have a responsibility to facilitate an effective referral process by (a) being knowledgeable about available internal and external resources as well as their acceptance criteria and (b) ensuring patients are informed in a timely manner to maximize the uptake of appropriate services. Participants explained that services such as psychoeducation, symptom screening, and group interventions should be prescribed first, whereas more resource-intensive interventions, such as individual sessions, would be offered should the former prove unsuccessful in managing cancer-related psychosocial problems. This is in accordance with the stepped care model, which Smith and Darling (2015) described as a gold standard for directing the efficient delivery of appropriate care, with simpler interventions administered first and more intensive interventions introduced when a good outcome cannot otherwise be achieved. The application of this model to PSO has been endorsed by various stakeholders, including the Canadian Partnership against Cancer, the Canadian Association of Psychosocial Oncology, and Cancer Care Ontario.
There is widespread agreement that all patients with cancer require an assessment of their supportive care needs. It is proposed that for about 20% of patients it will be sufficient to receive only the most basic support, that 30% will need additional support, such as psychoeducation or peer support groups, that professional interventions to manage psychosocial distress and other cancer-related symptoms will be required for another 35% to 40%, and that 10% to 15% of patients will need the most intensive interventions, as presented in . Our research identified inconsistencies in distress screening, cut-off score implementation, patient eligibility based on medical treatment modality, and service availability in rural areas as systemic barriers to the provision of evidence-based PSO care. All participants except for one reported that some form of initial distress screening was part of their PSO program’s operating procedures. However, some participants indicated discrepancies in applying this standard due to the real or perceived lack of available follow-up care . Repeated distress screening was rarely implemented, which is consistent with previously documented concerns regarding inconsistent distress monitoring, whereby patients who met the evidence-based cut-off score during initial screening and patients whose distress levels changed over time were not referred to appropriate services. Frequent changes in the availability of internal and/or external PSO services and in the respective eligibility criteria, as well as low awareness of service availability among certain health providers, resulted in patients’ limited awareness of existing resources. Furthermore, “per patient” funding based on treatment modality led to offering PSO services to patients receiving certain medical treatments (i.e., systemic therapy and chemotherapy), while no services or only limited services could be offered to patients undergoing other medical regimes (i.e., radiation therapy, surgery, hormonal treatments). These funding models also excluded patients in the survivorship phase and families/caregivers from PSO services. The denial of services to certain patient groups, such as those in the survivorship and bereavement stages, is at odds with recent literature highlighting the importance of addressing the psychosocial needs of the growing survivorship cohort and of holistically addressing family units and caregiver contexts . Participants suggested that the discrepancy between best practice recommendations and the clinical reality is at least partially driven by the fact that health authorities regard PSO as a “nonessential dimension of cancer care”, which is supported by prior research reports and affects the funding allocated to PSO services. Our findings align with previously voiced concerns that many patients who would benefit from PSO care are not receiving it . Participants identified consistent screening standards, digital screening procedures, designated administrative positions, and internal communication as facilitators of distress screening and triage. This is in line with three steps previously put forth to facilitate the Canada-wide implementation of repeated distress screening : (a) establish national progress monitoring standards; (b) raise stakeholder awareness of repeated screening as an EBP standard of care; and (c) secure resources for this.
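To make the stepped-care triage logic concrete, here is a minimal sketch in Python that maps a distress screening score to a care level. It is purely illustrative: the 0–10 scale is modelled loosely on distress-thermometer-style tools, and the cut-offs and level labels are hypothetical placeholders rather than values drawn from any guideline or from the programs studied here.

# Hypothetical triage sketch: map a 0-10 distress screening score to a
# stepped-care level. Cut-offs are illustrative placeholders, not guideline values.

def triage(score: int) -> str:
    """Return a stepped-care level for a 0-10 distress score."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score <= 3:
        return "basic support (psychoeducation, self-management materials)"
    if score <= 5:
        return "additional support (peer support or group programming)"
    if score <= 7:
        return "professional intervention (individual PSO sessions)"
    return "intensive intervention (specialized multidisciplinary care)"

for s in (2, 5, 8):
    print(f"score {s}: {triage(s)}")

In a real digital screening system, a rule of this kind would be paired with repeated administration over the illness trajectory and with logging of patients who exceed the cut-off but cannot be offered a service.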
Digital screening procedures can reduce the misidentification of distress by flagging patients in need of services based on evidence-based, pre-determined cut-off scores, such as those reported in the “Pan Canadian Practice Guideline: Screening, Assessment and Care of Psychosocial Distress, Depression, and Anxiety in Adults with Cancer” . Digital screening procedures can also reduce the costs associated with manually administering initial and repeat distress screening tools, gather relevant outcome data on patient distress over time, and document the number of patients above the cut-off score who cannot be offered services . Staff in designated roles, such as administrative positions, resource counsellors, program coordinators, or patient navigators, helped PSO programs acquire knowledge of available programming and resources, increasing the visibility of services and orienting patients to available services based on their individual preferences before matching them with a service. Protected time for internal communication facilitated both implementing evidence-based PSO services and monitoring them. Internal communication helped with service delivery and monitoring by increasing clinician knowledge of available programming (especially in the absence of a person with this designated role), by facilitating the triage of patients to the best available services, and by helping clinicians to assign tasks in accordance with the stepped care model. Collaborative sharing was also said to improve the effective delivery of evidence-based PSO services by increasing clinicians’ involvement with psychosocial symptom management, developing clearer directions for patient care and sensitivity to patient needs, and improving the social atmosphere and clinician morale. Internal communication and telehealth technologies reduced travel for patients at rural sites through better coordination of all oncology services. Rodin (2018) posits that patients’ perceived support from their healthcare team, and from the healthcare system more broadly, can have a protective effect via increased feelings of validation and safety, the regulation of painful emotions, greater patient self-efficacy, and participation in the decision-making process for their care .

4.2. Delivery of Evidence-Based PSO Services

The delivery of evidence-based PSO services is closely interwoven with clinicians’ acquisition and maintenance of the respective expertise. Wide variations in onboarding training protocols may be a barrier to clinicians uniformly acquiring PSO-specific knowledge when joining the team. Since clinicians arrive at PSO programs with various levels of relevant expertise and experience, well-designed and periodically updated onboarding training protocols that respond to differing training needs and are based on the newest research evidence would be a unique opportunity to ensure all team members acquire basic knowledge of EBP in PSO. However, it is unrealistic to place this solely on the shoulders of PSO directors. It would seem a wise investment for health authorities to support the development and delivery of such training opportunities as a joint initiative across provinces. Participants also reported varying abilities to support clinician efforts to maintain their PSO knowledge through professional activities including, but not limited to, supervision and mentorship, journal clubs and communities of practice, peer and case consultations, guest speakers, and external training events.
Participants’ concerns centered on having only minimal and inconsistent health authority support for PSO knowledge acquisition and maintenance activities. Related barriers included a lack of access to research databases, limited and inconsistent support for continuing education, a narrow definition of performance, and restricted job descriptions. Participants further remarked that clinicians relied more on synthesized research summaries, including recommendations for clinical practice, than on original research studies. However, the utility of clinical practice guidelines developed a decade or longer ago is questionable, given that the latest research evidence has not been integrated into them. Provincial as well as federal healthcare decision makers would be well advised to invest in updating high-quality practice guidelines in PSO as an important means of facilitating EBP. Facilitators of clinicians acquiring and maintaining a working knowledge of EBP in PSO were almost entirely related to the social context, including the pursuit of a directorial vision, internal communication, and external affiliations. To the extent that directors were given control, they attempted to provide protected time during working hours and/or financial support for continuing education and for cooperative sharing of EBP within their teams. Directors believed that time for such activities sparked the curiosity and engagement necessary to sustain the long-term emotional demands of providing PSO services. Participants who worked at PSO programs with university affiliations described their onboarding training protocols as comprehensive, evidence-based, and periodically reviewed. They also reported that these affiliations increased clinician connection with specialists (i.e., via visiting speakers, conferences or symposiums, and research collaboration), stimulated discussions about the latest evidence, and heightened clinicians’ interest and motivation in their clinical work. Thus, it seems beneficial for health authorities to recognize that time for collaborative sharing and internal communication is facilitative of EBP in PSO.

4.3. Monitoring of Psychosocial Oncology Services

The American and Canadian Psychological Associations’ definitions of evidence-based practice state that EBP includes the monitoring of the treatments provided, and our study participants stressed the importance of gathering patient-reported outcome data to monitor and improve the quality of PSO interventions rather than focusing solely on quantitative data, such as the number of patients and sessions or average patient wait time. Such a broader perspective is also in line with recommendations from the Canadian Partnership against Cancer , a government-funded, independent organization aiming to facilitate cancer control in Canada. Yet, in many PSO programs, only the collection of the latter data (e.g., number of patients seen) was mandated and financially supported by healthcare decision makers. Key barriers to monitoring outcomes included funding cuts for such projects even at teaching hospitals, a lack of autonomy to gather more diversified types of data, and a general lack of control over allocating resources to technological and administrative facilitators. These are clear obstacles to EBP in PSO, and participants were painfully aware that collecting more pertinent data would be key to demonstrating the value of PSO services and to successful advocacy with financial decision makers.
The lack of data on outcomes and patient needs impedes efforts to justify resource requests for collecting those data and for providing better services that meaningfully address systemic PSO issues. If health authorities want to meet their targets for greater efficiency of PSO services, it seems well advised to move beyond gathering a quite restricted range of quantitative data and towards facilitating the assessment of service quality indicators and acknowledging the value of protected time for focussing on program evaluation and advancement. Technology might already make it possible to extract some additional relevant data at treatment centres that have implemented initial and/or repeated distress screening throughout the illness trajectory. One could, for example, determine the number of patients who should have been offered PSO given a distress score above an evidence-based cut-off but did not receive a service offer because the PSO program was only able to provide services to patients with the highest distress scores. Other valuable data in the service of quality improvement could be gathered by tracking the uptake of group programming relative to the spaces available in a group. Such data have the potential to identify program-specific areas for improvement. It should also be noted that some resources exist for PSO programs to self-evaluate and develop their services in the absence of clear direction from health authorities.

4.4. Limitations

Participants representing thirteen unique PSO sites described three categories of service providers, excluding private practices: PSO programs within tertiary care settings, community agencies, and cancer centres, the last of which have been named “community oncology programs” elsewhere . Because our ethics protocol required transcripts to be anonymized to keep participants’ identities confidential, it was not possible to compare PSO services by site type. Moreover, the wide variation in program structures, each embedded in different political, economic, social, and geographical contexts, means that our conclusions are not indicative of each PSO program’s reality. Certain health authorities have called attention to research challenges associated with the lack of standardized program structures , indicating that a constructive approach is to examine individual PSO services as unique entities within an overarching PSO system. Another limitation is that the extent to which PSO programs implemented lower-intensity interventions first, such as psychoeducation and group programming, seemed to vary from program to program; however, this was not explicitly inquired about. For transferability reasons, future research studies might gather more detailed information about the context of each site so that readers can better assess how the findings are relevant to certain types of PSO programs. Important contextual data might include health authority and directorial reporting structures, as well as the funding models that PSO programs currently have and have had in the recent past. It might also be helpful to examine in more detail PSO program processes for initial and repeated distress screening, triage and referral, the use of digital screening and administrative positions, and whether patients are being offered and are accepting external referrals when PSO programs are unable to offer them services internally.
5. Conclusions

Directors of PSO services across Canada indicated that the PSO system is struggling to meet an enormous and growing demand for psychosocial care for cancer patients and their families. Evidence-based recommendations that would likely see a return on investment at a systemic level—notably a stepped care approach with increased use of electronic screening systems and improved administrative processes for ongoing patient triage—have not yet been widely implemented. Moreover, electronic tools could also facilitate gathering the patient-reported outcome data that are needed to monitor and improve service quality. However, technology alone will not solve the problem of healthcare professionals being expected to deliver evidence-based PSO care to an ever-growing patient population while carrying caseloads that prevent them from learning about and implementing EBP in PSO services. Despite the inherent reality of limited resources, protected time is required for the acquisition and maintenance of evidence-based PSO knowledge. It would also seem beneficial if health authorities at the provincial or federal level pooled resources to support the development and maintenance of training programs that could be used countrywide. At the same time, healthcare professionals need patient–clinician ratios that allow time for continuing education, for knowledge exchange about EBP, for monitoring outcomes, and for collaboratively using those data to improve the quality and quantity of services provided. Ensuring PSO services are evidence-based cannot rest solely on the shoulders of PSO directors but necessitates more support on the part of health authorities. Partnering with researchers would be one avenue towards gathering valuable information and evidence that could be used when advocating for the allocation of sufficient resources to implement EBPs. Granting PSO directors more autonomy may help each individual PSO program meaningfully address unique barriers to evidence-based PSO care by allowing them to decide where money and time are most needed. Fostering connections among members of the PSO team, as well as with other healthcare professionals involved in cancer care at a given institution, could help create a work context better equipped to meet the increasing demands for psychosocial care and a culture within the PSO program that is more resilient to these pressures. Education and advocacy are essential to shift current dichotomous attitudes about quality versus quantity of services towards recognizing their inherent interconnection and interdependence.
The Baron Pasquale Revoltella’s Will in the Forensic Genetics Era
1. Introduction

The recent developments in molecular biology allow analyses that were inconceivable until only a few decades ago. Among these extraordinary advances, scientists from different branches are now able to perform studies on ancient or museum specimens . In forensics, DNA profiling is a routine tool in criminal investigations , and skeletal remains are among the most challenging samples . Autosomal STR (short tandem repeat) typing is the gold standard for individual identification in forensics ; however, in some circumstances other DNA markers can be used. For example, iSNP (identity single nucleotide polymorphism) and InDel (insertion/deletion) typing may be greatly beneficial when analyzing degraded samples [ , , ]. It is also possible, however, that DNA degradation , an unavoidable process in post mortem tissues, can make nuclear genetic testing of limited practical utility, or even inconclusive [ , , ]. In such cases, the analysis of mitochondrial DNA (mtDNA) can be of help . Capillary electrophoresis (CE) analysis of PCR (polymerase chain reaction) products is the gold standard approach in forensics , whereas MPS (massive parallel sequencing) is a promising emerging technology for the typing of low-template degraded samples [ , , ]. Although high-throughput shotgun sequencing and the analysis of genome-wide data have largely replaced current PCR-based methods in ancient DNA (aDNA) analysis [ , , ], there are many aspects that aDNA analysis and forensic DNA analysis have in common; for instance, the use of limited amounts of degraded DNA, the precautions adopted to prevent contamination, and the use of authenticity criteria [ , , , , , , ]. In addition, both disciplines have developed strategies to select the skeletal element that provides, a priori, the highest probability of positive outcomes [ , , , , , , , , , , , , , , , , ]. In the last decade, several genetic studies have been conducted on the skeletal remains of famous figures from the past, such as Nicolaus Copernicus and King Richard III , as well as from lesser-known individuals or mass grave victims of the Spanish Civil War and Second World War . These studies, conducted by interdisciplinary teams of geneticists, archaeologists, anthropologists, and historians, were excellent opportunities to assess the performance of standard and emerging technologies, as well as to solve historical and archaeological questions [ , , , , , , , , ]. In the 19th century, Baron Pasquale Revoltella lived in Trieste, Italy (see for his short biography and other historical details), and he left the following will: “I do not like being buried as I am, I want to be embalmed in the Egyptian manner … and then to be laid in the ready-made sarcophagus in the Crypt of the Church of San Pasquale erected in the park of my country villa. My tomb will be reopened one hundred years after my death, and will be closed after three days of unforgettable celebrations”. Following his will, his body was mummified and laid in the crypt, but neither exhumation nor celebrations were carried out in 1969 (i.e., one hundred years after his death). However, one hundred and forty-two years after his death, his body was exhumed and a single bone element was sampled. In addition, bone remains likely belonging to the Baron’s mother were sampled to confirm his identity. The results of the molecular analyses, which followed forensic protocols, are shown and discussed below.
2. Materials and Methods

2.1. Sample Collection and Precautions to Avoid Contamination

The body of the Baron Revoltella was exhumed from the crypt of the San Pasquale Church on 4 June 2011. The body lay inside a metal coffin. The examination of the body was carried out the same day, and the body showed clear signs of complete artificial mummification. The collection of a single bone sample was allowed. Therefore, after cutting the garments and the bandages, the whole right patella (sample P) was excised (see ). This bone sample was placed in a sterile tube and transferred to the laboratory, where it was processed immediately. After sawing (see ), a few specimens were selected for histological analysis, whereas the remaining portion was stored in a sealed tube at −80 °C until the molecular analyses. In the course of the exhumation procedure, a metal box with the inscription “Domenica Revoltella”, the Baron’s mother’s name, was found in a niche next to the Baron’s grave. The metal box contained bone remains whose anthropological examination revealed a single, incomplete female skeleton belonging to an individual 55–65 years old and approximately 159–161 cm in height. Therefore, it was hypothesized that these skeletal remains belonged to the Baron’s mother, Domenica Privato Revoltella (D.P.R.). For molecular purposes, as no tooth was found, a segment of about ten centimeters was sawn from the diaphysis of each femur: the right (sample RF) and the left (sample LF) (see ). These samples were then transferred into sterile tubes and stored in the dark at room temperature until the time of the molecular analyses. Throughout the procedures, precautions were taken to avoid modern DNA contamination [ , , , , ]. For the elimination database, buccal swabs were obtained, after informed consent, from all personnel involved in these operations as well as in the molecular analyses. All methods, including genetic data storage, were performed in accordance with the guidelines and regulations of the Ethics Committee of the University of Trieste (101/04.12.2019).

2.2. DNA Extraction and Quantification

The extraction procedures were carried out in rooms dedicated solely to the analysis of aged bones, adopting stringent precautions to prevent contamination . DNA from samples P1, LF1, and RF1 was isolated in 2012 as previously described with minimal modifications. Briefly, 0.5 g of the inner (trabecular) part of each bone was crushed in a mortar and decalcified in 5 mL of 0.5 M Na2EDTA at room temperature for 48 h. After centrifugation, the pellet was resuspended in 5 mL of lysis buffer with proteinase K (at a final concentration of 200 µg/mL) and incubated at room temperature for 24 h. After two phenol/chloroform/isoamyl alcohol (25/24/1) purifications and one chloroform/isoamyl alcohol purification, the extract was filtered through a K100 Amicon column. After three washes with water and one with low TE buffer (LTE; 1 mM Tris pH 7.6, 0.1 mM Na2EDTA pH 8.0), 30–35 µL of extract was obtained. Negative extraction controls (NECs) were carried out simultaneously. The extract was aliquoted and stored at −20 °C until use. DNA quantification was carried out using the Quantifiler™ Trio DNA Quantification Kit (Thermo Fisher Scientific, Waltham, MA, USA). As shown in , samples P2, RF2, and LF2 were extracted in 2022 as previously reported (for the femurs, the compact cortical parts were used). Briefly, 0.5 g of the powdered bone was incubated in 0.5 M Na2EDTA at 37 °C overnight.
After centrifugation, the pellet was washed with water and extracted with the EZ1 DNA Investigator Kit (Qiagen, Hilden, Germany) in a final elution volume of 50 µL. A BioRobot EZ1 device (Qiagen) was used. NECs were carried out simultaneously. DNA quantification of samples RF2 and LF2 was performed using the PowerQuant Kit (Promega, Madison, WI, USA). The number of mtDNA molecules was assessed in samples P1, P2, and LF2 using an in-house qPCR method based on the study of Alonso et al. . Accordingly, a 620 bp fragment was used as the standard, and 113 bp and 287 bp fragments of the mitogenome, defined by two custom-ordered primer solutions (Applied Biosystems, Renfrewshire, UK), were quantified using the QuantStudio™ 5 Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA) and Design and Analysis Software v1.5.2 (Thermo Fisher Scientific, Waltham, MA, USA). The longer, 287 bp fragment served to estimate mitogenome degradation: the mtDNA degradation of a given sample was calculated as the ratio between the 113 bp mtDNA copies and the 287 bp mtDNA copies, with values close to 1 indicating well-preserved mtDNA and higher values indicating increasing fragmentation. All samples were quantified in duplicate. In lieu of the 1X TaqMan Universal PCR Master Mix (Applied Biosystems, Renfrewshire, UK), which was used in the initial study but was no longer available, a newer commercially available alternative was used: the TaqMan™ Universal Master Mix II with Uracil-N-glycosylase (Thermo Fisher Scientific, Waltham, MA, USA). The molecular weight of sample P2 was assessed by 1.2% agarose gel electrophoresis in TBE buffer containing EtBr (5 ng/mL).
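To make the degradation estimate concrete, the short Python sketch below computes the degradation index from duplicate qPCR copy-number readings of the two mitogenome targets. The input values are invented for illustration; only the ratio (113 bp copies divided by 287 bp copies) reflects the method described above, and the interpretation comment restates the usual short-versus-long qPCR logic.

def mean(values: list[float]) -> float:
    return sum(values) / len(values)

def degradation_index(copies_113bp: list[float], copies_287bp: list[float]) -> float:
    """Ratio of short-target to long-target mtDNA copy numbers.
    Values near 1 suggest little fragmentation; larger values indicate that
    long templates are under-represented, i.e., more degraded mtDNA."""
    return mean(copies_113bp) / mean(copies_287bp)

# Invented duplicate readings (copies/µL) for one bone extract:
short_target = [15200.0, 14800.0]  # 113 bp amplicon
long_target = [1900.0, 2100.0]     # 287 bp amplicon
print(f"degradation index: {degradation_index(short_target, long_target):.1f}")
# prints: degradation index: 7.5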
2.3. DNA Typing

As shown in , different approaches were carried out for genetic typing. They can be summarized as follows.

2.3.1. Y-STR Typing by PCR-CE

Approximately 0.8 ng of DNA from sample P1 was amplified, in duplicate tests, with 30 PCR cycles using the Yfiler Kit (Applied Biosystems, Renfrewshire, UK). Amplicons were typed by CE analysis on an ABI Prism 310 apparatus (Applied Biosystems, Renfrewshire, UK). NECs and no-template controls (NTCs) were tested simultaneously.

2.3.2. Autosomal STR Typing by PCR-CE

For samples RF2 and LF2, 17.5 µL of DNA solution, corresponding to 129 and 194 pg, respectively, was amplified with 30 PCR cycles using the PowerPlex ESX Kit (Promega, Madison, WI, USA). Five hundred picograms of sample P1 DNA was tested in duplicate (30 PCR cycles). Amplicons were typed by CE analysis on an ABI Prism 310 or a SeqStudio apparatus (Thermo Fisher Scientific, Waltham, MA, USA). NECs and NTCs were analyzed simultaneously.

2.3.3. iSNP Typing by PCR-MPS

The HID-identity panel (allowing the analysis of 90 autosomal iSNPs plus 34 Y-specific SNPs) was used in two different experiments with different techniques. Briefly, libraries from samples LF1, RF1, and P1 were built manually . One nanogram of P1 DNA was amplified in duplicate with 21 PCR cycles, whereas 15 µL of DNA from samples LF1 and RF1 was amplified with 25 PCR cycles. In the second experiment, 15 µL of DNA from samples RF2 and LF2, corresponding to 111 and 166 pg, respectively, was amplified with 27 PCR cycles using the Ion Chef apparatus (Thermo Fisher Scientific, Waltham, MA, USA). Sample RF2 was run in duplicate. Libraries at a concentration of 30 pM were pooled and run on a chip using an Ion apparatus (Thermo Fisher Scientific, Waltham, MA, USA) . NECs and NTCs were analyzed simultaneously. An analytical threshold of 50 reads was applied for locus calls .

2.3.4. mtDNA Analysis

Both PCR-MPS and PCR-CE approaches were performed (see ) for mtDNA typing. For PCR-CE analysis, the mtDNA hypervariable regions I and II (HVR-I and HVR-II) were amplified separately, as previously described , in a final volume of 25 µL, with 35 PCR cycles. To each reaction, 1 U of GoTaq® Flexi DNA Polymerase (Promega, Madison, WI, USA) and 500 pg of DNA (recovered from sample RF2 or P1) were added. The molecular weight of the amplified products was checked by agarose gel electrophoresis. PCR products were then purified using QIAquick PCR NucleoSpin Gel (Qiagen, Hilden, Germany) following the manufacturer’s instructions. Sanger sequencing reactions were carried out using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Renfrewshire, UK) with the primers (forward and reverse) used for the amplification reactions. Unincorporated dye terminators were removed from the reactions using the NucleoSEQ Kit (Macherey-Nagel, Dueren, Germany). Sequences were separated by capillary electrophoresis on a SeqStudio Genetic Analyzer (Thermo Fisher Scientific, Waltham, MA, USA). Raw data were analyzed using the Sequencing Analysis v5.2 software, and the resulting electropherograms were compared with the rCRS (revised Cambridge reference sequence) . The mtDNA haplotypes were then checked for quality parameters in the EMPOP database and for the phylogenetic assignment of the corresponding haplogroup. For PCR-MPS analysis, automated combined library preparation was carried out on an HID Ion Chef™ Instrument (Thermo Fisher Scientific, Waltham, MA, USA) with the Precision ID DL8 Kit™ (Thermo Fisher Scientific, Waltham, MA, USA) and the Precision ID mtDNA Control Region Panel (Thermo Fisher Scientific, Waltham, MA, USA), following the manufacturer’s instructions . Accordingly, for samples LF2, P1, and P2, approximately 10,000 mtDNA molecules were used for each sample. The number of primer pools was 2, the number of PCR cycles was 22, and the anneal and extension times were set to 4 min. Each combined library was quantified in duplicate with the Ion Library TaqMan™ Quantitation Kit (Thermo Fisher Scientific, Waltham, MA, USA) in a QuantStudio™ 5 Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer’s guidelines . Raw data were analyzed with Design and Analysis Software v1.5.2 (Thermo Fisher Scientific, Waltham, MA, USA). Equimolar amounts required for superpooling libraries were calculated as recommended by the manufacturer. Templating was fully automated through the use of an Ion 530™ Chip (Thermo Fisher Scientific, Waltham, MA, USA) in an Ion Chef™ Instrument (Thermo Fisher Scientific, Waltham, MA, USA), with dedicated reagents, namely, the Ion S5™ Precision ID Chef Supplies, the Ion S5™ Precision ID Chef Reagents, and the Ion S5™ Precision ID Chef Solutions (Thermo Fisher Scientific, Waltham, MA, USA), following the manufacturer’s recommendations (Thermo Fisher Scientific, Waltham, MA, USA, 2021). Accordingly, 30 pM of each pool was used for templating. The Ion GeneStudio™ S5 System (Thermo Fisher Scientific, Waltham, MA, USA), together with the Ion S5™ Precision ID Sequencing Reagents and Ion S5™ Precision ID Sequencing Solutions (Thermo Fisher Scientific, Waltham, MA, USA), was used to generate the raw data for mtDNA sequencing. Primary analysis of the raw data, including sequence alignment to the rCRS and variant calling, was performed with the Ion Torrent™ Suite 5.10.1 software (Thermo Fisher Scientific, Waltham, MA, USA) and the HID Genotyper 2.2 and Coverage Analysis (v5.10.0.3) plugins.
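Both workflows ultimately express haplotypes as differences from the rCRS. As a minimal, purely illustrative sketch of that comparison step, the Python snippet below lists substitutions between an already aligned query segment and the corresponding rCRS segment; insertions, deletions, and forensic nomenclature rules are deliberately out of scope, and the sequences shown are toy data, not actual HVR results.

def list_substitutions(query: str, reference: str, start: int) -> list[str]:
    """Report substitutions of an aligned query vs. a reference segment.
    start is the 1-based rCRS position of the first base of the segment;
    variants are returned in a simple <position><base> notation."""
    if len(query) != len(reference):
        raise ValueError("sequences must be aligned and of equal length")
    variants = []
    for offset, (q, r) in enumerate(zip(query.upper(), reference.upper())):
        if q != r and q != "N":  # skip ambiguous base calls
            variants.append(f"{start + offset}{q}")
    return variants

# Toy example (invented bases, not real HVR-I data):
ref = "TACATTACTGCCAGCCACCATG"
qry = "TACATTACCGCCAGCCACCATG"
print(list_substitutions(qry, ref, start=16101))  # -> ['16109C']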
Secondary data analysis was carried out with Converge™ Software v2.3 (Thermo Fisher Scientific, Waltham, MA, USA).

2.4. Histological Examination

To analyze the preservation of the bone tissue, thin sections were cut from the patella, fixed in 10% neutral buffered formalin (formalin/sample ratio 20:1), and decalcified at room temperature with 0.5 M Na2EDTA pH 8.0. After dehydration in an increasing alcohol series and xylene, samples were embedded in paraffin, and 5 μm sections were cut and stained with Hematoxylin–Eosin following standard procedures. Bone tissue was analyzed and classified according to the Oxford histological index (OHI), as previously described . Six stages were defined (from 0 to 5), considering the amount of well-preserved bone tissue and the possible identification of bone features, such as osteons, lamellae, and osteocyte lacunae. Well-preserved bone tissue, comparable to fresh bone, with more than 95% of the bone intact, was classified as “5”, while bone sections with no recognizable features and less than 5% of well-preserved bone tissue were classified as “0”.

2.5. Data Analysis

Consensus methods were adopted for the interpretation of results when duplicate tests were carried out. For STR data, the methods described by Taberlet et al. were used, whereas for SNP data, the method described by Turchi et al. was used. The YHRD database ( https://yhrd.org/ ) was used to analyze the Y-STR profile obtained from the Baron’s sample. For kinship analysis, the Familias software (version 3.2.9) was used. As reference databases, the STR allele frequencies of the Italian population and the SNP allele frequencies of the Caucasian population ( https://www.ncbi.nlm.nih.gov/snp ) were used. For the Y haplogroup prediction of sample P1, the HID SNP Genotyper plugin, as well as the websites http://ytree.morleydna.com and http://phylotree.org/Y/tree/index.htm , were used. To check mtDNA haplotype frequencies and the corresponding haplogroup prediction, the EMPOP database website was used.
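As an illustration of the consensus principle applied to the duplicate typings described above, the sketch below retains only alleles or genotype calls confirmed across replicates. It is a deliberately simplified rule in the spirit of the multiple-tubes approach; the actual acceptance criteria described by Taberlet et al. for STR data and by Turchi et al. for SNP data are more detailed than this.

from collections import Counter

def str_consensus(replicates: list[set[str]], min_obs: int = 2) -> set[str]:
    """Keep alleles observed in at least min_obs replicate typings of one STR locus."""
    counts = Counter(allele for rep in replicates for allele in rep)
    return {allele for allele, n in counts.items() if n >= min_obs}

def snp_consensus(call_1: str | None, call_2: str | None) -> str | None:
    """Simplified duplicate rule for one SNP locus: keep concordant calls only.
    A call is passed as None when the locus fell below the analytical
    threshold (e.g., 50 reads); discordant duplicates are left untyped."""
    if call_1 is None or call_2 is None or call_1 != call_2:
        return None
    return call_1

# Toy examples: an STR locus with allele drop-out, and two SNP loci.
print(str_consensus([{"14", "16"}, {"14"}]))                 # {'14'} ('16' unconfirmed)
print(snp_consensus("CT", "CT"), snp_consensus("CT", "CC"))  # CT None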
The body of the Baron Revoltella was exhumed from the crypt of the San Pasquale Church on 4 June 2011. The body lay inside a metal coffin. The examination of the body was carried out the same day, and the body showed clear signs of complete artificial mummification. The collection of a single bone sample was allowed. Therefore, after cutting the treasures and the bandages, the whole right patella (sample P) was excised (see ). This bone sample was inserted in a sterile tube and transferred to the laboratory where it was processed immediately. After sawing (see ), a few specimens were selected for histological analysis, whereas the remaining portion was stored in a sealed tube at −80 °C until the molecular analyses. In the course of the exhumation procedure, a metal box with the inscription “Domenica Revoltella”, Baron’s mother’s name, was found in a niche next to the Baron grave. The metal box contained bone remains whose anthropological examination revealed the presence of an incomplete unique female skeleton; the skeleton was of a female individual, 55–65 years old, approximately 159–161 cm of height. Therefore, it was hypothesized that these skeletal remains belonged to the Baron’s mother, Domenica Privato Revoltella (D.P.R.). For molecular purposes, as no tooth was found, one segment of about ten centimeters each was sawn from the diaphysis of the right femur (sample RF) and the left femur (sample LF) (see ). These samples were then transferred in sterile tubes and stored in the dark at room temperature until the time of the molecular analyses. Throughout the procedures, precautions were taken to avoid modern DNA contamination [ , , , , ]. For the elimination database, buccal swabs were obtained after informed consent from all personnel involved in these operations, as well in the molecular analyses. All methods, including genetic data storage, were performed in accordance with the guidelines and regulations of the Ethics Committee of the University of Trieste (101/04.12.2019).
The extraction procedures were carried out in rooms dedicated solely to aged bones analysis, and adopting stringent precautions to prevent contamination . DNA from samples P1, LF1, and RF1 were isolated in 2012 as previously described with minimal modifications. Briefly, 0.5 g of the inner (trabecular) part of each bone was crushed in a mortar, and decalcified in 5 mL of 0.5 M Na 2 EDTA at room temperature for 48 h. After centrifugation, the pellet was resuspended in 5 mL of lysis buffer with proteinase K (at a final concentration of 200 µg/mL) and incubated at room temperature for 24 h. After two phenol/chloroform/isoamyl alcohol (25/24/1) purifications, and one chloroform/isoamyl alcohol purification, the extract was filtered through a K100 Amicon column. After three washes with water and one with low TE buffer (LTE; 1 mM Tris pH 7.6, 0.1 mM Na 2 EDTA pH 8.0), 30–35 µL of extract was obtained. Negative extraction controls (NEC) were carried out simultaneously. The extract was aliquoted and stored at −20 °C until use. DNA quantification was carried out by the use of the Quantifiler TM Trio DNA Quantification kit (Thermo Fisher Scientific, Waltham, MA, USA). As shown in , samples P2, RF2, and LF2 were extracted in 2022 as previously reported (for femurs, compact cortical parts were used). Briefly, 0.5 g of the powdered bone was incubated in 0.5 M Na 2 EDTA at 37 °C overnight. After centrifugation, the pellet was washed with water and extracted with the EZ1 DNA Investigator Kit (Qiagen, Hilden, Germany) in a final volume elution of 50 µL. A Biorobot EZ1 device (Qiagen) was used. NECs were carried out simultaneously. DNA quantification of samples RF2 and LF2 was performed using the PowerQuant Kit (Promega, Madison, WI, USA). The number of mtDNA molecules was assessed in samples P1, P2, and LF2 using an in-house qPCR method based on the study of Alonso et al. . Accordingly, a 620 bp long fragment as standard was used, and 113 bp and 287 bp fragments of mitogenome defined by two custom-ordered primer solutions (Applied Biosystems), Renfrewshire, UK) were quantified, using the QuantStudio™ 5 Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA) and Design and Analysis Software v1.5.2 (Thermo Fisher Scientific, Waltham, MA, USA). A longer, 287 bp fragment was also quantified to estimate the mitogenome degradation. The mtDNA degradation was calculated for a given sample by the ratio between the 113 bp mtDNA copies and the 287 bp mtDNA copies. All samples were quantified in duplicate. In lieu of 1X TaqMan Universal PCR Master Mix (Applied Biosystems, Renfrewshire, UK), which was used in the initial study but was not available anymore, a newer commercially available alternative was used: the TaqMan™ Universal Master Mix II with Uracil-N-glycosylase (Thermo Fisher Scientific, Waltham, MA, USA). The molecular weight of sample P2 was assessed by 1.2% agarose gel electrophoresis in TBE buffer containing EtBr (5 ng/mL).
As shown in , different approaches were carried out for genetic typing. They can be summarized as follows. 2.3.1. STR-Y Typing by PCR-CE Approximately 0.8 ng of DNA from sample P1 was amplified, in duplicate tests, with 30 PCR cycles using the Y-filer Kit (Applied Biosystems, Renfrewshire, UK). Amplicons were typed by CE analysis in a 310 ABI Prism apparatus (Applied Biosystems, Renfrewshire, UK). NECs and no template controls (NTCs) were tested simultaneously. 2.3.2. Autosomal STR Typing by PCR-CE For samples RF2 and LF2, 17.5 µL of DNA solution corresponding to 129 and 194 pg, respectively, were amplified with 30 PCR cycles using the PowerPlex ESX Kit (Promega, Madison, WI, USA). Five-hundred picograms of sample P1 DNA were tested in duplicate (30 PCR cycles). Amplicons were typed by CE analysis in a 310 ABI Prism or a SeqStudio apparatus (Thermo Fisher Scientific, Waltham, MA, USA). NEC and NTE were analyzed simultaneously. 2.3.3. iSNP Typing by PCR-MPS The HID-identity panel (allowing the analysis of 90 autosomal iSNPs plus 34 Y-specific SNPs) was used in two different experiments with different techniques. Briefly, libraries from samples LF1, RF1, and P1 were built manually . One nanogram of P1 DNA was amplified in duplicate with 21 PCR cycles, whereas 15 µL of DNA from samples LF1 and RF1 were amplified with 25 PCR cycles. In the second experiment, 15 µL of DNA from samples RF2 and LF2, corresponding to 111 and 166 pg, respectively, were amplified with 27 PCR cycles using the Ion Chef apparatus (Thermo Fisher Scientific, Waltham, MA, USA). Sample RF2 was run in duplicate. Libraries at the concentration of 30 pM were pooled and run in a chip using an Ion apparatus (Thermo Fisher Scientific, Waltham, MA, USA) . NEC and NTE were analyzed simultaneously. The analytical threshold of 50 reads was applied for locus call . 2.3.4. mtDNA Analysis Both PCR-MPS and PCR-CE approaches were performed (see ) for mtDNA typing. For PCR-CE analysis, mtDNA hypervariable regions I and II (HVR-I and HVR-II) were amplified separately, as previously described , in a final volume of 25 µL, with 35 PCR cycles. To each sample, 1 U GoTaq ® Flexi DNA Polymerase (Promega, Madison, WI, USA) and 500 pg DNA recovered from samples RF2 and P1 were added. The molecular weight of the amplified products was checked by agarose gel electrophoresis. PCR products were then purified using QIAquick PCR NucleoSpin Gel (Qiagen, Hilden, Germany) following the manufacturer’s instructions. Sanger sequencing reactions were carried out using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Renfrewshire, UK) with primers (forward and reverse) used for the amplification reactions. Unincorporated dye terminators were removed from the reaction using the NucleoSEQ Kit (Macherey-Nagel, Dueren, Germany). Sequences were separated by capillary electrophoresis on a SeqStudio Genetic Analyzer (Thermo Fisher Scientific, Waltham, MA, USA). Raw data were analyzed using the Sequencing Analysis v.5.2 software, and the resulting electropherograms were compared with the rCRS (revised Cambridge reference sequence) . The mtDNA haplotypes were then checked for quality parameters in the EMPOP database and for the phylogenetic assignment of the corresponding haplogroup. 
For PCR-MPS analysis, automated combined library preparation was carried out on an HID Ion Chef™ Instrument (Thermo Fisher Scientific, Waltham, MA, USA) with the Precision ID DL8 Kit™ (Thermo Fisher Scientific, Waltham, MA, USA) and Precision ID mtDNA Control Region Panel (Thermo Fisher Scientific, Waltham, MA, USA), following the manufacturer’s instructions . Accordingly, for samples LF2, P1, and P2 approximately 10,000 mtDNA molecules were used for each sample. The number of primer pools was 2, the number of PCR cycles was 22, and anneal and extension times were 4. Each combined library was quantified in duplicate with the Ion Library TaqMan™ Quantitation Kit (Thermo Fisher Scientific, Waltham, MA, USA) in a QuantStudio™ 5 Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA) following the manufacturer’s guidelines . Raw data were analyzed with Design and Analysis Software v1.5.2 (Thermo Fisher Scientific, Waltham, MA, USA). Equimolar amounts required for superpooling libraries were calculated as recommended by the manufacturer. Templating was fully automated by the use of an Ion 530™ Chip (Thermo Fisher Scientific, Waltham, MA, USA) in an Ion Chef™ Instrument (Thermo Fisher Scientific, Waltham, MA, USA), with dedicated reagents, namely, the Ion S5™ Precision ID Chef Supplies, the Ion S5™ Precision ID Chef Reagents, and the Ion S5™ Precision ID Chef Solutions (Thermo Fisher Scientific, Waltham, MA, USA), following the manufacturer’s recommendations (Thermo Fisher Scientific, Waltham, MA, USA, 2021). Accordingly, 30 pM of each pool was used for templating. The Ion GeneStudio™ S5 System (Thermo Fisher Scientific, Waltham, MA, USA), together with Ion S5™ Precision ID Sequencing Reagents and Ion S5™ Precision ID Sequencing Solutions (Thermo Fisher Scientific, Waltham, MA, USA), were used to generate raw data for mtDNA sequencing. Primary analysis of raw data, including sequence alignment to rCRS and variant calling, was performed with the Ion Torrent™ Suite 5.10.1 (Thermo Fisher Scientific, Waltham, MA, USA) software and HID Genotyper 2.2 and Coverage Analysis (v5.10.0.3) plugins. Secondary data analysis was carried out with Converge™ Software v2.3 (Thermo Fisher Scientific, Waltham, MA, USA).
To analyze the preservation of the bone tissue, thin sections were cut from the patella, fixed in 10% neutral buffered formalin (formalin/sample ratio 20:1), and decalcified at room temperature with 0.5 M Na2EDTA, pH 8.0. After dehydration in an increasing alcohol series and xylene, samples were embedded in paraffin, and 5 μm sections were cut and stained with Hematoxylin–Eosin following standard procedures. Bone tissue was analyzed and classified according to the Oxford histological index (OHI), as previously described . Six stages were defined (from 0 to 5), considering the amount of well-preserved bone tissue and the possible identification of bone features, such as osteons, lamellae, and osteocyte lacunae. Well-preserved bone tissue, comparable to fresh bone, with more than 95% of intact bone, was classified as "5", while bone sections with no recognizable features and less than 5% of well-preserved bone tissue were classified as "0".
When duplicate tests were carried out, consensus methods were adopted for result interpretation (a sketch of such a replicate-consensus rule is given below). For STR data, the method described by Taberlet et al. was used, whereas for SNP data, the method described by Turchi et al. was used. The YHRD database ( https://yhrd.org/ ) was used to analyze the Y-STR profile obtained from the Baron's sample. For kinship analysis, the Familias software (version 3.2.9) was used. As reference databases, the STR allele frequencies of the Italian population and the SNP allele frequencies of the Caucasian population ( https://www.ncbi.nlm.nih.gov/snp ) were used. For Y haplogroup prediction of sample P1, the HID SNP Genotyper plugin, as well as the websites http://ytree.morleydna.com and http://phylotree.org/Y/tree/index.htm , were used. To check mtDNA haplotype frequencies and the corresponding haplogroup prediction, the EMPOP database was used.
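As a minimal illustration of a Taberlet-style replicate-consensus rule, the following sketch accepts an allele only when it is observed in at least two independent replicates. This is not the authors' code; the function name, the `min_obs` threshold, and the handling of single-allele consensus are our assumptions.

```python
# Minimal sketch of replicate-consensus calling for one STR locus.
from collections import Counter

def consensus_genotype(replicates, min_obs=2):
    """replicates: list of per-replicate genotypes, e.g. [("14","16"), ("14",), ("14","16")].
    An allele is retained only if observed in at least `min_obs` replicates."""
    counts = Counter(a for rep in replicates for a in set(rep))
    confirmed = sorted(a for a, n in counts.items() if n >= min_obs)
    if len(confirmed) == 2:
        return tuple(confirmed)                 # confirmed heterozygote
    if len(confirmed) == 1:
        # single confirmed allele: reported as homozygote, though allelic
        # drop-out in the replicates cannot be fully excluded
        return (confirmed[0], confirmed[0])
    return None                                 # no consensus (or >2 alleles): locus dropped

# Example: three replicates with a drop-out of allele 16 in replicate 2
print(consensus_genotype([("14", "16"), ("14",), ("14", "16")]))  # -> ('14', '16')
```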
3.1. DNA Isolation and Quantification

Quantification of autosomal DNA from the Baron's sample (sample P1) returned high DNA quantities, with a degradation index of 1.8, indicative of minimal degradation (see ). The good preservation of the sample was also supported by agarose gel electrophoresis, as shown in . The results of the quantification of the samples from the Baron's mother are shown in . Three out of four samples revealed detectable levels by qPCR for the short autosomal targets. DNA isolated from the outer part (compact cortical bone tissue) of the two femurs had degradation indices (D.I.) of 11.6 and 10.3, respectively, supporting better preservation of the outer part of the femurs compared to the inner part (trabecular bone tissue). No amplification was obtained for the long targets in DNA samples from the trabecular part of the two bones (samples RF1 and LF1). Mitochondrial DNA copies in samples LF2, P1, and P2 are reported in . It is noteworthy that the number of mtDNA copies/g of tissue in sample P2, which was stored at −80 °C for ten years, was about two orders of magnitude lower than in sample P1 (extracted in 2012). Furthermore, while from sample P1 it was possible to amplify both targets (with a ratio of approximately 17 between the short and the long target), in sample P2 no amplification of the longer target (287 bp) was obtained. Altogether, these data strongly indicate that DNA degradation occurred during the bone sample storage at −80 °C. No mtDNA was detected in the NEC.

3.2. DNA Typing

The overall DNA typing results are summarized in . The detailed results are presented hereafter.

3.2.1. Y-STR Typing by PCR-CE

Results of Y-STR typing in sample P1 are reported in . The duplicate test results confirmed the genetic data obtained from DNA isolated a few weeks after the exhumation. No amplification was obtained from negative controls. The obtained haplotype did not match any entry in the YHRD database (searched among 290,147 haplotypes; accessed on 26 January 2023). Furthermore, no match was found with haplotypes from the male staff members involved in the study. Overall, these results, supported by the high amount of amplifiable DNA obtained from the patella bone, were used as authenticity criteria . The "minimal haplotype" analysis, performed on eight markers across 350,500 unrelated samples, showed 202 matches (mainly in the USA with 22 matches, followed by Spain and Italy).

3.2.2. Autosomal STR Typing by PCR-CE

The PowerPlex ESX Kit was used with 30 PCR cycles. Results of the analysis for samples RF2 and LF2 are reported in (see also ). No amplicon was detected in the negative controls. Genetic typing data were used to generate a consensus profile for 10 out of 16 loci . A full profile was achieved from the Baron's sample in duplicate tests. When the Baron's profile was compared with that of the alleged mother D.P.R., allele sharing was always scored, as shown in . The Familias software returned a likelihood ratio (LR) of maternity of 3409 (corresponding to a posterior probability of maternity of 99.971%).

3.2.3. iSNP Typing by PCR-MPS

Results for samples RF1, LF1, and P1 were previously reported . Briefly, a full and complete profile was achieved from the Baron's sample, while samples RF1 and LF1 (both extracted from the trabecular bone tissue) did not return any result. In contrast, libraries of good quality were obtained from the RF2 and LF2 DNA samples extracted from the compact cortical bone tissue of the two femurs ( for sequencing parameters). Possibly due to the higher number of PCR cycles (27), high marker coverage was obtained, with more than 300,000 mapped reads per library. Genotyping results are reported in . No result was obtained from the NEC. Data from the three independent tests allowed us to generate consensus profiles for 80 out of 90 autosomal identity SNPs. Ten markers did not return reliable results, as shown in , because of stochastic phenomena (locus drop-out and allelic drop-out) scored in the replicates. However, when the consensus profile was compared with the sample P1 genotype, allele sharing was always scored. Furthermore, the Familias software returned a maternity LR of 2678 (corresponding to a posterior probability of 99.962%). When the SNP and STR typing results were computed together, the cumulative LR was 9,129,302 (corresponding to a probability of maternity of 99.9999%). Data from the 34 Y-specific SNP markers of the identity panel (see ) were used to establish the Baron's ancestry. The "Y Haplogroup Prediction" made with the HID SNP Genotyper plugin revealed that the Baron's sample belongs to the R1b (R1b-M343) haplogroup. This prediction was further confirmed by the http://ytree.morleydna.com and Phylo Tree Y ( http://phylotree.org/Y/tree/index.htm ) sites (accessed on 26 January 2023).
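The reported statistics can be cross-checked with the standard LR arithmetic. This is a worked check, not the authors' computation; the flat prior of 0.5 is our assumption, although it is the conventional default in kinship testing.

```python
# Worked check of the cumulative likelihood ratio and posterior probability of maternity,
# assuming independent marker sets and prior odds of 1 (i.e., a flat prior of 0.5).
lr_str, lr_snp = 3409, 2678
lr_combined = lr_str * lr_snp
posterior = lr_combined / (lr_combined + 1)   # posterior probability under prior odds = 1

print(lr_combined)          # 9129302, matching the reported cumulative LR
print(f"{posterior:.6%}")   # ~99.99999%, consistent with the reported 99.9999%
```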
3.2.4. mtDNA Typing

Mitochondrial DNA typing in samples P1, P2, RF2, and LF2 was carried out with two technologies, namely, PCR-CE and PCR-MPS (see ). The hypervariable control regions HVR-I and HVR-II were amplified and sequenced by conventional Sanger sequencing, while for MPS the entire hypervariable control region was investigated through the addition of the HVR-III hypervariable region (see for sequencing parameters). The resulting haplotypes are reported in . Both methods returned the same haplotype for HVR-I and HVR-II in all tested samples. Poly-C stretch length variation at np 309 was observed in all tested samples with both techniques (dominant variant 309.1C), with an additional visible C insertion (309.2C) identified only in samples P1 and RF2 by Sanger sequencing. This additional C insertion was not recorded in the three samples analyzed by MPS. Furthermore, a point heteroplasmy at np 16,168 (16168Y) was detected only in sample LF2 by the MPS technology. These variations are likely due to the different DNA sources (RF and LF for the D.P.R. samples) or to the different technologies and chemistries used . Despite this, the genetic compatibility of the haplotypes supports the maternal relationship between the Baron and D.P.R. To check the frequency of the haplotypes in the population and for haplogroup estimation, the haplotypes were loaded into the EMPOP database . InDels at np 309 were disregarded, and ranges corresponding to the sequenced mitochondrial region were established (Sanger sequencing: 16024-16365, 60-340; MPS technology: 16024-576). Matches were observed in the worldwide and European databases, providing haplotype frequencies between 1 × 10^−2 and 1 × 10^−4. Phylogenetic analyses assigned the samples to haplogroups HV0 (Sanger sequencing data) and V18 (MPS data), both frequent in the European population. In particular, haplogroup V18 is frequent in the Netherlands, Germany, and Italy ( https://www.eupedia.com/europe/Haplogroup_V_mtDNA.shtml , accessed on 13 February 2023). The assignment of these different haplogroups was related to the identification of the additional mutation in the HVR-III region (508G) by the MPS method. Since sequence information from the entire control region is associated with a greater resolution of the phylogenetic tree , the haplotype obtained from the MPS analysis, and therefore haplogroup V18, was considered the final result.
3.3. Histological Examination

The visual examination of the Baron's patella showed good preservation of the bone tissue. This finding was also confirmed by microscopic histological investigation. All sections showed intact bone (more than 95% of the tissue) with well-recognizable bone tissue components, such as bone canals, lamellae, and osteocyte lacunae. The OHI score was five (see ).
In this article, we describe multiple analytical strategies, first developed for forensic purposes, applied to a set of three bone samples collected in 2011. The selection of the bone element for molecular analysis is an important step, as DNA is not preserved equally in skeletal elements from different anatomical regions of the human body [ , , , , , , , ]. The bone element type that offers the highest DNA yield, especially in ancient skeletons, is the temporal bone, in particular the inner ear of the petrous bone [ , , , , , ]. However, the temporal bone is not always available for genetic testing, for reasons of historical interest or for practical or ethical reasons . In such cases, other bones have to be selected for the genetic testing of aged skeletons; long bones (mainly the femur) are preferred for analyses , while metacarpals and other short bone elements provide promising or even better results . Therefore, several bones from the same skeleton should be collected whenever possible [ , , , , , , , , ].

Regardless of the type of bone selected, the intrabone sampling site plays an important role because of the high variability of DNA preservation observed not only between different bones, but also within an individual bone. Large differences between the diaphysis and the epiphysis of long bones have been observed [ , , , , ] and, as shown in the vertebrae, different parts of the same bone yielded variable amounts of DNA, resulting in different STR typing success , which varies according to the ratio between the compact cortical bone tissue and the trabecular bone tissue located in the inner parts of the bones . When exposed to harmful environmental conditions for long periods of time, compact cortical bone tissue and its DNA survive longer than trabecular bone tissue [ , , ]. Thus, because the preservation of DNA in aged skeletons depends on many complex factors, case-by-case strategies need to be implemented carefully.

In addition, several DNA extraction protocols have been developed in the last decade. Most protocols are based on mechanical pulverization of the bone sample, followed by Na2EDTA decalcification and lysis , while some authors suggest omitting the pulverization step, replacing it with a longer (up to five days) decalcification step for large bone fragments . In addition, protocols have been developed to perform decalcification and lysis in a single step . Finally, even the organic DNA purification protocol with phenol/chloroform has been replaced by silica-based or magnetic bead-based procedures .

In the present research, we selected a single bone (the patella) collected from the artificially mummified body of Baron Pasquale Revoltella (1795–1869), as well as two femur segments hypothesized to belong to the Baron's mother, Domenica Privato Revoltella (1775–1830). At the time of the exhumation (2011), these bone elements were considered among the most reliable in terms of DNA recovery and quality for the genetic analyses available at that time. As shown by the visual and histological examinations, the preservation of the Baron's bone was excellent, likely thanks to the mummification procedures coupled with favorable environmental conditions. Although no strict relationship exists between histological preservation and the degradation level of the nucleic acids , high amounts of well-preserved DNA were obtained from the inner (trabecular) part of the patella, as shown by agarose gel electrophoresis and qPCR analysis.
This sample, stored for seven months at −80 °C after the exhumation, confirmed the results of our preliminary Y-STR PCR-CE typing and also yielded a full autosomal STR PCR-CE profile. In addition, a full identity SNP profile was obtained using PCR-MPS. Finally, mtDNA was successfully typed with both PCR-CE and PCR-MPS technologies. Therefore, despite the skeletal remains being 142 years old, the DNA sample showed no degradation or inhibition issues; this provides further evidence that environmental conditions are the major factor in DNA preservation [ , , , , , , ]. The only indication that the sample was ancient was the historical record. However, it is noteworthy that relevant levels of degradation occurred during the 10 years the sample was stored at −80 °C, as indicated both by the decrease in the number of mtDNA molecules and by the lack of amplification of the long qPCR mitochondrial target. This result is in agreement with previous studies, which found that freezing did not eliminate DNA degradation issues .

In addition to the Baron's remains, a metal box that likely contained the remains of the Baron's mother was found in a niche next to the Baron's grave. D.P.R. died in 1830 and, according to the Baron's will, her skeletal remains were transferred from the Municipal Cemetery of Santa Anna (Trieste) to the San Pasquale crypt in 1870. The samples from the trabecular residues of the mother's femurs initially gave no results . In contrast, the compact cortical bone tissue that we analyzed from the same two femurs ten years later provided excellent and reliable genotyping data. In fact, samples RF2 and LF2 yielded a consensus profile for 10 out of 16 autosomal STR markers and 80 out of 90 identity SNP markers, respectively. Finally, mtDNA control region analysis was successfully performed using two different technologies. These results highlight how crucial it is to sample the long bones correctly, because no genotyping data were obtained from the trabecular bone of the two femurs analyzed. Therefore, our results support previous data showing that compact cortical bone is to be preferred in genetic studies [ , , , , , , , , ]. However, we strongly recommend that more than one bone sample, including the temporal [ , , , , , , , ] and the metatarsal bones, be collected when available, both in forensic and archaeological casework. It is likely that the magnetic bead-based protocol we used for DNA purification in 2022 also contributed to the successful outcome.

The availability of genotypic data from the alleged mother–son pair prompted us to perform a kinship analysis. The analysis of the autosomal markers (STRs and identity SNPs) showed a cumulative LR of 9,129,302 (corresponding to a probability of maternity of 99.9999%). Further evidence for the maternal relationship was found through mtDNA control region sequencing, which highlighted the sharing of the same haplotype between the Baron's and D.P.R.'s bone samples. There is no doubt that a mother–son pair was studied (and therefore that the skeletal remains found in the metal box near the Baron's grave belonged to his mother). Finally, the analysis of the haploid SNP markers allowed us to establish that the Baron belongs to the R1b (R1b-M343) haplogroup, which originated in South-East Asia and then spread to Eurasia and the Americas. Even the minimal haplotype, built from eight Y-STR markers, confirmed the Baron's ancestry as Eurasian.
Altogether, these genetic data are in agreement with the historical records in the archives of the Municipality of Venice, where data on his paternal lineage were found back to the grandparents (no data were found on his maternal lineage).
This casework represented a challenging test of forensic protocols on bone samples of historical interest. Essentially, optimization of the DNA extraction procedures, together with the implementation of the instruments in suitable facilities, allowed genetic typing even with the gold-standard PCR-CE technology. It is also true, however, that PCR-MPS technology is an extraordinary tool when large sets of markers need to be analyzed simultaneously, such as the identity SNP panel studied here. Nevertheless, both techniques require duplicate tests, as well as stringent precautions for preventing/identifying exogenous contamination, which can potentially lead to misleading conclusions [ , , , ]. Thus, the results of this study support the finding that the methods commonly used in forensic genetics are also suitable for the analysis of historical remains. The main current limitation of this analysis, however, is the high cost of next-generation sequencing technology.
Applying Unique Molecular Indices with an Extensive All-in-One Forensic SNP Panel for Improved Genotype Accuracy and Sensitivity
One of the key challenges within forensic genetics is to increase sensitivity, enabling detection of the smallest possible amounts of DNA. However, several analysis techniques include a certain level of interfering noise, which can hinder interpretation. To increase sensitivity, one approach is to develop techniques that can distinguish true signals from noise; more specifically, from a forensic genetic perspective, to distinguish true alleles from false variants.

Massively parallel sequencing (MPS) is one technique that has revolutionized the field of forensic genetics and has been shown to be a powerful method for forensic DNA analysis [ , , , ]. However, technical artefacts exist, and when analyzing samples with low template DNA, distinguishing between true and false alleles can be difficult. One of the most common error types arises during the PCR amplification process. Stutter artifacts are well known when analyzing traditional short tandem repeat (STR) markers; these are caused by strand slippage of the DNA polymerase during the amplification process . The stutter phenomenon is not an issue when analyzing single nucleotide polymorphisms (SNPs), due to the lack of repetitive regions in SNP loci. However, there are other amplification issues, such as polymerase base substitution errors , which can result in erroneous PCR products. A base misincorporated early in the cycling process can result in an incorrect genotype interpretation. In addition, the risk of amplification errors increases when analyzing low copy numbers of DNA, mainly due to stochastic effects . Another source of error originates from the sequencing process, where base substitutions can occur .

Unique Molecular Indices (UMIs), also known as unique molecular identifiers or molecular barcodes, were initially introduced as a tool to count the absolute number of molecules [ , , , , ] and were later applied in the field of medical genetics for sensitive detection of cell-free DNA [ , , , , ]. For instance, early detection of circulating tumor DNA is an important strategy for detecting tumor development, determining treatment, and monitoring drug response by quantitative measures of circulating cell-free tumor DNA . A UMI is a short random nucleotide sequence, commonly 8–12 base pairs long. These random sequences can either be incorporated into the sample during an initial PCR or enzymatically ligated prior to amplification . Addition of the UMI sequences enables bioinformatic detection of the original template molecules. Since all reads have a UMI attached, one can distinguish reads that result from PCR amplification (i.e., having the same UMI sequence) from reads that represent the original template molecules (i.e., having unique UMI sequences). Errors resulting from both amplification and sequencing can be present in the final reads; however, by counting the number of unique UMIs instead of all reads, the error rates can be reduced. From all reads with the same UMI sequence, a consensus read (or UMI read) is created. If a variant is present in only some of the reads carrying the same UMI, this variant will be filtered out and considered a false variant. A variant is considered true if most of the reads carrying the same UMI have the variant present, and the specific read-number threshold can be user defined; a minimal sketch of this consensus logic is shown below. One commercially available kit that has incorporated UMI technology is the QIAseq Targeted DNA Custom Panel (Qiagen, Hilden, Germany).
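The UMI consensus principle described above can be illustrated with a short sketch. This is not the CLC Genomics Workbench workflow used in this study; the function name and the two-thirds majority threshold are our assumptions.

```python
# Minimal sketch of UMI-based consensus calling at a single SNP position.
# Reads are grouped by UMI; a base is accepted for a UMI group only if it reaches
# a majority fraction within the group, so isolated PCR/sequencing errors are filtered out.
from collections import Counter, defaultdict

def umi_consensus(reads, majority=2/3):
    """reads: iterable of (umi, base) pairs observed at one SNP position."""
    groups = defaultdict(list)
    for umi, base in reads:
        groups[umi].append(base)
    consensus = Counter()
    for umi, bases in groups.items():
        base, count = Counter(bases).most_common(1)[0]
        if count / len(bases) >= majority:
            consensus[base] += 1        # each UMI group contributes one consensus read
    return consensus

reads = [("AAGT", "C"), ("AAGT", "C"), ("AAGT", "T"),   # 'T' is a PCR/sequencing error
         ("GGTC", "C"), ("GGTC", "C"),
         ("TTAG", "T"), ("TTAG", "T")]
print(umi_consensus(reads))   # Counter({'C': 2, 'T': 1}) -> two C templates, one T template
```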
The QIAseq assay is a multiplexed PCR based on single primer extension technology. The QIAseq kit was primarily developed and evaluated for the detection of circulating DNA with high amounts of DNA (>10 ng) available [ , , ]. Even though UMIs have mainly been used in medical applications, the UMI principle could be applied within forensics as well. Sensitive and accurate detection of low-level DNA variants could potentially be a successful technological improvement for the field of forensic genetics. At present, and as far as we know, only a few studies applying UMIs from a forensic perspective have been conducted [ , , ].

The rapid technological advancements of MPS and increased knowledge about the human genome have enabled additional forensic applications of DNA, such as DNA intelligence. For instance, DNA can be used as an investigative lead to narrow down a list of suspects. The prediction of human appearance from DNA, such as eye, hair, and skin color, has been well described , as has the prediction of biogeographical ancestry . A more recent adoption in the field is investigative genetic genealogy (IGG), which has generated crucial investigative leads for the identification of unknown perpetrators in a number of criminal cases, as well as for the identification of human remains . An extended DNA profile is required for the IGG method, preferably consisting of hundreds of thousands of SNPs, which can be generated, for instance, via high-density SNP microarrays or whole genome sequencing assays. The high-density SNP profile can then be uploaded to a public genealogy database to trace biological relatives of the unknown individual by matching segments of shared DNA [ , , ]. Subsequently, the traditional genealogy investigation hopefully results in a candidate suspect, and traditional forensic methods, such as STR typing, are used to either confirm or reject the candidate. The use of STR markers as a confirmation method is feasible as long as the quality of the sample is high enough to enable generation of a sufficiently informative STR profile; however, some forensic casework samples, such as old bone samples, can be heavily degraded. In such cases, STR typing can result in partial DNA profiles and, in the worst case, insufficient information for identification. Furthermore, if the reference sample is from a distant relative, mainly in identifications of historical human remains, the STR markers can be too few to generate sufficiently high support for any of the tested hypotheses. Due to these limitations of STR typing, a SNP-based approach can be a more appropriate alternative.

Recently, extensive marker panels with thousands of SNPs have been developed for forensic applications [ , , , , ]. The FORensic Capture Enrichment (FORCE) panel is an extensive all-in-one SNP marker set for different forensic applications. The panel consists of carefully selected SNP markers, including ancestry-, phenotype-, identity-, and kinship-informative markers, as well as X- and Y-chromosomal SNPs. The panel can be applied with different enrichment and sequencing methods based on different chemistries. A hybridization capture technique (myBaits, Arbor Biosciences, Ann Arbor, MI, USA) was used and evaluated in the initial FORCE publication . The main aim of this study is to investigate the potential of UMIs in forensic genetics by applying the UMI technology together with the FORCE panel.
We assessed the impact of incorporating UMIs on genotype accuracy and sensitivity by evaluating the observed genotypes with and without taking the UMIs into account. Additionally, we evaluated the overall performance of the FORCE QIAseq assay by analyzing different sample types of forensic relevance, such as mock casework, degraded, and DNA mixture samples. Furthermore, we investigated the potential of this panel for casework-like applications such as kinship analysis, forensic DNA phenotyping, and biogeographical ancestry prediction.
2.1. The FORCE Panel

The FORCE panel can be adopted with several enrichment and sequencing strategies. In this study, all samples were analyzed with a QIAseq Targeted DNA Custom Panel (Qiagen) comprising the FORCE SNPs. All DNA libraries were sequenced on a MiSeq FGx instrument (Verogen, San Diego, CA, USA). For this FORCE QIAseq assay, 5507 SNPs were selected.

2.2. Sample Selection

All samples were handled and analyzed in accordance with the ethical approval by the Swedish Ethical Review Authority (Dnr 2022-06781-01).

2.2.1. Reference Samples

Repeatability, sensitivity, and genotype accuracy were investigated based on three different reference samples. Two of the samples (NA12877 and NA12878) were provided by the Coriell Institute for Medical Research (Camden, NJ, USA), and one was 2800M (Promega, Madison, WI, USA). All three samples were analyzed with 20 ng of DNA as input. NA12877 and 2800M were analyzed in duplicate. The five samples were pooled and sequenced together. A dilution series of NA12877 was prepared with the following DNA input amounts: 10 ng, 1 ng, 0.5 ng, 0.25 ng, 0.125 ng, 0.06 ng, 0.03 ng and 0.015 ng. All eight samples in the dilution series were sequenced together.

2.2.2. Mixture Samples

The two Coriell samples, NA12877 and NA12878, were mixed in four different ratios, 1:1, 1:10, 1:50 and 1:100, with NA12878 as the major contributor. All mixtures were analyzed in duplicate, with 10 ng of DNA as input. The ability to detect a mixture was evaluated by investigating the allele read frequency (ARF) distribution and by calculating the heterozygosity rate . The ARF for each locus was calculated by dividing the read depth of the allele with the most reads by the total read depth at the locus. Density plots of the ARF values for both mixture and single-source samples were plotted in R to illustrate the distribution. Additionally, we evaluated the ability to extract the genotypes of one unknown individual in the mixture, assuming the genotypes of the other individual were known. This could represent a true case with a DNA mixture of victim (known genotypes) and perpetrator (unknown genotypes). This was done for the 1:1 mixture, with a quantitative approach , by removing the read counts for the known contributor, assuming 50% contribution; a sketch of this subtraction is given below. The remaining reads were used to determine the genotypes of the unknown contributor. For those reads, we applied a coverage threshold of 10× and allelic balance thresholds of ≥0.90 for homozygotes and ≤0.55 for heterozygotes for the genotype calling. Subsequently, call rate and accuracy for the extracted genotypes were calculated.
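A minimal sketch of the ARF calculation and the quantitative subtraction of a known contributor follows. The read counts are invented for illustration, the function names are ours, and the equal-contribution assumption mirrors the 1:1 design above.

```python
# ARF and known-contributor subtraction at one biallelic SNP (illustrative counts).
def arf(counts):
    """counts: dict allele -> read depth; returns the frequency of the most-read allele."""
    total = sum(counts.values())
    return max(counts.values()) / total if total else 0.0

def subtract_known(counts, known_genotype, fraction=0.5):
    """Remove the expected reads of a known contributor (assumed `fraction` of the total),
    splitting their contribution equally across their two alleles."""
    total = sum(counts.values())
    expected = total * fraction / len(known_genotype)
    residual = dict(counts)
    for allele in known_genotype:
        residual[allele] = max(0.0, residual[allele] - expected)
    return residual

mixed = {"A": 520, "G": 480}            # 1:1 mixture at one SNP
victim = ("A", "A")                     # known contributor is homozygous A
print(arf(mixed))                       # ~0.52: intermediate ARF, consistent with a mixture
print(subtract_known(mixed, victim))    # {'A': 20.0, 'G': 480} -> residual reads point to a
                                        # G/G unknown under the stated thresholds
```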
2.2.3. Mock Case Samples

One female saliva sample was extracted with a Chelex-based extraction method . Two different amounts of DNA (1 ng and 10 ng) were treated with two known PCR inhibitors, soil (humic substances) and moist snuff, which represent two known inhibitors in Swedish forensic casework samples. The soil solution was prepared by mixing soil with nuclease-free water (20% w/w) and shake-incubating for one hour . Subsequently, 5 µL of the soil solution was added to the initial library preparation step, together with 1 ng or 10 ng of DNA. The moist snuff solution was prepared by leaching snuff bags in 1 mL of nuclease-free water to extract the inhibitors . Then, 0.6 µL of the supernatant was added to the initial library preparation step, with 1 ng or 10 ng of DNA as input. The saliva samples were also analyzed without any inhibitor for comparison. The 10 ng untreated sample was further used as reference when conducting genotype concordance tests with the inhibitor-spiked samples.

2.2.4. Bone and Tissue Samples

Eight human skeletal bone samples were selected and extracted with two different extraction methods: a PrepFiler BTA method on an Automate Express (Thermo Fisher Scientific, Waltham, MA, USA) and a phenol/chloroform-based extraction assay . Additionally, four human tissue samples were selected and extracted with a phenol/chloroform-based extraction method . All bone and tissue samples had previously been analyzed in casework, generating complete STR profiles. All samples were diluted to 1 ng prior to analysis. Six of the bone samples had previously been analyzed with a forensically validated in-house SNP panel consisting of 131 SNPs overlapping with the FORCE panel. Furthermore, six bone samples had been analyzed with the ForenSeq DNA Signature Prep kit, with 167 overlapping SNPs. Thus, concordance rates were calculated between the FORCE QIAseq genotypes and the two additional panels.

2.2.5. Kinship Samples

Kinship-based assessment was performed on blood samples from two different families with known relations, each consisting of the two parents and their three children, giving a total of 10 samples. DNA was extracted, and 1 ng of DNA was used for the library preparation. Based on the observed DNA data from the kinship-informative SNPs (max 3935 SNPs), consistency with Mendelian inheritance patterns was verified, and likelihood ratio (LR) calculations were performed in Familias . Allele frequencies from the SweGen project , consisting of allele frequencies from a Swedish population, were used for the LR calculations. Paternity tests for each of the children were calculated both as trio cases (including known mother, alleged father and child) and as duo cases (including alleged father and child). Additionally, maternity tests in duo cases (alleged mother and child) were performed for all children. To further examine the informativeness in paternity duo cases, 1000 simulations were performed in Familias with the following hypotheses: H1: the alleged father is the biological father of the child; H2: the alleged father is unrelated to the child. The number of genetic inconsistencies was counted when hypothesis H2 was simulated as the true hypothesis (a sketch of such inconsistency counting is given after this subsection). To assess the informative power of the panel for more distant relationships, ranging from second- to fifth-degree relatives, simulations were performed 1000 times for each of the following hypothesis pairs. The simulations were performed in ILIR based on allele frequencies from a Swedish population generated within the SweGen project . Genetic linkage was accounted for using genetic position information from a Rutgers map .

2nd degree relation: half siblings (H1) versus unrelated (H2)
3rd degree relation: first cousins (H1) versus unrelated (H2)
4th degree relation: first cousins once removed (H1) versus unrelated (H2)
5th degree relation: second cousins (H1) versus unrelated (H2)
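The Mendelian inconsistency check used in the duo simulations can be sketched as follows. This is a simplified illustration, not the Familias implementation; the genotype encoding and function name are ours, and mutation is ignored.

```python
# Count Mendelian inconsistencies between an alleged parent and a child across
# biallelic SNPs; genotypes are encoded as allele pairs.
def duo_inconsistencies(parent, child):
    """parent, child: lists of genotypes, e.g. [("A","A"), ("A","G"), ...].
    A duo is inconsistent at a locus when the two share no allele at all, which
    for biallelic SNPs means opposite homozygotes (e.g. A/A vs G/G)."""
    return sum(1 for p, c in zip(parent, child) if not set(p) & set(c))

parent = [("A", "A"), ("A", "G"), ("G", "G")]
child  = [("G", "G"), ("A", "A"), ("A", "G")]
print(duo_inconsistencies(parent, child))  # 1 -> the first locus excludes parentage
```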
2.2.6. Phenotype and Ancestry Predictions

Phenotype and ancestry predictions were performed for two individuals (blood samples) based on the phenotype- and ancestry-informative SNP markers. Eye, hair and skin color predictions were made with the HIrisPlex-S web tool [ , , ], and the FORCE QIAseq generated genotypes were converted to HIrisPlex-S compatible nomenclature. The results were compared with self-reported eye, hair and skin color information for the tested individuals. Biogeographical ancestry predictions were performed using FamLink2 with a naïve Bayes-based approach. Reference samples comprised allele frequencies for the autosomal SNPs from seven meta-populations (African, American, East Asian, European, Middle Eastern, Oceanic and South Asian). The self-reported ancestry for the two individuals was reported as the countries of origin of their grandparents. See .

2.3. Library Preparation

The library preparation was performed with the QIAseq Targeted DNA Custom Panel (Qiagen), consisting of the FORCE SNPs. All samples were analyzed according to the manufacturer's recommendations specified in the protocol . All samples were quantified prior to library preparation using the Qubit 2.0 fluorometer (Thermo Fisher Scientific). The initial step of the library preparation was a multienzymatic reaction consisting of fragmentation, end-repair and A-addition. This was immediately followed by an adaptor ligation step, which included ligation of both the sample-specific index and the UMIs. The adapter-ligated DNA was then cleaned twice with QIAseq magnetic beads. Target enrichment was then performed using single primer extension of the specific targets in a PCR reaction, following the protocol of 6 cycles of 15 s at 98 °C and 15 min at 65 °C. The target enrichment was followed by a second QIAseq magnetic bead-based clean-up and a universal PCR, including ligation of the second sample-specific index. The cycling conditions followed the manufacturer's protocol, and the number of cycles was set to 19. The second PCR was followed by a final QIAseq magnetic bead-based clean-up. The final libraries were then quantified using the Qubit 2.0 fluorometer. The DNA integrity was checked using the High Sensitivity DNA kit on the 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). The samples were diluted to 4 nM based on the quantification and the fragment size distribution of the samples (the molarity conversion is sketched below). Samples were then pooled, denatured and further diluted to 10 pM, which was loaded onto the MiSeq FGx (Verogen) instrument. Additionally, a QIAseq A Read 1 Custom Primer I was loaded according to the manufacturer's protocol, and paired-end 2 × 151 bp sequencing was selected. The number of samples pooled per sequencing run varied from three to eight; this is described in more detail in .
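The dilution to 4 nM relies on the standard conversion from mass concentration to molarity for double-stranded libraries. This is a sketch: the ~660 g/mol per base pair figure is the usual approximation, and the example concentration and fragment size are invented.

```python
# Convert a dsDNA library concentration (ng/µL) to molarity (nM), using the
# standard approximation of ~660 g/mol per base pair.
def library_molarity_nM(conc_ng_per_ul, avg_fragment_bp):
    return conc_ng_per_ul * 1e6 / (avg_fragment_bp * 660)

conc = 1.2      # ng/µL from fluorometric quantification (invented value)
frag = 300      # average fragment size in bp from the Bioanalyzer trace (invented value)
molarity = library_molarity_nM(conc, frag)
print(f"{molarity:.1f} nM")                       # ~6.1 nM
print(f"dilution factor to 4 nM: {molarity/4:.2f}")
```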
2.4. Bioinformatic Analysis with UMI

The bioinformatic workflow was built in the CLC Genomics Workbench V.21.0.3 (Qiagen). All thresholds and settings were set to default. The resulting FASTQ files from the MiSeq FGx were imported into CLC, and the initial step was the Remove and annotate with unique molecular indices tool. The UMI sequences, together with the common sequence, were removed to improve the efficiency and accuracy of the read mapping. The reads were then annotated with the UMI information for further analysis. This was followed by read mapping against hg19 as the reference genome with the Map reads to reference tool. All mapped reads belonging to the same UMI were annotated with a UMI group ID using the Calculate Unique Molecular Index Groups tool. Based on these groups, a single consensus read (UMI read) was created using the Create UMI reads from grouped reads tool. These UMI reads were then aligned to the same position as the original reads. This was followed by Remove ligation artifacts to reduce erroneous reads originating from the adaptor ligation step. Next, the InDels and structural variants tool was used to identify structural variants, relying on information from unaligned ends. This information was then used for a second alignment, the Local realignment, which improves the initial read mapping. Identify known mutations from read mapping was used to identify the reads at the specific SNP positions. The final step was to annotate the identified variants with the UMI information using the Annotate variants with UMI info tool. Final genotype calling was performed in Microsoft Excel with a coverage threshold of 10×. The ARF threshold for homozygotes was set to ≥0.95, and for heterozygotes to ≤0.80. The quality score threshold was set to ≥15.

2.5. Bioinformatic Analysis without UMI

One approach to evaluate the power of UMIs is to analyze the same sequencing data without taking the UMI information into consideration. We analyzed the same sequencing data by counting the total number of reads, including the PCR duplicates, which is the traditional bioinformatic workflow for evaluating MPS data, thus ignoring the UMI information. This was done in CLC Genomics Workbench V.21.0.3 by importing the same FASTQ files as above. The first step was the Remove and annotate UMI information tool, used to remove the UMIs and thereby improve the read mapping. Secondly, the reads were mapped to the reference genome (hg19) with the Map reads to reference tool. This was followed by the InDels and structural variants tool to identify structural insertions and deletions from the mapping. Next, a Local realignment was performed to further improve the read mapping, and finally Identify known mutations from read mapping was used to identify the reads at each specific locus. The resulting read counts were then analyzed in Microsoft Excel for genotype calling; a sketch of the threshold logic is given below. This approach was applied to the dilution series of the Coriell sample NA12877. The ARF and quality score thresholds for the non-UMI data were the same as for the UMI approach above. However, the coverage thresholds for the non-UMI data varied and were set with two different approaches. Firstly, the coverage threshold for the UMI data was set to 10×, and the non-UMI coverage threshold was set so that the call rates between the two data sets were similar (i.e., increasing the coverage threshold for the non-UMI data); from this, error rates were compared between the UMI and non-UMI data. Secondly, the coverage threshold for the non-UMI data was adjusted so that the error rates were similar with and without UMI information; from this, the call rates between the UMI and non-UMI data sets were compared. The hypothesis was that the use of UMIs increases sensitivity and genotype accuracy. This implies that, if the call rates are similar, the error rates would be lower in the UMI data, and if the error rates are similar, the call rate would be higher in the UMI data.
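The genotype-calling rule with the coverage and allelic-balance thresholds stated above can be sketched as follows. The thresholds mirror those reported in Section 2.4; the function name and input encoding are ours, and the quality-score filter is omitted for brevity.

```python
# Sketch of genotype calling from per-locus read (or UMI read) counts at biallelic SNPs,
# applying a minimum coverage and the ARF thresholds stated above.
def call_genotype(counts, min_cov=10, hom_arf=0.95, het_arf=0.80):
    """counts: dict allele -> read depth at one SNP.
    Returns a genotype tuple, or None when the locus is inconclusive/dropped."""
    total = sum(counts.values())
    if total < min_cov:
        return None                               # insufficient coverage
    alleles = sorted(counts, key=counts.get, reverse=True)
    arf = counts[alleles[0]] / total
    if arf >= hom_arf:
        return (alleles[0], alleles[0])           # homozygote call
    if len(alleles) > 1 and arf <= het_arf:
        return (alleles[0], alleles[1])           # balanced -> heterozygote call
    return None                                   # imbalance zone: no call

print(call_genotype({"A": 96, "G": 4}))    # ('A', 'A')
print(call_genotype({"A": 55, "G": 45}))   # ('A', 'G')
print(call_genotype({"A": 88, "G": 12}))   # None -> ARF 0.88 falls between thresholds
print(call_genotype({"A": 5, "G": 3}))     # None -> below 10x coverage
```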
The FORCE panel can be adopted with several enrichment and sequencing strategies. In this study, all samples were analyzed with a QIAseq Targeted DNA Custom Panel (Qiagen) comprising the FORCE SNPs. All DNA libraries were sequenced on a MiSeq FGx instrument (Verogen, San Diego, CA, USA). For this FORCE QIAseq assay, 5507 SNPs were selected.
All samples were handled and analyzed in accordance with the ethical approval by the Swedish Ethical Review Authority (Dnr 2022-06781-01).

2.2.1. Reference Samples

Repeatability, sensitivity and genotype accuracy were investigated based on three different reference samples. Two of the samples (NA12877 and NA12878) were provided by the Coriell Institute for Medical Research (Camden, NJ, USA), and one was 2800M (Promega, Madison, WI, USA). All three samples were analyzed with 20 ng of DNA as input. NA12877 and 2800M were analyzed in duplicate. The five samples were pooled and sequenced together. A dilution series of NA12877 was prepared with the following input amounts of DNA: 10 ng, 1 ng, 0.5 ng, 0.25 ng, 0.125 ng, 0.06 ng, 0.03 ng and 0.015 ng. All eight samples in the dilution series were sequenced together.

2.2.2. Mixture Samples

The two Coriell samples, NA12877 and NA12878, were mixed in four different ratios, 1:1, 1:10, 1:50 and 1:100, with NA12878 as the major contributor. All mixtures were analyzed in duplicate, with 10 ng DNA as input amount. The ability to detect a mixture was evaluated by investigating the allele read frequency (ARF) distribution and by calculating the heterozygosity rate . The ARF for each locus was calculated by dividing the read depth of the allele with the most reads by the total number of reads. Density plots of the ARF values for both mixture and single-source samples were plotted in R to illustrate the distribution (see the sketch below). Additionally, we evaluated the ability to extract the genotypes of one unknown individual in the mixture, assuming the genotypes of the other individual were known. This could represent a true case with a DNA mixture of victim (known genotypes) and perpetrator (unknown genotypes). This was done for the 1:1 mixture, with a quantitative approach , by removing the read counts for the known contributor, assuming 50% contribution. The remaining reads were used to determine the genotypes of the unknown contributor. For those reads, we applied a coverage threshold of 10× and an allelic balance threshold for homozygotes of ≥0.90 and for heterozygotes of ≤0.55 for the genotype calling. Subsequently, call rate and accuracy for the extracted genotypes were calculated.
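To make these calculations concrete, the following is a minimal R sketch of the ARF and heterozygosity-rate computations (the density plots in this study were produced in R, though not necessarily with this exact code). The data frame `loci` and its column names are hypothetical stand-ins for per-locus read counts exported from the bioinformatic workflow.

```r
# Hypothetical input: one row per locus with read counts for the two alleles.
compute_arf <- function(loci) {
  major <- pmax(loci$reads_allele1, loci$reads_allele2)  # allele with most reads
  total <- loci$reads_allele1 + loci$reads_allele2
  major / total                                          # ARF per locus
}

# Heterozygosity rate: fraction of loci whose ARF is at or below the
# heterozygote threshold (0.80 is the single-source calling threshold used
# later in this study; other choices are possible).
heterozygosity_rate <- function(loci) {
  mean(compute_arf(loci) <= 0.80, na.rm = TRUE)
}

# Density plot of the ARF values, used to compare a mixture against a
# single-source sample (base R graphics).
plot_arf_density <- function(loci, main = "ARF distribution") {
  plot(density(compute_arf(loci), na.rm = TRUE), main = main, xlab = "ARF")
}
```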
2.2.3. Mock Case Samples

One female saliva sample was extracted with a Chelex-based extraction method . Two different amounts of DNA (1 ng and 10 ng) were treated with two known PCR inhibitors, soil (humic substances) and moist snuff, which represent two known inhibitors in Swedish forensic casework samples. The soil solution was prepared by mixing soil with nuclease-free water (20% w/w) and shake-incubating for one hour . Subsequently, 5 µL of the soil solution was added to the initial library preparation step, together with 1 ng and 10 ng DNA. The moist snuff solution was prepared by leaching snuff bags in 1 mL nuclease-free water to extract the inhibitors . Then, 0.6 µL of the supernatant was added to the initial library preparation step, with 1 ng and 10 ng DNA as input. The saliva samples were also analyzed without any inhibitor for comparison. The 10 ng untreated sample was further used as reference when conducting genotype concordance tests with the inhibitor-spiked samples.

2.2.4. Bone and Tissue Samples

Eight human skeletal bone samples were selected and extracted with two different extraction methods: a PrepFiler BTA method with Automate Express (Thermo Fisher Scientific, Waltham, MA, USA) and a phenol/chloroform-based extraction assay . Additionally, four human tissue samples were selected and extracted with a phenol/chloroform-based extraction method . All bone and tissue samples had previously been analyzed in casework, generating complete STR profiles. All samples were diluted to 1 ng prior to analysis. Six of the bone samples had previously been analyzed with a forensically validated in-house SNP panel consisting of 131 SNPs overlapping with the FORCE panel. Furthermore, six bone samples had been analyzed with the ForenSeq DNA Signature Prep kit with 167 overlapping SNPs. Thus, concordance rates were calculated between FORCE QIAseq genotypes and the two additional panels.

2.2.5. Kinship Samples

Kinship-based assessment was performed based on blood samples from two different families with known relations, each consisting of the two parents and their three children, giving a total of 10 samples. DNA was extracted, and 1 ng of DNA was used for the library preparation. Based on the observed DNA data from the kinship informative SNPs (max 3935 SNPs), consistency with Mendelian inheritance patterns was verified, and likelihood ratio (LR) calculations were performed in Familias (a worked single-SNP example is sketched after the list below). Allele frequencies from the SweGen project , consisting of allele frequencies of a Swedish population, were used for the LR calculations. Paternity tests for each of the children were calculated in both trio (including known mother, alleged father and child) and duo cases (including alleged father and child). Additionally, maternity tests in duo cases (alleged mother and child) were also performed for all children.

To further examine the informativeness in paternity duo cases, 1000 simulations were performed in Familias with the following hypotheses: H1: the alleged father is the biological father of the child; H2: the alleged father is unrelated to the child. The number of genetic inconsistencies was counted when hypothesis H2 was simulated as the true hypothesis. To assess the informative power of the panel for more distant relationships, ranging from second- to fifth-degree relatives, 1000 simulations were performed for each of the following hypotheses. The simulations were performed in ILIR based on allele frequencies from a Swedish population generated from the SweGen project . Genetic linkage was accounted for using genetic position information from a Rutgers map .

2nd degree relation: Half siblings (H1) versus unrelated (H2)
3rd degree relation: First cousins (H1) versus unrelated (H2)
4th degree relation: First cousins once removed (H1) versus unrelated (H2)
5th degree relation: Second cousins (H1) versus unrelated (H2)
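To illustrate the type of per-marker calculation that Familias aggregates across the kinship informative SNPs, the following R sketch computes a duo paternity LR for a single biallelic SNP using standard likelihood formulas under Hardy-Weinberg equilibrium. The function name and the genotype coding (number of copies of allele A) are ours, not Familias'.

```r
# LR for H1 (alleged father is the father) versus H2 (unrelated man),
# for one biallelic SNP; p is the population frequency of allele A.
duo_paternity_lr <- function(gt_father, gt_child, p) {
  q <- 1 - p
  # Probability that the alleged father transmits allele A, by genotype
  t_f <- c(`0` = 0, `1` = 0.5, `2` = 1)[as.character(gt_father)]
  # P(child genotype | father transmits, mother treated as a random woman)
  num <- switch(as.character(gt_child),
                `0` = (1 - t_f) * q,
                `1` = t_f * q + (1 - t_f) * p,
                `2` = t_f * p)
  # P(child genotype) under Hardy-Weinberg for an unrelated man
  den <- switch(as.character(gt_child), `0` = q^2, `1` = 2 * p * q, `2` = p^2)
  unname(num / den)
}

# Example: father AA, child heterozygous, p = 0.4 -> LR = 0.6 / 0.48 = 1.25.
# Summing log10(LR) over thousands of SNPs yields totals such as 10^263.
duo_paternity_lr(2, 1, 0.4)
```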
2.2.6. Phenotype and Ancestry Predictions

Phenotype and ancestry predictions were performed for two individuals (blood samples) based on the phenotype and ancestry informative SNP markers. Eye, hair and skin color predictions were done with the HIrisPlex-S web tool [ , , ], and FORCE QIAseq generated genotypes were converted to HIrisPlex-S compatible nomenclature. The results were compared with self-reported eye, hair and skin color information for the tested individuals. Biogeographical ancestry predictions were performed using FamLink2 with a naïve Bayes-based approach. Reference samples comprised allele frequencies for the autosomal SNPs from seven meta populations (African, American, East Asian, European, Middle Eastern, Oceanic and South Asian). The self-reported ancestry for the two individuals was reported as country of origin for their grandparents. See .
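A naïve Bayes ancestry classifier of this kind can be sketched as follows: under the simplifying assumptions of independent loci, Hardy-Weinberg genotype probabilities and a uniform prior, the posterior for each meta population is proportional to the product of per-locus genotype likelihoods. The frequency matrix `freqs` and the genotype coding below are hypothetical, and FamLink2's actual implementation may differ in its details.

```r
# gt: profile coded as copies of allele A (0/1/2, NA allowed) per locus.
# freqs: matrix of allele-A frequencies, loci in rows, populations in columns.
ancestry_posterior <- function(gt, freqs, eps = 1e-6) {
  p <- pmin(pmax(freqs, eps), 1 - eps)      # guard against log(0) at fixed loci
  loglik <- sapply(seq_len(ncol(p)), function(k) {
    pk <- p[, k]
    sum(ifelse(gt == 2, 2 * log(pk),
        ifelse(gt == 1, log(2) + log(pk) + log(1 - pk),
               2 * log(1 - pk))), na.rm = TRUE)  # NA genotypes are skipped
  })
  post <- exp(loglik - max(loglik))         # normalise in log space
  setNames(post / sum(post), colnames(p))
}
```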
2.3. Library Preparation

The library preparation was performed with the QIAseq Targeted DNA Custom Panel (Qiagen), consisting of the FORCE SNPs. All samples were analyzed according to the manufacturer's recommendations specified in the protocol . All samples were quantified prior to library preparation using the Qubit 2.0 fluorometer (Thermo Fisher Scientific). The initial step of the library preparation was a multienzymatic reaction consisting of fragmentation, end-repair and A-addition. This was immediately followed by an adaptor ligation step, which included ligation of both the sample-specific index and the UMIs. The adaptor-ligated DNA was then cleaned twice with QIAseq magnetic beads. Target enrichment was then performed using single primer extension of the specific targets in a PCR reaction, according to the protocol of 6 cycles of 15 s at 98 °C and 15 min at 65 °C. The target enrichment was followed by a second QIAseq magnetic bead-based clean-up and a universal PCR, including ligation of the second sample-specific index. The cycling conditions followed the manufacturer's protocol, and the number of cycles was set to 19. The second PCR was followed by a final QIAseq magnetic bead-based clean-up.

The final libraries were then quantified using the Qubit 2.0 fluorometer. The DNA integrity was checked using the High Sensitivity DNA kit on the 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). The samples were diluted to 4 nM based on the quantification and fragment size distribution of the samples. Samples were then pooled, denatured and further diluted to 10 pM, which was loaded onto the MiSeq FGx (Verogen) instrument. Additionally, a QIAseq A Read 1 Custom Primer I was loaded according to the manufacturer's protocol, and paired-end 2 × 151 bp sequencing was selected. The number of samples pooled per sequencing run varied from three to eight; this is described in more detail in .
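The dilution to 4 nM relies on the standard conversion from mass concentration and mean fragment length to molarity (660 g/mol per base pair); a small R helper illustrates the arithmetic. The example values are made up and not taken from this study.

```r
# Convert a library concentration (ng/uL) and mean fragment size (bp) to nM.
library_molarity_nM <- function(conc_ng_per_ul, mean_fragment_bp) {
  conc_ng_per_ul / (660 * mean_fragment_bp) * 1e6
}

# Example: 12 ng/uL at 450 bp is ~40.4 nM, i.e., roughly a 1:10 dilution
# to reach the 4 nM pooling concentration.
library_molarity_nM(12, 450)
```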
2.4. Bioinformatic Analysis with UMI

The bioinformatic workflow was built in the CLC Genomics Workbench V.21.0.3 (Qiagen). All thresholds and settings were set to default. The resulting FASTQ files from the MiSeq FGx were imported into CLC, and the initial step was the Remove and annotate with unique molecular indices tool. The UMI sequences, together with the common sequence, were removed to improve the efficiency and accuracy of the read mapping. The reads were then annotated with the UMI information for further analysis. This was followed by a read mapping with hg19 as the reference genome with the Map reads to reference tool. All the mapped reads that belonged to the same UMI were annotated with a UMI group ID with the Calculate Unique Molecular Index Groups tool. Based on these groups, a single consensus read (UMI read) was created using the Create UMI reads from grouped reads tool. These UMI reads were then aligned to the same position as the original reads. This was followed by Remove ligation artifacts to reduce erroneous reads originating from the adaptor ligation step. Next, the InDels and structural variants tool was used to identify structural variants, relying on information from unaligned ends. This information was then used for a second alignment, the Local realignment, which improves the initial read mapping. Identify known mutations from read mapping was used to identify the reads at the specific SNP positions. The final step was to annotate the identified variants with the UMI information using the Annotate variants with UMI info tool.

Final genotype calling was performed in Microsoft Excel with a defined coverage threshold of 10×. The ARF threshold for homozygotes was set to ≥0.95, and for heterozygotes to ≤0.80. The quality score threshold was set to ≥15.
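The final Excel-based calling step amounts to a simple per-locus rule. The R sketch below applies the thresholds stated above to one locus; the argument names are ours, and loci with an ARF between the two thresholds are left uncalled as imbalanced.

```r
# Genotype calling with the thresholds used in this workflow:
# coverage >= 10x, quality >= 15, ARF >= 0.95 (homozygote), <= 0.80 (heterozygote).
call_genotype <- function(cov, qual, arf, allele_major, allele_minor) {
  if (is.na(cov) || cov < 10 || qual < 15) return(NA_character_)  # no call
  if (arf >= 0.95) return(paste0(allele_major, allele_major))     # homozygote
  if (arf <= 0.80) return(paste0(allele_major, allele_minor))     # heterozygote
  NA_character_                                                   # imbalanced
}

call_genotype(cov = 25, qual = 30, arf = 0.52, "A", "G")  # returns "AG"
```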
2.5. Bioinformatic Analysis without UMI

One approach to evaluate the power of UMI is to analyze the same sequencing data without taking the UMI information into consideration. We analyzed the same sequencing data by counting the total number of reads, including the PCR duplicates, which is the traditional bioinformatic workflow when evaluating MPS data, thus ignoring the UMI information. This was done in CLC Genomics Workbench V.21.0.3 by importing the same FASTQ files as above. The first step was to use the Remove and annotate UMI information tool to remove the UMI and thereby improve the read mapping. Secondly, the reads were mapped to the reference genome (hg19) with the Map reads to reference tool. This was followed by the InDels and structural variants tool to identify structural insertions and deletions from the mapping. Next, a Local realignment was performed to further improve the read mapping, and finally Identify known mutations from read mapping was used to identify the reads at each specific locus. The resulting read counts were then analyzed in Microsoft Excel for genotype calling. This approach was applied to the dilution series of the Coriell sample NA12877.

The ARF and quality score thresholds for the non-UMI data were the same as for the UMI approach above. However, the coverage thresholds for the non-UMI data varied and were set with two different approaches. Firstly, the coverage threshold for the UMI data was set to 10×, and the non-UMI coverage threshold was set so that the call rates between the two data sets were similar (i.e., by increasing the coverage threshold for the non-UMI data). From this, error rates were compared between the UMI and non-UMI data. Secondly, the coverage threshold for the non-UMI data was adjusted so that the error rates were similar with and without UMI information. From this approach, the call rates between the UMI and non-UMI data sets were compared. The hypothesis was that the use of UMI increases sensitivity and genotype accuracy. This implies that if the call rates are similar, the error rates would be lower in the UMI data. In addition, if the error rates between the two data sets are similar, the call rate would be higher in the UMI data.
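Both matched-threshold comparisons reduce to computing a call rate and an error rate per dataset; a minimal R helper, assuming hypothetical genotype vectors with NA for no-calls:

```r
# calls: called genotypes (NA = no call); truth: known reference genotypes.
call_and_error_rate <- function(calls, truth) {
  called <- !is.na(calls)
  c(call_rate  = mean(called),
    error_rate = mean(calls[called] != truth[called]))
}
```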
In total, 5507 SNPs were selected to be included in the panel. Six markers were excluded during the primer design. One reason for exclusion was that the genome context close to the SNP region was not unique; such a region could be amplified elsewhere and, subsequently, false variants could be detected, since the reads could map to multiple places in the genome. Another reason was that the genome context close to the SNP had either an abnormally high or low GC content or extremely repetitive regions, resulting in difficulties in designing specific and efficient primers. Therefore, the primer design resulted in primers for 5501 SNPs. Most of the SNPs had two primers covering the SNP region, which would reduce the impact of population-specific nucleotide polymorphism in the primer site. A distribution of the distance from primer to SNP is illustrated as a histogram in . Additionally, four SNP markers were excluded, since no read data was observed at these sites for any of the analyzed samples. Thus, 5497 SNP markers were further evaluated. See for a detailed description of all the included SNPs.

In total, 58 samples were analyzed on 15 sequencing runs. shows the DNA input amount, average coverage and FASTQ file sizes for all samples. Additionally, sequencing quality metrics are presented.

3.1. The Effect of Applying UMIs

Nine samples with various concentrations of NA12877 were bioinformatically analyzed both with and without taking the UMI information into account. We applied two different approaches to evaluate the data by defining different coverage thresholds, since the number of reads varies when counting UMI reads compared to counting all reads. When applying a threshold resulting in similar call rates between the two datasets, the genotype accuracy increased when taking the UMIs into account. This is illustrated in A. The genotype accuracy was similar down to 500 pg, though it was always slightly higher with UMI. For 250 pg and lower, the difference is visually notable. A paired t-test showed that the difference was statistically significant (p < 0.05). The other approach was to set a threshold that resulted in similar error rates and then compare the call rates ( B). With the same genotype accuracy for the two data sets, the call rates were always higher when taking UMIs into account, especially for the lower DNA amounts. The difference was statistically significant (p < 0.047), applying a paired t-test. The 1 ng sample without UMI did not reach the same high genotype accuracy as the data with UMI information, regardless of coverage threshold. We decided to plot equal call rates even though the genotype accuracy was slightly lower for the non-UMI data.
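For reference, the significance test used here is a paired t-test across the dilution points, which in R is a single call. The accuracy vectors below are illustrative placeholders, not the published values:

```r
acc_umi    <- c(99.9, 99.9, 99.8, 99.5, 98.7, 96.0)  # hypothetical accuracies
acc_no_umi <- c(99.9, 99.8, 99.4, 98.1, 95.2, 90.3)  # hypothetical accuracies
t.test(acc_umi, acc_no_umi, paired = TRUE)           # paired comparison
```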
3.2. General Assay Performance

3.2.1. Genotype Accuracy and Repeatability

The Coriell sample NA12877 was analyzed in duplicate (labeled NA12877-1 and NA12877-2), with 20 ng of DNA as input. The genotype accuracy was assessed by comparing the generated genotypes for each of the duplicate samples with previously published genotypes for NA12877. In the first sample (NA12877-1), complete genotype accuracy was seen for the 5490 called SNPs. Seven markers (0.13%) (rs7537605, rs1710456, rs4092077, rs1428142, rs367600495, rs576471146 and rs169250) were not typed due to imbalance in both heterozygote and homozygote genotypes. For the replicate sample (NA12877-2), six markers (0.11%) (rs1710456, rs4092077, rs1428142, rs710160, rs367600495 and rs169250) resulted in no calls for the same reason. One discordant genotype (0.02%) was observed in the NA12877-2 sample as an allele drop-out in marker rs7537605. The same marker was inconclusive in the NA12877-1 sample due to imbalance. The number of genotypes called in both replicates was 5489 (99.8%), and complete concordance between the samples was observed.

The NA12878 reference is a female sample; therefore, 4610 markers were evaluated (excluding the Y-SNPs). Complete genotype accuracy was found for the 4601 called markers. Nine SNPs (0.20%) were not typed (rs4027132, rs4092077, rs1428142, rs1029047, rs1223550, rs7117433, rs1126809, rs10892689 and rs710160) due to imbalances.

Control sample 2800M was analyzed in duplicate, and the genotypes were compared. Complete concordance was seen for the 5487 SNPs that were called in both replicates. Eight markers (0.15%) (rs4092077, rs1428142, rs1029047, rs200332530, rs372687543, rs367600495, rs9785702 and rs2032672) were not called in either duplicate due to imbalance. Additionally, rs576471146 was inconclusive in one of the duplicates, and rs710160 was inconclusive in the other. We also compared the FORCE genotypes of 2800M with previously published genotypes from the ForenSeq DNA Signature Prep kit . Out of the 169 SNPs analyzed in both assays, complete concordance was seen in both replicates. FORCE genotypes generated with the myBaits assay for 2800M were previously published in . A total of 5386 markers overlapped with the two duplicate samples, and discordance was noticed in three markers (rs7537605, rs169250 and rs9785659).

In total, 19 markers (0.35%) were found to be either inconclusive, due to imbalance, or discordant based on the initial analysis of the three high-quantity reference DNA samples, totaling five samples including the replicates. summarizes all these SNPs, and detailed read data is presented in . Possible reasons for the imbalances and discordances were found for 10 of the markers by examining the regions in the Integrative Genomics Viewer (IGV) software version 2.7.2 . For instance, seven SNPs had polynucleotide stretches close to the SNP site, one locus had a SNP variant in the covering primer region and one SNP mapped to multiple places in the genome. See for a detailed description of the observations in IGV.

3.2.2. Sensitivity

The investigation of sensitivity was performed based on the dilution series of NA12877 with the following input amounts of DNA: 20 ng, 10 ng, 1 ng, 0.5 ng, 0.25 ng, 0.125 ng, 0.06 ng, 0.03 ng and 0.015 ng. The call rate was greater than 97% down to 1 ng ( A). Genotype accuracy greater than 99.9% for the 5497 SNP markers was seen down to 500 pg of DNA input ( B). In total, four markers were causing the discordances in the samples down to 500 pg, and all of them belonged to the problematic SNPs identified in . Thus, if excluding these poorly performing SNPs, complete genotype accuracy was seen down to 500 pg. In addition, genotype accuracy greater than 99% was seen down to 125 pg. Lower amounts of DNA resulted in lower call rates (less than 40%) and, subsequently, the genotype accuracy dropped from 96% at 60 pg to 82% at 15 pg. A substantial majority of the observed discordances from 250 pg and lower were allele drop-outs. One approach to improve the call rates would be to adjust the ARF thresholds to be less conservative. We decreased the homozygous ARF threshold to 0.9 and increased the heterozygous ARF threshold to 0.85, which resulted in improved call rates. However, a slightly negative effect on the genotype accuracy was observed ( ).
3.3. Performance with Casework-Relevant Samples

3.3.1. Mixture Detection and Deconvolution

Two-person mixtures were analyzed in four different ratios: 1:1, 1:10, 1:50 and 1:100. The aims of the mixture analysis were, firstly, to detect the mixture by distinguishing it from a single-source sample and, secondly, to perform accurate genotype calling for one unknown contributor. Allele read frequencies (ARFs) were calculated for each SNP marker. Differences in the ARF distribution were used to distinguish the mixtures from single-source samples. displays density plots of the 1:1 and 1:10 mixtures together with an ARF distribution for a single-source sample as reference. The 1:1 mixture could clearly be separated from a single-source sample based on the ARF distribution. A more homogenous distribution was seen in the 1:10 mixture compared to a single-source sample. However, a difference was observed, especially as a shift to the left of the ARF distribution for the homozygotes. The two additional mixtures (1:50 and 1:100) could not be distinguished from a single-source sample based on the ARF values ( ). Furthermore, an increased heterozygosity rate indicates the presence of a DNA mixture . The heterozygosity rates for two single-source samples and for the mixture samples are illustrated in together with the theoretical heterozygosity rate for the investigated mixture. The 1:1 and 1:10 mixtures could be detected based on an increased heterozygosity rate. However, the 1:50 and 1:100 mixtures could not be detected.

We performed a mixture deconvolution test for the 1:1 mixture. We assumed a 50% contribution and removed reads that theoretically originated from the known contributor. The remaining UMI reads were used for genotype calling, and the call rates were 59.4% and 82.0%, respectively, for the duplicates. The genotype accuracy of the called genotypes for the duplicates was 99.2% and 99.9%, respectively, when applying adjusted ARF thresholds (homozygous ≥ 0.90 and heterozygous ≤ 0.55). The discordances were caused by allele drop-ins. A sketch of this deconvolution step is shown below.
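A minimal R sketch of the quantitative deconvolution for one locus, assuming a 1:1 mixture and hypothetical read counts (the real analysis operated on UMI read counts per locus):

```r
# reads_A/reads_B: read counts for the two alleles at one locus.
# gt_known: known contributor's genotype, coded as copies of allele A (0/1/2).
deconvolute_locus <- function(reads_A, reads_B, gt_known) {
  total <- reads_A + reads_B
  share <- 0.5 * total                           # reads assumed from known donor
  res_A <- max(reads_A - share * gt_known / 2, 0)
  res_B <- max(reads_B - share * (2 - gt_known) / 2, 0)
  if (res_A + res_B < 10) return(NA_character_)  # coverage threshold (10x)
  arf   <- max(res_A, res_B) / (res_A + res_B)
  major <- if (res_A >= res_B) "A" else "B"
  if (arf >= 0.90) return(paste0(major, major))  # adjusted homozygote threshold
  if (arf <= 0.55) return("AB")                  # adjusted heterozygote threshold
  NA_character_                                  # imbalanced: no call
}

# Known donor AA, observed 70 A / 30 B reads: the residual is 20 A / 30 B,
# giving ARF 0.6, which falls between the thresholds, so the locus is uncalled.
deconvolute_locus(70, 30, 2)
```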
3.3.2. Mock Case Samples

One female saliva sample was analyzed with two different input amounts of DNA, 10 ng and 1 ng. The sample was analyzed with and without the addition of two inhibitors, soil and snuff. The untreated 10 ng sample was used as reference, and concordance was investigated for the inhibitor-spiked samples. Six out of 4610 markers were not called due to imbalance in the 10 ng reference sample; three of those markers were identified as problematic in . Complete concordance was seen for both inhibitor-treated samples with 10 ng of DNA. With the 1 ng samples, the call rate dropped to 93%, 90% and 90% for the reference, soil and snuff samples, respectively. The number of discordances was 6 (0.13%), 7 (0.15%) and 14 (0.30%) for the reference, soil and snuff samples, respectively. All discordances were caused by allele drop-outs. The results are summarized in .

3.3.3. Bone and Tissue Samples

DNA from eight bone samples and four tissue samples was analyzed, with 1 ng as input. The call rates ranged from 88% to 99% ( ). Six of the bone samples were previously analyzed with a forensically validated in-house SNP panel with 131 SNPs. Complete genotype concordance was seen for all the overlapping SNPs. Additionally, six bone samples were analyzed with the ForenSeq DNA Signature Prep kit (Verogen), and complete genotype concordance with overlapping SNPs (max 167 SNPs) was observed; see .

3.4. Forensic Casework Applications

3.4.1. Kinship Analysis

Likelihood ratio (LR) calculations and Mendelian inheritance pattern analyses were performed in the two families with known relations, based on the DNA data from the kinship informative SNPs. Paternity tests were performed in both duo and trio cases, and maternity tests were performed as duos. The compared hypotheses were that each parent is a parent of the child (H1) versus that the parent and child are unrelated (H2). The LRs ranged from 6 × 10^263 to 2 × 10^291 for the duo cases. The LRs in the trio cases were all above 10^300. See for details. One genetic inconsistency (0.002%) was observed between the mother and one child in one of the families, and thus no LR could be calculated without accounting for genotype errors or mutations in the statistical calculation. SNP marker rs7537605 was typed as homozygous AA in the mother and homozygous GG in the child. This marker was found to be problematic in several of the reference DNA samples ( ); if excluding this marker, the LR was calculated to 7 × 10^285.

Based on allele frequencies for the FORCE kinship informative SNPs from the SweGen project, 1000 simulations were performed in Familias, with the hypothesis that an alleged father is the father of the child (H1) versus that the alleged father is unrelated to the child (H2). The number of genetic inconsistencies when the alternative hypothesis (H2) is true was, on average, 411 and is illustrated in . The lowest number of genetic inconsistencies observed in any simulation was 344 (see the simulation sketch below). Additionally, 1000 simulations were performed in ILIR to evaluate the power of the panel for more distant relationships. The tested hypotheses were two individuals being half siblings, first cousins, first cousins once removed and second cousins, all with unrelated as the alternative hypothesis. displays a density plot with LRs for each hypothesis. The tested and alternative hypotheses are well separated for the second to fourth degrees of relation. For second cousins, the majority of the LRs were still informative; however, some overlap of the LR distribution curves exists. These findings are in concordance with previous results based on allele frequencies from a European population .

3.4.2. Phenotype and Ancestry Predictions

summarizes the phenotype and ancestry predictions for the two samples based on the observed genotypes. All included phenotype and ancestry informative markers were called (44 and 255 SNPs, respectively) in both samples. All predictions were consistent with the self-reported data, except for one sample where the self-reported eye color was intermediate, and the most probable predicted eye color was blue (prediction probability 0.93).
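The exclusion simulation described under 3.4.1 can be reproduced in outline: genotypes for an unrelated alleged father and a child are drawn under Hardy-Weinberg equilibrium, and opposite homozygotes (genetic inconsistencies) are counted. The allele frequencies below are random placeholders rather than the SweGen frequencies, so the resulting counts are only indicative.

```r
# p: vector of allele-A frequencies for the kinship informative SNPs.
simulate_inconsistencies <- function(p) {
  child  <- rbinom(length(p), 2, p)   # genotypes as copies of allele A
  father <- rbinom(length(p), 2, p)   # unrelated man, drawn independently
  sum((child == 0 & father == 2) | (child == 2 & father == 0))
}

set.seed(1)
p <- runif(3935, 0.05, 0.95)                        # placeholder frequencies
mean(replicate(1000, simulate_inconsistencies(p)))  # on the order of hundreds
```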
In this study, we evaluated the FORCE panel, which includes ~5500 SNPs, with a QIAseq Targeted DNA Custom Panel (Qiagen), including UMIs. One of the main aims of this study was to explore the power of UMIs in MPS-based genotyping from a forensic genetic perspective. We approached this by analyzing the same raw sequencing data with two different bioinformatic workflows, with and without taking the UMI information into account. We showed that the call rate increased with the UMIs while maintaining the same genotype accuracy. The differences were mainly observed for the lower amounts of DNA, from 0.250 ng, where approximately twice as many genotypes were called. Likewise, the genotype accuracy increased with the UMI information when the call rates were similar in the two data sets. The positive effect on the genotype accuracy was also mainly observed for the lower amounts of DNA.

Woerner and Crysup et al. previously applied UMIs in a forensic context, focusing on STRs. Consistent with our findings, their results demonstrated that incorporating UMI reads led to improved genotype calling. Additionally, they showed that implementing a machine learning approach for the genotype calling further enhanced the potential power of UMIs. Instead of applying thresholds based only on counting the number of UMI reads, additional parameters were analyzed with the machine learning approach, for instance, the number of reads per UMI read and accounting for possible PCR or sequencing errors in the UMI sequence. Further optimization and analysis of our bioinformatic workflow would be necessary to evaluate whether a similar positive effect on genotype calling could be observed in our data as well.

The call rates for the five reference DNA samples in the initial analysis were very high (>99.8%); most importantly, the genotype accuracy was >99.9% for all samples. The investigation of sensitivity showed that the call rate started to drop around 1 ng of DNA (97.25% call rate). However, since the total number of SNPs in this panel is high (~5500), even a call rate as low as 60% still represents >3200 SNPs, which, depending on the context, could be sufficiently informative in many forensic investigations. The concordance of the called genotypes remained very high, greater than 99.9%, down to 500 pg DNA. Furthermore, complete accuracy for the observed genotypes down to 500 pg could be achieved if excluding four of the problematic SNPs (rs4092077, rs1428142, rs7537605 and rs1710456) identified in . Similar performance regarding genotype call rate and accuracy was shown by Peck et al. in their validation study of the ForenSeq Kintelligence kit (Verogen), which is also an extensive SNP panel based on multiplex PCR technology. However, we observed a slightly improved genotype accuracy, which could potentially be an effect of the use of UMIs in our data.

Overall, the assay showed good tolerance for challenging forensic samples, including degraded bone and tissue samples as well as inhibitor-spiked samples. The same types of inhibitors were evaluated in another MPS-based SNP assay, and our results are consistent with previously published data . Notably, this assay is more sensitive to the amount of input DNA than to the tested inhibitors, since genotype drop-outs increased with decreasing DNA amounts rather than with the presence of inhibitors. A detailed investigation of the problematic SNPs in IGV identified potential complex regions in 10 markers ( ).
A detailed investigation of the problematic SNPs in IGV identified potential complex regions in 10 markers. Seven of the SNPs were located close to a complex polynucleotide region, causing difficulties in sequencing or alignment of the reads. We observed that some of the problematic SNPs had a considerably high number of "other" nucleotide reads (i.e., not A, C, G or T). This can occur if two different nucleotides are detected at the read 1 (R1) and read 2 (R2) SNP sites. Differences between R1 and R2 could be caused by these polynucleotide regions, which would explain the observed errors. These 10 problematic SNPs could be excluded in future FORCE panel designs. We could not find any potential reason for the remaining nine problematic markers; however, these were only identified in one sample (or in both replicates of one sample). Three of the 19 identified markers (rs169250, rs1428142 and rs1223550) had previously been reported as poorly performing SNPs. However, some problematic SNPs identified in this study displayed good performance with the FORCE MyBaits assay, and vice versa. The performance of the SNPs is therefore assay-dependent and should be evaluated separately for each enrichment strategy.

Although the overall call rate is relatively high, at least for DNA quantities down to 1 ng, there can be specific needs to increase the call rate even further. One approach would be to adjust the ARF thresholds to allow more genotypes to be called. We show that a more liberal ARF threshold increases the call rates. However, this has a slightly negative effect on the genotype accuracy, since a more generous threshold allows skewed alleles to be typed, which could increase the error rate. It is therefore important that each laboratory defines its needs regarding call rate versus accuracy during internal validation. Furthermore, it is also possible to set specific thresholds for specific types of markers, and the laboratory should consider the application of the data when defining the optimal threshold. For instance, a slightly higher error rate could be acceptable in DNA intelligence applications compared to direct matching or kinship inference, as illustrated in the sketch below.
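The ARF trade-off described above can be sketched as follows; the threshold values are illustrative defaults, not the validated settings of the study.

```python
def call_genotype(ref_reads, alt_reads, min_depth=25, het_window=(0.10, 0.90)):
    """Call a biallelic SNP genotype from (UMI-collapsed) read counts.

    ARF = alternate-allele read frequency. Loci below min_depth are left
    uncalled; an ARF inside het_window is called heterozygous, outside it
    homozygous. Widening the window (more liberal thresholds) raises the
    call rate but lets skewed loci through, lowering accuracy.
    """
    total = ref_reads + alt_reads
    if total < min_depth:
        return None  # no call
    arf = alt_reads / total
    low, high = het_window
    if arf < low:
        return "ref/ref"
    if arf > high:
        return "alt/alt"
    return "ref/alt"

print(call_genotype(120, 3))   # ref/ref (ARF ~0.02)
print(call_genotype(60, 55))   # ref/alt (ARF ~0.48)
print(call_genotype(10, 9))    # None -- depth below threshold
```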
Following library preparation, an unexpected 200 bp PCR product was observed during fragment analysis in all DNA libraries with less than 20 ng of input DNA, whereas the expected DNA library fragments ranged from 300 to 600 bp; Bioanalyzer traces of three DNA libraries illustrated this unwanted product. It could be caused by adapter dimer formation: too low an amount of DNA causes the adapters to form dimers instead of binding to the DNA fragments, and the number of dimers increased with decreasing DNA, as expected. Furthermore, the number of undetermined reads (reads that cannot be assigned to a specific sample) correlated with the number of dimers, implying that the dimers consist of flow cell-compatible adapter sequences. We noticed that the vast majority of the undetermined reads consisted of a specific DNA sequence originating from read 2 (AACTCCATCAATCAGGTCAGTTTCTCACTTTCAAAACGCAATACTGTACATT) with a specific adapter sequence (CCAGTCGT). Truelsen et al. noticed a similar phenomenon with dimers for low template DNA samples. They diluted the adapter indices to adjust the number of adapters when analyzing low levels of DNA, which successfully decreased the adapter dimers. However, we applied the same approach and performed a 10-fold dilution of the adapters without any notable decrease in adapter dimers. Additional dilution could possibly be required, although too extensive a dilution could have a negative effect by reducing the number of adapter-ligated reads. Another approach to reduce the adapter dimers is to repeat the final magnetic bead-based clean-up, but this is quite labor-intensive if several clean-ups are required. The QIAseq assay was primarily developed for non-forensic applications, and several studies with access to high amounts of DNA have shown successful results. Still, forensic samples often have much lower quantities of DNA, and a general assay optimization for low-level DNA could be preferable. We investigated whether the low call rate for the 60 pg sample could be caused by the high amount of dimers, which theoretically would decrease the sequencing capacity; the short length of the dimers allows them to cluster more efficiently on the flow cell than the intended libraries. Additional magnetic bead-based clean-up was performed five times for the 60 pg sample to reduce the amount of dimers, and this sample was then sequenced alone. The resulting call rate was, however, not improved. This indicates that the dimer sequences did not have a negative impact on the number of reads for the 60 pg sample, and that the low call rate can be explained by the low amount of input DNA. However, we still believe that the dimer sequences could have a negative effect on the read counts of the intended libraries for samples with higher DNA amounts.

Different strategies can be applied for mixture detection with MPS-based biallelic SNP assays. Mixtures can be identified by observing variation in the allele read frequency or by detecting an increase in heterozygotes; both indicators are sketched below. We could successfully distinguish mixtures from single-source samples down to 1:10 mixtures by applying either of the proposed detection methods. The additional analyzed DNA mixtures (1:50 and 1:100) could not be visually distinguished from single-source samples; however, improved depth of coverage could potentially enable more sensitive mixture detection. Previous studies of MPS-based assays have shown similar detection limits. The heterozygosity rate could not, however, be used to distinguish the ratio of the mixture, i.e., to differentiate a 1:1 mixture from a 1:10 mixture. The ARF distribution plot, in contrast, elegantly separates the two mixture ratios, which could be important for downstream analysis and mixture deconvolution. Furthermore, we applied a previously described quantitative model to deconvolute the mixture by extracting the reads contributed by a known donor. If, based on the ARF distribution, we assume that the mixture is 1:1, we would extract 50% of the reads, and the resulting reads should, theoretically, originate from the unknown donor. We observed that the thresholds needed to be more conservative for mixture deconvolution than for single-source genotype calling. The accuracy of the deconvoluted genotypes was >99.2% for both replicates, although the call rate was as low as 59.4% for one of the duplicate samples. Due to the high number of SNPs in the FORCE panel, that proportion of markers still represents more than 3200 SNPs, which would likely be sufficient for direct human identification or close kinship cases. These results are based on only one duplicate sample, and further research is required to find optimal strategies for genotype deconvolution of DNA mixtures from sequencing data.
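As a sketch of the two mixture indicators mentioned above (heterozygosity rate and ARF distribution), the following hypothetical helper flags a possible mixture from per-SNP ARFs. The expected heterozygosity and the cut-offs are assumptions that would need calibration on single-source reference data; this is not the quantitative model used in the study.

```python
def mixture_indicators(arfs, expected_het=0.45, het_window=(0.10, 0.90)):
    """Simple single-sample mixture screening over biallelic SNPs.

    arfs: alternate-allele read frequencies, one per called SNP. A mixture
    inflates the apparent heterozygosity rate and shifts ARFs away from
    the ~0.0 / ~0.5 / ~1.0 clusters expected for a single-source sample.
    """
    low, high = het_window
    het_like = [a for a in arfs if low <= a <= high]
    het_rate = len(het_like) / len(arfs)

    # Mean distance of "heterozygous-looking" ARFs from the balanced 0.5:
    # in a skewed mixture (e.g., 1:10) they cluster near 0.1 / 0.9 instead.
    skew = sum(abs(a - 0.5) for a in het_like) / len(het_like) if het_like else 0.0

    return {"het_rate": round(het_rate, 3),
            "het_skew": round(skew, 3),
            "possible_mixture": het_rate > expected_het + 0.10}

single = [0.02, 0.49, 0.97, 0.51, 0.01, 0.50]             # one donor
mixed = [0.09, 0.52, 0.88, 0.12, 0.47, 0.91, 0.13, 0.86]  # 1:10-like skew
print(mixture_indicators(single))  # low het_rate, tiny skew -> not flagged
print(mixture_indicators(mixed))   # inflated het_rate, large skew -> flagged
```

The `het_skew` value illustrates why the ARF distribution separates mixture ratios even when the heterozygosity rate alone cannot: a balanced 1:1 mixture keeps ARFs near 0.5, whereas a 1:10 mixture pushes them toward the window edges.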
The 3935 kinship-informative SNPs in the FORCE panel generated considerably high likelihood ratios for maternity duos as well as paternity duos and trios. We observed one discordant genotype in one of the families with known relations. The discordant SNP was found to be problematic in several other samples as well and could preferably be excluded in future panel designs. Even though we analyzed biallelic SNPs, which are less informative per marker than common forensic STR markers, the average number of genetic inconsistencies in a paternity duo case with an unrelated alleged father was 400; that is, on average, one genetic inconsistency in every tenth kinship SNP. Furthermore, the simulation results for more distant relations presented in this paper showed great potential for predicting relations from the second to the fifth degree based on allele frequencies of a Swedish population. Our results are consistent with simulation results based on European allele frequencies, which was expected. The phenotype and ancestry informative markers have previously been identified and found to be informative, and we have shown that all of these markers could be successfully recovered with this assay. The predictions were consistent with the self-reported phenotypes and ancestries, except for one eye color prediction. However, the prediction of intermediate eye color has previously been shown to be difficult, and the aim of this study was not to evaluate the prediction power but rather to show that we can analyze the phenotype and ancestry informative SNP markers with the FORCE QIAseq assay including UMIs.
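Returning to the duo likelihood ratios discussed above: for a single biallelic SNP, the standard paternity-duo LR can be computed as below. This is the textbook formula (ignoring mutation and population substructure), not necessarily the exact model of the kinship software used in the study; the genotypes and allele frequency are toy values.

```python
def duo_lr(child, alleged_father, p_a):
    """LR for one biallelic SNP in a paternity duo (no mutation, no theta).

    child, alleged_father: genotypes as pairs over {"A", "B"}; p_a is the
    population frequency of allele "A". H1: the alleged father is the true
    father; H2: the two individuals are unrelated.
    """
    freq = {"A": p_a, "B": 1.0 - p_a}

    def hwe(gt):
        a, b = gt
        return freq[a] ** 2 if a == b else 2 * freq[a] * freq[b]

    def p_child_given_father(gt_child, gt_father):
        # The child receives one allele from the father (each with prob 1/2)
        # and one maternal allele drawn from the population.
        prob = 0.0
        for paternal in gt_father:
            for maternal in ("A", "B"):
                if sorted((paternal, maternal)) == sorted(gt_child):
                    prob += 0.5 * freq[maternal]
        return prob

    return p_child_given_father(child, alleged_father) / hwe(child)

# A shared rare allele lends support; an exclusion gives LR = 0, i.e., the
# "genetic inconsistency" counted roughly once per ten SNPs for non-fathers.
print(duo_lr(("A", "B"), ("A", "A"), p_a=0.1))  # 5.0
print(duo_lr(("B", "B"), ("A", "A"), p_a=0.1))  # 0.0
```

Per-SNP LRs are multiplied across the panel, which is how thousands of modestly informative biallelic markers reach the extreme combined LRs reported.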
This study aimed to evaluate the power of unique molecular indices (UMIs) in forensic genetic applications and to show the utility of the FORCE panel with a QIAseq Targeted DNA Custom Panel. We showed that both sensitivity and genotype accuracy were improved when taking UMIs into account, with the differences mainly observed for low amounts of DNA. In total, 5497 SNP markers were analyzed, and both a very high call rate (>99.8%) and genotype accuracy (>99.9%) were seen for high-quality reference samples. Additionally, the assay showed good tolerance for challenging forensic samples, such as bone and tissue samples, as well as inhibitor-spiked samples. A few SNPs displayed poor performance, and we suggest that some of these should be excluded in future designs of the panel. Based on analysis of the dilution series, the call rate started to drop at 1 ng of DNA input (97.25% call rate). However, complete genotype accuracy was observed down to 500 pg of DNA when excluding the four problematic SNPs. DNA mixtures could be detected down to 1:10 using ARF distributions or heterozygosity rates, and we successfully deconvoluted a 1:1 mixture with >99.2% genotype accuracy for the observed genotypes. Extremely high likelihood ratios (in the range of 6 × 10²⁶³) were observed for maternity and paternity tests with known relations. In addition, simulations showed that second- to fifth-degree relationships could be predicted with strong statistical power using the kinship-informative SNPs. Phenotype and ancestry informative SNPs were also successfully typed. To summarize, we showed that the QIAseq assay of the FORCE panel has great potential for various types of forensic applications. Finally, our results showed improved genotype accuracy and sensitivity when applying UMIs, and this technological improvement should be further evaluated and ultimately implemented by the forensic community.
An MPS-Based 50plex Microhaplotype Assay for Forensic DNA Analysis
Microhaplotypes (MHs) are novel genetic markers, proposed by the Kidd lab in 2013, to complement the current DNA genotyping tools used in forensic genetics. They are characterized by the presence of two or more closely linked single nucleotide polymorphisms (SNPs) within 300 bp, with three or more alleles (haplotypes). They therefore provide more information than single SNPs and exhibit a low rate of recombination over such short distances (assuming an average of 1% recombination per megabase and no recombination hotspots within the locus). Microhaplotypes do not preferentially amplify certain alleles within a locus, because all alleles at a locus are the same size. Compared to short tandem repeats (STRs), MHs have no stutter, lower mutation rates, and fewer alleles. A large set of MHs can approach the same discrimination power as a set of STRs and provide valuable information for individual identification, mixture interpretation, ancestry prediction, kinship testing, and medical diagnostic applications. They are therefore gaining popularity in the forensic DNA field and have been applied in a variety of related studies.

Currently, massively parallel sequencing (MPS) is the mainstream method for detecting MHs. Sanger sequencing was long the "gold standard" method for DNA sequencing; however, when two or more loci are heterozygous, Sanger sequencing cannot determine the cis–trans relationship between the alleles of single SNPs in genomic DNA, i.e., the haplotype phase. Our previous research showed that although the capillary electrophoresis (CE) platform can phase MHs, it only resolved those composed of two SNPs, with low detection throughput per run. MPS can compensate for these deficiencies of the Sanger sequencing and CE platforms. It can identify every parental MH allele at a specific locus by clonal amplification, followed by sequencing of every amplicon of every DNA strand present in the sample, regardless of whether it originates from a single or a mixed source. In addition, MPS provides high sequencing throughput and can simultaneously detect hundreds of thousands of variants. It thus enables the forensic analysis of MHs defined by multiple SNPs, and the combination of different SNP alleles within a single short locus can provide a greater probability of individual identification. MPS technology, which enables clonal sequencing of the parental haplotypes on the paternal and maternal chromosomes, has therefore greatly enhanced the characterization of forensic MHs. Internationally reported panels have successfully developed different sets of MHs, and an increasing number of identity-, ancestry-, and mixture-informative MHs have recently been published and made available to the global forensic community. The analysis of these markers and the associated population genetic data will serve as the basis for the future implementation of MH DNA analysis in casework.

When a person of interest (POI) cannot be excluded as a possible donor of forensic biological evidence, population-specific allele frequencies are used to estimate the statistical weight of the evidence. As with traditional STRs, the application of MH sequencing in casework requires the development of large and appropriate allele frequency (AF) datasets. Kidd et al. have collected the AF data of the initial MHs among global populations and uploaded them to ALFRED (the ALlele FREquency Database).
However, ALFRED does not include the MH AFs of the Southwest Chinese Han population (Chengdu City), which hinders the corresponding forensic application research. Although the MicroHapDB (Microhaplotype Database) established by Standage et al. includes the basic parameters of 412 MHs in 26 populations, it only covers published MHs. These markers were selected from the original MH pools by different researchers for particular purposes or in specific populations; however, most of these works do not release the data of the original MH pools, which limits the marker selection of other researchers to the published MHs and may not meet more diverse research needs. To fill this gap, in this study we extracted the "original loci pool of MHs" of the Chinese Southern Han (CHS) from the 1000 Genomes Project (Phase 3) using our in-house MH screening software combined with the PHASE software. After a series of extractions and optimizations, we constructed 50 MHs (251 SNPs) on 21 autosomes using a MultipSeq® multiple polymerase chain reaction (multi-PCR) targeted capture sequencing protocol based on MPS. From this, we developed an MPS-based 50-plex MH panel, obtained the genotypes of 137 Southwest Chinese Han individuals, and calculated the AFs and forensic statistical parameters for each marker. We then characterized the efficiency of custom probe detection based on depths of coverage (DoCs) and allele coverage ratios (ACRs). Moreover, we demonstrated the applicability of the protocol by analyzing its sensitivity, accuracy, and specificity, population genetics, simulated degraded samples, simulated mixtures, and real animal samples. Compared to commonly used autosomal STRs, SNPs, or published MH panels, the results showed that our 50plex MH panel provided higher genetic polymorphism and held greater potential for forensic applications such as individual identification, degradation detection, mixture interpretation, and kinship analysis. The core screening idea is sketched in code below.
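The in-house screening software itself is not published; purely as an illustration of the screening principle described above (and detailed in Section 2.1), a minimal Python sketch might enumerate SNP windows from phased haplotypes and rank them by A_e. The function names and toy data are hypothetical, and a real screen would add the dbSNP, repeat-motif, ≥10 Mb spacing, and primer-design filters.

```python
from collections import Counter
from itertools import combinations

def effective_alleles(haplotypes):
    """A_e = 1 / sum(p_i^2) over the observed haplotype frequencies."""
    counts = Counter(haplotypes)
    total = sum(counts.values())
    return 1.0 / sum((n / total) ** 2 for n in counts.values())

def screen_mhs(positions, phased, max_span=80, min_snps=2, min_ae=3.0):
    """Enumerate candidate MHs from phased SNP data on one chromosome.

    positions: sorted genomic coordinates of the SNPs.
    phased: one allele string per haploid genome, aligned to positions.
    Returns (snp_indices, A_e) for every SNP combination spanning at most
    max_span bp that reaches the A_e cutoff.
    """
    hits = []
    for i in range(len(positions)):
        # SNPs reachable within the max_span window starting at SNP i.
        window = [j for j in range(i, len(positions))
                  if positions[j] - positions[i] <= max_span]
        for k in range(min_snps, len(window) + 1):
            for rest in combinations(window[1:], k - 1):
                combo = (i,) + rest  # anchor each candidate at SNP i
                haps = ["".join(h[j] for j in combo) for h in phased]
                ae = effective_alleles(haps)
                if ae >= min_ae:
                    hits.append((combo, round(ae, 2)))
    return hits

# Toy data: 4 SNPs and 6 phased haplotypes (as produced by, e.g., PHASE).
positions = [100, 130, 160, 400]  # the last SNP is too far away to combine
phased = ["ACGA", "ATGA", "GCAA", "ACAG", "GTGG", "GCGA"]
print(screen_mhs(positions, phased))
# [((0, 1), 3.6), ((0, 2), 3.6), ((0, 1, 2), 6.0), ((1, 2), 3.0)]
```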
2.1. MH Selection

We used the homemade MH screening software combined with PHASE v2.1.1 ( https://stephenslab.uchicago.edu/phase/download.html , accessed on 1 January 2022, Seattle, WA, USA) to analyze the 1000 G data (Phase 3). Based on a previous study by our research group, we extracted MHs consisting of two or more SNPs within 80 bp in the CHS with an effective number of alleles (A_e) ≥ 3, and estimated the theoretical population haplotype frequencies. On this basis, we screened candidate MHs according to the following criteria: (1) all SNPs of an MH must show a minor allele frequency (MAF) > 0 in the dbSNP database; (2) an A_e value ≥ 4, because MHs with high A_e can enhance individual identification, mixture interpretation, and kinship analysis; (3) taking each autosome as a unit, only the MH with the largest A_e was kept from each group of overlapping sequences; (4) MHs with apparent repeat motifs in the base sequence were removed; (5) the initial set of MHs was selected at intervals of ≥10 Mb in physical position to avoid linkage disequilibrium (LD) among the selected MHs; and (6) only MHs for which functional primers could be designed were retained.

2.2. Primer Design

After obtaining the candidate MHs, we provided the regions of interest (ROIs), i.e., the physical location information of the MHs, to iGeneTech Biotechnology Beijing Co., Ltd., which used the online MFEprimer v3.1 ( https://mfeprimer3.igenetech.com/muld , accessed on 19 January 2022, Beijing, China) to design and validate multiplex PCR primers targeting the genomic sequences of the MHs in our panel. Based on thermodynamic stability, highly specific multiplex primers were designed on both sides of each ROI, with amplicons of 120–200 bp. We then evaluated primer dimerization and non-specific amplification, tested the designed and synthesized primers, and replaced primers with poor detection performance.

2.3. Sample Collection

Peripheral blood samples of 137 unrelated Southwest Chinese Han individuals were collected after obtaining informed consent, with the approval of the Medical Ethics Committee of Sichuan University (No. KS2022770). Genomic DNA (>18 ng/μL), extracted using the phenol–chloroform method, was quantified using the Qubit™ dsDNA HS Assay Kit on a Qubit® 4.0 Fluorometer according to the manufacturer's protocol ( https://assets.thermofisher.com/TFS-Assets/LSG/manuals/MAN0017209_Qubit_4_Fluorometer_UG.pdf , accessed on 14 April 2022, Thermo Fisher Scientific, Waltham, MA, USA).

2.4. Sensitivity Design and Accuracy Verification

For the sensitivity study, 10, 5, 1, 0.5, 0.25, and 0.125 ng of 2800 M control DNA (Promega, Madison, WI, USA) were input into the MPS platform. All DNA libraries were prepared manually and run on an Illumina® NovaSeq™ 6000 system according to the manufacturer's protocol ( https://emea.support.illumina.com/downloads/novaseq-6000-system-guide-1000000019358.html , accessed on 9 October 2022, Illumina, San Diego, CA, USA). Eighteen samples (1 sample × 6 gradients × 3 replicates) were placed on the same NovaSeq 6000 chip. Seven unrelated samples were randomly selected, and their original BAM files obtained from MPS were loaded into the Integrative Genomics Viewer (IGV) v2.16.0 ( https://software.broadinstitute.org/software/igv/userguide , accessed on 10 October 2022, Cambridge, MA, USA) to analyze the genotypes of all 50 target MHs. Among them, two MH loci and four unrelated samples were randomly selected for Sanger sequencing (Tsingke Biotechnology Co., Ltd., Beijing, China).
Finally, the MH genotypes obtained using the pipelines developed by our laboratory were compared with those obtained by IGV and by Sanger sequencing.

2.5. Library Preparation and Sequencing

Library preparation and multiplex capture for ROI sequencing were performed following the procedure shown in the associated figure, according to the manufacturer's protocol. The first round of multiplex PCR was performed to obtain the amplicon products of the target regions. The multiplex PCR reaction system contained 3.5 μL of Enhancer buffer NB (1N), 2.5 μL of Enhancer buffer M, 10 μL of IGT-EM808 polymerase, 5 μL of primer pool, and 1–5 ng of DNA per reaction tube, made up to 30 μL with ddH₂O. The multiplex PCR conditions consisted of preincubation at 95 °C for 3 min 30 s, followed by 22 cycles of 98 °C for 20 s and 60 °C for 4 min, and a final extension at 72 °C for 5 min on an ETC811 PCR thermocycler (Dongsheng Innovation Biotechnology Co., Ltd., Beijing, China), using a customized MultipSeq® Custom Panel (iGeneTech Biotechnology Beijing Co., Ltd., Beijing, China) with amplicons between 120 and 200 bp. The purified amplification product, obtained through the first round of magnetic bead purification, was used as the template for the second round of PCR. In this second, adapter PCR round, sequencing adapters were introduced on both sides of the amplicon products to obtain a library. The adapter PCR reaction system contained 2.5 μL of Enhancer buffer M, 10 μL of IGT-EM808 polymerase, 2 μL of CDI Primer (premixed adapter primer), and 13.5 μL of PCR product mixture, made up to 30 μL with ddH₂O. The adapter PCR conditions consisted of preincubation at 95 °C for 3 min 30 s, followed by 9 cycles of 98 °C for 20 s, 58 °C for 60 s, and 72 °C for 30 s, and a final extension at 72 °C for 5 min on an ETC811 PCR thermocycler. The purified amplicon library was obtained through a second round of magnetic bead purification. The library was then subjected to strict concentration measurement using the Qubit™ dsDNA HS Assay kit and quality inspection on the Qsep400™ system according to the manufacturer's protocol ( https://apps.bioptic.com.tw/webdl/Instrument/F0043_Qsep400%20Operation%20Manual-%20Hardware%20-ENG-E.pdf , accessed on 9 October 2022, BiOptic, New Taipei City, Taiwan, China). Subsequently, sequencing was performed on an Illumina® NovaSeq™ 6000 system with the NovaSeq 6000 S4 Reagent Kit v1.5, using amplicon-targeted capture in PE150 paired-end sequencing mode.

2.6. Sequencing Data Analysis

The raw image data obtained after sequencing were converted from base call files and demultiplexed using bcl2fastq v2.20.0.422 (Illumina, San Diego, CA, USA). The resulting raw sequences (FASTQ files) were submitted to the Trimmomatic v0.38 (Max Planck Institute, Potsdam, BB, Germany) and FastQC v0.11.3 (Babraham Institute, Cambridge, UK) quality control software to remove low-quality reads, followed by BWA v0.7.12 (Wellcome Trust Sanger Institute, Cambridge, UK) and Samtools to align them to the reference human genome (hg19, GRCh37). Single BAM files were submitted to variant calling at SNP/InDel sites using Samtools v1.9 (UChicago, Chicago, IL, USA) and VarScan v2.4.3 (UWashington, Seattle, WA, USA) to generate VCF files. Raw SNV and InDel calls were further filtered using the thresholds read depth > 4, mapping quality > 20, and variant quality score > 20.
Variant loci were annotated using ANNOVAR v201707 (UPenn, Philadelphia, PA, USA); the annotation databases included ExAC, ESP6500, 1000 Genomes, gnomAD, SIFT, CADD, and PolyPhen-2. We then used our laboratory pipelines for MH calling, using the CIGAR and MD:Z tag information of the BAM files. The minimum DoC for each target region and the threshold for each MH allele were set to 100× and 25×, respectively, for further analysis. After initial filtering with the 25-read threshold, the default minimum read coverage for an allele was set at 5% of the total reads; alleles with fewer reads than this were not called. The default minimum allele frequency for heterozygous markers was set at 10%: if two or more alleles are detected at a marker, each allele must have coverage of at least this percentage of the total reads at the marker to be called. The default minimum allele frequency for homozygous markers was set at 90%: a single allele at a marker must have coverage of at least this percentage of the total reads at the marker to be called. (These calling thresholds are sketched in code at the end of this section.) We displayed the alleles of each MH and compiled the DoCs (i.e., sequencing depths) and ACRs in an Excel output format. The ACR was defined as the lower allele coverage at a heterozygous locus divided by the higher coverage in a single gDNA sample; it is commonly used to assess the balance between the two alleles of heterozygotes detected by high-throughput sequencing of genetic markers.

2.7. Statistical Analysis

Based on the above pipeline, we obtained the allelic genotypes, AFs, and forensic statistical parameters of the 50 MHs among the 137 Southwest Chinese Han individuals, including homozygosity (Hom), heterozygosity (Het), match probability (MP), discrimination power (DP), probability of exclusion (PE), polymorphism information content (PIC), and the typical paternity index (TPI), using Modified-Powerstates v1.2 (Promega, Madison, WI, USA). We then calculated the combined match probability (CMP), combined discrimination power (CDP), and combined probability of exclusion (CPE) as CMP = 1 − (1 − MP₁)(1 − MP₂)…(1 − MP₅₀), CDP = 1 − (1 − DP₁)(1 − DP₂)…(1 − DP₅₀), and CPE = 1 − (1 − PE₁)(1 − PE₂)…(1 − PE₅₀), where the subscripts 1…50 index the 50 MHs. The A_e value was calculated as the reciprocal of homozygosity, 1/Σp_i², where p_i is the frequency of allele i and the summation runs over all alleles at the MH. In addition, the Hardy–Weinberg equilibrium (HWE) p-values and LD values were calculated using Arlequin v3.5 (University of Berne, Lausanne, Switzerland).

2.8. Mixture Design

Two unrelated individuals were randomly selected to simulate two-person DNA mixtures. The minor DNA amount was fixed at 0.5 ng, and different major DNA amounts were then added to form mixtures at ratios of 1:1, 1:3, 1:5, 1:10, 1:20, and 1:40. For MPS detection to evaluate the efficiency of the panel, 1 μL of each mixture was used. All mixtures were prepared using TE (Solarbio Science & Technology Co., Ltd., Beijing, China) and sterile 0.2 mL amplification tubes (Axygen Scientific, Union City, CA, USA), and samples were stored at −20 °C until use.
The degree of mixing was assessed using the AGCU EX22 kit (Applied ScienTech, Suzhou, Jiangsu, China) on an ABI 3500 Genetic Analyzer according to the manufacturer's protocol ( https://tools.thermofisher.com/content/sfs/manuals/4401661.pdf , accessed on 14 July 2022, Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA). The results were analyzed using GeneMapper ID-X v1.2 according to the manufacturer's protocol ( https://assets.thermofisher.com/TFS-Assets/LSG/manuals/cms_072557.pdf , accessed on 17 July 2022, Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA).

2.9. Degradation Design

To simulate single-source degraded samples, two randomly selected DNA extracts were each diluted to a concentration of 5 ng/μL and treated with DNase I (Thermo Fisher Scientific, Waltham, MA, USA). Subsequently, 45 μL of intact DNA (5 ng/μL) was mixed with 3.75 μL of 10× MgCl₂ buffer (Thermo Fisher Scientific, Waltham, MA, USA). To this mixture, 0.6 μL of 0.3 U/μL DNase I was added, followed by incubation at 37 °C; 10 μL of degraded DNA was removed from the incubated mixture at predetermined time points (2.5, 5, 10, and 15 min) and placed in separate sterile 0.2 mL amplification tubes (Axygen Scientific, Union City, CA, USA). EDTA (1.6 μL, 30 mM) was immediately added to each tube and incubated at 65 °C for 10 min to stop DNA degradation. The degree of degradation was then evaluated using the AGCU EX22 Kit on an ABI 3500 Genetic Analyzer and the High Sensitivity DNA Kit on an Agilent 2100 Bioanalyzer according to the manufacturer's protocol ( https://www.agilent.com/cs/library/usermanuals/public/2100_Bioanalyzer_Expert_USR.pdf , accessed on 30 July 2022, Agilent Technologies, Santa Clara, CA, USA). For MPS, 1 μL of each DNase I-treated sample was used. To simulate degraded mixtures, one of the above single-source degraded samples was set as the minor DNA and fixed at 0.5 ng, and the other was set as the major DNA. The major DNA, degraded for the different times, was added to the corresponding minor DNA to form mixtures at a ratio of 1:10. The subsequent evaluation and detection of the degree of degradation were performed as for the single-source degraded samples. For MPS, 1 μL of each 1:10 degraded mixture was used.

2.10. Species Specificity

We tested common animal DNA to assess the specificity of our panel, because non-human DNA may be present in forensic biological evidence. Animal DNA samples from cats, bovines, chickens, ducks, fish, pigs, rabbits, and sheep were sequenced using multi-PCR targeted capture sequencing in the same manner as human DNA, with input DNA amounts of 3.753–6.506 ng.
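As a concrete illustration of the allele-calling thresholds described in Section 2.6, a minimal Python sketch for a single-source sample is given below. The function name and the precedence of the checks are our own reading of the text rather than the authors' actual pipeline; the threshold values are the laboratory defaults quoted above, and the haplotype strings are assumed to have already been reconstructed from the CIGAR/MD:Z information.

```python
def call_mh_alleles(allele_reads, region_min=100, allele_min=25,
                    min_frac=0.05, het_frac=0.10, hom_frac=0.90):
    """Apply the Section 2.6 thresholds to per-allele read counts of one MH.

    allele_reads: dict mapping haplotype string -> read count. The default
    values are the laboratory defaults quoted in the text. Returns the
    called allele pair, or None when the locus fails the coverage checks.
    """
    total = sum(allele_reads.values())
    if total < region_min:
        return None  # locus dropout: region depth of coverage below 100x

    # Drop noise alleles below the absolute (25x) and relative (5%) floors.
    kept = {a: n for a, n in allele_reads.items()
            if n >= allele_min and n / total >= min_frac}
    if not kept:
        return None

    best = max(kept, key=kept.get)
    if kept[best] / total >= hom_frac:
        return (best, best)  # homozygote: one allele carries >=90% of reads
    called = sorted(a for a, n in kept.items() if n / total >= het_frac)
    return tuple(called) if len(called) >= 2 else None  # ambiguous otherwise

reads = {"ACGT": 950, "ACGA": 870, "ACTA": 30}  # 30 reads (~1.6%) = noise
print(call_mh_alleles(reads))                   # ('ACGA', 'ACGT')
```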
3.1. MH Selection and Primer Design

A total of 178 candidate MHs were screened from 1000 G (Phase 3), and the MPS-based protocol allowed primer design and multiplex detection of 128 of these MHs in a single assay. Six rounds of optimization were performed on the initially constructed panel using six samples (the company's internal standard DNA H01, 2800 M, and four experimental samples). Some MHs were excluded, such as those with many nonspecific amplification products, large amplification and sequencing deviations between samples, or low sequencing coverage. Fifty MHs were retained to ensure the best system performance of the panel, distributed across 21 autosomes (no target MH remained on chr22 after the six rounds of optimization). We observed 1–5 MHs per autosome (average 2.38), with each MH comprising 3–15 SNPs (251 in total, average 4.83), marker lengths of 11–81 bp (average 65.58 bp), and amplicons of 123–198 bp (average 156.02 bp). Specific information on the 50 MHs and their primers is provided in the associated tables.

3.2. Sensitivity and Accuracy Analysis

For three replicates at each of the different inputs of 2800 M (10, 5, 1, 0.5, 0.25, and 0.125 ng), we detected complete profiles for all 50 MHs down to 0.25 ng. Only one MH dropout (MH-37) was observed, in the third replicate at 0.125 ng, as its reads were at 20×, below the analytical threshold of 25×. The overall DoCs were 801.24–11,010.84× (average 5623.39×) and decreased gradually with decreasing DNA input (linear correlation coefficient R² = 0.8814). Based on these results, the minor DNA of the non-degraded and degraded mixtures in the subsequent simulation studies was fixed at 0.5 ng. The MHs, sample numbers, and Sanger primers used for accuracy verification are listed in the associated table. We did not observe inconsistent haplotypes among Sanger sequencing, IGV, and our pipeline for any of the analyzed MH loci or unrelated samples; the corresponding genotypes from the three analysis methods for a randomly chosen MH in a randomly chosen sample, as well as the remaining examples, showed 100% concordance.

3.3. Panel Performance

The 50 MHs of all 137 unrelated Southwest Chinese Han individuals in this study were consistently captured and sequenced to obtain complete MH alleles. These samples were genotyped with 1.825–25.992 ng of input DNA, and the DoCs and ACRs of all 50 MHs were used to assess the sequencing performance of the panel. The average DoC was 7928.39 ± 4990.952×. The average ACR was 0.90 ± 0.045, and 96% of the MHs (48/50) exhibited an allele balance ≥ 80%, indicating that the panel had a good balance in detecting heterozygotes (i.e., good heterozygosity balance). No correlation was found between the DoCs and ACRs (linear correlation coefficient R² = 0.0771).

3.4. Polymorphism Information

All 50 MHs in our panel were successfully sequenced. Haplotype (i.e., allele) frequencies calculated from the sequencing data of all 137 unrelated individuals are shown in the associated tables. Each MH had 2–23 alleles (average 7): 3 MHs showed 2–3 alleles, 4 MHs showed 4 alleles, 15 MHs showed 5 alleles, 12 MHs showed 6 alleles, 3 MHs showed 7 alleles, and 13 MHs showed 8 or more alleles. The frequencies of the 350 alleles ranged from 0.004 to 0.803. Based on the allele frequencies, the forensic parameters showed that Hom, Het, and A_e were 0.133–0.665 (average 0.266), 0.335–0.867 (average 0.734), and 1.503–7.547 (average 4.192), respectively. Among the 50 MHs, 10 had A_e < 3.0, 8 had A_e ≥ 3.0, 24 had A_e ≥ 4.0, 3 had A_e ≥ 5.0, 3 had A_e ≥ 6.0, and 2 had A_e ≥ 7.0.
We observed that both A_e and Het increased with an increasing number of alleles, with R² values of 0.6294 and 0.3166, respectively. Meanwhile, A_e increased with increasing Het (R² = 0.9222). The MHs with the highest Het (0.801–0.867) also had the highest A_e (5.026–7.547). In general, Het and A_e were larger when an MH had more alleles and when the allele frequencies were more even. We also observed that the MP, CMP, DP, CDP, PE, CPE, PIC, and TPI were 0.032–0.484 (average 0.127), 0.999180791, 0.516–0.968 (average 0.873), 1 − 3.109 × 10⁻⁴⁹, 0.086–0.747 (average 0.481), 1 − 8.727 × 10⁻¹⁶, 0.308–0.855 (average 0.692), and 0.770–4.029 (average 2.018), respectively. Among the 50 MHs, MH-8 showed the highest polymorphism, with Het, A_e, MP, DP, and PIC of 0.867, 7.547, 0.032, 0.968, and 0.855, respectively. After Bonferroni correction, we observed no significant deviation of the 50 MHs from HWE (p = 0.05/50 = 0.001) and no significant pairwise LD (p = 0.05/1225 = 0.00004081).

3.5. Mixture Analysis

The 50-MH panel was developed as a stand-alone forensic panel but could also be used as a complement to STR markers. To explore the detection threshold for the mixture ratio, the simulated two-person mixtures were genotyped at a series of ratios (1:1, 1:3, 1:5, 1:10, 1:20, and 1:40). Based on the sensitivity results, the minor DNA was fixed at 0.5 ng, and the major DNA was added according to the mixing ratio. The AGCU EX22 Kit (Applied ScienTech, Jiangsu, China) could only detect the complete genotypes of the major and minor DNA at the 1:1 ratio: the minor DNA was incompletely genotyped at the 1:3, 1:5, and 1:10 ratios, with partial dropout of the identity-informative STR alleles, and was undetectable at 1:20 and 1:40, with complete dropout of the identity-informative STR alleles. In contrast, the overall DoCs of the MPS-based 50plex MH panel were 24,597.48–41,927.99× (average 31,121.65×), and the panel was able to detect the complete genotypes of the major and minor DNA at a ratio as low as 1:40, with a maximum of 132 individual alleles. For a two-person mixture with 1 µL of input DNA, complete MH profiles of the minor DNA were observed at a ratio as low as 1:40, and 100% (61/61) of the unique alleles of the minor DNA were reported.

3.6. Analysis of Degraded Samples

The lengths of the DNA fragments ranged from 120 to 320 bp after the different DNase I treatment times (2.5, 5, 10, and 15 min). The degrees of degradation of the single-source and mixed samples, detected using the Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA), were consistent with the fragment distributions of the STR genotypes. Long-STR genotyping failed when randomly selected single-source DNA was treated with DNase I at 37 °C for 2.5, 5, 10, or 15 min. In contrast, the MPS-based 50-MH panel successfully obtained complete alleles for all single-source degraded DNA, with overall DoCs of 7336.50–18,408.12× (average 14,420.24×). Long-STR genotyping likewise failed when the simulated two-person mixtures were treated with DNase I at 37 °C for 2.5, 5, 10, or 15 min, whereas the overall DoCs of the MPS-based 50-MH panel were 1464.69–49,182.18× (average 22,211.13×). Complete profiles of the major and minor DNA were successfully obtained in the six degraded mixtures of the 1:10–2.5 and 1:10–5 types (except for one poor sequencing result caused by the low-quality library construction of the 1:10–5-1 sample).
Only 1–4 unique-allele (identity-informative allele) dropouts of the minor DNA were observed in the other four degraded mixtures of the 1:10–10 and 1:10–15 types, with overall detection rates of 93–100%. These results suggest that the 50plex MH panel is more efficient than CE-STRs in sequencing and genotyping degraded single-source and mixed DNA.

3.7. Species-Specific Analysis

Complete 50-plex MH genotypes were not obtained for any of the eight animal DNA samples with 1 µL of DNA input. For the animal DNA samples, the overall DoCs ranged from 103.00 to 548.00× (average 322.00×), and 2–8 MHs were detected per sample. At the marker level, the DoCs ranged from 33.00 to 337.00× (average 103.04×), and only 25 MHs, containing 1–4 alleles, were genotyped at all. The current data show that our panel produced only incomplete genotypes with very low signals for the different animal samples, indicating that the species specificity of the 50-plex assay is sufficient for routine casework.
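To make the combined forensic parameters reported above concrete, a minimal Python sketch of the Section 2.7 formulas is given below. The toy input values are invented for illustration and are not the published per-locus data; note that CMP here follows the authors' stated complement-product formula.

```python
from math import prod

def combined_parameters(mp, dp, pe):
    """Combined panel parameters per the Section 2.7 formulas.

    mp, dp, pe: per-locus match probability, discrimination power, and
    probability of exclusion. CMP follows the authors' stated
    complement-product formula, as do CDP and CPE.
    """
    cmp_ = 1 - prod(1 - x for x in mp)
    cdp = 1 - prod(1 - x for x in dp)
    cpe = 1 - prod(1 - x for x in pe)
    return cmp_, cdp, cpe

def effective_alleles(freqs):
    """A_e = 1 / sum(p_i^2) over the allele frequencies of one MH."""
    return 1.0 / sum(p * p for p in freqs)

# Toy three-locus panel (illustrative values, not the published data).
mp = [0.13, 0.10, 0.20]
dp = [1 - x for x in mp]  # DP = 1 - MP per locus
pe = [0.48, 0.50, 0.40]
print(combined_parameters(mp, dp, pe))          # ~(0.374, 0.997, 0.844)
print(effective_alleles([0.4, 0.3, 0.2, 0.1]))  # ~3.33
```

With 50 loci instead of three, the complement products shrink toward zero, which is why the reported CDP and CPE sit within 10⁻⁴⁹ and 10⁻¹⁶ of 1, respectively.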
In this study, we developed a thermodynamic-stability-based multiplex PCR (i.e., highly specific multiplex primers) capture-sequencing protocol targeting 50 MHs on the Illumina HiSeq platform. The forensic power of the 50-plex MH panel in 137 unrelated individuals was evaluated based on DoCs and ACRs. The panel performed adequately in terms of sensitivity, accuracy, polymorphism, forensic parameters, degraded samples, mixtures, and animal samples, indicating that it is a powerful forensic tool and a good supplement to, and enhancement of, existing detection methods. Building on our previous studies of 15 SNP–SNP MHs , we comprehensively optimized the MH screening, sequencing, and analysis protocols in this study. Microhaplotypes combine the advantages of STRs and SNPs, with no stutter peaks or amplification bias, short markers and amplicons, low mutation and recombination rates, and high polymorphism. They are recognized as powerful markers for various forensic purposes . Compared with phased Sanger sequencing and CE platforms, single sequence reads of MPS can cover a wide range of analyzed MHs and are highly informative following MH detection; therefore, they can be used to analyze true haplotypes. Moreover, MPS is a powerful platform for simultaneously analyzing several target regions and different sample types, thereby addressing relevant forensic questions in a single assay . At present, most MHs in reported panels are selected from published articles . The current screening approach is therefore not systematic, and its genome coverage is not extensive. The number of MHs in some panels is small, and some detection platforms still use first-generation sequencing [ , , , ]. Moreover, the analysis tools of some MPS panels, such as Flfinder and MHtyper , are tailored to their authors' own analyses. Because these panels are limited in the number of loci, their polymorphism, forensic parameters, and mixture-detection performance are also limited. To compensate for these deficiencies, we aimed to develop a method that quickly and effectively screens sets of short, high-A e MHs (composed of SNPs only) in a target population, using our MH screening software combined with the PHASE software and based on the 1000 G data . High-throughput sequencing of multiple markers and different sample types was performed on the MPS platform. Finally, automatic analysis of the sequencing data was performed using our purpose-built pipeline. We initially selected 178 candidate MHs and retained 50 MHs after six rounds of optimization to ensure the best system efficiency of the panel. Only one of the 50 MHs (MH-32) was included in the Kidd-reported MH panel (mh13KK-218) after comparison with the ALFRED database and other reported MHs; the remaining 49 MHs are novel and previously unreported ( ). The marker lengths and amplicon sizes of the 50 MHs were 11–81 bp (average 65.58 bp) and 123–198 bp (average 156.02 bp), respectively, shorter than those of other panels. For example, the marker lengths of panels of 60, 56, 40, 30, and 18 MHs have been reported as 20–116 , 17–218 , 8–114 , 63–423 , and 14–103 bp , respectively. The amplicons of panels of 74, 56, 30, and 21 MHs have been reported as 157–325 , 115–263 , 63–423 (average 216) , and 125–375 bp , respectively. The Het and A e of our 50 MHs were 0.335–0.867 (average 0.734) and 1.503–7.547 (average 4.192), respectively, higher than those of other panels. For example, Oldoni et al. reported Het and A e values for 74 MHs of 0.51–0.78 and 1.307–6.010 (median 2.706) , respectively.
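The screening strategy described here — scan the genome for tight clusters of common SNPs, phase them in a reference population, and rank candidate loci by A e — can be sketched as follows. The window and SNP-count parameters mirror the panel's reported marker lengths (≤81 bp); the phased-haplotype input would come from PHASE or pre-phased 1000 Genomes data. These functions illustrate the idea and are not the authors' screening software.

```python
from collections import Counter

def snp_clusters(positions, max_span=81, min_snps=2):
    """Maximal windows of sorted SNP positions spanning <= max_span bp;
    each window is one candidate microhaplotype."""
    positions = sorted(positions)
    clusters, j = [], 0
    for i in range(len(positions)):
        j = max(j, i)
        while j + 1 < len(positions) and positions[j + 1] - positions[i] <= max_span:
            j += 1
        window = positions[i:j + 1]
        if len(window) >= min_snps and (not clusters or window[-1] != clusters[-1][-1]):
            clusters.append(window)
    return clusters

def effective_alleles(haplotypes):
    """Ae = 1 / sum(p_i^2) from phased haplotype strings at one candidate."""
    counts = Counter(haplotypes)
    n = sum(counts.values())
    return 1.0 / sum((c / n) ** 2 for c in counts.values())

# Cluster hypothetical SNP coordinates, then rank candidates by Ae
print(snp_clusters([100, 130, 150, 400, 420, 900]))   # [[100,130,150],[400,420]]
candidates = {"cand-1": ["ACT", "ACT", "GCT", "GTA", "GTA", "ACA"],
              "cand-2": ["AA", "AA", "AA", "AG"]}
print(sorted(((round(effective_alleles(h), 2), k)
              for k, h in candidates.items()), reverse=True))
```

In a real screen, candidates with A e > 3.0 (and Het > 0.4) would be retained, matching the selection criteria discussed below.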
The A e values of panels of 56, 40, and 30 MHs have been reported as 1.74–6.98 (average 3.45) , 2.62–4.41 (average 3.61) , and 3.91 , respectively. Studies have shown that loci with Het > 0.4 and A e > 3.0 can be used effectively for individual identification, kinship testing, degradation and mixture analysis, and ancestry inference . Therefore, our panel has significant value for forensic applications. Among the 50 MHs, one (MH-24) retained three primer pairs after optimization and testing; the amplicons were identical and therefore did not affect the data analysis. We tested sensitivity gradients of 10, 5, 1, 0.5, 0.25, and 0.125 ng in triplicate and observed reliable typing down to 0.25 ng ( ). This provided a sound basis for setting the amount of minor DNA in the subsequent studies of non-degraded and degraded mixtures. A multiplex PCR-targeted capture and sequencing protocol based on MPS was used to obtain the complete genotypes of the 50 MHs from 137 unrelated Southwest Chinese Han individuals. In line with the sensitivity results, the DNA input for sequencing was 0.25–26 ng; the greater the DNA input, the higher the sequencing depth. The average DoC was 7928.39 ± 4990.95×, the average ACR was 0.90 ± 0.05, and 96% of MHs (48/50) showed an allele balance ratio ≥ 80% ( ), indicating a high sequencing efficiency for our panel. Each MH had an average of seven alleles, and 85.7% (300/350) of alleles had a frequency ≥ 0.01, with the highest being 0.803, indicating good polymorphism in our panel ( , ). The sensitivity of 250 pg is in the range reported for other MPS-based systems used for forensic STR analysis and will be sufficient for many routine applications. For samples with low DNA amounts, such as minute traces, touch DNA, or degraded samples, further improvement of our system will be required. For the sequencing data analysis, we first tried Flfinder, which we had developed earlier , but because the SNPs in some MHs lie very close together, the data could not meet Flfinder's input format requirements. Therefore, building on Flfinder, we created a set of Python and R scripts for MH calling. We compared read thresholds of 15×, 20×, 25×, and 30× and found that at 25× the alignment accuracy of the calls obtained by our pipeline and IGV was highest and was also consistent with Sanger sequencing ( , ). Heterozygosity (Het) is the most important parameter for familial identification: a higher Het at a locus increases the chance that the associated allele will be uncommon in a given population yet shared among relatives rather than unrelated individuals . In our study, A e increased with increasing Het, and the highest A e corresponded to the highest Het; this reflects the number and frequencies of the alleles in the population. The selection of the most informative markers for familial identification therefore depends on the A e value, which is also an important index for evaluating mixture-analysis capability . For our 50-plex MH panel, the Het values of 98% (49/50) of MHs were >0.40, the A e values of 80% (40/50) of MHs were >3.0, and the CDP and CPE were 1 − 3.109 × 10⁻⁴⁹ and 1 − 8.727 × 10⁻¹⁶, respectively ( ).
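A minimal sketch of the depth-thresholded calling step described above: per-read haplotype strings at one locus are tallied, alleles below the read-count threshold (25× was found optimal here) are discarded, and the allele coverage ratio (ACR, minor/major read count) is reported for heterozygotes. The 5% frequency floor and the function names are assumptions for illustration; the authors' pipeline is not published in this text.

```python
from collections import Counter

def call_mh(read_haplotypes, min_reads=25, min_frac=0.05):
    """Tally per-read haplotype strings at one MH and call <= 2 alleles.
    Returns (alleles, acr); acr = minor/major read count (1.0 if hom).
    A real pipeline would flag >2 surviving alleles as a possible mixture."""
    counts = Counter(read_haplotypes)
    total = sum(counts.values())
    kept = [(h, c) for h, c in counts.most_common()
            if c >= min_reads and c / total >= min_frac][:2]
    if not kept:
        return [], 0.0                       # no-call at this locus
    if len(kept) == 1:                       # homozygote
        return [kept[0][0]] * 2, 1.0
    (h1, c1), (h2, c2) = kept
    return [h1, h2], c2 / c1

# 180 reads of CGT, 140 of CAT, 6 stray CAA reads (noise -> filtered out)
alleles, acr = call_mh(["CGT"] * 180 + ["CAT"] * 140 + ["CAA"] * 6)
print(alleles, round(acr, 2))                # ['CGT', 'CAT'] 0.78
```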
These results show that our panel surpasses the capacity of the commonly used 23 STRs or 52 SNPs and of several other reported MH panels , indicating that it could be effective for future applications in individual identification, kinship testing, mixture interpretation, and non-invasive prenatal paternity testing (NIPPT) . For the undegraded mixtures, single degraded samples, and degraded mixtures, complete STR genotypes could not be detected using the AGCU EX22 Kit (Applied ScienTech, Jiangsu, China) (except for the 1:1 undegraded mixture) ( ). In contrast, our MPS-based panel recovered all complete MH genotypes ( and ). For the degraded mixtures, a ratio of 1:10 was selected for analysis because it was the lowest ratio at which STRs could still detect the mixture, and because it matches the actual proportion of cell-free fetal DNA (cffDNA) in maternal plasma (range 5–20%, average 9–10%) . We set DNA inputs of 1, 3, and 5 µL to explore the effect of different sequencing inputs on the resulting genotypes. The degraded fragments at 15 min were too short to be combined with STR genotyping, so only degradations of 2.5, 5, and 10 min were simulated. The Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) performed well in detecting the degraded samples, with results largely conforming to the fragment-size distribution of the STR genotypes ( ). The detection rate of the unique (effective) alleles of the minor DNA was 93–100% in the nine simulated degraded mixtures ( ). Of the 1, 3, and 5 μL DNA inputs, 3 and 5 μL performed better, which provides a solid basis for choosing the DNA input in further research on degraded mixtures. In addition, maternal plasma DNA containing cffDNA is essentially a special kind of degraded mixture, in which cffDNA accounts for about 10% on average and the median fragment length is about 143 bp owing to its apoptotic origin . We therefore suggest a DNA input of 3 or 5 µL for MPS to improve the detection rate in future studies of degraded mixtures and NIPPT.
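The panel-level figures quoted above (CMP, CDP, CPE) follow from the per-locus values by the standard product rules for independent loci, which the LD tests justify: CMP = ∏MPᵢ, CDP = 1 − CMP, and CPE = 1 − ∏(1 − PEᵢ). The sketch below feeds the reported per-locus averages into every locus, which only approximates the published figures, since the true products use each locus's own MP and PE.

```python
from math import prod

def combined_power(mp_per_locus, pe_per_locus):
    """Product rules across independent loci: CMP = prod(MP_i),
    CDP = 1 - CMP, CPE = 1 - prod(1 - PE_i). The complements are
    returned directly because CMP underflows the gap below 1.0."""
    cmp_ = prod(mp_per_locus)
    p_no_exclusion = prod(1.0 - pe for pe in pe_per_locus)
    return cmp_, p_no_exclusion

cmp_, p_ne = combined_power([0.127] * 50, [0.481] * 50)
print(f"CMP ~ {cmp_:.3e}  (CDP = 1 - CMP);  CPE = 1 - {p_ne:.3e}")
```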
In this study, we constructed an MPS-based 50-plex MH panel for forensic DNA analysis, combining multiplex PCR-targeted capture sequencing technology with a purpose-built calling pipeline. We comprehensively explored the panel's potential for forensic applications, including sensitivity, accuracy, polymorphism, forensic parameters, undegraded mixtures, single degraded samples, degraded mixtures, and species specificity. We also refined the primer optimization of the panel, explored the influence of different DNA inputs on the efficiency of MH detection in mixtures, and developed a universally applicable MH forensic analysis software package. Furthermore, the panel characterizes a new set of 49 MHs, which may contribute to an international community consensus on a possible core MH panel. In summary, the current findings demonstrate that our MPS-based 50-plex MH panel is a unique and powerful DNA tool and an alternative method that can complement traditional STRs, improving mixture interpretation and the efficiency of kinship testing. Our future studies will focus on more family sample pairs to evaluate the value of the panel in NIPPT.
|
Whole Mitochondrial Genome Detection and Analysis of Two- to Four-Generation Maternal Pedigrees Using a New Massively Parallel Sequencing Panel
|
40a72c83-7690-458d-baa4-e31f3f04de9e
|
10137955
|
Forensic Medicine[mh]
|
Circular double-stranded mitochondrial DNA (mtDNA) is commonly analyzed in forensic cases where the samples are aged or lack nuclear DNA (e.g., rootless hair shafts or aged bones) . It is also frequently used in genealogy, archaeology, evolutionary anthropology, and medical genetics . As mtDNA haplotypes are theoretically identical among individuals from the same maternal lineage, mtDNA is often used in maternal familial investigations and for tracing the ancestry of remains . However, with traditional Sanger-type sequencing, detection of the whole mitochondrial genome (mtGenome) is laborious and time-consuming; thus, only the hypervariable regions are usually examined in forensic practice. Additionally, the comparison guidelines were constructed for the control region only . However, several studies have demonstrated that variants in the coding region of the mtGenome can improve the discrimination power and haplogroup estimation [ , , , ]. Moreover, limited by the sensitivity of electrophoretograms, a minor allele frequency (MAF) of 15~20% has typically been used for point heteroplasmy (PHP) calls . Additionally, distinguishing length heteroplasmy (LHP) in homopolymeric stretches (C-stretches) often fails due to the fluorescence-detection nature of Sanger-type sequencing. At present, massively parallel sequencing (MPS), whereby a large number of markers and samples can be detected in one sequencing run , provides a new platform for mtGenome typing. As one of the first forensic genetic markers evaluated using MPS, mtDNA has been studied in populations worldwide [ , , , , ] and in various tissues [ , , , , ] through a variety of MPS multiplexes covering the control region and the mtGenome. In addition, the MAF used for PHP calls ranges from 1% to 10% [ , , , , ], even dropping to as low as 0.15% . Successful detection of C-stretches has also been achieved . Studies evaluating the germline bottleneck size and mtDNA transmission in pedigrees, and studies distinguishing between mothers and offspring, have also been carried out with MPS detection of the mtGenome. Both long-amplicon multiplexes (usually divided into two ~8 kb amplicons, or one ~9 kb amplicon plus one ~11 kb amplicon) [ , , ] and short-amplicon multiplexes (e.g., the Precision ID mtDNA Whole Genome Panel) have been established for the mtGenome. Long amplicons can decrease the risk of contamination by nuclear mitochondrial DNA segments (NUMTs) and of amplification failure caused by SNPs in the primer-binding region, but the requirement for input genomic DNA (gDNA) is relatively high (usually 1 ng) . In addition, long amplicons are hard to amplify across homopolymeric stretches, and the fragmentation steps prior to sequencing increase the hands-on time . Conversely, short-amplicon multiplexes require a relatively low quantity of input gDNA (usually 100 pg), although NUMT contamination is non-negligible . Short-amplicon multiplexes are also more suitable for degraded samples, and no extra fragmentation step is required, as the amplicon size matches the length of the sequencing reads. The newly released ForenSeq mtDNA Whole Genome Kit (abbreviated as the ForenSeq mtGenome Kit below; Verogen, San Diego, CA, USA) uses a two-PCR approach in which small overlapping amplicons (60–209 bp, mean 131 bp) are generated to cover the whole mtGenome using a small amount of input DNA (20 pg to 100 pg).
A total of 245 amplicons generated by 663 primers are enriched in two tiled primer mixes (with a 17 bp overlap on average and a 3 bp overlap at minimum). Additionally, the ForenSeq Universal Analysis Software v2.1 (abbreviated as ForenSeq UAS v2.1 below; Verogen, San Diego, CA, USA) can automatically generate mtDNA variant calls that follow the nomenclature of the Scientific Working Group on DNA Analysis Methods (SWGDAM) after comparison with the revised Cambridge Reference Sequence (rCRS). Meanwhile, a file in the format required by EMPOP can be exported for further use. The present study aimed to evaluate the performance of the ForenSeq mtGenome Kit in detecting the mtGenome in blood samples and hair shafts, the ability of this panel to distinguish heteroplasmy, and the characteristics of variant transmission and variant differences between maternal relatives. We therefore used this well-validated multiplex to sequence blood samples and hair shafts from thirty-three individuals belonging to two- to four-generation pedigrees. The sequencing results of blood samples and hair shafts from the same individuals and the concordance between twice-sequenced libraries were evaluated. We present the PHPs and LHPs in detail. Additionally, variant transmission between mother–offspring pairs and variant differences between individuals across ten further types of maternal relationship were also estimated.
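The tiling design described above (overlapping small amplicons covering a circular genome with a minimum 3 bp overlap) is straightforward to sanity-check computationally. The sketch below flags neighbouring amplicon pairs whose overlap falls below the minimum, treating the mtGenome as circular; the coordinates in the example are invented, not the kit's actual primer positions.

```python
MT_LEN = 16_569

def low_overlap_pairs(amplicons, min_overlap=3):
    """Indices of neighbouring amplicons (sorted by start) whose overlap is
    below min_overlap bp; the genome is circular, so the last amplicon is
    also compared with the first. Coordinates are 1-based, inclusive; an
    end > MT_LEN denotes an amplicon wrapping past the origin."""
    amps = sorted(amplicons)
    flagged = []
    for i in range(len(amps)):
        s1, e1 = amps[i]
        s2, _ = amps[(i + 1) % len(amps)]
        if i == len(amps) - 1:              # wrap around the circle
            s2 += MT_LEN
        overlap = e1 - s2 + 1
        if overlap < min_overlap:
            flagged.append((i, (i + 1) % len(amps), overlap))
    return flagged

# Toy tiling; the final amplicon wraps past the origin, and the pair at
# indices (1, 2) overlaps by only 2 bp, so it is flagged.
toy = [(1, 150), (131, 280), (279, 430), (401, 16_460), (16_450, MT_LEN + 60)]
print(low_overlap_pairs(toy))               # [(1, 2, 2)]
```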
2.1. Materials and Samples
All samples were anonymously collected from Guangdong Han Chinese people with informed consent. This study was approved by the Ethics Committee of Zhongshan School of Medicine, Sun Yat-sen University (No. [2020]044). Both peripheral blood samples and hair shaft samples were collected from 11 individuals in a 4-generation pedigree, and peripheral blood samples were collected from 6 individuals in a 3-generation pedigree and 16 individuals in 8 2-generation pedigrees (the family trees for the 4- and 3-generation pedigrees are presented in ). In total, 44 samples from 10 pedigrees were obtained, comprising 33 blood samples and 11 hair shafts. Blood was collected using an EDTA anticoagulant tube or a sterilized filter, and rootless hair shafts were obtained by cutting 0.5 cm above the scalps of the individuals.
2.2. DNA Extraction and Quantification
Prior to DNA extraction, the hair shafts were cleaned with 1% SDS, 5% (w/v) NaClO, sterilized distilled water, 10% ethanol, and 100% ethanol, in that order . The first 2 cm of each individual's hair shaft was used for the following steps. The genomic DNA of the hair shafts or blood samples was extracted using the QIAamp DNA Investigator Kit (Qiagen, Hilden, NRW, Germany; cat#56504) following the manufacturer's instructions and was quantified on a Qubit 3.0 fluorometer using the Qubit dsDNA HS Assay Kit (Invitrogen, Eugene, OR, USA). Nuclease-free water was used as a negative extraction control during DNA extraction (NC-EXH denotes the negative extraction control for hair shaft extraction; NC-EXB denotes the negative extraction control for blood sample extraction).
2.3. Library Preparation and Sequencing
DNA libraries were constructed using the ForenSeq mtGenome Kit according to the manufacturer's instructions . Briefly, the gDNA sample HL60 (Millipore-Sigma, St. Louis, MO, USA) was used as a positive amplification control, and nuclease-free water (provided in the ForenSeq mtGenome Kit) was used as a negative amplification control (NC-AMP). Thus, a total of 48 libraries were constructed. The libraries were then purified once and normalized using the bead-based method according to the instructions. A total of 16 normalized libraries were pooled together, and 5 μL of the pooled libraries was denatured, diluted, and finally added to the Miseq FGx Reagent Kit cartridge (Illumina, San Diego, CA, USA) for the first sequencing run, as recommended, to undergo paired-end sequencing-by-synthesis reactions on the Miseq FGx instrument (Illumina, San Diego, CA, USA). The number of pooled libraries and the volume of loaded dilution in the following four runs were adjusted dynamically according to the preceding sequencing quality ( ). Sixteen libraries were sequenced twice for the concordance study, resulting in a total of sixty-four sequenced libraries. Strict cleaning and separation procedures were used to control and avoid contamination throughout the experiment, following the recommendations of the International Society of Forensic Genetics (ISFG) and SWGDAM [ , , ].
2.4. Variant Calling and Data Analysis
The raw sequencing data were first analyzed using ForenSeq UAS v2.1 . The sequencing quality metrics and total depth of each sample were obtained directly from the software. The read depth at each position was obtained from the VCF file.
The strand bias, which measures the balance between forward and reverse reads at a particular position, was calculated as 1 − (read depth of the direction with the smaller number of reads / read depth of the direction with the larger number of reads). A strand bias value of 0 indicates no strand bias, and 1.0 indicates reads in only one direction . The mtGenome coverage of a sample was calculated as the count of genotyped nucleotide positions divided by 16,569 (the expected total number of positions). The relative read depth (RRD) was calculated as the read depth of the negative control divided by the read depth of the positive control, to evaluate the contamination level of the negative controls. NUMTs and byproducts were removed automatically by UAS v2.1, and sequence strings with mixed bases were screened with BLAST tools ( https://blast.ncbi.nlm.nih.gov/ , accessed on 20 December 2022). For variant calling, the Verogen mtDNA whole-genome analysis method (default thresholds) was used; i.e., a variant call is supported when it meets or exceeds the analytical threshold (AT, 6%), the interpretation threshold (IT, 6%), the minimum Q-score (Q-score = 30), and the minimum read count (45 reads). The frequency of a variant is the total number of reads for the particular variant call divided by the total number of reads at that nucleotide position. The variants called using UAS v2.1 were checked using the 'Alignment' function on the EMPOP website , wherein SAM 2.0 was used on the basis of 5440 haplogroup motifs (PhyloTree, Build 17 ), following the phylogenetic concept and the recommendations of the ISFG . Two C-stretch-related variants that deviated from the nomenclature were corrected manually as follows: (1) 16189c should be reported as 16189C and 16193c; (2) 310Y should be reported as 309.1, 309.2, etc., depending on the number of insertions in the read for this region . All unexpected variants and heteroplasmies were checked using the Integrative Genomics Viewer (IGV) . Haplogroups of the mtDNA haplotypes were assigned using the 'Haplogrouping' function on the EMPOP website . The variant concordance between the twice-sequenced libraries was also evaluated. The pairwise differences in coverage, total depth, and variant frequency were measured as the targeted data of the second sequencing run minus those of the first sequencing run. Figures were generated using Excel and the 'ggplot2' package in R.
2.5. Variant Transmission and Variant Differences in Maternal Pedigrees
Variant transmission was evaluated between mother–offspring pairs, and variant differences were evaluated between individuals across all maternal relationships. To classify the transmission of heteroplasmy between relatives, the types of heteroplasmy were defined as follows: inherited heteroplasmy was observed in both the mother and her offspring; de novo heteroplasmy was observed only in the offspring and was absent in the mother; disappearing heteroplasmy was observed only in the mother and was absent in her offspring . The MAF change during transmission was estimated as the MAF of the later generation minus the MAF of the former generation . Considering that various types of tissue may be used in forensic genetics applications, the variants in the blood samples and hair shafts of two maternal relatives were cross-compared in pairs.
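The quality metrics and calling thresholds defined in Section 2.4 can be written down compactly. The sketch below implements the strand-bias formula, the coverage and RRD definitions, and the default UAS v2.1 support test (6% threshold, Q ≥ 30, ≥45 reads) exactly as stated above; the function names are ours, not Verogen's.

```python
MTGENOME_LEN = 16_569

def strand_bias(fwd: int, rev: int) -> float:
    """1 - (minor-direction depth / major-direction depth);
    0 = balanced, 1.0 = reads in one direction only (0.0 if no reads)."""
    lo, hi = sorted((fwd, rev))
    return 0.0 if hi == 0 else 1.0 - lo / hi

def mtgenome_coverage(genotyped_positions) -> float:
    """Fraction of the 16,569 reference positions with a base call."""
    return len(set(genotyped_positions)) / MTGENOME_LEN

def relative_read_depth(neg_depth: float, pos_depth: float) -> float:
    """RRD: negative-control depth over positive-control depth."""
    return neg_depth / pos_depth

def variant_supported(var_reads: int, total_reads: int, q_score: float,
                      at: float = 0.06, min_q: float = 30,
                      min_reads: int = 45) -> bool:
    """Default UAS v2.1 analysis method: frequency >= 6% (AT/IT),
    Q-score >= 30, and >= 45 supporting reads."""
    freq = var_reads / total_reads if total_reads else 0.0
    return freq >= at and q_score >= min_q and var_reads >= min_reads

print(strand_bias(900, 45))            # 0.95 -> strongly biased position
print(variant_supported(50, 700, 34))  # True: 7.1% freq, Q34, 50 reads
```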
3.1. Sequencing Overview
A total of 64 libraries (4 of which were controls) were sequenced in this study, of which 16 libraries were sequenced twice. The average cluster density was 1560.6 ± 228.32 k/mm², and the average total read depth was 562,592 ± 331,109× across all sequenced libraries ( , ). The average total read depths for the anticoagulant blood samples, blood stain samples, and hair shafts were 362,102 ± 99,978×, 912,202 ± 228,578×, and 953,650 ± 196,052×, respectively. As shown in , relatively low average read depths were mainly observed at nucleotide positions (nps) 303–347, 3550–3606, 5307–5347, 6718–6810, 12,466–12,614, and 15,519–15,581. The average read depths of the control region and the 37 gene-coding regions are shown in . The mtGenome coverage of the 60 target libraries ranged from 97.37% to 100% (average 99.61% ± 0.60%). In the results of the 60 sample libraries, only 1.68% of all positions had a strand bias value over 0.6 ( ). Most of the strand bias values were equal to 1.0 at nps 303~347, indicating only one direction of sequencing reads. In addition, high strand biases were observed at nps 1003–1018, 1095–1176, 1495–1546, 2262–2266, 2684–2694, 4565–4665, 6142–6149, 6784–6810, 7560–7590, 7960–7985, and 13,496–13,513. A total of 34 variants were observed in HL60 using the default analysis thresholds, with an average read depth of 4107.34 ± 3378.87×. Among the observed variants, 33 were SNPs and one was an insertion (315.1C). No base call was observed at np 6734. The haplogroup of HL60 was assigned as J2b1a1a via the 'Haplogrouping' tool. As for the negative controls, no base calls or variants were called in NC-AMP or NC-EXB using the default thresholds, while 22 variants were called in NC-EXH, with an average read depth of 627.68 ± 464.62×. The average RRDs of NC-AMP versus HL60, NC-EXB versus HL60, and NC-EXH versus HL60 were 0.07%, 0.03%, and 5.40%, respectively ( ). Among the 16 twice-sequenced libraries, both the total read depth and the mtGenome coverage were higher in the second sequencing run than in the corresponding libraries of the first run, except for P15-H and P14-B ( ). The pairwise depth differences were 40,738×~267,576×, and the pairwise coverage differences were 0.07~1.94% ( ). Thus, with these exceptions, the read depth and coverage in the second run exceeded those in the first for all twice-sequenced libraries. When comparing the observed variants between the two sequencing runs, however, differences occurred only in the C-stretch regions (see for details). The results of the better-performing libraries were used for the following analyses. As for the 11 hair shafts and the corresponding blood samples from the same individuals, the total read depth showed no consistent tendency towards either tissue ( ), and the mtGenome coverages were all 100%, except for P15-H (99.67%; no base call was reported at 54 nucleotide positions).
3.2. mtDNA Variant Polymorphism
In the blood samples from the 33 individuals, a total of 1247 variants of 178 types were observed at 172 nucleotide positions. Of the 178 variant types, 167 were SNPs, 8 were insertions, and 3 were deletions ( ). A total of 396 variants (31.76%; 56 types) were observed in the control region, 849 variants (68.08%; 121 types) in the coding region, and 2 variants (1 type) in the non-coding region ( ).
The largest numbers of variants were observed in the 12S rRNA (RNR1) and cytochrome b (CYB) gene-coding regions, with 136 and 132 variants, respectively. The variant distribution at each nucleotide position is presented in . Variants 73G, 263G, 315.1C, 1438G, 2706G, 4769G, 7028T, 8860G, 11719A, 14766T, and 15326G were observed in all samples. Ten different mtDNA haplotypes were observed among the mothers of the ten pedigrees in the control region, the coding region, and the whole mtGenome. Within the coding region, different haplotypes were observed in 17 gene-coding regions, while the remaining 20 regions showed identical haplotypes. The largest numbers of haplotype types were observed in NADH dehydrogenase 5 (ND5), ATP synthase 6 (ATP6), and CYB, with nine, eight, and eight haplotypes, respectively ( ). The tri-allelic variants T9824A/C and C13683A/G were observed in this study: at np 9824, variant call A was observed in four samples and variant call C in two samples; at np 13,683, variant calls A and G were each observed in two samples. A total of 10 unique haplogroups were assigned across the 10 pedigrees, of which 5 were nested in super-haplogroup M and 5 in super-haplogroup N. The assigned haplogroups were the same among family members from the same maternal pedigree ( ).
3.3. Heteroplasmy
Among the 33 blood samples, 12 (36.36%) were observed to have 1~2 PHPs (14 PHPs in total; ), of which 10 were a mixed base of C and T (Y), 2 were a mixed base of A and C (M), 1 was a mixed base of A and G (R), and 1 was a mixed base of C and G (S). Four PHPs were observed in the control region, and the remaining ten in the coding region. The MAFs of these PHPs were 6.39%~35.95%. Among the 11 hair samples, 7 (63.64%) were observed to have 1~3 PHPs (12 in total; ), of which 3 were a mixed base of C and T (Y), 2 were a mixed base of A and C (M), 6 were a mixed base of A and G (R), and 1 was a mixed base of C and G (S). One PHP was observed in the control region, and the remaining eleven in the coding region. The MAFs ranged from 6.97% to 49.09% ( ). The ND5 gene region contained the largest number of PHPs in both tissues (four in the blood samples and three in the hair shafts). In the 11 pairs of same-origin blood samples and hair shafts, no overlapping PHPs were observed in any tissue pair. Length variations differing from the rCRS were observed at nps 303–315, 955–966, 12,417–12,426, 16,180–16,194, 249, 514–524, and 8271–8279. The sequence variations in the first four regions were caused by base insertions of C or A (np 12,425), while the variations in the last three regions were due to base or fragment deletions. Except for np 249 (249DEL), LHPs were observed in all the regions described above ( ). To locate the exact position of the poly-C or poly-A stretches, we took the non-repetitive bases at both ends of a fragment as anchor positions to extract the target fragment, and used an abbreviated form to denote the variant. For example, variant type 'A-7C-T-6C-G' represents the sequence 'ACCCCCCCTCCCCCCG' from nps 302 to 316, wherein the consecutive Cs are interrupted by 310T; compared with the rCRS, the variant of this fragment is 315.1C. The 315.1C variant was observed in all 44 samples, except for 1 sample that presented no base call in this region. Additionally, more than 7 Cs were observed at nps 303–309 in 35 samples. No T > C transition was observed at np 310.
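Two small helpers make the heteroplasmy bookkeeping above concrete: a PHP classifier that reports the IUPAC mixed-base code (Y, M, R, S, …) when the minor allele frequency reaches the 6% threshold used in this study, and a run-length encoder that renders a stretch in the anchored notation introduced above (e.g., 'ACCCCCCCTCCCCCCG' → 'A-7C-T-6C-G'). Both are illustrative sketches, not the UAS implementation.

```python
from itertools import groupby

IUPAC = {frozenset("CT"): "Y", frozenset("AC"): "M", frozenset("AG"): "R",
         frozenset("CG"): "S", frozenset("AT"): "W", frozenset("GT"): "K"}

def classify_php(base_counts: dict, maf_threshold: float = 0.06):
    """Return (call, MAF): the IUPAC mixed-base code if the second-most
    frequent base reaches the MAF threshold, else the major base."""
    ranked = sorted(base_counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(base_counts.values())
    (b1, c1), (b2, c2) = ranked[0], ranked[1]
    maf = c2 / total
    if maf >= maf_threshold:
        return IUPAC.get(frozenset((b1, b2)), "N"), maf
    return b1, maf

def stretch_signature(seq: str) -> str:
    """Run-length form used for C-stretch haplotypes in the text,
    e.g. 'ACCCCCCCTCCCCCCG' -> 'A-7C-T-6C-G'."""
    parts = []
    for base, group in groupby(seq):
        n = len(list(group))
        parts.append(f"{n}{base}" if n > 1 else base)
    return "-".join(parts)

print(classify_php({"C": 720, "T": 165, "A": 3, "G": 2}))  # ('Y', ~0.185)
print(stretch_signature("ACCCCCCCTCCCCCCG"))               # A-7C-T-6C-G
```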
LHP was observed when eight consecutive Cs were present. As shown in , the 'A-8C-T-6C-G' variant type was related to two haplotypes (309.1C and 309.1c), where the difference lay in the frequency (or read count) of the reads with eight Cs. The T961C variant led to a sequence with more than 10 consecutive Cs at nps 955–966. A total of 20 samples showed a sequence of 12 consecutive Cs of varying length, and 4 samples were observed to have a sequence of 13 consecutive Cs. It is also worth mentioning that the read count dropped precipitously at np 964, where the average read count was only 21.97% ± 5.23% of that at np 963. Similarly, the length variation at nps 16,180–16,194 was caused by T16189C, and the T > C transitions at nps 16,182 and 16,183 exacerbated the C-stretches. In the only polyadenine stretch, LHP occurred when nine consecutive As were present at np 12,425. Mixed variation in 523A, 524C, and 523–524DEL (with an average frequency of 84.52% ± 0.98%) was observed at nps 523–524. A similar situation was observed for the 9 bp deletion fragment at nps 8272–8280 ( ). The average frequency of the deletion at nps 8275–8280 was 88.20% ± 2.52%, which led to a mixture of base calls and deletions under the default thresholds.
3.4. Variant Transmission between Mother and Offspring Pairs
A total of 53 comparisons were carried out for the 23 mother–offspring pairs ( ): 23 comparisons between the blood samples of the two members, 10 between their hair shafts, 10 between the mother's blood sample and the offspring's hair shaft, and 10 between the mother's hair shaft and the offspring's blood sample. Both homoplasmic and heteroplasmic variants were compared. As T961C was related to the LHPs at nps 956–965, the variant differences at np 961 were incorporated within the LHPs at nps 956–965, and differences were counted once for this region. The mtDNA haplotypes were completely identical in 11 comparisons, and consistent haplotypes were observed in 49 comparisons when heteroplasmic variants were ignored. All homoplasmic variants were transmitted from mother to offspring in both the blood samples and the hair shafts, except in the hair shaft of P06 (P06-H), in which a de novo 4475C (located in the ND2-coding region) was observed; thus, the haplotype of P06-H differed from the haplotypes of P06's mother and offspring. When heteroplasmic variants were considered, 42 comparisons showed different haplotypes ( ). The differences were observed in the control region, RNR1, TV, TL1, ND1, TA, TY, ATP6, ND4, ND5, ND6, and CYB. The greatest number of differing regions was observed between P01-H and P08-H, with five; the largest number of pairwise variant differences was also observed in this comparison pair, with five different PHPs and two different LHPs ( ). Inherited PHPs were observed in Family 1 (152Y), Family 8 (3386Y), P08–P15 (9083Y), and P12–P18 (14215Y); de novo and disappearing PHPs formed the majority ( ). The pairwise MAF differences between mother and offspring were 10.19% and 12.54% for 3386Y and 9083Y, versus 1.31% for 152Y and 0.51% for 14215Y ( ). A total of six groups of immediate three-generational family members (i.e., grandmother–mother–offspring) and one group of immediate four-generational family members (i.e., grandmother–mother–offspring–great-grandchild) were included in this study.
PHPs were observed in both the blood samples and the hair shafts of the four-generational group and two of the three-generational groups, and in the hair shafts only of one further three-generational group ( and ). Transmission of PHPs 13678M and 9083Y was observed in the blood samples of the grandmother–mother–offspring–great-grandchild group (P01–P03–P12–P18) and of a grandmother–mother–offspring group (P01–P08–P15), respectively. Additionally, transmission of PHP 13679M was observed in the hair shafts of the P01–P08–P15 group ( ). The MAF changes during transmission across the generations are shown in . As the figure shows, de novo/disappearing PHPs made up the majority of the PHPs, and no PHP was transmitted throughout an entire pedigree.
3.5. Variant Differences between Maternal Relatives in Three/Four-Generation Pedigrees
Besides the mother–offspring pairs, comparisons were also carried out for 10 other types of maternal relationship ( ). As with the mother–offspring pairs, when only homoplasmic variants were compared, an mtGenome haplotype difference was observed only in pairs that included P06-H (eight full-sibling pairs, six maternal aunt–nephew/niece pairs, and two maternal grandaunt–grandnephew pairs). When the heteroplasmic variants were taken into consideration, different mtDNA haplotypes were observed in most of the comparisons, especially those of third-degree relationships, where differences were observed in almost all comparisons ( ). The regions involved were the same as those in the mother–offspring pairs. The number of differing comparisons for each type of maternal relationship is shown in and .
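The pairwise comparisons summarised in this section reduce to simple set operations on variant lists. The sketch below separates differences at heteroplasmic positions from homoplasmic differences, mirroring the two-tier comparison used above (haplotypes "consistent" when heteroplasmies are ignored vs. fully identical); the sample data are invented.

```python
def compare_relatives(vars_a: set, vars_b: set, php_positions: set):
    """Split pairwise variant differences into homoplasmic differences and
    differences attributable to heteroplasmic positions. Variants are
    (position, call) tuples."""
    diff = vars_a ^ vars_b                        # symmetric difference
    hp_diff = {v for v in diff if v[0] in php_positions}
    return diff - hp_diff, hp_diff

mother = {(73, "G"), (263, "G"), (3386, "Y")}     # 3386Y: PHP in the mother
child = {(73, "G"), (263, "G")}                   # PHP not inherited
homo, hetero = compare_relatives(mother, child, php_positions={3386})
print("homoplasmic diffs:", homo, "| heteroplasmy-related diffs:", hetero)
# -> no homoplasmic differences: haplotypes consistent once PHPs are ignored
```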
4.1. Sequencing Overview
In this study, we used a newly released mtDNA whole-genome kit designed with small amplicons (averaging 131 bp) to sequence the mtGenome in blood samples and hair shafts. Amplification of small amplicons is more effective for degraded samples, although the risk of NUMT contamination increases . The two-PCR approach with tiled amplicons facilitates the confirmation of variants residing at primer-binding sites: when a primer-binding-site mutation lies under a primer in one primer set, that variant can still be reliably detected in the amplicons extended from the companion primer set . This strategy ensures successful detection of the mtGenome with high genome coverage. Nevertheless, we still observed a low read depth, or even no base call, in some nucleotide fragments in most of the samples, and these fragments overlapped with those in the validation study of the ForenSeq mtGenome Kit . This phenomenon may be caused by amplification failure at these positions. In this study, the quantity of input DNA and the processes before library pooling followed the manufacturer's instructions exactly. When using the recommended number of pooled libraries (16) and the recommended volume of pooled libraries (5 μL) added to the Miseq sequencing reagent cartridge, 73.33% of the sequenced libraries (11/15, negative control not included) showed a total read depth lower than 300,000×. To maximize data coverage and quality, we lowered the number of pooled libraries to 12 and increased the volume of pooled libraries to 5.3 μL in the later sequencing runs. These adjustments proved effective at increasing the total read depth and mtGenome coverage, as seen across the results of runs 1, 4, and 5 ( ). Thus, reducing the number of pooled libraries and increasing the volume of pooled libraries can help to obtain more sequencing reads and larger coverage, which is meaningful for challenging samples. On the other hand, the total read depths of the anticoagulant blood samples (runs 4 and 5, collected using EDTA anticoagulant tubes) were only half those of the blood stains (run 3, collected using a sterilized filter) and hair shafts (run 2). Setting aside differences in the concentrations of the input DNA and normalized libraries, this phenomenon may be caused by the in vitro time of the samples and the storage methods, as the anticoagulant blood samples had been in vitro for three years, and freeze–thaw cycles can decrease the mitochondrial membrane potential and damage the mitochondria . This observation suggests that materials should be stored under relatively stable conditions and protected from repeated freezing and thawing; we recommend preparing materials as dry stains and storing them under dry, constant-temperature conditions where possible. The kit performed well in terms of sequence strand balance. The percentage of positions with strand bias (1.68%) was lower than that reported for samples sequenced on the Ion Torrent PGM platform (10%) , the Ion Torrent S5 platform (16%) , and MiSeq FGx (3.06%, with 2 long mtDNA amplicons) . Owing to the poly-C stretches, the forward sequences between nps 303 and 347 did not meet the alignment requirements and were soft-clipped, thus leaving sequencing reads in only the reverse direction .
Completely consistent homoplasmic variants were observed in the HL60 samples in our study, in previous studies , and in the SRM2392-I certificate , while the PHPs involved differed at three nucleotide positions (nps 2445, 4821, and 12,071) ( ). At np 2445, the genotype was T2445 in the SRM2392-I certificate and in our study, while it was 2445Y in . At np 4821, the other three studies reported an rCRS genotype (G4821), while our study showed PHP 4821R (7.4% allele A). At np 12,071, the other three studies reported PHP 12071Y, while in our study an rCRS genotype (T12071) was observed, with a frequency of 1.1% for allele C, which was not identified as a PHP allele. According to the study by Cihlar et al. , lot-to-lot variation in control DNAs has been observed; thus, it is not surprising that different PHPs were observed in HL60 samples from different lots. Additionally, with the high sensitivity and resolution power of MPS, we were able to distinguish more PHPs at lower thresholds. In the present study and in a validation study of the same mtGenome detection kit , 6% was used for PHP calling, whereas the value was 10% in , and the haplotypes in the SRM2392-I certificate were confirmed using Sanger-type sequencing. Moreover, the sequencing platform, sequencing chemistry , and analysis software can also introduce variability. In spite of the differences in the nucleotide positions of the PHPs, the sequencing results for HL60 in this study are reliable. Owing to the high sensitivity of the mtGenome multiplex and the unavoidable inclusion of aerosols during pipetting, sporadic base calls were inevitable in the negative controls. When the default variant-calling thresholds were used, no variant calls were observed in NC-EXB or NC-AMP, indicating negligible contamination during the DNA extraction of the blood samples and during library construction and sequencing. On the contrary, notable variants were present in the negative control of the hair shaft extraction (NC-EXH). Among the 22 observed variants, 8 overlapped with HL60 and P01-H. When compared with all the samples in this study, nine variants still could not be traced. Additionally, these variants were not recorded in the EMPOP database , and many unexpected variants were observed when assigning the haplogroup of NC-EXH. We therefore infer that the contamination of NC-EXH was neither single-sourced nor derived from the study samples. Given the consistent haplotypes of the blood samples and hair shafts from the same individuals and the consistent haplotypes between maternal relatives (heteroplasmic variants not considered), we conclude that this source of contamination had little influence on the detected samples and can be ignored when performing variant calls in hair shaft samples. Overall, the ForenSeq mtGenome Kit is suitable for mtDNA detection in blood samples and in challenging samples such as rootless hair shafts, and the sequencing results in this study are reliable.
4.2. mtDNA Polymorphism and Heteroplasmy
The distribution of variants across the mtGenome in this study was similar to that in the North Han Chinese population reported by Zhou et al. and in the Shanghai Han Chinese population reported by Ma et al. . Similar distributions of variants between the control region and the coding region were observed in previous studies (72.16% ± 1.10% on average) [ , , , ].
In this study, 100% unique haplotypes were observed across the 10 pedigrees when considering the control region, which illustrates the high polymorphism of the control region even though only 1122 bp of the mtDNA sequence were included ( ). Meanwhile, some studies have also reported an increase in the power of evidence when comparing the mtGenome with the control region, wherein the haplotype diversity increased by 0.02% to 0.21% and the random match probability decreased by 2.02% to 35% [ , , ]. In this study, relatively high polymorphism was observed in the coding regions of ATP6, ND5, CYB, ND2, and CO1, with nine, nine, eight, seven, and seven haplotypes, respectively ( ). This suggests that the control region and these high-polymorphism coding regions could be detected selectively to balance discrimination power and cost. However, no overlapping PHPs have been observed across previous studies [ , , , ], which indicates that the nucleotide positions of PHPs occur at random [ , , ]. Moreover, in contrast to Li et al.'s observation that most PHPs (MAF = 2%) were distributed in the control region across 12 types of tissues from 152 individuals, 83.33% of the PHPs in this study were observed in the coding region, in agreement with previous studies using MPS platforms (84.38% in , 75.00% in , 70.83% in , and 64.49% in ). No shared PHPs were observed in the blood and hair shaft pairs, although only 11 individuals were examined in this study. The haplogroup assignments were consistent between the blood samples and corresponding hair shafts and were also consistent among relatives in the same pedigree, indicating that the variants at the heteroplasmic positions are not the variants expected for haplogroup assignment and do not influence the assignment results. Extreme strand bias was observed only at nps 303–315 and at np 16193.1 in the 16,180–16,194 fragment. 310T interrupted the C-stretches at nps 303–315, resulting in nine consecutive Cs in this region. Variant 309.1c was called when the read depth and frequency of both the reads with the insertion of C and the reads without the insertion (reference reads) reached the thresholds; otherwise, variant 309.1C was called. A relatively high frequency could occur when the total read depth was low, even with a read count of less than 45 reads (e.g., 20×/100× = 20%); nonetheless, a homoplasmic variant was called. A similar situation existed in the np 955–966 and np 16,180–16,194 fragments. An N-1 stutter at np 16,189 was observed, with a stutter read count that was 13.74% of that of the parental allele; this is similar to the mechanism that produces stutter products in the detection of short tandem repeat (STR) markers . Although the dinucleotide repeat (AC)5 was observed in the 514–524 fragment, an average frequency of 84.52% ± 0.98% of (AC)4 was also observed. The (AC)4 repeat is more frequent in Asian populations . The 9 bp deletion between nps 8272 and 8280, which has been extensively investigated in the Chinese population , was also observed in this study. Positions 12,418–12,425 are an 8 bp polyadenine stretch, and a mixture of molecules in this region has been described previously in a report on mtDNA heteroplasmy from MPS data . Just et al. also reported that 88.8% of 588 samples had detectable LHP around position 12,425 based on Sanger-type sequencing .
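The lowercase/uppercase call logic for the np 309.1 insertion described above can be sketched as follows; the 45-read and 6% values echo the thresholds mentioned in the text, and the read counts in the examples are hypothetical.

```r
# Sketch of the mixed-vs-homoplasmic insertion call at np 309.1: a mixed
# call ("309.1c", lowercase) requires BOTH the insertion reads and the
# reference reads to pass the read-depth and frequency thresholds;
# otherwise a homoplasmic insertion ("309.1C") is called.
call_309_1 <- function(ins_reads, ref_reads,
                       min_reads = 45, min_freq = 0.06) {
  total <- ins_reads + ref_reads
  passes <- function(x) x >= min_reads && (x / total) >= min_freq
  if (passes(ins_reads) && passes(ref_reads)) "309.1c" else "309.1C"
}

call_309_1(ins_reads = 900, ref_reads = 100)  # both pass          -> "309.1c"
call_309_1(ins_reads = 80,  ref_reads = 20)   # 20 reads < 45 reads -> "309.1C"
```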
Though the sequencing-by-synthesis mechanism of the Illumina sequencing platform can reduce the risk of sequencing error in homopolymeric regions, the lower variant-call thresholds (e.g., 6% in UAS v2.1) may increase the complexity of variants in regions with length variation. We therefore recommend that caution be taken when classifying LHP calls and that the minimum variant frequency for length variation be raised (e.g., to 20% for insertions and 30% for deletions ). Additionally, we suggest that the manufacturer provide an independent option for the threshold settings of length variant calls in future UAS updates.

4.3. Variants in Maternal Relatives

mtGenome variant transmission between the maternal family members in this study followed maternal heredity, as expected. At the same time, we observed differences that were mainly caused by heteroplasmic variants; these represent de novo or disappearing variants in the lineage. The transmission of inherited heteroplasmic variants can improve the weight of evidence when evaluating individuals from the same maternal lineage, while differing heteroplasmies may have the potential to distinguish individuals of the same maternal ancestry, especially when the heteroplasmic variants are shared only in tissues of a particular individual . In this study, the six groups of immediate three-generation family members and the group of immediate four-generation family members came from only two maternal lineages; thus, limited information on the characteristics of PHP transmission was obtained. PHP transmitted from the grandmother to a third- or fourth-generation member was not observed. Meanwhile, in the observed inherited PHPs, the direction and magnitude of the frequency change during transmission were moderate and appeared random, consistent with Liu et al.'s research on sixteen four-generation pedigrees . Zaidi et al. demonstrated that divergence between mother and offspring increases with the mother's age at childbirth , while in this study no divergence was observed between the mother and her five offspring, even though the first offspring was ten years older than the fifth. Heteroplasmy allele frequencies can be affected by the germline bottleneck, by the potential decrease in mtDNA content during embryonic development, and by selection . No correlation between the MAF changes among the transmissions was observed, suggesting that most heteroplasmic variants were functionally neutral or mildly deleterious and were not eliminated by selection . In this study, although only four inherited PHPs were observed, the length variations were inherited in most of the maternal relatives, which can also improve the matching probability of the same maternal origin among relatives. Optimizing the thresholds for LHP variant calling, or comparing only the major molecule, helps to decrease the complexity of a variant mixture and can further support a same-maternal-origin relationship. Meanwhile, more pedigree studies are needed to confirm the observation that more variant differences existed between relatives of third-degree relationships. As for mtGenome haplotype comparisons between maternal relatives, the revised ISFG guidelines recommend that differences in PHPs and LHPs do not constitute evidence for excluding two otherwise identical haplotypes as deriving from the same source or the same maternal lineage . However, Connell et al.
proved that the sequence comparison guidelines for the mtDNA control region recommended by SWGDAM are also suitable for multigeneration whole-mtGenome analysis, wherein samples differing at two or more nucleotide positions (excluding length heteroplasmy) can be excluded as coming from the same maternal lineage (reported as 'exclusion'), samples differing at a single position should be reported only as 'inconclusive', and samples having the same sequence should be reported as 'cannot exclude'. They also recommended that caution be taken when classifying heteroplasmic changes as differences for human identification . As the ForenSeq mtGenome Kit is highly sensitive and can distinguish LHPs, we asked how many differences between maternal relatives would be observed using this panel. We therefore implemented three levels of comparison: the first considered homoplasmic variants only, the second considered homoplasmic variants and point heteroplasmies, and the third considered homoplasmic variants and both types of heteroplasmy (PHPs and LHPs). Following the guidelines, when comparing haplotypes on homoplasmic variants only, 100% of reports were 'cannot exclude' in the grandmother and grandson/granddaughter pairs, uncle and nephew/niece pairs, first-cousin pairs, great-grandmother and great-grandson pairs, granduncle and grandnephew pairs, cousin-aunt and cousin-nephew pairs, and cousin-uncle and cousin-nephew pairs. A report of 'inconclusive' was obtained in pairs involving the P06-H sample: four mother and offspring pairs, eight full-sibling pairs, six aunt and nephew/niece pairs, and two grandaunt and grandnephew pairs. No report of 'exclusion' was obtained under this comparison condition. When comparing haplotypes on both homoplasmic variants and PHPs, between 33.33% and 100% of pairs were reported as 'exclusion' across the 11 types of maternal relationships. Furthermore, when LHPs were also compared, the percentage of 'exclusion' rose to between 62.26% and 100% across the 11 types of maternal relationships; in the great-grandmother and great-grandson pairs, granduncle and grandnephew pairs, and cousin-uncle and cousin-nephew pairs in particular, 100% 'exclusion' was reported ( ). Similarly, in the study by Connell et al., in which mtGenome haplotypes of 2339 maternal pairs across 18 meioses were compared, the prevalence of inconclusive identification increased by 6% and the prevalence of false exclusions was 0.34% when PHPs were considered . Since they used a higher MAF threshold (10%) than ours, fewer PHPs were observed (6.67%), resulting in fewer mtDNA differences than in our study when PHPs were considered. Overall, in this study, the comparison guidelines were suitable for mtGenome haplotype comparison between all maternal relatives, and throughout the multigenerational comparisons, when only homoplasmic variants were considered. Given the high false-exclusion rate, we suggest that differing heteroplasmic variants should not be counted as inconsistencies when performing mtDNA haplotype comparisons, while consistent heteroplasmic variants can be treated as additional evidence supporting maternal lineage. This is consistent with the recommendations of the ISFG .
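A minimal sketch of the SWGDAM-style decision rule discussed above is given below, assuming each haplotype is supplied as a list of point-variant strings relative to the rCRS with length heteroplasmies already removed; the example variant lists are hypothetical.

```r
# Count differing nucleotide positions between two mtDNA haplotypes
# (point variants only) and map the count onto the SWGDAM categories:
# >=2 differences -> 'exclusion', 1 -> 'inconclusive', 0 -> 'cannot exclude'.
compare_haplotypes <- function(hap1, hap2) {
  allele_by_pos <- function(v) {
    setNames(sub("^[0-9.]+", "", v), sub("[A-Za-z]+$", "", v))
  }
  a <- allele_by_pos(hap1)
  b <- allele_by_pos(hap2)
  positions <- union(names(a), names(b))
  n_diff <- sum(vapply(positions,
                       function(p) !identical(a[p][[1]], b[p][[1]]),
                       logical(1)))
  if (n_diff >= 2) "exclusion"
  else if (n_diff == 1) "inconclusive"
  else "cannot exclude"
}

# Hypothetical rCRS-difference lists (LHPs excluded beforehand):
compare_haplotypes(c("73G", "263G"), c("73G", "263G"))  # "cannot exclude"
compare_haplotypes(c("73G", "263G"), c("73G"))          # "inconclusive"
compare_haplotypes(c("73G", "263G"), c("16189C"))       # "exclusion"
```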
In recent years, several mtDNA detection panels have been established for forensic use on MPS platforms. In this study, we used a new mtGenome detection panel, the ForenSeq mtGenome Kit, to analyse blood samples and hair shafts from maternal-pedigree individuals. First, we demonstrated the effectiveness of this panel for blood and hair shaft analysis through deep read depth and complete mtGenome coverage. Second, the ForenSeq mtGenome Kit combined with ForenSeq UAS v2.1 software can clearly distinguish between PHPs and LHPs, although the threshold-setting function for LHPs needs improvement. Third, we observed stable transmission of homoplasmic variants within maternal pedigrees and variable differences in heteroplasmic variants between maternal relatives; random MAF changes across the three- and four-generation pedigrees were also observed. Lastly, we showed that a high risk of false exclusion arises if differences in both PHPs and LHPs are counted towards exclusion. In the future, hair shafts from different body parts and various types of tissues, especially challenging materials, should be investigated to obtain a more comprehensive picture of the mtGenome.
Hepatitis C Infection and Treatment among Injecting Drug Users Attending General Practice: A Systematic Review and Meta-Analysis
Hepatitis C virus (HCV) causes both acute and chronic forms of hepatitis. More than 170 million people have been diagnosed with HCV worldwide, with 71 million suffering from the chronic form of the disease . The prevalence rate in the European Union region is 1.5%, with the highest rates in the Eastern Mediterranean (2.3%) . There are large disparities in HCV infection rates between different sections of the population, although injecting drug users (IDUs) have been shown to represent a significant proportion internationally; infection rates in this group have been shown to range from 13 to 84% across national populations . Those suffering from chronic HCV infection are at increased risk of developing severe and potentially fatal liver diseases, such as liver cirrhosis or hepatocellular carcinoma, in later life . Estimates for the proportion of mortality from hepatocellular carcinoma and chronic liver disease attributable to HCV are scarce across EU countries. However, overall estimates from 2015 show that 55% of liver cancer deaths, 44.7% of cirrhosis deaths, and numerous other chronic liver disease deaths in the EU/EEA region could be attributed to hepatitis B and HCV infections . Liver disease-related mortality is 7.5% in IDUs with an HCV infection . Human immunodeficiency virus (HIV) is an additional complication in this population, with historical infection rates as high as 60% among some IDU populations . General practice (GP) is the first point of patient contact with the healthcare system in many countries and provides primary, personalized, and ongoing care to individuals and families in these communities. The care provided in general practice is sensitive to local community issues, which often include roles as patient advocates . The focus of the care provided to this patient group includes issues related to drug use and blood-borne viral disease, but it extends far beyond this: general practice provides a holistic, patient-centred system of care that encompasses all aspects of the physical and psychological wellbeing of this patient group, part of which entails developing and maintaining significant long-term doctor–patient relationships. Of the total number of patients with hepatitis C (HCV) who visited their GP, the proportion of intravenous drug users (IDUs) was 76% in the United Kingdom (UK) and 70–80% in Ireland . However, HCV infection among IDUs is thought to remain underdiagnosed, which poses a significant health risk to IDUs themselves as well as to their injecting and sexual partners . The treatment of HCV was revolutionized in 2011 with the introduction of oral direct-acting antivirals (DAAs), which have shorter treatment durations, fewer side effects, and higher patient acceptability than earlier medications . Before DAAs, injected pegylated interferon plus oral ribavirin was the conventional treatment, with a treatment success rate of 42–65% and poor tolerability due to adverse side effects . The services provided by general practice for this patient group include the screening and diagnosis of HCV and other blood-borne viral infections. They also include the evaluation, investigation, and treatment of healthcare issues both related and unrelated to IDU and HCV, as well as patient referral to specialist care as appropriate; in some cases, DAA-based treatment regimens may be delivered in general practice.
It has previously been shown that, in a GP setting, IDUs with HCV on opiate substitution therapy (OST) achieved a sustained virological response (SVR) rate of 71%, in contrast to patients outside this cohort; this demonstrates the key role of OST in successful treatment uptake and outcomes in this group . Other studies have also shown that IDUs with HCV have a lower probability of receiving antiviral therapy than non-IDU HCV patient groups . One of the barriers to successful HCV treatment in IDUs is their often chaotic lifestyles, which can involve homelessness, alcohol and illicit drug abuse, social isolation, poor medication compliance, and various forms of social stigma. The fear and stigma attached to investigations such as HIV testing have been shown to be an additional consideration in this patient group . Those diagnosed with HCV and initiated on or referred for treatment often display high dropout rates due to these factors, which also include co-existing psychiatric illnesses and other psychosocial problems . This study aims to investigate and estimate (i) the prevalence of hepatitis C amongst IDUs, (ii) diagnostic actions, (iii) antiviral treatments, and (iv) cure rates, in a general practice setting, with inclusion of a meta-analysis where appropriate.
2.1. Data Source and Search Strategy

This is a systematic review and meta-analysis conducted following the review guidelines provided in the Cochrane Handbook for Systematic Reviews of Interventions . We limited our search to EMBASE, PubMed, and the Cochrane Central Register of Controlled Trials. The initial database search was performed between 1 October and 18 November 2020, with an updated search carried out on 1 March 2023. The search terms used were: “hepatitis C”, “hepacivirus”, “general practice”, “drug users”, “intravenous drug users”, and “antiviral agents”. Controlled vocabulary terms (MeSH and Emtree entries) were combined with Boolean operators (OR, AND, and NOT) to construct the search strategy.

2.2. Screening and Eligibility

We followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines to select studies . Two authors (M.T. and S.D.) independently searched for relevant literature using the search terms and criteria. All identified studies were uploaded to bibliographic management software (EndNote X9 for Windows). All duplicates were removed, and the titles and abstracts of the remaining studies were screened for eligibility. All eligible studies were transferred to Covidence, an online review management platform, where a full-text analysis of all included studies was performed. The agreement score (Cohen's kappa) between the two reviewers was 81% in the first phase of full-text eligibility screening, and consensus was reached on the outstanding studies with input from a third reviewer (G.B.).
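For readers unfamiliar with the agreement statistic, a minimal sketch of the Cohen's kappa calculation is given below; the 2×2 table of include/exclude decisions is hypothetical and is not taken from the screening data of this review.

```r
# Cohen's kappa from a 2x2 cross-table of two reviewers' decisions:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
cohen_kappa <- function(tab) {
  n  <- sum(tab)
  po <- sum(diag(tab)) / n                      # observed agreement
  pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
  (po - pe) / (1 - pe)
}

decisions <- matrix(c(20, 4, 5, 40), nrow = 2,
                    dimnames = list(reviewer1 = c("include", "exclude"),
                                    reviewer2 = c("include", "exclude")))
cohen_kappa(decisions)  # ~0.72 with these made-up counts
```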
2.3. Inclusion and Exclusion Criteria

A study was considered eligible if it was conducted in general practice, had hepatitis C positive IDUs as its study participants, and reported on HCV prevalence and treatment outcomes. The studies excluded were systematic reviews, opinion articles, editorials, pharmacological studies, and studies conducted in a primary care setting other than GP practices, such as methadone clinics, opioid treatment centres, and care centres providing services through the integration of specialist centres with community-level primary care delivery. Studies not published in English were also excluded from the analysis.

2.4. Quality Assessment

We used the standard checklist produced by the US National Heart, Lung, and Blood Institute to judge the quality of observational cohort and cross-sectional studies , while Cochrane's risk-of-bias tools were used to assess the quality of randomized controlled trials. The outcome of assessed bias was recorded as high risk, low risk, or unclear. The focus of the quality assessment was: (i) objectiveness and clarity of the research questions, (ii) clarity of the study population definition, (iii) inclusion and exclusion criteria, (iv) sample size and power justification, (v) use of valid and reliable measures across study participants, and (vi) consideration of potential confounding variables for cohort and cross-sectional studies. For the controlled trials, the assessment elements were: (i) appropriate randomization, (ii) group consistency in the intervention and control arms, and (iii) baseline and blinded assignment of the groups, as well as all other elements used to assess cross-sectional studies.

2.5. Study Outcomes

Outcome data sought from the eligible studies included: treatment effectiveness in terms of cure rates, SVR rates, treatment adherence rates, reinfection rates, HCV-related comorbidities, details of medication regimes (type, duration, and dosage), and any adverse drug events. Information on SVR, adherence, reinfection, and adverse drug events was not reported in any of the included studies.

2.6. Data Extraction and Analysis

Data extraction was performed independently in Covidence by M.T. and S.D. Both data extraction and quality assessment were customized to collect information on our variables of interest. The final data entry form was adopted after pretesting on three studies. The variables collected included: year of publication, study population, study sites, number of practices, demographic characteristics, the numbers of IDUs and of HCV infections, information on OST and treatment of HCV infections, drugs and duration of treatment, the number of patients cured, and chronic conditions. The characteristics and findings of the included studies were summarized and structured using tables and figures where applicable. We performed a meta-analysis of the studies using the DerSimonian and Laird random-effects model with inverse variance weighting . A forest plot was used to show the pooled estimates, where the diamond represents the overall effect estimate and the small boxes with horizontal lines show the effect estimates for individual studies ( , and ); the length of the horizontal lines and the width of the diamond illustrate the confidence intervals. Statistical heterogeneity was assessed using the chi-square test of heterogeneity and the I² statistic for measuring inconsistency, with higher I² values indicating higher heterogeneity. Based on Cochrane's recommendations, we considered I² values of 30–60% as indicating moderate heterogeneity and values above this range as indicating substantial heterogeneity. The meta-analysis and risk-of-bias analyses were performed using the "meta" package in R version 4.0.3 (accessed on 10 October 2020).
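The following sketch illustrates this workflow with the R "meta" package; the study names, event counts, and denominators are hypothetical placeholders, not data from the included studies.

```r
# Pool study-level proportions with a DerSimonian-Laird random-effects
# model and inverse-variance weighting, then draw the forest plot.
library(meta)

dat <- data.frame(
  study   = c("Study A", "Study B", "Study C", "Study D"),
  cured   = c(12, 30, 8, 25),   # events, e.g., patients cured
  treated = c(20, 40, 20, 30)   # denominators, e.g., patients treated
)

m <- metaprop(event = cured, n = treated, studlab = study, data = dat,
              sm = "PLOGIT",        # logit-transformed proportions
              method.tau = "DL")    # DerSimonian-Laird tau^2 estimator

print(m)    # reports Q, I^2, and the pooled proportion with its 95% CI
forest(m)   # diamond = pooled estimate; boxes/lines = per-study estimates/CIs
```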
3.1. Selection of Included Studies

The PRISMA flow diagram ( ) shows the 1063 studies selected after removing the 263 duplicates from the 1299 studies extracted from the databases. We retained 69 studies for full-text review after excluding 166 studies following title and abstract screening. Finally, data were extracted from a total of 18 studies after 51 studies were excluded following full-text review ( : PRISMA). The reasons for exclusion were: (i) wrong study type (16 studies), (ii) wrong study population (15 studies), (iii) wrong setting (14 studies), (iv) study not in English (5 studies), and (v) wrong study outcome (1 study).

3.2. Characteristics of the 18 Included Studies

A total of 20,956 participants were enrolled across 440 GP practices between 1997 and 2020, with study durations ranging from 1.5 to 89 months; 13/18 (72%) studies were published before 2012, and only two explored the role of DAA treatment in general practice . The majority of the studies were from the UK (6) and Ireland (5), with the rest from Australia (3) and other European countries. Almost all of the Irish studies came from the same research group. Most studies reported on GP sites providing OST care. Of the 18 studies selected, five were controlled intervention studies. Males represented 64% of the study participants, and the average age was between 25 and 47 years.

3.3. Prevalence of HCV among Patients with a History of Intravenous Drug Use

Of the 18 studies, 15 were included in the meta-analysis of the prevalence of HCV among IDUs . Measurement of seroprevalence and/or screening for HCV infection was listed as one of the main objectives in 10/15 studies . Overall, the prevalence of HCV infection among IDUs in general practice was 46% (95% confidence interval (CI), 26–67%); however, the studies had significant heterogeneity (I² = 100%, p = 0.00). A subgroup analysis comparing studies published before and after 2010 did not show any significant change in the prevalence of HCV among IDUs .

3.4. HCV Diagnosis, Treatment, and Cure Rates

Information relating to specific genotypes was reported in four studies, and treatment-related outcomes were reported in 11 studies . Genotypes 1 and 3 were noted in all four of these studies . Only one study, involving 70 participants, reported the duration of HCV infection, with a mean of 19 years . Across the 11 studies reporting treatment outcomes , a total of 9% (174/1954) of patients were treated for their HCV infections. This treatment rate was above 60% in two studies , 40% in one study , and below 5% in six studies . Only four studies reported specific medication regimes, while drug information, including doses and durations related to genotypic information, was reported in only two studies . Interferon was prescribed in studies conducted in 2000 and 2005 . Peginterferon plus ribavirin was prescribed for 48 weeks in a study from 2013 , while the most recent study, published in 2019, used DAAs in combination with ribavirin for 12 weeks . The numbers of participants treated and cured were available for four studies ; hence, a meta-analysis was run to estimate the pooled cure rate, which was 64% (95% CI, 43–83%; ). The included studies had substantial heterogeneity (I² = 68%, p = 0.02). The cure rate was higher in the studies published after 2010 (72%) than in those published before (43%) .
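As a quick reference for how the I² readings above map onto the Cochrane heterogeneity bands, the sketch below computes I² from Cochran's Q; the Q values are back-calculated for illustration, not taken from the source data.

```r
# I^2 from Cochran's Q with k studies (df = k - 1):
# I^2 = max(0, (Q - df) / Q) * 100.
i_squared <- function(Q, k) max(0, (Q - (k - 1)) / Q) * 100

i_squared(Q = 9.4, k = 4)  # ~68%: substantial by the bands used here
i_squared(Q = 2.5, k = 4)  # 0%: Q below df, no detectable inconsistency
```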
3.5. Opiate Substitution Therapy (OST)

Of the 14 studies which reported the OST status of study participants, 13 involved general practice centres which provided this service . The OST status of the participating practices was unclear in four studies; these four studies contributed 279/440 (63.4%) of all practices and 16,303/20,956 (77.8%) of all patients involved in this review . The meta-analysis of these 12 studies estimated an overall proportion of 91% (95% CI, 53–100%) of patients on OST . The differences between the studies reported before and after 2010 were not significant .

3.6. Chronic Conditions

Thirteen studies published information on concomitant medical conditions and alcohol misuse amongst study participants . HIV co-infection rates were reported in 10 studies , with an overall prevalence of 10% (427/6201). The rates of psychiatric disorders and alcohol misuse were reported in two studies , with overall prevalences above 70% (332/472) and 20% (104/472), respectively.

3.7. Risk of Bias Assessment

An assessment of the risk of bias in the selected studies showed that 50% of the studies were ranked as having a high risk of bias in relation to sample size and power calculation. The risk of bias was categorised as unclear in 80% of the studies when assessed for their measurement of, and adjustment for, key confounding variables . Of the five included randomized trials , 30% had a high risk of bias related to the blinding of study participants. The risk of bias was classified as unclear in 75% of these studies in relation to the groups having similar baseline characteristics, in 70% in relation to the blinding of study participants, and in 60% each in relation to describing the study as an RCT for all assessed outcomes and to randomization (not shown in the figure).
4.1. Summary

It is notable that this systematic review identified only 18 studies fitting the inclusion criteria of being based in general practice and involving hepatitis C positive IDUs. Twelve of the 18 studies dated from before 2011, when the introduction of DAAs made a major impact on treatment options. A limited number of these studies reported on the treatment outcomes of participants, and the quality of the studies was highly variable. General practice has the potential to contribute significantly to the holistic, long-term care of the wide range of health and psychosocial issues affecting this group of patients and may also play a key role in the targeted treatment of hepatitis C. Although general practice in many healthcare systems is likely already performing many such roles, there is a real dearth of research data exploring this aspect of care. This review identified 18 studies reporting on the care of hepatitis C infections in IDUs in a GP-based primary care setting. All but four of the included studies were from the 1990s. Adequate information allowing estimation of the prevalence of hepatitis C amongst IDUs utilizing care was available in 15 studies, with a pooled prevalence of 46%. Apart from one randomized trial in 2019 , none of the studies reported the duration of HCV infection amongst participants. Diagnostic information, primarily the specific HCV genotype, was reported in four studies, and treatment-related outcomes were reported in 11 studies. The majority of the studies reported treatment uptake rates below 5%, except for three studies with rates between 41 and 74% . These higher percentages could be due to the fact that the primary objectives of these three studies were treatment uptake and cure rates, whereas the primary objectives of the other studies were prevalence rates, risk factor identification, and treatment care processes. Other reasons for low treatment uptake may include the difficulty of follow-up in this population and a low level of awareness of treatment options amongst clinicians. Another likely confounding factor is the need for issues such as alcohol misuse and HIV treatment to be appropriately managed before commencing hepatitis C treatment in many cases . However, further independent research is required to understand the perspectives of both care providers and IDUs in relation to treatment uptake. Similarly, information on the specific hepatitis treatment regimens used was available for only four studies, which precluded a meta-analysis comparing the effectiveness of various treatment regimes; this could be an area for exploration in future primary care research. An analysis of the available genotypic information showed that more genotype 1 patients received treatment in the Seidenberg et al. study and more genotype 3 patients received treatment in the Jack et al. study, which likely reflects local seroprevalence rates . The specific drug regimens prescribed to study participants were reported in four studies, with dose and duration data in two of these. Between 2002 and 2005 , two studies reported the use of interferon treatment by injection, as interferon was the drug of choice during that period. As previously discussed, the treatment options for hepatitis C have developed and evolved over the years.
Previous studies have shown that DAA drugs have a higher sustained virological response and fewer side effects compared to their predecessors. The shorter duration of DAA treatment and the oral route of administration mean that they are less burdensome to both patients and physicians. The number of patients cured was reported in four studies, with an estimated cure rate of 64%. However, this estimate from the meta-analysis was not sufficient to draw a definitive conclusion because of the small number of studies and the total sample size of 110. Specifically, Anderson et al. had only two patients treated in their study. Among the studies that reported on OST use, the majority of the GP practices included in these studies were found to provide OST services, and 91% of the IDUs diagnosed with hepatitis C were found to be on OST. The rate of reporting related to participants' medical co-morbidities, such as specific diagnoses and quantitative data, was very poor. In studies where such data were reported, they were often unclear. Specific conditions listed included HIV co-infection (10 studies), hepatitis B co-infection (7 studies), liver fibrosis (3 studies), and psychiatric disorders (2 studies). Significantly more patients in this cohort were noted to have been diagnosed with a psychiatric disorder than with an HIV co-infection. In one study, psychiatric disorders, particularly depression, were found to be associated with significantly increased levels of active drug misuse.
4.2. Strengths and Limitations
In the meta-analysis, significant heterogeneity was observed among the included studies. To some extent, heterogeneity can be explained and overcome by subgroup and sensitivity analyses; however, the lack of studies reporting treatment information did not allow us to perform these. The majority of the studies included were cross-sectional and retrospective cohort reviews designed to study seroprevalence and risk factors without any comparison group; hence, a comparison of the prevalence between different groups, such as age and gender, was not feasible. Even though treatment information was reported in 11 studies, information specific to cure rates, drugs, dose, and duration was missing from many studies, which precluded us from conducting a robust data analysis comparing different treatment types based on genotype information. Despite this, we tried to explore differences in the prevalence of HCV in IDUs, OST use, and cure rates by comparing studies published before and after 2010. The analysis showed no significant difference in the prevalence and proportion of OST use before and after 2010 among HCV-infected IDUs. The HCV cure rate was found to be higher in the studies conducted after 2011; however, there were only two studies included in each subgroup in the meta-analysis. In addition, 50% of the papers included were assessed as having a high risk of bias. Therefore, considering both the high heterogeneity and high risk of bias in these studies, the findings of this review should be interpreted cautiously. The studies included in this review spanned the period from 1991 to 2018. Many of the studies (67%) were conducted during the period before DAAs became widely available (2011). Even though the included studies were insufficient in number and power such that their findings cannot be generalized, our analysis showed encouraging progress in the level of care provided to IDU hepatitis C patients attending general practice after 2011.
However, between the 1990s and 2010s, there were significantly fewer developments in the care of this cohort of patients. The causes of such limited development are difficult to identify based on the current evidence. The literature suggests that during this period, novel antiviral therapies were still in their infancy and fewer therapeutic options suitable for use in the community were available. The introduction of pegylated interferon and ribavirin (with a 50% virological cure rate in generally adherent patients in 2001/2005) was an indicator of a brighter future for hepatitis C care. Our review indicates an increased level of activity since the subsequent introduction of DAA therapies and clearly establishes the need for high-quality research to maximise the potential of such therapies in the community.
4.3. Comparison with the Literature
The prevalence and effectiveness of HCV treatment have been documented independently in the literature. A systematic review and meta-analysis of the prevalence and treatment of hepatitis C among IDUs, particularly in general practice, has not been published yet. Our study showed a pooled prevalence of between 26 and 67% of IDUs diagnosed with hepatitis C infection in primary care. This finding is similar to the prevalence rates reported among IDUs in Iran and Pakistan; however, it is lower than the prevalence rate of 80% reported in the EU region. The study in the EU region was not solely a study among IDUs, and the proportion was part of a subgroup analysis obtained from the general population. On the other hand, the study populations from Iran and Pakistan mostly involved patients from community drug treatment centres, with a few studies involving patients from secondary care. Our study reported variation in hepatitis C treatment uptake, with an overall treatment uptake of 9% amongst this cohort of patients, which is similar to the treatment uptake reported in a study conducted in opioid-dependent patients from OST centres. The proportion in our study was three times lower than the treatment uptake reported in a systematic review from the EU region; however, the setting in that study was not restricted to general practice care. Even so, these studies report a significantly higher treatment uptake among IDUs utilizing specialist services based in primary care, with a significantly higher SVR also being reported. The reasons for the differences between GP-only and specialist-based services may be related to different patient or service characteristics, differences in how the studies report their information, or other unspecified factors. Our study estimated the reported HCV cure rate in general practice to be between 43 and 80%, which differed from the cure rate of 19–88% reported by Lazarus et al. Once again, this difference could be due to the very low number of studies included in our review compared to other studies or to the difference in study location and settings.
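The pooled figures discussed in this section (a pooled prevalence of 46%, study estimates ranging from 26 to 67%, and the heterogeneity noted above) are standard outputs of a random-effects meta-analysis of proportions. As an illustrative aside only, the sketch below shows one common way such numbers are computed, pooling logit-transformed proportions with the DerSimonian–Laird estimator; it is not the analysis used in this review, and the study counts in it are hypothetical.

import math

# Hypothetical (events, sample size) per study: HCV-positive IDUs / IDUs tested
studies = [(52, 110), (30, 64), (41, 98), (88, 132), (19, 73)]

# Logit-transform each proportion; the variance of the logit is 1/x + 1/(n - x)
y = [math.log(x / (n - x)) for x, n in studies]
v = [1 / x + 1 / (n - x) for x, n in studies]

# Fixed-effect (inverse-variance) pooling, needed to compute Cochran's Q
w = [1 / vi for vi in v]
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
df = len(studies) - 1

# DerSimonian–Laird between-study variance tau^2 and the I^2 statistic
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Random-effects pooled logit with a 95% CI, back-transformed to a proportion
w_re = [1 / (vi + tau2) for vi in v]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

def expit(t: float) -> float:
    return 1 / (1 + math.exp(-t))

lo, hi = expit(y_re - 1.96 * se_re), expit(y_re + 1.96 * se_re)
print(f"Pooled prevalence {expit(y_re):.0%} (95% CI {lo:.0%}-{hi:.0%}), I^2 = {i2:.0f}%")

In practice, such pooling is usually carried out with dedicated software (for example, the metafor package in R), but the arithmetic above is the core of the calculation.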
A dearth of good-quality research data exists in relation to the current and potential roles of general practice in the holistic care of IDUs with HCV. In particular, very limited data exist exploring the potential of DAAs in general practice, although data from primary care-based specialist services are promising. Further research aimed at exploring these issues is required in general practice. Overall, the prevalence of HCV among IDUs in GP care was above 60%. In addition, OST appears to be an important element in care. Because diagnostic and treatment information was reported in only a small number of studies, drawing concrete conclusions from the current study was difficult. This study indicates the need for future research involving this target population to better inform the service requirements and resource allocation needed in general practices. Future research to identify the causes of low treatment uptake and cure rates will be essential and will help optimise treatment acceptance and compliance amongst this vulnerable patient group.
Attachment, Feeding Practices, Family Routines and Childhood Obesity: A Systematic Review of the Literature
Childhood obesity is a chronic disease that is considered a major public health problem. It is defined by excess body fat and is associated with a risk of cardiometabolic diseases, and psychological and relational disorders. Excess body fat in children can also put them more at risk of becoming obese adults. Childhood obesity is difficult to prevent and treat because of its complex etiologies. Genetic factors, physical activity, sedentary lifestyle, and access to food have been the main topics of etiological studies. Other studies were conducted to better understand the etiology and risk factors of this disease in order to prevent and intervene effectively amongst obese children and their families, while taking into account the complexity of this disease. Researchers in family therapy have demonstrated that family-environment factors, such as multiple or dyadic parent–child interactions, are significant in the prevention and treatment of pediatric obesity. These factors can induce obesogenic dysfunctional eating behaviors in children. This recent body of research has helped to better explain the development of childhood obesity and why interventions aimed exclusively at changing parental dietary practices fail in the long term. Developmental researchers have operationalized some of these family and relational factors as the quality of the child's attachment or the parental feeding practices and family routines, assessing their influence on childhood obesity from a unidimensional perspective. The first dyadic parent–child relationship factor studied concerns the parent's and the child's attachment quality. Since the child's attachment sustains the maturation of the brain structures involved in the development of self-regulation skills, it seems to be implicated in the risk of developing childhood overweight and obesity. More precisely, the caregiver's (CG) capacity to respond sensitively to the child's attachment needs conditions the quality of the attachment that develops during the child's first year. According to attachment theory, the adult's availability and responsiveness to their children's distress cues is influenced by the representations of their own attachment relationships with their CG during childhood. Secure CGs are more capable of accurately perceiving their children's distress cues and responding to them effectively, thereby permitting the child to experience secure relationships in times of distress. In a feeding context, secure parents should be sensitive enough to identify and adapt to their children's feeding, hunger, and satiety cues, and also have a greater emotional attunement, leading to positive and enjoyable feeding sessions. These specific attachment relational patterns between the child and the CG in feeding and non-feeding contexts also condition how the child regulates his/her stress and negative emotions. For instance, insecure avoidant and ambivalent children, as well as disorganized children, show more dysregulated stress and negative emotion responses compared with secure children. Such dysregulated responses could affect the development of some of their physiological systems, such as the regulation of food intake, and thus influence their weight. The second dyadic parent–child relationship factor concerns parental feeding practices. Feeding is a primary parental task during the child's first year of life.
It is also a relevant context in which to assess the quality of the parent–child relationship, which can modulate feeding practices that will influence the development of the child's eating behaviors. In contrast to parenting styles, which are considered emotional climates between the parent and child and are defined through different levels of warmth/responsiveness and control/demandingness, parental feeding practices are specific behaviors or actions focused on eating, performed intentionally or unintentionally, and used for educational purposes that affect the child's beliefs, behaviors, and attitudes towards feeding. According to Vaughn et al., they can be divided into three categories. Coercive control refers to practices in which the parent dominates the child so that he/she behaves as the parent desires. It includes restrictive, pressuring, or instrumental and emotional feeding behaviors, which meet the parents' needs more than the child's needs, including his or her satiety cues. Structuring practices refer to how parents organize the child's eating environment to promote the consumption of healthy food. They include modeling eating behaviors and repeated exposure to healthy and varied foods. Finally, autonomy support practices are practices that enhance the child's independence and autonomy to help him/her make healthy choices for him or herself. They include encouraging behaviors and non-food rewards. Autonomy support and structuring practices are supportive parenting approaches that guide children to eat healthily while meeting their emotional and physiological needs. These practices would, therefore, have a favorable impact on the child's weight development. Family routines are organized around different dimensions of daily family life, such as bedtime, activities, and mealtimes, making them predictable for the child thanks to their repetition over time. Routines serve a developmental function as they become more organized over time, and children can take a more active role in them. They are also related to the quality of the parent–child relationship and the child's weight gain. Indeed, sharing a family meal four or more times per week is linked with more fruit and vegetable consumption by the child and less consumption of high-calorie foods, which decreases the risk of childhood obesity. In their review, Kininmonth et al. highlight that the greater the child's access to screens, especially in the bedroom, and to high-calorie foods, the greater the risk of overweight and obesity. Thus, routines around different dimensions of the child's life (e.g., meals, screen time, etc.) structure the family environment that guides his/her behavior. They also include the family's emotional climate that supports the child's development, and may become, under certain conditions, a risk factor for the development of childhood obesity. Despite the existence of individual links between these factors and childhood obesity, researchers suggest that there are no direct causes of childhood obesity, but rather a transactional process that connects family, interpersonal, and biological systems together. However, the possible mechanisms behind these links had not been assessed. To address this gap, multifactorial and transactional models were recently proposed and tested, demonstrating the mutual influence these factors have on one another, and on the development of childhood obesity.
In addition, the child's compromised self-regulatory abilities were proposed as a possible mediating mechanism between family routines, feeding practices, the quality of the child's attachment, and the risk of childhood obesity. In the literature, we found different types of self-regulation: the associations between this illness and general self-regulation ability were mostly inconsistent, but statistically stronger for behavioral, emotional, and appetite self-regulation abilities. Therefore, since this is a recent field of research, there is as yet no scientific consensus about which type of self-regulation ability is most involved in the links between the factors mentioned above and the risk of childhood obesity. The first aim of this systematic review is to synthesize the data on the links between the quality of the child's attachment, parental feeding practices, and family routines and the risk of childhood obesity. The second aim is to assess whether these links are mediated by specific self-regulatory capacities. This work addresses two gaps in the literature related to childhood obesity development: (1) the analysis of multifactorial and transactional data to provide a better understanding of children's food choices and weight trajectories during their development; and (2) the analysis of this subject across different developmental periods to understand how the risks of childhood obesity may evolve. To achieve the latter, we organized our data according to different childhood developmental periods, which, to our knowledge, has never been undertaken before. Thus, we propose a model synthesizing the data of our review for each of the identified developmental periods. This approach allows the identification in early development of the individual differences that may influence developmental trajectories.
This systematic review is based on an integrative approach to data collection that allows the inclusion of results from studies with different methodologies, and on the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) method. The inclusion criteria for studies were as follows: (1) independent empirical articles or theses written in French or English; (2) studies dealing with attachment and feeding practices or attachment and family routines and the risk of childhood obesity; (3) studies including overweight or obese children (0–18 years); (4) studies performed over the last 12 years (2010–2022); (5) studies with either a longitudinal or a cross-sectional study design and using a quantitative and/or a qualitative methodology; and (6) literature reviews proposing a multifactorial conceptual model of childhood obesity risk with or without a standardized methodology. Our review did not include the analysis of book chapters or non-peer-reviewed articles. The exclusion criteria were as follows: (1) articles focusing primarily on adults, (2) articles focusing on a childhood obesity intervention program, (3) studies not published in French or English, and (4) studies with samples of children with a mental disorder, mental retardation, or physical illness (e.g., autism, eating disorder, etc.). Based on these criteria, we searched for relevant articles during February and March 2022. We used three electronic databases: SCOPUS (Science Direct), Pubmed, and Semantic Scholar (see ). The keywords used were in French and English: “Child obesity AND feeding practice AND attachment” OR “Child obesity AND Family Routines AND attachment” OR “Obésité enfant ET Attachement ET routines familiales”. We chose not to include keywords related to child self-regulation because we wanted to focus our research on attachment, feeding practices, and family routines. Self-regulatory abilities are considered here as mediating mechanisms between these factors and childhood obesity, and not as main factors. Furthermore, one of the aims of this review is to assess what types of self-regulation are highlighted in these studies or reviews. With the combination of all the keywords, a total of 1651 articles were found. There were 1219 articles concerning attachment, feeding practices, and childhood obesity, with 989 of these in English and 230 in French. There were 432 regarding attachment, family routines, and childhood obesity, with 405 of these in English and 27 in French. After reading the titles and abstracts, 21 eligible publications were identified and analyzed, as tallied in the sketch below. The inclusion and exclusion criteria were checked by one reviewer (author 1) and then verified by the second reviewer (author 2). A total of 10 articles met our criteria: 7 empirical articles and 3 literature reviews. For the empirical papers, we collected information about the author(s), the year of publication, the country, the sample's characteristics (age, ethnicity, socioeconomic level, level of education, BMI, and family structure if noted), the design of the study, the variables evaluated and how they were assessed, the methods for assessing the parent's and child's BMI, the main results, and the quality of the studies. For the literature reviews, we collected information about the author(s), year of publication, country, topic of the review, developmental milestones assessed, and key findings.
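The screening funnel just described can be tallied directly from the counts reported above; the short sketch below reproduces that arithmetic. The stage labels follow PRISMA conventions and are assumed rather than taken from the review's own flow diagram.

# Sketch of the screening funnel described above, using the counts reported
# in the text; stage labels are assumed, following PRISMA conventions.
funnel = [
    ("Records identified (SCOPUS, Pubmed, Semantic Scholar)", 1651),
    ("Publications eligible after title/abstract screening", 21),
    ("Studies included after full-text review against all criteria", 10),
]
for i, (label, n) in enumerate(funnel):
    # Report how many records drop out between consecutive stages
    step = f" -> {n - funnel[i + 1][1]} excluded at the next stage" if i + 1 < len(funnel) else ""
    print(f"{label}: {n}{step}")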
Then, we analyzed the results from the selected papers and assigned the data to three main themes: “Attachment, feeding practices, childhood obesity”, “Attachment, family routines, childhood obesity”, and “Attachment, feeding practices, family routines, childhood obesity”. We also grouped the data into three developmental stages (see ). We determined the boundaries of these stages based on data from the literature about significant developmental milestones in relation to the development of the child's general self-regulation ability, on the age distribution of the children in the empirical studies, and as indicated in the models. The assessment of the quality of the included papers followed the adapted version of the “National Heart, Lung, and Blood Institute's Quality Assessment Tool for Observational Cohort and Cross-sectional Studies”, a tool consisting of 14 items. Since the selected reviews did not follow a systematic review methodology, only the quality of the included empirical studies could be assessed. The scoring of the criteria was based on the method used by Beckers et al. and Burnett et al. in their literature reviews. One criterion was not considered applicable to the included studies, and was therefore removed (i.e., “were the outcome assessors blinded to the exposure status of participants”). This item asks whether the outcome assessors of a study knew which participants were exposed to a particular experimental condition. This was not the case in the studies included in our review because the same investigators both measured the experimental exposure of participants and assessed the outcomes. Depending on the study design of the empirical studies, the number of criteria applied also differed: for the two longitudinal studies, 13 criteria were considered in assessing their quality (see ), and 9 were considered for the cross-sectional studies (see ). The papers were given scores based on their correspondence with the criteria (0 = no correspondence, 1 = yes). There were four key criteria for longitudinal studies and three for cross-sectional studies (0 = no, 0.5 = partially met, 1 = yes). The total score for each study was calculated as the sum of the scores of the items, together with consideration of the individual scores of the four key criteria. The quality of all the empirical papers was assessed and scored by one reviewer (author 1), and then verified by the second reviewer (author 2). Any disagreement between the reviewers was discussed until a consensus was reached. The risk of bias was assessed by one reviewer (author 1) following the Critical Appraisal Skills Programme (CASP) checklists for systematic reviews and cohort studies (see ).
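As a concrete illustration of the scoring scheme just described, the sketch below sums a study's item scores and reports the key-criteria subtotal separately. The item names are hypothetical placeholders, not the NHLBI tool's actual wording, and the example values are invented.

def quality_score(item_scores: dict[str, float], key_criteria: set[str]) -> dict[str, float]:
    # Total score = sum of all applicable item scores (regular items: 0 or 1;
    # key criteria: 0, 0.5, or 1, as described in the text above)
    total = sum(item_scores.values())
    # The key-criteria subtotal is considered alongside the total
    key_subtotal = sum(s for item, s in item_scores.items() if item in key_criteria)
    return {"total": total, "key_subtotal": key_subtotal}

# Hypothetical cross-sectional study: 9 applicable criteria, 3 of them key
scores = {
    "clear_objective": 1, "population_defined": 1, "participation_rate": 0,
    "uniform_eligibility": 1, "exposure_measure_valid": 1, "outcome_measure_valid": 1,
    "key_sample_size_justified": 0, "key_confounders_adjusted": 0.5, "key_exposure_before_outcome": 1,
}
key = {"key_sample_size_justified", "key_confounders_adjusted", "key_exposure_before_outcome"}
print(quality_score(scores, key))  # {'total': 6.5, 'key_subtotal': 1.5}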
3.1. Characteristics of the Analyzed Papers
Our literature review gathers a total of ten articles, including seven empirical studies and three literature reviews that did not follow a standardized methodology. The publication country of the papers is diverse but mainly English-speaking: the United States (6), the United Kingdom and the United States (1), the United Kingdom (1), Poland (1), and Australia (1). The first article of the period was published in 2014 and the last in 2020. There are three empirical studies and one literature review on “attachment, feeding practices and childhood obesity”: the USA (1), the USA and the United Kingdom (1), the United Kingdom (1), and Australia (1). There are three empirical studies (one of which is included in a thesis) and a review of the literature on “attachment, family routines, and childhood obesity”: the United States (3) and Poland (1). There is one empirical study and one literature review on “attachment, eating practices, family routines, and childhood obesity risk”: the USA (2). There are no papers in French on this topic. Only two empirical studies were longitudinal, with the other five being cross-sectional. The empirical studies used several measurement tools: self-administered questionnaires (six studies), observational data (two studies), a survey (one study), a time-estimate index (one study), and a one-item scale (one study). Two studies also used qualitative methods based on semi-structured interviews. Three studies used mixed methods, combining self-administered questionnaires, semi-structured interviews, observational data, a one-item scale, surveys, and a time-estimate index. Across all the included empirical studies, the total number of participants was 1325, ranging from a minimum of 77 participants to a maximum of 497. The three literature reviews did not follow a standardized method, which means that they have a more speculative character. The review published by Fiese and Bost proposes a conceptual model of the regulatory and self-regulatory processes that connect different dimensions (biological, self-regulation, family regulation, food environment) involved in increasing or decreasing the risk of childhood obesity. The review published by Saltzman et al. proposes three different developmental pathways related to the development of childhood obesity, and we chose to concentrate on the “risk” developmental pathway. The review published by Bergmeier et al. proposes a conceptual model that focuses on parent–child relationships to understand how their interactions around feeding can affect a child's weight status.
3.2. Study Quality and Risk of Bias Assessment
Based on the assessment, three studies were rated as “Good” (two cross-sectional and one longitudinal study), and four as “Fair” (three cross-sectional and one longitudinal study). The quality scores of the five cross-sectional studies ranged from 5.5 to 6.5, with an average of 6.7. The quality scores of the two longitudinal studies ranged from 10.5 to 11, with an average of 10.75, which indicates an overall good-quality corpus. Three of the cross-sectional studies did not report statistical adjustments for potential confounding variables, two did not report inclusion and exclusion criteria, and one did not present a sample size justification. One longitudinal study did not present inclusion and exclusion criteria or report all statistical adjustments for potential confounding variables. Another study did not state clear time points of measurement.
Following the Critical Appraisal Skills Programme (CASP) checklists for systematic reviews and cohort studies, all the cross-sectional and longitudinal studies addressed a clearly focused issue, recruited their cohorts in an acceptable way, had results that were replicable and comparable with other evidence, and used validated tools, with the exception of one cross-sectional study that used a non-validated tool. The cross-sectional studies relied on self-reported data, which provides a less robust basis for changes in clinical practice. The longitudinal studies included either observational data or a mixed methodology, which provide robust evidence for recommendations of change in clinical practice. In general, the included studies had limited bias. The three reviews clearly addressed their topic, and all the important outcomes were considered. However, since the authors did not use a standardized methodology, there is no clear information on how they screened the included papers, on their quality, or on the replicability of the results for a local population. Even if the results of these reviews are precisely synthesized and important outcomes considered, the lack of information concerning the methodology used, paper screening, and the sources and quality of the included papers indicates potential bias.
3.3. Analysis of Papers
First Developmental Period (0–2 Years)
The studies reviewed for the 0–2 years developmental period are described in . In general, insecurely attached young children seem to be at risk of gaining weight via compromised general self-regulation. The CG's and the child's insecure attachment, in addition to family risk factors, can directly affect the development of the child's appetite self-regulation abilities, and indirectly via poor parental responsiveness to feeding. Secure fathers are more attuned to their infants during feeding, in contrast to dismissing fathers. Fathers with unresolved attachment trauma use more controlling behaviors, which may compromise the development of the child's eating self-regulation. Parents using permissive and indulgent feeding practices put their child at risk of overweight and obesity through emotional eating and their responses to the child's negative emotions, both factors being related to their attachment quality. Finally, a higher number of routines around dinner was linked with less appetite dysregulation in children with highly insecure mothers, and conversely, the presence of “Household Chaos (HC)” was associated with higher levels of appetite dysregulation in children whose mothers also reported low levels of emotional responsiveness.
3.4. Second Developmental Period (2–8 Years)
The studies reviewed for the 2–8 years developmental period are described in . Globally, insecure CGs seemed to have fewer mealtime routines and allowed their children more screen time, which in turn predicted their children's consumption of unhealthy foods. They tended to use negative emotional regulation strategies and had emotional pressuring feeding styles that are related to unhealthy food consumption in children. More specifically, CGs with anxious attachment seem to more frequently have children with a diminished eating self-regulation ability, with this association being mediated by controlling/persuasive feeding practices. Maternal anxious attachment is also linked with emotional feeding practices and emotional eating in children and pre-adolescents. These mothers used emotional feeding practices primarily in response to the child's emotional eating.
3.5. Third Developmental Period (8–18 Years)
The studies reviewed for the 8–18 years developmental period are described in . To summarize, from middle childhood to adolescence, children with insecure attachment are more likely to have appetite dysregulation, leading them to consume more high-calorie foods and to engage in obesogenic behaviors. CGs' insecure attachment is linked with emotional and social eating regulation: anxious attachment predicts emotional eating, and avoidant attachment predicts poorer control and organization of nutrition. There is a link between CGs' obesogenic behaviors and the transmission of such behaviors to their children in general and in feeding contexts. Such modeling is influenced by their attachment quality. Yet, the CG's obesogenic behaviors and their transmission are not linked with the child/adolescent's own obesogenic behavior. To conclude, appetite or eating dysregulation and emotional regulation strategies are the forms of self-regulation most frequently found in the interactions between our primary factors. General self-regulatory abilities were only mentioned in the three models but were not considered in the empirical studies.
This paper aimed to synthesize multifactorial and transactional data resulting from studies and reviews assessing the links between the child's and CG's attachment quality, parental feeding practices, and family routines and the risk of childhood obesity across three developmental periods (see ). It also aimed to assess the mediation of these links by specific self-regulatory capacities across the different developmental periods. In general, this literature review showed that the CG's and child's attachment quality was associated with controlling or permissive feeding practices, few family routines, and the modeling of obesogenic behaviors. These associations were mostly mediated by appetite dysregulation and emotional regulation strategies and influenced the child's food consumption and weight trajectory toward overweight and obesity status.
4.1. First Developmental Period (0–2 Years)
Among the most studied concepts at this developmental period were the quality of parental attachment, considered in the three models presented in the literature review papers and in the two empirical studies, as well as the quality of infant attachment, presented in the same models. Other factors affecting the quality of the CG–child relationship were operationalized through feeding and emotional responsiveness in two of the models and in one of the studies. Indeed, feeding is a context that contributes to the formation of the early child attachment relationship. Secure parents respond more sensitively to the child's eating behaviors and cues, which supports the child's innate ability to self-regulate food intake. Conversely, less sensitive interactions during feeding with an insecure CG, especially one with unresolved attachment trauma, can compromise this self-regulation ability, leading to a risk of weight gain. On the one hand, detached fathers in Reisz et al.'s study are less attuned to their child during feeding as a result of an emotional deactivation strategy used to cope with a potentially stressful feeding context or to minimize the importance of relationships. On the other hand, when facing the hungry child's distress, and similar to the results of Messina et al., unresolved trauma memories could reactivate within fathers and lead the CG to attempt to regain control using controlling feeding practices. Among the least studied concepts during this developmental period, feeding practices were included in two models and assessed in one study. Reisz et al. demonstrated that fathers with unresolved attachment trauma had more controlling feeding practices with their sons than with their daughters. These findings follow the results of studies in which fathers are reported to engage in more controlling practices in general than mothers. Using this type of practice, CGs perceive the child's signals and cues less well, which prevents the child from learning to identify and correctly regulate his or her physiological signals of hunger/satiety. This may increase the risk of emerging difficulties in his or her eating self-regulation skills, and thus of weight gain. Finally, family routines were included in two models and in one study. In their “at-risk” model, Saltzman et al. proposed that, in addition to poor attachment quality and the CG's low responsiveness to eating, some family routines would affect the development of the child's self-regulatory abilities. This hypothesis was confirmed by the same authors.
Routines can be important for children exposed to frequent stress, including an insecure attachment relationship, because they provide stable and predictable interactions. When routines are disrupted or unstable, the child's environment can become chaotic, which prevents the fluidity of interactions essential for healthy development and underlies the links between this type of family environment and childhood obesity.
4.2. Second Developmental Period (2–8 Years)
Parental attachment, which is among the most studied concepts in this developmental period, was assessed in four of the studies, with the CG's anxious attachment mainly emphasized in these data. Feeding practices were also frequently assessed in this review and included in three studies. The CG's anxious attachment was indirectly linked with the child's eating self-regulation ability through persuasive controlling feeding practices. This kind of feeding practice is the only one that has been longitudinally related to child overweight. Because anxious CGs have little ability to manage their distress, they may activate their own attachment system when facing their child's distress due to hunger. Therefore, they can use controlling feeding practices in response to their anxiety. As Hardman et al. demonstrated, they may be at risk of teaching their child dysfunctional emotional regulation strategies, such as the use of food to regulate negative emotions. If used consistently, and through a parent–child transmission mechanism, these strategies seem to induce emotional eating in the child. Indeed, anxiously attached CGs have lower distress regulation capacities. Relying on external sources such as food consumption to help them manage distress puts them at risk of developing emotional eating. Thus, we hypothesize that the child seems to integrate the CG's regulation model at early stages of development, which induces a misidentification of, or confusion between, his/her emotional signals and the physiological signals of hunger/satiety, and therefore increases emotional eating. As demonstrated by Hardman et al., anxious mothers tend to reinforce this regulation strategy by responding to their child's emotional overeating with more emotional eating strategies, which are considered controlling feeding practices. This is an interesting finding that is consistent with previous research indicating that maternal feeding practice is firstly “child responsive” and that the child's overweight status is not primarily induced by a controlling feeding practice, as was often stated before. Family routines around meals and other dimensions of family life, such as screen time, were only assessed in two studies. Bost et al. reported that insecure parents tended to have fewer mealtime routines and more television time, which is a risk factor for childhood obesity. More screen time could be interpreted as a way to reject or to avoid interactions and the child's negative emotions.
4.3. Third Developmental Period (8–18 Years)
Among the concepts focused on in both studies investigating this period are the CG's and child's “obesogenic behaviors”, including mealtime routines and their “modeling” (conceptually similar to feeding practices) towards the child or adolescent. There were no associations between the child/adolescent's obesogenic behavior and the CG's own obesogenic behavior and modeling, which can be explained by the fact that children who are 8–18 years old spend less time with their CGs, and are therefore less influenced by their family environment.
Thus, the influence of parental obesogenic behavior modeling on the child's own behaviors may be strongest when the child is young and predominantly heteroregulated. Child attachment quality was assessed in only one of the studies, which demonstrated that insecure children/teenagers had a higher risk of consuming high-calorie foods. Interestingly, and in contrast to the adult samples in this review, it was the avoidant children who showed more "obesogenic" behaviors, this link being mediated by their eating self-regulation ability. The latter was operationalized by emotional eating and low behavioral regulation; given that emotional eating is also considered an emotional avoidance strategy, we hypothesize that the use of eating to manage emotions could be part of the emotional suppression strategy found in this type of attachment. An avoidant attachment, and the use of emotional eating as an avoidance strategy, could impoverish the child's emotional awareness and lead to a risk of developing alexithymia, a psychopathology linked in the literature with avoidant adolescent profiles and with an increased risk of overweight and obesity. Future studies should test the association between the child's/teenager's avoidant attachment, regular recourse to emotional eating as an emotional regulator, the development of alexithymia, and weight gain.

4.4. Practical Applications

From infancy to middle childhood, children are highly influenced by their family environment and their interactions with their CG. When working with families in which a child is overweight or obese and the CG shows insecure or unresolved attachment cues, controlling feeding practices, and few family routines, clinicians can target these relational factors to help such families achieve healthier habits, behaviors, and emotional and appetite regulation strategies. Assessing the CGs' level of motivation to change their routines and feeding practices is important, since changing habits can be difficult in the longer term. Clinicians can use motivational interviewing techniques to help them identify unhealthy routines. Stable routines are important protective factors for children with insecure mothers, making them key elements for clinicians to investigate. Clinicians can also assess the type of feeding practices CGs use and work with them to establish more stable and predictable routines and to introduce more structure and autonomy into feeding practices. Clinicians need to assess the quality of the CG's attachment representations in order to see how they influence the interactions between the adult and the child during stressful events (e.g., when a child cries during feeding). Since unresolved attachment trauma memories and insecure attachment representations influence the CG's interpretation of the child's emotional cues and their behavior, practitioners can help these CGs to better understand their own attachment needs and those of their child, so as to identify and respond to them more sensitively. In a feeding context, this will help the CG discriminate between the attachment/emotional cues and the hunger/satiety cues expressed by the child, which will also help the child learn how to better regulate him- or herself within these dimensions. Practitioners can assess the CG's and child's attachment quality, as well as how the CG manages the child's feeding and emotional needs, by observing their interactions during a meal in ecological or clinical settings.
For attachment quality more specifically, practitioners can use several tools, including the Massie–Campbell Mother–Infant Attachment Indicator During Stress Scale (0–18 months), a standardized observation of attachment behaviors in mildly stressful contexts that encompasses several family routines, such as dressing, bathing, and feeding, as well as reunion episodes. An additional tool is the Coding Interactive Behavior system, which codes interactions between the child (between 2 and 36 months) and his or her CG to assess maternal sensitivity. In our corpus, maternal anxious attachment was linked with emotional eating in children and pre-adolescents. If the CG or child presents emotional eating, practitioners can work with them on developing other emotional regulation skills and help patients more easily discriminate emotional cues from hunger/satiety cues, thereby reducing confusion. Dialectical behavior therapy (DBT) and acceptance and commitment therapy (ACT) are therapeutic interventions that have been shown to help adults and teenagers with emotional eating and overweight/obesity. The Dutch Eating Behavior Questionnaire (DEBQ) is a widely used tool for identifying emotional eating (a scoring sketch is given at the end of this section). When children are old enough to regulate themselves, they are less influenced by the obesogenic routines and behaviors of their family members. Since insecure avoidant teenagers have a higher risk of consuming high-calorie foods, practitioners need to work on the quality of the therapeutic relationship before engaging in a specific therapeutic or motivational approach, so that the relational experience can modulate the patient's internal working models concerning attachment needs. Diverse therapeutic approaches can be proposed, including ACT and DBT, but also Fonagy and Bateman's mentalization-based treatment, which helps patients access their internal experiences and consider their thoughts and emotions differently. As previously mentioned, assessing patients' daily routines and feeding habits can be useful to identify obesogenic behaviors and to work on their motivation to change such behaviors.
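As a practical aside on the DEBQ mentioned above, the following minimal Python sketch computes subscale scores as item means, which is the scoring scheme commonly described for this instrument (1–5 Likert items averaged per subscale). The item-to-subscale groupings shown are illustrative placeholders only, not the actual DEBQ item numbers; the published scoring key should be used in practice.

```python
from statistics import mean

# Illustrative DEBQ-style responses: item number -> Likert rating (1 = never ... 5 = very often).
responses = {1: 4, 2: 5, 3: 3, 4: 4, 5: 2, 6: 1, 7: 2, 8: 3, 9: 5, 10: 4}

# Hypothetical item groupings; replace with the published DEBQ key.
subscales = {
    "emotional_eating": [1, 2, 3, 4],
    "external_eating": [5, 6, 7],
    "restrained_eating": [8, 9, 10],
}

def subscale_scores(resp: dict[int, int], key: dict[str, list[int]]) -> dict[str, float]:
    """Score each subscale as the mean of its item ratings."""
    return {name: mean(resp[i] for i in items) for name, items in key.items()}

print(subscale_scores(responses, subscales))
# A higher emotional-eating mean flags a stronger tendency to eat in response to emotions.
```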
4.5. Study Limitations and Future Research Directions

The analyzed studies offer important findings and reflections on our subject, but they also have limitations that point to several future research directions. Concerning attachment quality, of the 10 studies reviewed, only Lamson et al. assessed the child's or adolescent's attachment. Given that children's attachment is constructed during the first year of life, it seems important to consider this variable in future studies of infants and preschoolers. Additionally, only the study by Reisz et al. addressed unresolved representations of attachment in adults; the other six studies focused on the axes of security or avoidant/anxious insecurity. It also appears important to consider parental mental health, since it can interfere with the CG's ability to respond consistently and sensitively to the child's needs, as well as the child's temperament dimensions associated with dysfunctional eating behaviors. Moreover, when obesogenic behaviors were assessed, they were often reduced to food consumption, whereas Lamson et al. also included mealtime routines and other dimensions of family life. Socio-economic level was not reported in the studies of Hardman et al. or Lamson et al., education level was not assessed in the study of Powell et al., and the child's gender was not considered in the studies of Hardman et al. and Pasztak-Opilka et al. Ethnicity, socio-economic and educational level, and gender are important demographic factors to control for in data analysis, as they potentially affect the development of the child's weight status. The studies by Powell et al., Hardman et al., Pasztak-Opilka et al., Bost et al., and Lamson et al. used only self-report questionnaires, which may be subject to self-perception bias. The validity of the results should be strengthened by using other means of assessment, so the use of mixed methods is recommended, particularly since most of the studies are also cross-sectional. Finally, future studies should include more fathers in their samples.
5. Conclusions

This literature review has highlighted the impact of dyadic and multiple family relationship factors on the development of the child's weight status. The quality of parental and child attachment, parental feeding practices, and family routines can induce dysfunctional eating behaviors and compromise the child's self-regulation capacities, especially eating and emotional self-regulation, which can result in weight gain and childhood obesity. The results of this review contribute to a recent research focus on childhood obesity, and we propose new research topics to understand other facets of this illness, as well as how to better prevent and treat it by modifying the child's environment at several relational levels.
Interventional Oncology and Immuno-Oncology: Current Challenges and Future Trends
1. Introduction

In recent years, we have witnessed the exponential expansion and impact of interventional radiology in oncology. Cross-sectional imaging techniques play a crucial role in the diagnosis, treatment planning, and follow-up of cancer patients, and also make it possible to perform minimally invasive procedures to procure tissue for histological diagnosis, including the genetic material necessary to develop better tailored, biologically driven treatments. This permits personalized medicine, which potentially maximizes therapeutic effects. Moreover, there is increasing attention in the scientific and medical community on the development of interventional oncology techniques and procedures as locoregional approaches to cancer treatment within a multidisciplinary cancer management setting. Currently, percutaneous interventional approaches are performed for the treatment of a wide range of both primary and secondary malignancies, as an alternative to or in combination with surgery and other treatment modalities. Indeed, multidisciplinary guidelines for the treatment of HCC and RCC now incorporate their use. Interventional oncology has the unique capability of treating malignancy in a locoregional fashion, enabling curative (ablative treatments), disease-control (intra-arterial chemo- or radio-embolization), and palliative treatment. Locoregional eradication therapy involves the application of different energy sources that, despite a variety of mechanisms of action (such as heat, freezing, or electricity), induce effective necrosis of the tumor core and apoptosis of the adjacent tissue, with substantial preservation of healthy parenchyma. The destruction of the tumor, along with the release of necrotic material, creates in situ availability of antigens that may be recognized by the immune system as a threat and potentially trigger an immune response throughout the body, giving rise to the so-called abscopal effect. With the advent of immunotherapy in cancer care and the introduction of immune checkpoint inhibitors, several efforts have been made to investigate the synergy of such treatments in combination with interventional radiology, as this new class of drugs influences the immunologic microenvironment of the tumor by acting on several key target molecules and restoring immune system function against the malignancy. Although preliminary evidence from immune checkpoint inhibitor monotherapy is promising, the greatest potential of these treatments is likely to be achieved in combination with other treatments that can trigger an immune response. In this manuscript, we review the most recent advances in locoregional interventional oncology treatments and their interactions with immune checkpoint inhibitors. An accompanying figure depicts the most important interventional oncology techniques and their modes of action.
2. Percutaneous Ablative Treatments and Immunotherapy

Based upon multidisciplinary guidelines, percutaneous interventional techniques are currently considered a possible therapeutic strategy to treat both primary and secondary tumors in multiple anatomical sites. However, the therapeutic outcomes of interventional techniques are frequently limited by recurrence and distant metastasis. Recent pre-clinical and clinical studies have suggested that percutaneous ablative therapies alter the patient's immune profile. Among these immunological effects, for some therapies such as cryoablation and IRE, the central area of necrosis caused by percutaneous ablation induces antigenic release that leads to antigen presentation by dendritic cells, an increase in serum cytokine levels, activation of the CTLA4 cascade, and a T cell response. On the other hand, the peripheral area of apoptosis induced by ablation downregulates the immune system. These interactions produce both local and systemic effects, occasionally including the abscopal effect of distant tumor shrinkage. While the immune response induced by ablation alone appears to be transient, there is strong evidence that it could potentially enhance the effect of immunotherapies. In this section, we review the evidence regarding the most common percutaneous interventional techniques and their interaction with immunotherapies in cancer treatment.

2.1. Radiofrequency Ablation (RFA)

A sufficiently high thermal insult induces coagulation necrosis in the target tissue and the release of cytokines and antigens into the blood, leading to both local and systemic effects. Slovak et al. demonstrated, in a VX-2 rabbit liver cancer model, that the combination of RFA plus CpG-B (a factor that stimulates innate immunity) increased activated lymphocyte infiltration and rabbit survival compared with either RFA or CpG-B alone. Schneider et al. analyzed the ablated area in non-small cell lung cancer (NSCLC), demonstrating a surge of CD4+ and CD8+ lymphocytes in the peripheral zone and an intensification of pro-inflammatory cytokines. Mizukoshi et al. investigated the immune responses before and after RFA in 69 HCC patients and highlighted a significant increase in tumor-associated antigen (TAA)-specific T cells in the peripheral blood of 62.3% of patients. Moreover, the number of TAA-specific T cells after RFA was predictive of HCC recurrence after ablation in both univariate and multivariate analyses. A comparative study involving patients with intermediate- to advanced-stage HCC investigated the efficacy of RFA plus a monoclonal antibody (131I-chTNT) as a combination therapy. This study suggested that the combination therapy is more effective than RFA alone, with longer survival in patients who received RFA plus 131I-chTNT than in those who received RFA alone, although the difference was of borderline statistical significance (p = 0.052). Regarding the systemic abscopal effects of radiofrequency ablation, it has been demonstrated in a murine colon-cancer model that the combination of RFA with a vaccine encoding CEA produces regression of distant metastases and a significant increase in CEA-specific CD4+ T cells compared with RFA or vaccine alone (p < 0.0001 and p = 0.0003, respectively). Nakagawa et al. demonstrated that the administration of dendritic cells stimulated by OK-432 (a clinical bacterial product that can induce DC maturation) after RFA increased the number of CD8+ T cells infiltrating untreated secondary tumors compared with RFA alone (p < 0.001).
However, the abscopal effect achieved by RFA alone is weak, transient, or even occasionally counterproductive. In fact, there is a risk of inducing an immunologically tolerogenic state if RFA is not supported by immunotherapy. Indeed, it has been shown in a rat breast cancer model that RFA alone stimulates hepatocyte growth factor (HGF) and vascular endothelial growth factor (VEGF), leading to unwanted effects such as increased cell replication (evaluated by Ki-67) and increased microvascular density in distant tumors. Therefore, deactivation of the HGF (using PHA-665752) and VEGF (using semaxanib) pathways may improve the clinical outcomes of RFA. This opposing, pro-tumorigenic effect of RFA may also explain the worse prognosis of ablated HCC compared with surgical resection. Other clinical consequences of this pro-tumorigenic effect are the evidence that incomplete radiofrequency ablation enhances neo-angiogenesis in HCC and tumor growth in non-small cell lung cancer.

2.2. Cryoablation

Cryoablation is based on a cycle of freezing and thawing that causes intra- and extracellular ice crystal formation, damage to the cell membrane, osmotic pressure changes, and, thus, cellular dehydration. The use of cooling energy makes cryoablation suitable for lesions close to vital structures. While the central area of the ablation is composed of necrotic tissue, the peripheral boundary is largely composed of apoptotic cells. Although all ablation techniques release tumor antigens, cryoablation avoids protein denaturation and preserves native antigen structures. As a consequence, serum levels of interleukin-1 (IL-1), IL-6, NF-κB, and TNF-α are significantly higher after cryoablation than after other ablative therapies, suggesting a stronger immunostimulatory response. In a renal cell carcinoma model, the combination of cryoablation and an anti-PDL1 drug led to anti-tumor immune responses and delayed growth of distant untreated tumors. Furthermore, in a melanoma model, den Brok et al. demonstrated that the combination of cryoablation and CpG-B induced regression of existing secondary tumors in 40% of cryoablation-treated mice, suggesting strong abscopal effects. In metastatic liver cancer patients, Niu et al. demonstrated that the combination of cryoablation and immunotherapy leads to a significantly increased median overall survival (OS) compared with cryoablation or immunotherapy alone (32 vs. 17.5 vs. 3 months; p < 0.05). Similar results in terms of OS and immune responses were reported in patients with lung, renal cell, and hepatocellular cancers treated with cryoablation and allogeneic NK cell transfers, and in patients with breast cancer treated with cryoablation plus anti-CTLA4 and anti-PD1. Although the volume and level of evidence are lower for cryoablation than for RFA, the combination of cryoablation with immunotherapy appears to offer promising results, providing a solid basis for further investigations.
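As a methodological aside, survival comparisons such as the Niu et al. OS figures quoted above are typically made with Kaplan–Meier estimates and a log-rank test. The following minimal Python sketch, using the lifelines library and entirely hypothetical follow-up data (the patient-level data of these studies are not reproduced here), illustrates the approach.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times in months and event flags (1 = death observed, 0 = censored).
combo_t = np.array([32, 28, 40, 25, 36, 30, 22, 34])
combo_e = np.array([1, 1, 0, 1, 0, 1, 1, 0])
mono_t = np.array([17, 15, 20, 12, 18, 16, 14, 19])
mono_e = np.array([1, 1, 1, 1, 0, 1, 1, 1])

# Kaplan-Meier estimate of median OS in each arm.
for label, t, e in [("ablation + immunotherapy", combo_t, combo_e),
                    ("ablation alone", mono_t, mono_e)]:
    kmf = KaplanMeierFitter()
    kmf.fit(t, event_observed=e, label=label)
    print(label, "median OS:", kmf.median_survival_time_, "months")

# Log-rank test for a difference between the two survival curves.
result = logrank_test(combo_t, mono_t, event_observed_A=combo_e, event_observed_B=mono_e)
print("log-rank p-value:", result.p_value)
```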
2.3. Irreversible Electroporation (IRE)

IRE is a novel non-thermal ablation technology based on the application of pulsatile, targeted, high-voltage electric energy that alters the transmembrane potential of the cell, leading to permanent nanopore formation within the lipid bilayer membrane. This membrane disruption results in loss of homeostasis with subsequent cellular apoptosis and death. The first significant evidence of the immunological effects of IRE was reported in 2016 by Bulvik et al., who demonstrated greater lymphocyte infiltration and tumor size reduction for IRE compared with RFA in an HCC murine model. Furthermore, in preclinical models of hepatocellular carcinoma, Vivas et al. showed that the administration of an immunostimulant drug (Poly-ICLC) before IRE increased the immunogenic response and reduced tumor growth compared with both IRE and Poly-ICLC alone (40%, p < 0.05). These findings were confirmed by Alnagger et al., who reported an increased median overall survival (10.1 months in the IRE-NK group vs. 8.9 months in the IRE-alone group, p = 0.0078) and a decrease in alpha-fetoprotein expression in patients with metastatic (stage IV) liver tumors treated with IRE plus allogeneic NK cell immunotherapy. The same strategy (IRE plus NK vs. IRE alone) was investigated by Yang et al., who reported longer median progression-free survival (PFS) and overall survival (OS) (PFS 15.1 vs. 10.6 months, p < 0.05; OS 17.9 vs. 23.2 months, p < 0.05), with a reduction of circulating tumor cells in patients who received the combination therapy. IRE has also been evaluated in other clinical contexts: classical systemic immunotherapy has only limited efficacy against pancreatic ductal adenocarcinoma (PDAC) due to the presence of an immunosuppressive tumor-associated stroma, and the rationale of studies on IRE is that ablative therapies could disrupt the pancreatic immunosuppressive microenvironment, leading to a greater response to systemic immunotherapy. Zhao et al. used a mouse model of PDAC to demonstrate that the association of IRE and systemic anti-PD1 treatment promotes CD8+ T cell infiltration and increases overall survival compared with either IRE or anti-PD1 as monotherapy. Narayanan et al. also used a mouse model of PDAC, combining IRE with systemic anti-PD1 and an intra-tumoral TLR7 agonist. This triple strategy improved local response compared with IRE alone and promoted regression of untreated concomitant metastases. These encouraging results have led to the first preliminary human studies, showing that IRE combined with NK cells or allogeneic Vγ9Vδ2 T cell infusion prolongs progression-free survival (11 vs. 8.5 months), overall response rates at 1 month, and overall survival (14.5 vs. 11 months) compared with IRE alone in PDAC patients. The upcoming PANFIRE-III trial (NCT04612530) will also combine IRE, systemic anti-PD1, and an intra-tumoral TLR9 agonist in human patients with metastasized PDAC.

2.4. Microwave Ablation (MWA)

The evidence for combining MWA with immunotherapy is weaker, as preliminary studies suggested that MWA is less immunogenic than RFA and cryoablation. However, Leutche et al. uncovered de novo or enhanced tumor-specific T-cell responses in 30% of patients with hepatocellular carcinoma (HCC) treated with MWA alone, and this T-cell response was associated with longer progression-free survival (27.5 vs. 10.0 months). In the same study, the analysis of HCC samples (n = 18) from patients receiving combined MWA and resection revealed superior disease-free survival in patients with high T-cell infiltration at the time of thermal ablation (37.4 vs. 13.1 months).
Regarding the synergistic effect of MWA and immunotherapy, Chen et al. demonstrated that the combination of MWA and GM-CSF significantly increased tumor-free survival and decreased tumor volume in a murine hepatoma model. Similar results were obtained in human patients with HCC, although in this initial study the increase was not statistically significant. Additionally, in a pilot study by Zhou P et al., adoptive immunotherapy in association with MWA for HCC patients was shown to be safe and capable of increasing the percentage of peripheral lymphocytes.

2.5. High-Intensity Focused Ultrasound (HIFU) and Laser-Induced Thermotherapy (LiTT)

Less evidence is available in the literature for other ablation techniques. HIFU has been used for primary and secondary malignancies of the breast, soft tissue, bone, pancreas, kidney, and liver. Yet, although HIFU can induce cytokine release and a stress response with an augmented CD4+/CD8+ ratio, it appears to be less immunogenic than RFA and cryoablation. LiTT has been reported to increase cytokine levels (IL-6, TNFRI, and CRP) in liver malignancies. Moreover, Vogl et al. highlighted that CD3+, CD4+, and CD8+ levels increased after LiTT (12.73 ± 4.83 vs. 92.09 ± 12.04; 4.36 ± 3.32 vs. 42.92 ± 16.68; 3.64 ± 1.77 vs. 47.54 ± 15.68; p < 0.05), with an associated improvement in cytotoxic effects (RLU = 1493 ± 1954.68 vs. 7260 ± 3929.76; p < 0.001). Overall, ablative techniques combined with immunotherapy appear to achieve a synergistic effect, whereas ablative therapies alone can increase neoangiogenesis when complete ablation is not achieved, also leading to immune tolerance. Most published studies concern RFA, whereas the combination of immunotherapy with cryoablation, MWA, IRE, and HIFU is emerging as a promising alternative to RFA for a great variety of target lesions. An accompanying table summarizes the pros and cons of the current practice of locoregional percutaneous interventional oncology treatments when associated with immunotherapy, also reporting the results of preclinical studies; a second table describes the lesions in which clinical trials have investigated the role of immunotherapy associated with ablative treatments.
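The before/after marker comparisons reported by Vogl et al. are paired measurements. The sketch below shows, on hypothetical paired values (not the study's raw data, which are not available here), how such pre- versus post-treatment differences are commonly tested in Python with a paired t-test, or with a Wilcoxon signed-rank test when normality of the differences is doubtful.

```python
import numpy as np
from scipy import stats

# Hypothetical paired CD8+ levels for 8 patients, before and after ablation.
before = np.array([3.1, 4.2, 2.8, 3.9, 4.5, 3.3, 2.9, 4.0])
after = np.array([41.2, 55.7, 38.9, 49.3, 60.1, 44.8, 39.5, 52.6])

# Paired t-test on the within-patient differences.
t_stat, p_t = stats.ttest_rel(after, before)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_t:.4g}")

# Non-parametric alternative for small samples or non-normal differences.
w_stat, p_w = stats.wilcoxon(after, before)
print(f"Wilcoxon signed-rank: W = {w_stat:.2f}, p = {p_w:.4g}")
```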
3. Endovascular Intra-Arterial Treatments and Immunotherapy

In current practice, interventional intra-arterial treatments of tumors deliver a wide variety of active tumoricidal agents, including conventional transcatheter arterial chemoembolization (cTACE), drug-eluting bead transcatheter arterial chemoembolization (DEB-TACE), and transarterial radioembolization (TARE). Chemoembolization (both cTACE and DEB-TACE) is the treatment of choice in patients with intermediate-stage HCC (Barcelona Clinic Liver Cancer [BCLC] stage B), while radioembolization can potentially be used as an alternative in these patients based on level-2 evidence. Moreover, these techniques are increasingly used for the treatment of hepatic metastases, including those from colorectal and neuroendocrine cancers. In this section, we review the evidence for interactions between immunotherapy and the most common endovascular interventional techniques.

3.1. Transarterial Chemoembolization (TACE)

TACE is commonly used in patients with unresectable HCC and preserved liver function. Conventional TACE (cTACE) commonly delivers an emulsion of lipiodol and a chemotherapeutic agent (most often doxorubicin or cisplatin) followed by gelatine sponge as the embolic agent, whereas the more recent drug-eluting bead (DEB) TACE uses beads preloaded with the chemotherapeutic agent, with a reported reduction in drug-related side effects due to a better pharmacokinetic profile. Both types of TACE induce local tumor necrosis through occlusion of the feeding arteries, leading secondarily to the release of tumor antigens, which activate the immune response. Moreover, TACE can potentially modify the cytokine spectrum and the activation level of T cells. TACE stimulates the secretion of interleukins (ILs) such as IL-1 and IL-10, and of interferon-γ, with activation of T helper-17 and T helper-1 cells. TACE also modulates immunosuppressive factors such as T-regulatory cells, PD1/PDL1, and HIF-1α, potentially leading to immune tolerance. The combination of TACE and immunotherapy may amplify the antitumoral effect. In a pilot study, Sangro et al. used a CTLA4 inhibitor (tremelimumab) combined with TACE in 21 advanced HCC patients, showing promising results, with a good safety profile and a median survival of 8 months; tremelimumab was administered intravenously at a dose of 15 mg/kg every 90 days until progression or intolerable toxicity. Duffy et al. showed the efficacy of combined TACE/ablation and tremelimumab in a group of 32 patients with advanced HCC (75% of whom had progressive disease). The median overall survival was 12.3 months, and most patients showed a reduction in tumor load, tumor reduction in non-ablated or non-embolized areas, and intratumoral infiltration of CD8+ T cells. A phase-I clinical trial (NCT03143270) of nivolumab (a PD1 inhibitor) combined with DEB-TACE in patients with advanced HCC is ongoing; all patients are scheduled to receive 24 mg of nivolumab intravenously every 2 weeks for up to 1 year. Another open-label, single-arm phase-II study (NCT03572582) combining nivolumab with TACE in patients with intermediate-stage HCC is also ongoing: nivolumab treatment will start 2–3 days after the initial TACE and will be administered intravenously (240 mg, fixed dose) every 2 weeks for up to two years or until progression, with a second TACE performed 8 weeks after the first. Other investigations of durvalumab (a PDL1 inhibitor) plus tremelimumab combined with TACE are underway, including a phase-II clinical trial (NCT02821754) and a clinical study (NCT03638141).
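To illustrate the two regimen types just described (weight-based tremelimumab every 90 days versus fixed-dose nivolumab every 2 weeks), the following Python sketch generates hypothetical administration calendars. The dose values are taken from the trial descriptions above, while the start date and patient weight are placeholders.

```python
from datetime import date, timedelta

def schedule(start: date, interval_days: int, n_doses: int) -> list[date]:
    """Return the planned administration dates for a fixed-interval regimen."""
    return [start + timedelta(days=i * interval_days) for i in range(n_doses)]

start = date(2024, 1, 8)  # placeholder start date
weight_kg = 70.0          # placeholder patient weight

# Tremelimumab: 15 mg/kg IV every 90 days (per the pilot study described above).
treme_dose_mg = 15.0 * weight_kg
for d in schedule(start, 90, 4):
    print(f"{d}: tremelimumab {treme_dose_mg:.0f} mg IV")

# Nivolumab: 240 mg flat dose IV every 2 weeks (per NCT03572582).
for d in schedule(start, 14, 5):
    print(f"{d}: nivolumab 240 mg IV")
```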
3.2. Transarterial Radioembolization (TARE)

TARE is emerging as a "multi-purpose" treatment in patients with HCC, as it can represent an effective alternative to TACE and, as more recent studies demonstrate, to ablative treatments. Under ideal conditions, it can achieve tumor downstaging and act as a bridge to surgical resection or liver transplantation in selected patients. TARE is usually performed with yttrium-90 (90Y) in resin or glass microspheres, although there are ever-increasing reports on the effectiveness of holmium-166 (166Ho) in poly(L-lactic acid) (PLLA) microspheres. As opposed to TACE, TARE uses local beta radiation, rather than arterial occlusion, to achieve tumor necrosis. It is a two-step treatment: the first step comprises pre-treatment angiography with injection of technetium-99m-labeled macroaggregated albumin (MAA) and scintigraphy, in order to evaluate the lung shunt fraction (to avoid the risk of radiation pneumonitis) and to identify arteries supplying the gastrointestinal tract (to avoid ulcerations); the second step is the actual treatment with 90Y-loaded microspheres. When judiciously delivered, TARE has a minimal embolic effect, so post-embolization syndrome is reduced compared with TACE. Recent studies underline that the immunocompetence of the tumor microenvironment is elevated after 90Y TARE, an effect possibly explained by the expression of TNF-α by CD4+ T cells, the upregulation of CD8+ T cells, and an increased APC ratio. A recent retrospective study of 26 patients with aggressive intermediate-stage or advanced HCC showed that the combination of nivolumab (or nivolumab plus ipilimumab) with TARE is safe (little treatment toxicity) and has promising results, with a median overall survival of 16.5 months and a progression-free survival of 5.7 months; one patient even achieved a complete response. Moreover, a case report has been published of a patient with advanced HCC and macrovascular invasion who was treated with nivolumab plus TARE, achieving downstaging that made the tumor amenable to surgery (with surgery confirming a complete response). These results are supported by a phase-II trial of 36 patients with advanced HCC treated with TARE plus nivolumab, showing an overall response rate of 31% and an overall survival of 15.1 months. Other trials are still evaluating TARE plus nivolumab (NCT03380130, NCT03033446, NCT02837029) and TARE plus pembrolizumab (NCT03099564). The low volume and level of evidence of studies on combination treatments of immunotherapy with TACE or TARE make it difficult to assess their safety and long-term efficacy, although ongoing phase-I and -II clinical trials will certainly shed light on these promising combinations. An accompanying table summarizes the pros and cons of the current practice of locoregional endovascular interventional oncology treatments when associated with immunotherapy, also reporting the results of preclinical studies; a second table describes the lesions in which clinical trials have investigated the role of immunotherapy associated with endovascular treatments.
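To make the lung-shunt evaluation step described above concrete: the lung shunt fraction (LSF) is derived from the planar 99mTc-MAA scintigraphy counts, and a MIRD-style approximation is commonly used to convert the prescribed 90Y activity into a lung absorbed dose. The two formulas below are standard; the specific counts, activity, assumed lung mass, and the roughly 30 Gy single-treatment lung dose limit quoted in the worked example are hypothetical or commonly cited values that should be checked against the documentation of each microsphere product.

\[
\mathrm{LSF} = \frac{C_{\text{lung}}}{C_{\text{lung}} + C_{\text{liver}}},
\qquad
D_{\text{lung}}\,[\mathrm{Gy}] \approx \frac{49.67 \times A\,[\mathrm{GBq}] \times \mathrm{LSF}}{M_{\text{lung}}\,[\mathrm{kg}]}
\]

For example, with C_lung = 5000 and C_liver = 45,000 counts, LSF = 5000 / 50,000 = 0.10; a prescribed activity A = 2 GBq and an assumed lung mass of 1 kg then give D_lung ≈ 49.67 × 2 × 0.10 / 1 ≈ 9.9 Gy, well below the commonly cited 30 Gy limit, so in this hypothetical case the lung dose would not require an activity reduction.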
TACE is commonly used in patients with unresectable HCC with preserved liver function . Conventional TACE (cTACE) commonly delivers an emulsion of lipiodol and chemotherapeutic agent (most often doxorubicin or cisplatin) followed by gelatine sponge as the embolic agent, whereas the most recent drug-eluting beads (DEB) TACE utilizes drug-eluting beads preloaded with the chemotherapeutic agent, with a reported reduction of drug-related side effects due to a better pharmacokinetic profile . Both types of TACE induce local tumor necrosis by the occlusion of feeding arteries, leading secondarily to the release of tumor antigens, which activate the immune response . Moreover, TACE can potentially modify the cytokine spectrum and the activation level of T cells . TACE stimulates the secretion interleukines (IL) as IL-1 and IL-10, and of Interferon-γ, with activation of T helper-17 and T helper-1 cells . TACE also leads to a modulation of immunosuppressive factors such as T-regulatory cells, PD1/PDL1, and HIF-1α, potentially bringing immune tolerance . The combination of TACE and immunotherapy may amplify the antitumoral effect. In a pilot study, Sangro et al. used a CTLA4 inhibitor (tremelimumab) combined with TACE in 21 advanced HCC patients, showing promising results, with a good safety profile and a median survival of 8 months. Tremelimumab was administered intravenously at a dose of 15 mg/kg every 90 days until progression or intolerable toxicity . Duffy et al. showed the efficacy of combined treatment of TACE/ablation and tremelimumab in a group of 32 patients with advanced HCC (75% of patients had the progressive disease). The median overall survival was 12.3 months, and most patients showed a reduction of tumor load, tumor reduction in non-ablated or non-embolized areas, and intratumoral infiltration of CD8+ T cells . A phase-I clinical trial (NCT 03143270) in patients with advanced HCC treated with nivolumab (a PD1 inhibitor) combined with DEB-TACE is ongoing. All patients are scheduled to receive 24 mg of nivolumab intravenously for up to 1 year every 2 weeks. Another open-label single-arm phase II study (NCT 03572582) combines nivolumab with TACE in patients with intermediate HCC is ongoing. Nivolumab treatment will start 2–3 days after the initial TACE and will be administered intravenously (240 mg, fixed-dose) every 2 weeks for up to two years until progression. A second TACE will be performed 8 weeks after the first one. Other investigations using durvalumab (a PDL1 inhibitor) plus tremelimumab combined with TACE are underway, including a phase-II clinical trial (NCT02821754) and a clinical study (NCT03638141).
3.2. Transarterial Radioembolization (TARE)

TARE is emerging as a "multi-purpose" treatment in patients with HCC, as it can represent an effective alternative to both TACE and, as more recent studies demonstrate, ablative treatments. Under ideal conditions, it can lead to tumor downstaging and also act as a bridge to surgical resection and liver transplantation in selected patients. TARE is usually performed with Yttrium-90 (90Y) in resin or glass microspheres, although there are ever-increasing reports on the effectiveness of Holmium-166 (166Ho) in poly(L-lactic acid) (PLLA) microspheres. As opposed to TACE, TARE utilizes local beta radiation, rather than artery occlusion, to achieve tumor necrosis. It is a two-step treatment: the first step comprises pre-treatment angiography with an injection of macroaggregated albumin (MAA) labelled with Technetium-99m, followed by scintigraphy to evaluate the lung shunt fraction (to avoid the risk of radiation pneumonitis) and to identify arteries that supply the gastrointestinal tract (to avoid ulcerations); the second step is the actual treatment with 90Y-loaded microspheres. When judiciously delivered, TARE has a minimal embolic effect, so the post-embolization syndrome is reduced compared to TACE. Recent studies underline that the immunocompetence of the tumor microenvironment is elevated after 90Y TARE; this effect is possibly explained by TNF-α expression by CD4+ T cells, which upregulates CD8+ T cells, together with an increased ratio of APCs. A recent retrospective study of 26 patients with aggressive intermediate-stage or advanced HCC showed that the combination of nivolumab (or nivolumab plus ipilimumab) with TARE is safe (little treatment toxicity) and has promising results, with a median overall survival of 16.5 months and progression-free survival of 5.7 months; one patient even achieved a complete response. Moreover, a case report has been published of a patient with advanced HCC and macrovascular invasion who was treated with nivolumab plus TARE, achieved downstaging, and became amenable to surgery, with surgery confirming a complete response. These results are confirmed by a phase-II trial of 36 patients with advanced HCC treated with TARE plus nivolumab, showing an overall response rate of 31% and an overall survival of 15.1 months. Other trials are still evaluating TARE plus nivolumab (NCT03380130, NCT03033446, NCT02837029) and TARE plus pembrolizumab (NCT03099564). The low volume and level of evidence of studies on combination treatments of immunotherapy with TACE or TARE make it difficult to assess their safety and long-term efficacy, even though phase-I and -II clinical trials are ongoing and will certainly shed light on this promising combination technique. The accompanying tables summarize the pros and cons of the current practice of locoregional endovascular interventional oncology treatments when associated with immunotherapy (also reporting results of preclinical studies) and describe in which lesions the clinical trials investigated the role of immunotherapy associated with endovascular treatments.
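Two quantities in the TARE work-up described above lend themselves to a worked example: the lung shunt fraction, conventionally estimated from the 99mTc-MAA scintigraphy as lung counts divided by lung-plus-liver counts, and the 90Y absorbed dose, commonly approximated by the MIRD relation D[Gy] ≈ 50 × A[GBq] / m[kg]. The sketch below applies both; the count values, administered activity, lung mass, and the ~30 Gy lung-dose threshold are illustrative assumptions.

```python
def lung_shunt_fraction(lung_counts: float, liver_counts: float) -> float:
    """Planar 99mTc-MAA estimate: LSF = lung counts / (lung + liver counts)."""
    return lung_counts / (lung_counts + liver_counts)

def lung_dose_gy(activity_gbq: float, lsf: float, lung_mass_kg: float = 1.0) -> float:
    """MIRD approximation for 90Y: D[Gy] ~= 50 * A[GBq] / m[kg],
    applied here to the shunted fraction of the administered activity."""
    return 50.0 * activity_gbq * lsf / lung_mass_kg

lsf = lung_shunt_fraction(lung_counts=12_000, liver_counts=188_000)   # -> 6%
dose = lung_dose_gy(activity_gbq=2.0, lsf=lsf)                        # -> 6 Gy
print(f"LSF = {lsf:.1%}, estimated lung dose = {dose:.1f} Gy")
# A lung dose above ~30 Gy per treatment is a commonly cited threshold for
# radiation pneumonitis risk (an assumption of this sketch, see lead-in).
```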
The combination of interventional radiology treatments with immunotherapy in oncology is growing rapidly. The results, albeit currently supported by only a low level of evidence, are encouraging and suggest that further studies could better define the role of combination therapies in this field. In particular, the strongest evidence concerns the combination of immunotherapy with RFA or cryoablation, whereas the combination of IRE and immunotherapy may play a greater role in the future, particularly in patients with pancreatic malignancies, where other methods have shown reduced efficacy. The evidence from the literature is still rather limited for TACE and TARE, consisting mostly of small monocentric studies. Further research in this field is required, ideally with randomized trials, to better understand how to achieve the desired immunogenic/abscopal effects while limiting unwanted pro-tumorigenic phenomena.
Pharmacogenomics on the Treatment Response in Patients with Psoriasis: An Updated Review
Psoriasis is a chronic, immune-mediated, inflammatory skin disease accompanied by systemic complications. Environmental, behavioral, and genetic factors play a role in the etiology of the disease. In particular, genetic predisposition is thought to be a key contributor to psoriasis through its involvement in immune pathophysiology, and about 40% of patients diagnosed with psoriasis or psoriatic arthritis have a related family history. To date, almost 100 psoriasis susceptibility loci have been identified through selective candidate-gene or genome-wide association studies (GWAS). The pharmacogenetics of psoriasis attracted attention once the immunogenetics of the disease had gradually been outlined, and the need for personalized medicine increased as more anti-psoriatic drugs became available and showed variable efficacy between drugs and individuals. This review aimed to provide an overview of current findings on possible genetic markers predictive of treatment outcomes in psoriasis under systemic and topical medication.

Regarding the pathogenesis and immunogenetics of psoriasis, the disease results from an aberrant innate or adaptive immune response associated with T lymphocytes that leads to inflammation, angiogenesis, and epidermal hyperplasia. Genetic or environmental factors can trigger immune-mediated damage to keratinocytes in psoriasis patients. The key pathomechanism of psoriasis is that dendritic cells and macrophages secrete IL-23, which stimulates CD4+ Th17 polarization, resulting in the secretion of cytokines such as IL-17, IL-22, and TNF-α. Moreover, IL-12 can activate the differentiation of CD4+ Th1 cells, which induces IFN-γ, IL-2, and TNF-α synthesis; CD8+ T cells are also activated and can release pro-inflammatory cytokines, including TNF-α and IFN-γ. The abundant cytokines lead to epidermal overgrowth, immune over-activation, and neovascularization. Consequently, a positive feedback loop of immune reaction leads to the development and maintenance of psoriatic lesions.

Psoriatic lesions are initiated when antigenic or auto-antigenic stimuli induced by damaged or stressed skin activate antigen-presenting cells (APCs), including dendritic cells (DCs) and macrophages. This process produces pro-inflammatory cytokines such as interferon (IFN)-α, tumor necrosis factor (TNF)-α, interleukin (IL)-12, IL-20, and IL-23, and initiates the early phase of cutaneous inflammation in psoriasis. The pro-inflammatory cytokines released from activated APCs promote T cell-mediated immunity through the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) pathway and the Janus kinase (JAK)-signal transducer and activator of transcription (STAT) pathway. In addition, engagement of the T cell receptor (TCR) with the major histocompatibility complex (MHC)-presenting antigen of APCs activates the calcium–calcineurin–nuclear factor of activated T cells (NFAT) pathway. These signals result in the migration, differentiation, and activation of naïve effector T cells. In particular, IL-23 stimulates CD4+ T helper 17 (Th17) polarization, which releases IL-17A/F, IL-22, and TNF-α; IL-12, on the other hand, activates the differentiation of the Th1 subset of CD4+ cells, which induces IFN-γ, IL-2, and TNF-α synthesis.
The inflammatory cytokines secreted by T cells, especially IL-17A, attract many more immune cells such as neutrophils, enhance angiogenesis, facilitate hyperproliferation of keratinocytes, and promote the further release of cytokines. Additionally, keratinocytes activated by IL-17, IL-22, and IL-20 through the JAK-STAT, NF-κB, and calcium–calcineurin–NFAT pathways release C-C motif ligand 20 (CCL20), antimicrobial peptides (AMP), and cytokines; hence, they contribute to the pro-inflammatory environment and amplify the inflammatory response. In brief, over-activated innate immunity induces exaggerated T cell-mediated autoimmune activation, epidermal overgrowth, and neovascularization. Consequently, a positive feedback loop leads to the development and maintenance of psoriatic lesions. Psoriasis susceptibility genes have been found to be involved in the entire immunopathogenesis, from antigen presentation, cytokines and receptors, signal transduction, and transcription factors to regulators of immune responses; at the same time, whether these susceptibility genes are potential predictors of treatment response has been investigated. In the following sections, we discuss the response-related genes in psoriasis treatment (summarized in the accompanying tables) and present the levels of evidence of each pharmacogenomic association according to the PharmGKB annotation scoring system. In PharmGKB, six levels from 1A to 4 represent high, moderate, and low to unsupported evidence, respectively.

3.1. Methotrexate

Methotrexate (MTX) is an antagonist of the enzymes dihydrofolate reductase (DHFR) and thymidylate synthase (TYMS). It is commonly used as first-line systemic immunosuppressive therapy for moderate to severe psoriasis. However, significant variations in its efficacy and toxicity exist among individuals. Therefore, several studies have sought potential pharmacogenetic factors that can be used to predict the clinical response to MTX.

3.1.1. ABCC1, ABCC2, ABCG2

The genes encoding the efflux transporters of MTX are ATP-binding cassette (ABC) subfamily C member 1 (ABCC1), ABCC member 2 (ABCC2), and ABC subfamily G member 2 (ABCG2). Overexpression of these genes can lead to multidrug resistance by extruding drugs out of the cell through various mechanisms. In psoriasis, a cohort study of 374 British patients found significant positive associations between methotrexate response and two SNPs of ABCG2 (rs17731538, rs13120400) and three SNPs of ABCC1 (rs35592, rs28364006, rs2238476), with rs35592 being the most significant (PASI75 at 3 months, p = 0.008). A cohort study from Slovenia demonstrated that a polymorphism of ABCC2 (rs717620) was associated with an insufficient response to MTX treatment (75% reduction from baseline PASI score (PASI75) at 6 months, p = 0.039). Regarding toxicity, a British cohort study noted that the major allele of six SNPs in ABCC1 (rs11075291, rs1967120, rs3784862, rs246240, rs3784864, and rs2238476) was significantly associated with the onset of adverse events, with rs246240 showing the strongest association (p = 0.0006).

3.1.2. ADORA2A

Adenosine receptor A2a (ADORA2A) is responsible for mediating the metabolic product of methotrexate. One SNP, rs5760410 of ADORA2A, was weakly associated with the onset of toxicity (p = 0.03).

3.1.3. ATIC

MTX inhibits 5-aminoimidazole-4-carboxamide ribonucleotide formyltransferase (ATIC), which leads to the accumulation of adenosine, a potent anti-inflammatory agent.
Campalani et al. analyzed 188 patients in the United Kingdom (UK) with psoriasis under methotrexate therapy and revealed that the allele frequency of ATIC (rs2372536) was significantly increased in patients who discontinued methotrexate owing to intolerable side effects (p = 0.038). Another British cohort study found that two SNPs in ATIC (rs2372536 and rs4672768) were associated with the onset of MTX toxicity (p = 0.01); however, these associations did not remain significant after adjusting for folic acid supplementation.

3.1.4. BHMT

Betaine-homocysteine S-methyltransferase (BHMT) is a zinc-containing metalloenzyme responsible for folate-independent remethylation of homocysteine using betaine as the methyl donor. A genotype analysis identified that the BHMT genotype was significantly associated with MTX hepatotoxicity (p = 0.022).

3.1.5. DNMT3b

DNA methyltransferase 3β (DNMT3b) is a methyltransferase involved in de-novo DNA methylation, and its polymorphism is thought to be associated with increased promoter activity. At least one copy of the variant DNMT3b rs242913 allele has been found to be associated with an insufficient response to MTX when compared to the wild type (p = 0.005).

3.1.6. FOXP3

Forkhead box P3 (FOXP3) appears to function as a master regulator of the regulatory pathway in the development and function of regulatory T cells (Tregs). A study of 189 southern Indian patients who had used methotrexate for 12 weeks found a significant difference in genotype frequencies of FOXP3 (rs3761548) between responders and non-responders (PASI75 at 3 months, p = 0.003).

3.1.7. GNMT

Glycine N-methyltransferase (GNMT) is a methyltransferase that converts S-adenosylmethionine to S-adenosylhomocysteine and is also a folate-binding protein. The rs10948059 polymorphism is associated with increased expression of the GNMT gene and reduces cell sensitivity to MTX. Patients with at least one variant GNMT allele were more likely to be non-responders to MTX treatment than carriers of the reference allele (PASI75 at 6 months, p = 0.0004).

3.1.8. HLA-Cw6

The human leukocyte antigen (HLA) system, known as the human MHC system, regulates the immune system by encoding cell-surface proteins. HLA-Cw6 is a psoriasis susceptibility allele that has been strongly linked to the disease. Carriers of HLA-Cw6 from southern India were reported to have a higher response rate to methotrexate (PASI75 at 3 months, p = 0.003). A Scottish cohort study with 70 HLA-tested patients demonstrated that a greater proportion of HLA-Cw6-positive patients remained on methotrexate beyond 12 months compared to the HLA-Cw6-negative group (p = 0.05).

3.1.9. MTHFR

The methylenetetrahydrofolate reductase (MTHFR) enzyme catalyzes the formation of 5-methyl-tetrahydrofolic acid, which acts as a methyl donor for the synthesis of methionine from homocysteine. This enzyme is indirectly inhibited by MTX. According to Zhu et al., PASI90 response rates to MTX were significantly higher in Han Chinese patients with the MTHFR rs1801133 TT genotype compared to those with the CT and CC genotypes (PASI90 at 3 months, p = 0.006). Furthermore, patients with the MTHFR rs1801131 CT genotype had lower PASI75 response rates to MTX in the Han Chinese population (PASI75 at 3 months, p = 0.014); they also had a lower risk of ALT elevation (p = 0.04).
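Most associations in this section compare genotype or carrier frequencies between responders and non-responders, which reduces to a 2×2 contingency-table test. The sketch below runs a Fisher exact test on hypothetical counts (not data from any cited study):

```python
from scipy.stats import fisher_exact

# Hypothetical counts: variant-allele carriers vs non-carriers among
# MTX responders (PASI75 achieved) and non-responders.
table = [[34, 16],   # responders:     carriers, non-carriers
         [18, 32]]   # non-responders: carriers, non-carriers
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```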
However, three studies have demonstrated no significant association between SNPs in the MTHFR gene and clinical outcomes in individuals with psoriasis treated with methotrexate.

3.1.10. SLC19A1

The solute carrier family 19, member 1 (SLC19A1) gene encodes the reduced folate carrier (RFC) protein, which actively transports MTX into cells. Multiple point mutations in SLC19A1 have been identified as associated with impaired MTX transport and resistance to MTX. SLC19A1 (rs1051266) was associated with MTX-induced toxicity rather than efficacy in patients with psoriasis.

3.1.11. SLCO1B1

The protein encoded by solute carrier organic anion transporter family member 1B1 (SLCO1B1) is a transmembrane receptor that transports drug compounds into cells. Genetic variations in SLCO1B1 have been linked to delayed MTX clearance and increased toxicity. The haplotype variants have been classified into two groups based on their reported transporter activity: a high-activity group and a low-activity group. Patients with low-activity haplotypes of SLCO1B1 (SLCO1B1*5 and SLCO1B1*15) were less likely to be MTX non-responders compared to patients with high-activity haplotypes (SLCO1B1*1a and SLCO1B1*1b) (PASI75 at 6 months, p = 0.027).

3.1.12. TNIP1

TNFAIP3 interacting protein 1 (TNIP1), one of the psoriasis susceptibility genes, is related to the IL-23 immune response signaling pathway. A Chinese study of 221 patients with psoriasis reported that the TT genotype of TNIP1 rs10036748 showed a better response to MTX (PASI75 at 3 months, p = 0.043).

3.1.13. TYMS

Thymidylate synthase (TS), encoded by the thymidylate synthase gene (TYMS), is a critical protein for pyrimidine synthesis and is responsible for DNA synthesis and repair; it can be inhibited by MTX. Associations between TYMS polymorphisms, TS levels, and MTX response have been found in several diseases. For example, polymorphism rs34743033 is a 28-base-pair (bp) double or triple tandem repeat (2R or 3R) located in the 5′ untranslated region (UTR). A study performed in European adults with psoriasis found that the rs34743033 3R allele was more frequent in patients with a poor therapeutic response to methotrexate, but significance was lost after the exclusion of palmoplantar pustulosis patients. In addition, this allele was significantly associated with an increased incidence of MTX-induced toxicity in patients who did not receive folic acid (p = 0.0025). Another TS polymorphism, the 3′-UTR 6-bp deletion of rs11280056, was significantly more frequent in patients with an adverse event irrespective of folic acid supplementation (p = 0.025).

In short, positive genotypic associations with methotrexate response were detected in ten genes (ABCC1, ABCC2, ABCG2, DNMT3b, FOXP3, GNMT, HLA-Cw, MTHFR, SLCO1B1, TNIP1), and with the development of methotrexate-related toxicity in seven genes (ABCC1, ATIC, ADORA2A, BHMT, MTHFR, SLC19A1, TYMS). Nonetheless, the three British studies appear to draw on overlapping populations; hence, several replicated toxicity results may be attributable to similar databases.

3.2. Acitretin

Acitretin is an oral vitamin A derivative used to treat psoriasis by inhibiting epidermal proliferation, inflammatory processes, and angiogenesis. The accompanying table lists the genetic polymorphisms that have been associated with the response to acitretin in patients with psoriasis.

3.2.1. ApoE

Apolipoprotein E (ApoE) is a glycoprotein component of chylomicrons and VLDL.
It has a crucial role in regulating lipid profiles and metabolism. The lipid and lipoprotein abnormalities resulting from ApoE gene polymorphisms resemble the side effects seen during acitretin therapy. In addition, ApoE levels have been linked with clinical improvement in psoriasis, indicating a potential role of the gene in acitretin treatment. However, according to Campalani et al., while ApoE gene polymorphisms are associated with psoriasis, they do not determine the response of the disease to acitretin.

3.2.2. ANKLE1

Ankyrin repeat and LEM domain containing 1 (ANKLE1) enables endonuclease activity and plays a role in positively regulating the response to DNA damage stimulus and protein export from the nucleus. ANKLE1 rs11086065 AG/GG was associated with an ineffective response compared to the GG genotype in 166 Chinese patients (PASI75 at 3 months, p = 0.003).

3.2.3. ARHGEF3

Rho guanine nucleotide exchange factor 3 (ARHGEF3) activates Rho GTPases, which are involved in bone cell biology. ARHGEF3 rs3821414 CT was associated with a more effective response compared to the TT genotype (PASI75 at 3 months, p = 0.01).

3.2.4. CRB2

Crumbs cell polarity complex component 2 (CRB2) encodes a component of the Crumbs cell polarity complex, which plays a crucial role in apical-basal epithelial polarity and cellular adhesion. CRB2 rs1105223 TT/CT was also associated with acitretin efficacy compared to the CC genotype (PASI75 at 3 months, p = 0.048).

3.2.5. HLA-DQA1*02:01

HLA-DQA1*0201 alleles may act as psoriasis susceptibility genes or may be closely linked to the susceptibility genes in Han Chinese. Among 100 Chinese individuals, those positive for the DQA1*0201 allele demonstrated a more favorable response to acitretin compared to those negative for the same allele (PASI75 at 2 months, p = 0.001).

3.2.6. HLA-DQB1*02:02

HLA-DQB1 alleles have been implicated in the genetic predisposition to psoriasis vulgaris in the Slovak population. In 100 Chinese patients, DQB1*0202-positive patients showed a better response to acitretin than DQB1*0202-negative patients (PASI75 at 2 months, p = 0.005).

3.2.7. HLA-G

HLA-G is a nonclassical class I MHC molecule that plays a role in suppressing the immune system by inhibiting natural killer cells and T cells. Among patients treated with acitretin, Borghi et al. observed a significantly increased frequency of the 14-bp sequence deletion in exon 8 of the HLA-G allele, which modifies mRNA stability, in responders compared to non-responders (PASI75 at 4 months, p = 0.008).

3.2.8. IL-12B

Patients with the TG genotype of IL-12B rs3212227 were more responsive to acitretin in the treatment of psoriasis in 43 Chinese patients (PASI50, p = 0.035).

3.2.9. IL-23R

Acitretin was found to improve secondary non-response to TNFα monoclonal antibodies in patients homozygous for the AA genotype at SNP rs112009032 in the IL-23R gene (PASI75, p = 0.02).

3.2.10. SFRP4

Secreted frizzled-related protein 4 (SFRP4) is a negative regulator of the Wnt signaling pathway, and downregulation of SFRP4 is a possible mechanism contributing to the epidermal hyperplasia of psoriasis. The GG/GT variation of SFRP4 rs1802073 has been found to be associated with a more effective response to acitretin compared to the TT genotype (PASI75 at 3 months, p = 0.007).

3.2.11. VEGF
Vascular endothelial growth factor (VEGF) promotes angiogenesis in the pathophysiology of psoriasis, and variants of the VEGF gene are thought to affect the ability of acitretin to downregulate VEGF production. The TT genotype of VEGF rs833061 was associated with non-response to oral acitretin, whereas the TC genotype was associated with a significant response (PASI75 at 3 months, p = 0.01). However, the VEGF polymorphism result was not replicated in a population from southern China.

3.3. Cyclosporin

Cyclosporine, a calcineurin inhibitor, is commonly used to treat moderate to severe psoriasis. However, clinical studies investigating the pharmacogenetics of cyclosporine in psoriasis patients are currently lacking.

3.3.1. ABCB1

One Greek study that enrolled 84 patients revealed that ATP-binding cassette subfamily B member 1 (ABCB1) rs1045642 had a statistically significant association with a negative response to cyclosporin (PASI < 50 at 3 months, p = 0.0075). In 168 Russian patients with psoriasis receiving cyclosporine therapy, strongly negative associations were observed for the TT/CT genotype of ABCB1 rs1045642 (PASI75 at 3 months, p < 0.001), the TT/CT genotype of ABCB1 rs1128503 (PASI75 at 3 months, p = 0.027), and the TT/GT genotype of ABCB1 rs2032582 (PASI75 at 3 months, p = 0.048). Additionally, the TGC haplotype was significantly linked to a negative response (PASI75 at 3 months, p < 0.001).

3.3.2. CALM1

Calmodulin (CALM1) is a calcium-dependent protein related to cell proliferation and epidermal hyperplasia in psoriasis. In 200 Greek patients, the T allele of CALM1 rs12885713 displayed a significantly better response to cyclosporin (PASI75 at 3 months, p = 0.011).

3.3.3. MALT1

MALT1 encodes MALT1 paracaspase, a potent activator of the transcription factors NF-κB and AP-1, and hence plays a role in psoriasis. The MALT1 rs287411 allele G was associated with an effective response compared to allele A (PASI75 at 3 months, p < 0.001).

3.4. Tumor Necrosis Factor Antagonist

There are four FDA-approved TNF antagonists for plaque psoriasis: etanercept, adalimumab, infliximab, and certolizumab pegol. According to our review of the literature, pharmacogenetic research has mainly focused on the first three drugs. Etanercept is a recombinant fusion protein comprising two extracellular parts of the human tumor necrosis factor receptor 2 (TNFR2) coupled to a human immunoglobulin 1 (IgG1) Fc. Adalimumab is a fully human monoclonal antibody with a human TNF-binding Fab and human IgG1 Fc backbone, whereas infliximab is a chimeric IgG1 monoclonal antibody composed of a human constant and a murine variable region binding to TNFα. Despite their distinct pharmacological profiles, TNF antagonists act on the same pathologic mechanism to achieve therapeutic outcomes. Therefore, some pharmacogenetic researchers have regarded all TNF antagonists as one category in order to analyze potential predictive genetic markers in a large-scale population, while others have discussed each TNF antagonist separately.

3.4.1. Nonspecific TNF Antagonist

Better Response of Efficacy

In 144 Spanish patients, carriers of the CT/CC allele in MAP3K1 rs96844 and the CT/TT allele in HLA-C rs12191877 achieved a better PASI75 response at 3 months. The study also found significantly better results for carriers of the MAP3K1 polymorphism and of CT/TT in CDKAL1 rs6908425 at 6 months.
Another study that enrolled 70 patients in Spain indicated that patients harboring the high-affinity alleles FCGR2A-H131R (rs1801274) and FCGR3A-V158F (rs396991) showed better mean BSA improvement, but not PASI improvement, at 6–8 weeks after anti-TNF treatment of psoriasis. The association between FCGR3A-V158F (rs396991) and response to anti-TNFα therapy (PASI75 at 6 months, p = 0.005), especially etanercept (PASI75 at 6 months, p = 0.01), was replicated in 100 Caucasian patients from Greece, while FCGR2A-H131R (rs1801274) showed no association. A study conducted in 199 Greek patients found an association between carriers of CT/CC in HLA-C rs10484554 and a good response to anti-TNF agents (PASI75 at 6 months, p = 0.0032), especially adalimumab (p = 0.0007). In 238 Caucasian adults in Spain, the rs4819554 promoter SNP allele A of the IL17RA gene was significantly more prevalent among responders at week 12. Moreover, several genetic variants exerted favorable effects at 6 months of treatment in 109 patients with psoriasis from Spain, including the GG genotype of IL23R rs11209026 (PASI90, p = 0.006), the GG genotype of TNF-a-238 rs361525 (PASI75, p = 0.049), the CT/TT genotypes of TNF-a-857 rs1799724 (PASI75, p = 0.006; ΔPASI, p = 0.004; BSA, p = 0.009), and the TT genotype of TNF-a-1031 rs1799964 (PASI75, p = 0.038; ΔPASI, p = 0.041; at 3 months, PASI75, p = 0.047).

Poor Response of Efficacy

In 144 Spanish patients, four SNPs were associated with failure to achieve PASI75 at three months: the AG/GG allele in PGLYRP4-24 rs2916205, the CC allele in ZNF816A rs9304742, the AA allele in CTNNA2 rs11126740, and the AG/GG allele in IL12B rs2546890. The results for polymorphisms in the IL12B gene were replicated at six months and one year. The study also obtained significant results for the FCGR2A and HTR2A polymorphisms at 6 months. Notably, the result for the FCGR2A polymorphism showed variability between studies. In 376 Danish patients, five SNPs, in IL1B (rs1143623, rs1143627), LY96 (rs11465996), and TLR2 (rs11938228, rs4696480), were all associated with non-response to treatment. One study found a higher frequency of G carriers of TNFRSF1B rs1061622 among non-responders (PASI < 50) compared to cases achieving PASI75 with TNF blockers in 90 Caucasians from Spain.

Toxicity

Among 161 Caucasian patients, the polymorphisms rs10782001 in FBXL19 and rs11209026 in IL23R may contribute to an increased risk of the secondary development of psoriasiform reactions under TNF blockade. In addition, in 70 Spanish patients, a copy number variation (CNV) harboring three genes (ARNT2, LOC101929586, and MIR5572) was related to the occurrence of paradoxical psoriasiform reactions at 3 and 6 months (p = 0.006). In contrast, the presence of rs3087243 in CTLA4, rs651630 in SLC12A8, or rs1800453 in TAP1 was related to protection against psoriasiform lesions. Interestingly, the IL23R rs11209026 polymorphism has been reported to play a protective role in classical psoriasis.

3.4.2. Etanercept (ETA/ETN)

CD84

The Cluster of Differentiation 84 (CD84) gene encodes a membrane glycoprotein that enhances IFN-γ secretion in activated T cells. In 161 patients from the Netherlands, the GA genotype in CD84 (rs6427528) had a more sensitive response to etanercept than the referential GG genotype (ΔPASI at 3 months, p = 0.025).

FCGR3A

This gene encodes a receptor for the Fc portion of immunoglobulin G, to which the TNF antagonist binds specifically.
In 100 psoriasis patients in Greece, a study showed an association between FCGR3A-V158F (rs396991) and a better response to etanercept (PASI75 at 6 months, p = 0.01).

TNFAIP3

TNFα-induced protein 3 (TNFAIP3) plays a protective role against the harmful effects of inflammation and is involved in immune regulation. Rs610604 in TNFAIP3 showed an association with a good response to etanercept (PASI75 at 6 months, p = 0.007).

TNF, TNFRSF1B

TNFα transmits signals through TNF receptor superfamily member 1B (TNFRSF1B), which is expressed predominantly on Tregs and is responsible for initiating immune modulation. Carriage of the TNF-857C (rs1799724) or TNFRSF1B-676T (rs1061622) alleles was associated with a positive response in patients treated with etanercept (PASI75 at 6 months, p = 0.002 and p = 0.001, respectively).

3.4.3. Adalimumab (ADA) & Infliximab (IFX/INF)

CPM

Carboxypeptidase M (CPM) is involved in the maturation of macrophages in psoriasis pathogenesis. A CNV of the CPM gene was significantly associated with adalimumab response among 70 Spanish patients (PASI75 at 3 and 6 months, p < 0.05).

HLA

The rs9260313 variant in the HLA-A gene was found to be associated with more favorable responses to adalimumab (PASI75 at 6 months, p = 0.05). Among 169 Spanish patients, HLA-Cw06 positivity was associated with a better response to adalimumab (PASI75 at 6 months, p = 0.018).

IL17F

IL-17F, activated by IL-23/Th17, is recognized as having a critical role in the pathogenesis of psoriasis. In a cohort study in Spain, carriers of the TC genotype in IL-17F rs763780 were associated with a lack of response to adalimumab (n = 67, PASI75 at weeks 24–28, p = 0.0044) but, interestingly, with a better response to infliximab (n = 37, PASI at weeks 12–16, p = 0.023; PASI at weeks 24–28, p = 0.02).

NFKBIZ

The nuclear factor of kappa light polypeptide gene enhancer in B cells inhibitor, zeta (NFKBIZ) gene encodes an atypical inhibitor of nuclear factor κB (IκB) protein involved in the inflammatory signaling of psoriasis. Among 169 Spanish patients, the deletion of NFKBIZ rs3217713 was associated with a better response to adalimumab (PASI75 at 6 months, p = 0.015).

TNF, TNFRSF1B

None of the genotyped SNPs in the TNF, TNFRSF1A, and TNFRSF1B genes were associated with responsiveness to treatment with infliximab or adalimumab.

TRAF3IP2

TNF receptor-associated factor 3 interacting protein 2 (TRAF3IP2) is involved in IL-17 signaling and interacts with members of the Rel/NF-κB transcription factor family. Rs13190932 in the TRAF3IP2 gene showed an association with a favorable response to infliximab (PASI75 at 6 months, p = 0.041).

3.5. IL-12/IL-23 Antagonist

Ustekinumab, an IL-12/IL-23 antagonist, targets the p40 subunit shared by IL-12 and IL-23, whereas guselkumab, tildrakizumab, and risankizumab target the p19 subunit of IL-23. These four drugs are efficacious in treating moderate to severe plaque psoriasis. As ustekinumab is the earliest commercially available drug among the IL-23 antagonists, relatively abundant studies of the association between response and gene status have been conducted. In contrast, there is limited research on the genetic predictors of clinical response to guselkumab, tildrakizumab, and risankizumab.
3.5.1. Ustekinumab (UTK)

Better Response of Efficacy

In a Spanish study that enrolled 69 patients, good responders at 4 months were associated with the CC genotype in ADAM33 rs2787094 (p = 0.015), the CG/CC genotype in HTR2A rs6311 (p = 0.037), the GT/TT genotype in IL-13 rs848 (p = 0.037), the CC genotype in NFKBIA rs2145623 (p = 0.024), and the CT/CC genotype in TNFR1 rs191190. Rs151823 and rs26653 in the ERAP1 gene showed associations with a favorable response to anti-IL-12/23 therapy among 22 patients from the UK. Several studies have shown that the presence of the HLA-Cw*06 or Cw*06:02 allele may serve as a predictor of a faster and better response to ustekinumab in Italian, Dutch, Belgian, American, and Chinese patients. A recent meta-analysis confirmed that HLA-C*06:02-positive patients had higher response rates (PASI75 at 6 months, p < 0.001). In addition, the presence of the GG genotype of the IL12B rs6887695 SNP and the absence of the AA genotype of IL12B rs3212227 or of the GG genotype of the IL6 rs1800795 SNP significantly increased the probability of therapeutic success in HLA-Cw6-positive patients. Rs10484554 in the HLA-Cw gene did not show an association with a good response to ustekinumab in a Greek population. Patients with the heterozygous genotype (CT) of IL12B rs3213094 showed better PASI improvement with ustekinumab than the reference genotype (CC) (∆PASI at 3 months, p = 0.017), but the result was not replicated with regard to PASI75. Genetic polymorphisms of TIRAP rs8177374 and TLR5 rs5744174 were associated with a better response in a Danish population (PASI75 at 3 months, p = 0.0051 and p = 0.0012, respectively).

Poor Response of Efficacy

In a Spanish study that enrolled 69 patients treating psoriasis with ustekinumab, poor responders at 4 months were associated with the CG/CC genotype in CHUK rs11591741 (p = 0.029), the CT/CC genotype in C9orf72 rs774359 (p = 0.016), the AG/GG genotype in C17orf51 rs1975974 (p = 0.012), the CT genotype in SLC22A4 rs1050152 (p = 0.037), the GT/TT genotype in STAT4 rs7574865 (p = 0.015), and the CT/CC genotype in ZNF816A rs9304742 (p = 0.012). Among 376 Danish patients, genetic variants of IL1B rs1143623 and rs1143627 related to increased IL-1β levels were associated with unfavorable outcomes (PASI75 at 3 months, p = 0.0019 and 0.0016, respectively), similar to the results with anti-TNF agents. An association between the TC genotype of IL-17F rs763780 and non-response to ustekinumab was found in 70 Spanish patients (PASI75 at 3 and 6 months, p = 0.022 and p = 0.016, respectively). Patients homozygous (GG) for the rs610604 SNP in TNFAIP3 showed worse PASI improvement with ustekinumab than the TT genotype (p = 0.031). Carriers of allele G in TNFRSF1B rs1061622 under anti-TNF or anti-IL-12/IL-23 treatment tended to be non-responders in 90 patients from Spain (PASI < 50 at 6 months, p = 0.05).

3.6. IL-17 Antagonist

Secukinumab and ixekizumab are human monoclonal antibodies that bind the interleukin IL-17A protein, while brodalumab is a human monoclonal antibody against IL-17R and thus a pan-inhibitor of IL-17A, IL-17F, and IL-25. The three IL-17 antagonists are currently used in the treatment of moderate to severe psoriasis.

3.6.1. Secukinumab (SCK), Ixekizumab (IXE), and Brodalumab (BDL)

HLA-Cw6

Responses to SCK were comparable up to 18 months between HLA-Cw*06-positive and -negative patients, as it is highly effective regardless of HLA-Cw6 status in Italy and Switzerland.
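Purely as an illustration of how such annotations might be organized programmatically, the sketch below collects the reported directions of a few ustekinumab-related markers discussed above into a lookup structure; the marker/status pairs paraphrase the cited findings, and this is not a validated clinical decision tool.

```python
# Directions of association paraphrased from the studies cited above;
# illustrative only, not a clinical decision tool.
USTEKINUMAB_MARKERS = {
    ("HLA-C*06:02", "positive"): "higher response rates reported",
    ("IL12B rs3213094", "CT"): "better PASI improvement than CC",
    ("TNFAIP3 rs610604", "GG"): "worse PASI improvement than TT",
    ("TNFRSF1B rs1061622", "G carrier"): "tendency toward non-response",
}

def annotate(patient_genotypes: dict) -> list:
    """Return literature-reported directions for the markers a patient carries."""
    return [f"{marker} ({status}): {note}"
            for (marker, status), note in USTEKINUMAB_MARKERS.items()
            if patient_genotypes.get(marker) == status]

print(annotate({"HLA-C*06:02": "positive", "TNFAIP3 rs610604": "GG"}))
```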
IL-17

No associations were found between five genetic variants of IL-17 (rs2275913, rs8193037, rs3819025, rs7747909, and rs3748067) and ΔPASI, PASI75, or PASI90 after 12 and 24 weeks of anti-IL-17A agents, including SCK and IXE, in European patients. A lack of pharmacogenetic data for BDL was noted during the review.

3.7. PDE4 Antagonist

Apremilast, a selective phosphodiesterase 4 (PDE4) inhibitor, is used to treat plaque psoriasis. A Russian study examined 78 pre-selected single-nucleotide polymorphisms and found that minor alleles of the IL1β (rs1143633), IL4 (IL13) (rs20541), IL23R (rs2201841), and TNFα (rs1800629) genes were associated with a better outcome in 34 patients (PASI75 at 6.5 months, p = 0.05, p = 0.04, p = 0.03, and p = 0.03, respectively).

3.8. Topical Agents

Globally used topical therapies for psoriasis include retinoids, vitamin D analogs, corticosteroids, and coal tar. Evidence on the association between treatment response and the pharmacogenetics of corticosteroids, retinoids, and coal tar is lacking. The link between VDR genes, which encode the nuclear hormone receptor for vitamin D3, and the response to calcipotriol has been discussed but remains controversial across populations. Lindioil is another topical medicine, refined from Chinese herbs, that is effective in treating plaque psoriasis. It has been reported that HLA-Cw*06:02 positivity showed a better response (PASI75 at 3 months, p = 0.033), while HLA-Cw*01:02 positivity showed a poor response in 72 patients (PASI75 at 2.5 months, p = 0.019).
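Since nearly every association in this review is anchored to a PASI endpoint, a minimal sketch of the underlying arithmetic may be useful: the function below classifies a follow-up PASI score against baseline using the standard PASI50/75/90 definitions (the example scores are hypothetical).

```python
def pasi_response(baseline: float, follow_up: float) -> str:
    """Classify percentage reduction from baseline PASI (PASI90/75/50 =
    at least 90/75/50 % reduction, as used throughout this review)."""
    if baseline <= 0:
        raise ValueError("baseline PASI must be positive")
    reduction = (baseline - follow_up) / baseline
    for threshold, label in ((0.90, "PASI90"), (0.75, "PASI75"), (0.50, "PASI50")):
        if reduction >= threshold:
            return label
    return "non-responder (<PASI50)"

print(pasi_response(baseline=21.4, follow_up=4.2))  # ~80% reduction -> "PASI75"
```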
analyzed 188 patients in the United Kingdom (UK) with psoriasis under methotrexate therapy and revealed that allele frequency of ATIC (rs2372536) was significantly increased in patients who discontinued methotrexate owing to intolerable side effects ( p = 0.038) . Another British cohort study found that two SNPs in ATIC (rs2372536 and rs4672768) were associated with the onset of MTX toxicity ( p = 0.01). However, these associations did not remain significant after adjusting for folic acid supplementation . 3.1.4. BHMT Betaine-homocysteine S-methyltransferase (BHMT) is a zinc-containing metalloenzyme responsible for folate-independent remethylation of homocysteine using betaine as the methyl donor . A genotype analysis identified that the BHMT genotype was significantly associated with MTX hepatotoxicity ( p = 0.022) . 3.1.5. DNMT3b DNA methyltransferase 3β (DNMT3b) is a methyltransferase that is involved in de-novo DNA methylation, and its polymorphism is supposed to be associated with increased promoter activity . At least one copy of the variant DNMT3b rs242913 allele has been found to be associated with an insufficient response to MTX when compared to the wild-type ( p = 0.005) . 3.1.6. FOXP3 Forkhead box P3 (FOXP3) appears to function as a master regulator of the regulatory pathway in the development and function of regulatory T cells (Tregs) . A study on a population of 189 southern Indian patients who had used methotrexate for 12 weeks found a significant difference in genotype frequencies of FOXP3 (rs3761548) between responders and non-responders (PASI75 at 3 months, p = 0.003) . 3.1.7. GNMT Glycine N-methyltransferase (GNMT) is a methyltransferase that converts S-adenosylmethionine to S-adenosylhomocysteine and is also a folate-binding protein. The rs10948059 polymorphism is associated with increased expression of the GNMT gene and reduces cell sensitivity to MTX . The patients with at least one variant GNMT allele were more likely to be non-responders to MTX treatment than the reference allele (PASI75 at 6 months, p = 0.0004) . 3.1.8. HLA-Cw6 The human leukocyte antigen (HLA) , known as the human MHC system, regulates the immune system by encoding cell-surface proteins. HLA-Cw6 is a psoriasis susceptibility allele that has been strongly linked to the disease. It was reported that carriers of HLA-Cw6 from southern India had a higher response rate to methotrexate (PASI75 at 3 months, p = 0.003) . A Scotland cohort study with 70 HLA-tested patients demonstrated that more proportion of HLA-Cw6 positive patients was carried on beyond 12 months, as compared to the HLA-Cw6 negative group ( p = 0.05) . 3.1.9. MTHFR The Methylenetetrahydrofolate reductase (MTHFR) enzyme is responsible for catalyzing the formation of 5-methyl-tetrahydrofolic acid, which acts as a methyl donor for the synthesis of methionine from homocysteine. This enzyme is indirectly inhibited by MTX. According to Zhu et al., the PASI 90 response rates to MTX were significantly higher in Han Chinese patients who had the MTHFR rs1801133 TT genotype as compared to those who had the CT and CC genotype (PASI90 at 3 months, p = 0.006). Furthermore, patients with the MTHFR rs1801131 CT genotype had lower PASI 75 response rates to MTX in Han Chinese population (PASI75 at 3 months, p = 0.014). They also had a lower risk of ALT elevation ( p = 0.04) . 
However, three studies have demonstrated that no significant association was detected between clinical outcomes in individuals with psoriasis treated with methotrexate and SNPs in the MTHFR gene . 3.1.10. SLC19A1 The Solute carrier family 19 , member 1 (SLC19A1) gene encodes the reduced folate carrier (RFC) protein, which actively transports MTX into cells. Multiple point mutations have been identified in SLC19A1 to be associated with impaired MTX transport and resistance to MTX . SLC19A1 (rs1051266) was associated with MTX-induced toxicity instead of efficacy in patients with psoriasis . 3.1.11. SLCO1B1 The encoded protein of solute carrier organic anion transporter family member 1B1 (SLO1B1) is a transmembrane receptor that transports drug compounds into cells. Genetic variations in SLCO1B1 have been linked to delayed MTX clearance and increased toxicity . The haplotype variants have been classified into two groups based on their reported transporter activity: the high-activity group and the low-activity group. Patients with low-activity haplotypes of SLCO1B1 (SLCO1B1*5 and SLCO1B1*15) were less likely to be MTX non-responders as compared to patients with high-activity haplotypes (SLCO1B1*1a and SLCO1B1*1b) (PASI75 at 6 months, p = 0.027) . 3.1.12. TNIP1 TNFAIP3 interacting protein 1 (TNIP1) , as one of the psoriasis susceptibility genes, is related to the immune response IL-23 signaling pathway. A Chinese study mentioned that in 221 patients with psoriasis, the TT genotype of TNIP1 rs10036748 showed a better response to MTX (PASI75 at 3 months, p = 0.043) . 3.1.13. TYMS Thymidylate synthase (TS), encoded by the thymidylate synthase gene (TYMS) , is a critical protein for pyrimidine synthesis and responsible for DNA synthesis and repair, which could be inhibited by MTX . The association of polymorphisms of TYMS , TS levels, and MTX response was found in several diseases . For example, polymorphism rs34743033 is a 28-base pair (bp) with double or triple tandem repeat (2R or 3R) located on the 5′ untranslated region (UTR) . A study performed in European adults with psoriasis found that the rs34743033 3R allele was more frequent in patients with poor therapeutic response to methotrexate, but the loss of significance was noted after the exclusion of palmoplantar pustulosis patients. In addition, this allele was significantly associated with an increased incidence of MTX-induced toxicity in patients who did not receive folic acid ( p = 0.0025). Another TS polymorphism, 3′-UTR 6bp del of rs11280056, was significantly more frequent in patients with an adverse event irrespective of folic acid supplementation ( p = 0.025) . In short, positive genotypic associations were detected with methotrexate responders in ten genes ( ABCC1 , ABCC2 , ABCG2 , DNMT3b , FOXP3 , GNMT , HLA-Cw , MTHFR , SLCO1B1 , TNIP1 ) while the development of methotrexate-related toxicity in five genes ( ABCC1 , ATIC , ADORA2A , BHMT , MTHFR , SLC19A1 , TYMS ). Nonetheless, three British studies seemed to believe that toxicity has overlapped populations; hence, several replicated results may also be owing to similar databases . The genes encoding the efflux transporters of MTX are ATP-binding cassette (ABC) subfamily C member 1 (ABCC1) , ABCC member 2 (ABCC2) , and ABC subfamily G member 2 (ABCG2) . Overexpression of these genes can lead to multidrug resistance by extruding drugs out of the cell through various mechanisms . 
In regard to psoriasis, a cohort study of 374 British patients found significant positive associations between methotrexate responder, two of ABCG2 (rs17731538, rs13120400), and three SNPs of ABCC1 (rs35592, rs28364006, rs2238476) with rs35592 being the most significant (PASI75 at 3 months, p = 0.008). One cohort study from Slovenia demonstrated that polymorphism of ABCC2 (rs717620) presented an insufficient response to MTX treatment (75% reduction from baseline PASI score (PASI75) at 6 months, p = 0.039) . About toxicity, a British cohort study has noted that the major allele of six SNPs in ABCC1 (rs11075291, rs1967120, rs3784862, rs246240, rs3784864, and rs2238476) was significantly associated with the onset of adverse events, with rs246240 showing the strongest association ( p = 0.0006) . Adenosine receptors A2a (ADORA2a) is responsible for mediating the metabolic product of methotrexate. One SNP, rs5760410 of ADORA2A , was weakly associated with the onset of toxicity ( p = 0.03) . MTX inhibits 5-aminoimidazole-4-carboxamide ribonucleotide formyltransferase (ATIC) , which leads to the accumulation of adenosine, a potent anti-inflammatory agent . Campalani et al. analyzed 188 patients in the United Kingdom (UK) with psoriasis under methotrexate therapy and revealed that allele frequency of ATIC (rs2372536) was significantly increased in patients who discontinued methotrexate owing to intolerable side effects ( p = 0.038) . Another British cohort study found that two SNPs in ATIC (rs2372536 and rs4672768) were associated with the onset of MTX toxicity ( p = 0.01). However, these associations did not remain significant after adjusting for folic acid supplementation . Betaine-homocysteine S-methyltransferase (BHMT) is a zinc-containing metalloenzyme responsible for folate-independent remethylation of homocysteine using betaine as the methyl donor . A genotype analysis identified that the BHMT genotype was significantly associated with MTX hepatotoxicity ( p = 0.022) . DNA methyltransferase 3β (DNMT3b) is a methyltransferase that is involved in de-novo DNA methylation, and its polymorphism is supposed to be associated with increased promoter activity . At least one copy of the variant DNMT3b rs242913 allele has been found to be associated with an insufficient response to MTX when compared to the wild-type ( p = 0.005) . Forkhead box P3 (FOXP3) appears to function as a master regulator of the regulatory pathway in the development and function of regulatory T cells (Tregs) . A study on a population of 189 southern Indian patients who had used methotrexate for 12 weeks found a significant difference in genotype frequencies of FOXP3 (rs3761548) between responders and non-responders (PASI75 at 3 months, p = 0.003) . Glycine N-methyltransferase (GNMT) is a methyltransferase that converts S-adenosylmethionine to S-adenosylhomocysteine and is also a folate-binding protein. The rs10948059 polymorphism is associated with increased expression of the GNMT gene and reduces cell sensitivity to MTX . The patients with at least one variant GNMT allele were more likely to be non-responders to MTX treatment than the reference allele (PASI75 at 6 months, p = 0.0004) . The human leukocyte antigen (HLA) , known as the human MHC system, regulates the immune system by encoding cell-surface proteins. HLA-Cw6 is a psoriasis susceptibility allele that has been strongly linked to the disease. 
It was reported that carriers of HLA-Cw6 from southern India had a higher response rate to methotrexate (PASI75 at 3 months, p = 0.003) . A Scotland cohort study with 70 HLA-tested patients demonstrated that more proportion of HLA-Cw6 positive patients was carried on beyond 12 months, as compared to the HLA-Cw6 negative group ( p = 0.05) . The Methylenetetrahydrofolate reductase (MTHFR) enzyme is responsible for catalyzing the formation of 5-methyl-tetrahydrofolic acid, which acts as a methyl donor for the synthesis of methionine from homocysteine. This enzyme is indirectly inhibited by MTX. According to Zhu et al., the PASI 90 response rates to MTX were significantly higher in Han Chinese patients who had the MTHFR rs1801133 TT genotype as compared to those who had the CT and CC genotype (PASI90 at 3 months, p = 0.006). Furthermore, patients with the MTHFR rs1801131 CT genotype had lower PASI 75 response rates to MTX in Han Chinese population (PASI75 at 3 months, p = 0.014). They also had a lower risk of ALT elevation ( p = 0.04) . However, three studies have demonstrated that no significant association was detected between clinical outcomes in individuals with psoriasis treated with methotrexate and SNPs in the MTHFR gene . The Solute carrier family 19 , member 1 (SLC19A1) gene encodes the reduced folate carrier (RFC) protein, which actively transports MTX into cells. Multiple point mutations have been identified in SLC19A1 to be associated with impaired MTX transport and resistance to MTX . SLC19A1 (rs1051266) was associated with MTX-induced toxicity instead of efficacy in patients with psoriasis . The encoded protein of solute carrier organic anion transporter family member 1B1 (SLO1B1) is a transmembrane receptor that transports drug compounds into cells. Genetic variations in SLCO1B1 have been linked to delayed MTX clearance and increased toxicity . The haplotype variants have been classified into two groups based on their reported transporter activity: the high-activity group and the low-activity group. Patients with low-activity haplotypes of SLCO1B1 (SLCO1B1*5 and SLCO1B1*15) were less likely to be MTX non-responders as compared to patients with high-activity haplotypes (SLCO1B1*1a and SLCO1B1*1b) (PASI75 at 6 months, p = 0.027) . TNFAIP3 interacting protein 1 (TNIP1) , as one of the psoriasis susceptibility genes, is related to the immune response IL-23 signaling pathway. A Chinese study mentioned that in 221 patients with psoriasis, the TT genotype of TNIP1 rs10036748 showed a better response to MTX (PASI75 at 3 months, p = 0.043) . Thymidylate synthase (TS), encoded by the thymidylate synthase gene (TYMS) , is a critical protein for pyrimidine synthesis and responsible for DNA synthesis and repair, which could be inhibited by MTX . The association of polymorphisms of TYMS , TS levels, and MTX response was found in several diseases . For example, polymorphism rs34743033 is a 28-base pair (bp) with double or triple tandem repeat (2R or 3R) located on the 5′ untranslated region (UTR) . A study performed in European adults with psoriasis found that the rs34743033 3R allele was more frequent in patients with poor therapeutic response to methotrexate, but the loss of significance was noted after the exclusion of palmoplantar pustulosis patients. In addition, this allele was significantly associated with an increased incidence of MTX-induced toxicity in patients who did not receive folic acid ( p = 0.0025). 
Another TS polymorphism, 3′-UTR 6bp del of rs11280056, was significantly more frequent in patients with an adverse event irrespective of folic acid supplementation ( p = 0.025) . In short, positive genotypic associations were detected with methotrexate responders in ten genes ( ABCC1 , ABCC2 , ABCG2 , DNMT3b , FOXP3 , GNMT , HLA-Cw , MTHFR , SLCO1B1 , TNIP1 ) while the development of methotrexate-related toxicity in five genes ( ABCC1 , ATIC , ADORA2A , BHMT , MTHFR , SLC19A1 , TYMS ). Nonetheless, three British studies seemed to believe that toxicity has overlapped populations; hence, several replicated results may also be owing to similar databases . Acitretin is an oral vitamin A derivative that is used to treat psoriasis by inhibiting epidermal proliferation, inflammatory processes, and angiogenesis. lists the genetic polymorphisms that have been associated with the response of acitretin in patients with psoriasis. 3.2.1. ApoE Apolipoprotein E (ApoE) is a glycoprotein component of chylomicrons and VLDL. It has a crucial role in regulating lipid profiles and metabolism . The lipid and lipoprotein abnormalities as a consequence of ApoE gene polymorphism are close to the side effects during acitretin therapy. In addition, ApoE levels have been linked with clinical improvement in psoriasis, indicating a potential role of the gene in acitretin treatment for psoriasis . However, according to Campalani, E, et al., while ApoE gene polymorphisms are associated with psoriasis, they do not determine the response of the disease to acitretin . 3.2.2. ANKLE1 Ankyrin repeat and LEM domain containing 1 (ANKLE1) enables endonuclease activity and plays a role in positively regulating the response to DNA damage stimulus and protein export from the nucleus. ANKLE1 rs11086065 AG/GG was associated with an ineffective response compared to the GG genotype in 166 Chinese patients (PASI75 at 3 months, p = 0.003) . 3.2.3. ARHGEF3 Rho guanine nucleotide exchange factor 3 (ARHGEF3) activates Rho GTPase, which involve in bone cell biology. ARHGEF3 rs3821414 CT was associated with a more effective response compared to the TT genotype (PASI75 at 3 months, p = 0.01) . 3.2.4. CRB2 Crumbs cell polarity complex component 2 (CRB2) encodes proteins that are components of the Crumbs cell polarity complex, which plays a crucial role in apical-basal epithelial polarity and cellular adhesion. CRB2 rs1105223 TT/CT was also associated with acitretin efficacy compared to the CC genotype (PASI75 at 3 months, p = 0.048) . 3.2.5. HLA-DQA1*02:01 HLA-DQA1*0201 alleles may act as psoriasis susceptibility genes or may be closely linked to the susceptibility genes in Han Chinese . Among 100 Chinese individuals, those who were positive for the DQA10201 allele demonstrated a more favorable response to acitretin compared to those who were negative for the same allele. (PASI75 at 2 months, p = 0.001) . 3.2.6. HLA-DQB1*02:02 HLA-DQB1 alleles have been mentioned to involve in genetic predisposition to psoriasis vulgaris in the Slovak population . In 100 Chinese patients, the DQB1*0202 -positive patients showed a better response to acitretin than the DQB1*0202 -negative patients (PASI75 at 2 months, p = 0.005) . 3.2.7. HLA-G HLA-G is a nonclassical class I MHC molecule that plays a role in suppressing the immune system by inhibiting natural killer cells and T cells . Among patients treated with acitretin, Borghi, Alessandro, et al. 
observed a significantly increased frequency of the 14 bp sequence deletion in the exon 8 of the HLA-G allele, functioning as a modification of mRNA stability, in responder patients, in comparison to the non-responders (PASI75 at 4 months, p = 0.008) . 3.2.8. IL-12B Patients with the IL-12B rs3212227 genotype of TG were more responsive to acitretin in the treatment of psoriasis in 43 Chinese patients (PASI50, p = 0.035) . 3.2.9. IL-23R Acitretin was found to improve the secondary non-response to TNFα monoclonal antibody in patients who were homozygous for the AA genotype at the SNP rs112009032 in the IL-23R gene (PASI75, p = 0.02) . 3.2.10. SFRP4 Secreted frizzled-related protein 4 (SFRP4) is a negative regulator of the Wnt signaling pathway, and the downregulation of SFRP4 is a possible mechanism contributing to the hyperplasia of the epidermis of psoriasis . The GG/GT variation of SFRP4 rs1802073 has been found to be associated with a more effective response to acitretin compared to the TT genotype (PASI75 at 3 months, p = 0.007) . 3.2.11. VEGF Vascular endothelial growth factor (VEGF) promotes angiogenesis in the pathophysiology of psoriasis, and the variant of the VEGF gene is supposed to affect the ability of acitretin to downregulate VEGF production . The TT genotype of the VEGF rs833061 was associated with non-response to oral acitretin, whereas the TC genotype was associated with a significant response to acitretin (PASI75 at 3 months, p = 0.01) . However, the result of VEGF polymorphism was not replicated in the population of southern China . Apolipoprotein E (ApoE) is a glycoprotein component of chylomicrons and VLDL. It has a crucial role in regulating lipid profiles and metabolism . The lipid and lipoprotein abnormalities as a consequence of ApoE gene polymorphism are close to the side effects during acitretin therapy. In addition, ApoE levels have been linked with clinical improvement in psoriasis, indicating a potential role of the gene in acitretin treatment for psoriasis . However, according to Campalani, E, et al., while ApoE gene polymorphisms are associated with psoriasis, they do not determine the response of the disease to acitretin . Ankyrin repeat and LEM domain containing 1 (ANKLE1) enables endonuclease activity and plays a role in positively regulating the response to DNA damage stimulus and protein export from the nucleus. ANKLE1 rs11086065 AG/GG was associated with an ineffective response compared to the GG genotype in 166 Chinese patients (PASI75 at 3 months, p = 0.003) . Rho guanine nucleotide exchange factor 3 (ARHGEF3) activates Rho GTPase, which involve in bone cell biology. ARHGEF3 rs3821414 CT was associated with a more effective response compared to the TT genotype (PASI75 at 3 months, p = 0.01) . Crumbs cell polarity complex component 2 (CRB2) encodes proteins that are components of the Crumbs cell polarity complex, which plays a crucial role in apical-basal epithelial polarity and cellular adhesion. CRB2 rs1105223 TT/CT was also associated with acitretin efficacy compared to the CC genotype (PASI75 at 3 months, p = 0.048) . HLA-DQA1*0201 alleles may act as psoriasis susceptibility genes or may be closely linked to the susceptibility genes in Han Chinese . Among 100 Chinese individuals, those who were positive for the DQA10201 allele demonstrated a more favorable response to acitretin compared to those who were negative for the same allele. (PASI75 at 2 months, p = 0.001) . 
3.3. Cyclosporine

Cyclosporine, a calcineurin inhibitor, is commonly used to treat moderate to severe psoriasis. However, clinical studies investigating the pharmacogenetics of cyclosporine in psoriasis patients are currently lacking.

3.3.1. ABCB1
One Greek study that enrolled 84 patients revealed that ATP-binding cassette subfamily B member 1 (ABCB1) rs1045642 had a statistically significant association with a negative response to cyclosporine (PASI < 50 at 3 months, p = 0.0075). In 168 Russian patients with psoriasis receiving cyclosporine therapy, a strongly negative association was observed for the TT/CT genotype of ABCB1 rs1045642 (PASI75 at 3 months, p < 0.001), the TT/CT genotype of ABCB1 rs1128503 (PASI75 at 3 months, p = 0.027), and the TT/GT genotype of ABCB1 rs2032582 (PASI75 at 3 months, p = 0.048), respectively. Additionally, the TGC haplotype was significantly linked to a negative response (PASI75 at 3 months, p < 0.001).

3.3.2. CALM1
Calmodulin (CALM1) is a calcium-dependent protein related to cell proliferation and epidermal hyperplasia in psoriasis. In 200 Greek patients, the T allele of CALM1 rs12885713 displayed a significantly better response to cyclosporine (PASI75 at 3 months, p = 0.011).

3.3.3. MALT1
MALT1 encodes MALT1 paracaspase, a potent activator of the transcription factors NF-κB and AP-1, and hence has a role in psoriasis. The MALT1 rs287411 G allele was associated with an effective response compared to the A allele (PASI75 at 3 months, p < 0.001).
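Most of the associations catalogued in this review reduce to comparing genotype (or carrier) counts between responders and non-responders in a 2×2 table, typically with a chi-square or Fisher's exact test. A minimal sketch with invented counts, not data from any cited study:

```python
from scipy.stats import fisher_exact

# Rows: carrier vs. non-carrier; columns: PASI75 responder vs. non-responder.
# The counts below are invented purely for illustration.
table = [[30, 10],   # carriers:     30 responders, 10 non-responders
         [20, 24]]   # non-carriers: 20 responders, 24 non-responders

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```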
3.4. TNF Antagonists

There are four FDA-approved TNF antagonists for plaque psoriasis: etanercept, adalimumab, infliximab, and certolizumab pegol. According to our review of the literature, pharmacogenetic research has mainly focused on the first three drugs. Etanercept is a recombinant fusion protein comprising two extracellular parts of the human tumor necrosis factor receptor 2 (TNFR2) coupled to a human immunoglobulin 1 (IgG1) Fc. Adalimumab is a fully human monoclonal antibody with a human TNF-binding Fab and a human IgG1 Fc backbone, whereas infliximab is a chimeric IgG1 monoclonal antibody composed of a human constant and a murine variable region binding to TNFα. Despite their distinct pharmacological profiles, TNF antagonists act on the same pathologic mechanism to achieve therapeutic outcomes. Therefore, some pharmacogenetic researchers have treated all TNF antagonists as one category in order to analyze potential predictive genetic markers in a larger population, while others have discussed each TNF antagonist separately.

3.4.1. Nonspecific TNF Antagonists

Better Response of Efficacy

In 144 Spanish patients, carriers of the CT/CC allele in MAP3K1 rs96844 and the CT/TT allele in HLA-C rs12191877 achieved a better PASI75 response at 3 months. The study also found significantly better results for carriers of the MAP3K1 polymorphism and of CT/TT in CDKAL1 rs6908425 at 6 months. Another study that enrolled 70 patients in Spain suggested that patients harboring the high-affinity alleles FCGR2A-H131R (rs1801274) and FCGR3A-V158F (rs396991) have better mean BSA improvement, but not PASI improvement, at 6–8 weeks after anti-TNF treatment of psoriasis. The association between FCGR3A-V158F (rs396991) and response to anti-TNFα therapy (PASI75 at 6 months, p = 0.005), especially etanercept (PASI75 at 6 months, p = 0.01), was replicated in 100 Caucasian patients from Greece, while FCGR2A-H131R (rs1801274) showed no association. A study conducted in 199 Greek patients found an association between carriers of CT/CC in HLA-C rs10484554 and a good response to anti-TNF agents (PASI75 at 6 months, p = 0.0032), especially adalimumab ( p = 0.0007).
In 238 Caucasian adults in Spain, the promoter SNP allele A of IL17RA rs4819554 was significantly more prevalent among responders at week 12. Moreover, several genetic variants exerted favorable effects at 6 months of treatment in 109 patients with psoriasis from Spain, including the GG genotype of IL23R rs11209026 (PASI90, p = 0.006), the GG genotype of TNF-a-238 rs361525 (PASI75, p = 0.049), the CT/TT genotypes of TNF-a-857 rs1799724 (PASI75, p = 0.006; ΔPASI, p = 0.004; BSA, p = 0.009), and the TT genotype of TNF-a-1031 rs1799964 (PASI75, p = 0.038; ΔPASI, p = 0.041; at 3 months, PASI75, p = 0.047).

Poor Response of Efficacy

In 144 Spanish patients, four SNPs were associated with the inability to achieve PASI75 at three months: the AG/GG allele in PGLYRP4-24 rs2916205, the CC allele in ZNF816A rs9304742, the AA allele in CTNNA2 rs11126740, and the AG/GG allele in IL12B rs2546890. Additionally, the results for the IL12B polymorphisms were replicated at six months and one year. The study also obtained significant results for the FCGR2A and HTR2A polymorphisms at 6 months. Notably, the FCGR2A result showed variability between studies. In 376 Danish patients, five SNPs, namely IL1B (rs1143623, rs1143627), LY96 (rs11465996), and TLR2 (rs11938228, rs4696480), were all associated with nonresponse to treatment. One study found a higher frequency of G-carriers of TNFRSF1B rs1061622 among non-responders (PASI < 50) than among cases achieving PASI75 on TNF blockers in 90 Caucasians from Spain.

Toxicity

Among 161 Caucasian patients, the polymorphisms rs10782001 in FBXL19 and rs11209026 in IL23R may contribute to an increased risk of the secondary development of psoriasiform reactions owing to TNF blocking. In addition, in 70 Spanish patients, a copy number variation (CNV) harboring three genes (ARNT2, LOC101929586, and MIR5572) was related to the occurrence of paradoxical psoriasiform reactions at 3 and 6 months ( p = 0.006). In contrast, the presence of rs3087243 in CTLA4 , rs651630 in SLC12A8 , or rs1800453 in TAP1 was related to protection against psoriasiform lesions. Interestingly, the IL23R rs11209026 polymorphism has been reported to play a protective role in classical psoriasis.

3.4.2. Etanercept (ETA/ETN)

CD84
The Cluster of Differentiation 84 (CD84) gene encodes a membrane glycoprotein that enhances IFN-γ secretion in activated T cells. In 161 patients from the Netherlands, the GA genotype in CD84 (rs6427528) responded more sensitively to etanercept than the reference GG genotype (ΔPASI at 3 months, p = 0.025).

FCGR3A
This gene encodes a receptor for the Fc portion of immunoglobulin G, to which the TNF antagonist specifically binds. In 100 psoriasis patients in Greece, an association was shown between FCGR3A-V158F (rs396991) and a better response to etanercept (PASI75 at 6 months, p = 0.01).

TNFAIP3
TNFα-induced protein 3 (TNFAIP3) plays a protective role against the harmful effects of inflammation and is involved in immune regulation. Rs610604 in TNFAIP3 showed an association with good responses to etanercept (PASI75 at 6 months, p = 0.007).

TNF, TNFRSF1B
TNFα transmits signals through TNF receptor superfamily member 1B (TNFRSF1B) , which is expressed predominantly on Tregs and is responsible for initiating immune modulation. Carriage of the TNF-857C (rs1799724) or TNFRSF1B-676T (rs1061622) alleles was associated with a positive response in patients treated with etanercept (PASI75 at 6 months, p = 0.002 and p = 0.001, respectively).
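Carrier frequencies such as those above are usually summarized as an odds ratio with a Wald 95% confidence interval. A self-contained sketch using invented counts, not data from the cited studies:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = carriers/responders, b = carriers/non-responders,
    c = non-carriers/responders, d = non-carriers/non-responders."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Invented example counts: OR = 3.6, CI roughly (1.4, 9.1).
print(odds_ratio_ci(30, 10, 20, 24))
```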
3.4.3. Adalimumab (ADA) & Infliximab (IFX/INF)

CPM
Carboxypeptidase M (CPM) is involved in the maturation of macrophages in psoriasis pathogenesis. A CNV of the CPM gene was significantly associated with adalimumab response among 70 Spanish patients (PASI75 at 3 and 6 months, p < 0.05).

HLA
The rs9260313 variant in the HLA-A gene was found to be associated with more favorable responses to adalimumab (PASI75 at 6 months, p = 0.05). Among 169 Spanish patients, HLA-Cw06 positivity was associated with a better response to adalimumab (PASI75 at 6 months, p = 0.018).

IL17F
IL-17F , activated by IL23/Th17, is recognized as having a critical role in the pathogenesis of psoriasis. In a cohort study in Spain, carriers of the TC genotype of IL-17F rs763780 were associated with a lack of response to adalimumab ( n = 67, PASI75 at weeks 24–28, p = 0.0044) but, interestingly, with a better response to infliximab ( n = 37, PASI at weeks 12–16, p = 0.023; PASI at weeks 24–28, p = 0.02).

NFKBIZ
The nuclear factor of kappa light polypeptide gene enhancer in B cells inhibitor, zeta (NFKBIZ) gene encodes an atypical inhibitor of nuclear factor κB (IκB) protein involved in the inflammatory signaling of psoriasis. Among 169 Spanish patients, the deletion variant of NFKBIZ rs3217713 was associated with a better response to adalimumab (PASI75 at 6 months, p = 0.015).

TNF, TNFRSF1B
None of the genotyped SNPs of the TNF , TNFRSF1A , and TNFRSF1B genes were associated with responsiveness to treatment with infliximab or adalimumab.

TRAF3IP2
TNF receptor-associated factor 3 interacting protein 2 (TRAF3IP2) is involved in IL-17 signaling and interacts with members of the Rel/NF-κB transcription factor family. The rs13190932 variant in the TRAF3IP2 gene showed an association with a favorable response to infliximab (PASI75 at 6 months, p = 0.041).
3.5. IL-12/IL-23 Antagonists

Ustekinumab, an IL-12/IL-23 antagonist, targets the p40 subunit shared by IL-12 and IL-23, whereas guselkumab, tildrakizumab, and risankizumab target the p19 subunit of IL-23. These four drugs are efficacious in treating moderate to severe plaque psoriasis. Because ustekinumab was the earliest commercially available drug among the IL-23 antagonists, relatively abundant studies of the association between response and gene status have been conducted for it. In contrast, there is limited research on the genetic predictors of clinical response to guselkumab, tildrakizumab, and risankizumab.

3.5.1. Ustekinumab (UTK)

Better Response of Efficacy

In a Spanish study that enrolled 69 patients, good responders at 4 months were associated with the CC genotype in ADAM33 rs2787094 ( p = 0.015), the CG/CC genotype in HTR2A rs6311 ( p = 0.037), the GT/TT genotype in IL-13 rs848 ( p = 0.037), the CC genotype in NFKBIA rs2145623 ( p = 0.024), and the CT/CC genotype in TNFR1 rs191190. Rs151823 and rs26653 in the ERAP1 gene showed associations with a favorable response to anti-IL-12/23 therapy among 22 patients from the UK. Several studies have shown that the presence of the HLA-Cw*06 or Cw*06:02 allele may serve as a predictor of a faster and better response to ustekinumab in Italian, Dutch, Belgian, American, and Chinese patients. A recent meta-analysis confirmed that HLA-C*06:02 -positive patients had higher response rates (PASI75 at 6 months, p < 0.001). In addition, the presence of the GG genotype of the IL12B rs6887695 SNP and the absence of the AA genotype of IL12B rs3212227 or the GG genotype of the IL6 rs1800795 SNP significantly increased the probability of therapeutic success in HLA-Cw6 -positive patients. Rs10484554 in the HLA-Cw gene did not show an association with a good response to ustekinumab in a Greek population. Patients with the heterozygous genotype (CT) of IL12B rs3213094 showed better PASI improvement on ustekinumab than those with the reference genotype (CC) (∆PASI at 3 months, p = 0.017), but the result was not replicated with regard to PASI75. The genetic polymorphisms TIRAP rs8177374 and TLR5 rs5744174 were associated with a better response in the Danish population (PASI75 at 3 months, p = 0.0051 and p = 0.0012, respectively).

Poor Response of Efficacy

In a Spanish study that enrolled 69 patients treated with ustekinumab for psoriasis, poor responders at 4 months were associated with the CG/CC genotype in CHUK rs11591741 ( p = 0.029), the CT/CC genotype in C9orf72 rs774359 ( p = 0.016), the AG/GG genotype in C17orf51 rs1975974 ( p = 0.012), the CT genotype in SLC22A4 rs1050152 ( p = 0.037), the GT/TT genotype in STAT4 rs7574865 ( p = 0.015), and the CT/CC genotype in ZNF816A rs9304742 ( p = 0.012). Among 376 Danish patients, the genetic variants IL1B rs1143623 and rs1143627, which are related to increased IL-1β levels, may predict unfavorable outcomes (PASI75 at 3 months, p = 0.0019 and 0.0016, respectively), similar to the results with anti-TNF agents.
An association between the TC genotype of IL-17F rs763780 and non-response to ustekinumab was found in 70 Spanish patients (PASI75 at 3 and 6 months, p = 0.022 and p = 0.016, respectively). Patients homozygous (GG) for the rs610604 SNP in TNFAIP3 showed worse PASI improvement on ustekinumab than those with the TT genotype ( p = 0.031). Carriers of the G allele of TNFRSF1B rs1061622 under anti-TNF or anti-IL-12/IL-23 treatment tended to be non-responders in 90 patients from Spain (PASI < 50 at 6 months, p = 0.05).
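The HLA-C*06:02 meta-analysis cited in the preceding subsection pools per-study effects into a single estimate. As a rough illustration of the standard fixed-effect, inverse-variance approach, assuming hypothetical per-study odds ratios and 95% CIs rather than the actual meta-analysis data:

```python
import math

def pooled_or(studies):
    """Fixed-effect inverse-variance pooling of odds ratios.
    Each study is (OR, ci_low, ci_high) with a 95% CI."""
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        w = 1.0 / se**2                                   # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Hypothetical studies: (OR, 95% CI lower bound, 95% CI upper bound).
print(pooled_or([(2.5, 1.4, 4.5), (1.8, 1.1, 2.9), (3.0, 1.5, 6.0)]))
```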
3.6. IL-17 Antagonists

Secukinumab and ixekizumab are human monoclonal antibodies that bind the interleukin IL-17A, while brodalumab is a human monoclonal antibody against the IL-17 receptor (IL-17R) and is therefore a pan-inhibitor of IL-17A, IL-17F, and IL-25. The three IL-17 antagonists are currently used in the treatment of moderate-to-severe psoriasis.

3.6.1. Secukinumab (SCK), Ixekizumab (IXE), and Brodalumab (BDL)

HLA-Cw6
Responses to SCK were comparable for up to 18 months between HLA-Cw*06 -positive and -negative patients, as the drug is highly effective regardless of HLA-Cw6 status in Italy and Switzerland.
IL-17
No associations were found between five genetic variants of IL-17 (rs2275913, rs8193037, rs3819025, rs7747909, and rs3748067) and ΔPASI, PASI75, or PASI90 after 12 and 24 weeks of anti-IL-17A agents, including SCK and IXE, in Europeans. A lack of pharmacogenetic data for BDL was noted during the review.

Apremilast, a selective phosphodiesterase 4 (PDE4) inhibitor, is used to treat plaque psoriasis. A Russian study of 34 patients screened 78 pre-selected single-nucleotide polymorphisms and found that minor alleles of the IL1β (rs1143633), IL4 (IL13) (rs20541), IL23R (rs2201841), and TNFα (rs1800629) genes were associated with a better outcome (PASI75 at 6.5 months, p = 0.05, 0.04, 0.03, and 0.03, respectively).

Globally used topical therapies for psoriasis include retinoids, vitamin D analogs, corticosteroids, and coal tar. Evidence linking treatment response to the pharmacogenetics of corticosteroids, retinoids, and coal tar is lacking. The link between VDR genes, which encode the nuclear hormone receptor for vitamin D3, and the response to calcipotriol has been discussed but remains controversial across populations. Lindioil is another topical medicine, refined from Chinese herbs, that is effective in treating plaque psoriasis. It has been reported that HLA-Cw*06:02 positivity is associated with a better response (PASI75 at 3 months, p = 0.033), while HLA-Cw*01:02 positivity is associated with a poor response, in 72 patients (PASI75 at 2.5 months, p = 0.019).

Psoriasis has been known to have a genetic component for over half a century. With breakthroughs in genetic analysis techniques, more and more psoriasis susceptibility genes have been detected and analyzed as predictive markers of treatment response where unexplained and unsatisfactory treatment responses and side effects have been recorded. In addition, several reviews have highlighted the findings of pharmacogenomics in psoriasis over the last ten years. In this review, regarding efficacy, HLA-Cw*06 positivity implied a more favorable response to treatment with methotrexate and ustekinumab, whereas HLA-Cw6 status was not indicative of treatment response to adalimumab, etanercept, or secukinumab. The ABCB1 rs1045642 polymorphism may indicate poor responses in Greek and Russian populations. However, there are some limitations to the current review.
First, relevant data on anti-IL-17 agents are lacking, which reflects both that this class is relatively new to the market and that it shows outstanding responses irrespective of genotype. Further genetic analysis of acitretin, cyclosporine, and apremilast is also worth exploring. Secondly, the majority of the included pharmacogenomic studies of psoriasis were from Europe and America, which limits the applicability of the findings to Asian and African populations. This may reflect that Europe and America have more clinical trials or drug options, and hence more interest in studying treatment responses for psoriasis, than other regions; in addition, the accessibility of gene-analysis resources may affect the development of pharmacogenomic studies. Thirdly, the protocols used to identify the related genes vary between studies; a generalized, standardized method would facilitate the utilization and replication of pharmacogenomic studies. Fourthly, PharmGKB is a comprehensive resource that curates knowledge about the impact of genetic variation on drug responses for clinicians, but the level of evidence for the pharmacogenetic results in this database mostly remains low (level three) owing to conflicting results, small case numbers, or single studies. Because biomarkers must show a relatively strong effect to be of use in clinical decision-making, replicated large cohort studies of each medical therapy are required in different ethnic groups. The use of a global polygenic risk score has allowed the prediction of psoriasis onset in Chinese and Russian populations, and a polygenic score for psoriasis treatment response may be developed in the future. In addition, tofacitinib, a Janus kinase (JAK) inhibitor, was approved by the FDA for psoriatic arthritis in 2017. Although it is not approved for psoriasis alone, pharmacogenetic research on JAK inhibitors is expected, considering their potential cardiovascular and cancer risks in patients with rheumatoid arthritis. This review article updates the current pharmacogenomic studies of treatment outcomes for psoriasis. A standardized protocol could be established for utilization and comparison worldwide. Currently, high-throughput whole exome sequencing (WES) or whole genome sequencing (WGS) can rapidly obtain comprehensive genetic information for individuals. Basic genetic research promotes the progress of personalized medicine, and its development contributes to precise, individually effective treatment, providing alternatives when treatment fails, preventing adverse effects, and reducing the economic burden of treating psoriasis.
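As a closing illustration of the polygenic-score idea raised above: a polygenic risk score is typically a weighted sum of risk-allele dosages across many SNPs. The sketch below uses hypothetical SNP identifiers and weights; it does not reproduce the published Chinese or Russian scores:

```python
# Minimal polygenic risk score: weighted sum of risk-allele dosages.
# SNP IDs and weights here are hypothetical placeholders.
WEIGHTS = {"rs0000001": 0.40, "rs0000002": 0.25, "rs0000003": -0.10}

def polygenic_score(dosages: dict) -> float:
    """dosages maps SNP ID -> risk-allele count (0, 1, or 2).
    SNPs missing from the input contribute nothing."""
    return sum(WEIGHTS[snp] * dosages.get(snp, 0) for snp in WEIGHTS)

# A patient carrying 2, 1, and 0 copies of the respective risk alleles:
print(polygenic_score({"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}))  # 1.05
```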
Enterprise-Based Participatory Action Research in the Development of a Basic Occupational Health Service Model in Thailand
The International Labor Organization (ILO, Geneva, Switzerland) estimates that 2.9 million men and women die annually due to illnesses or accidents at work. In addition, 402 million workers worldwide suffer non-fatal occupational injuries. In Thailand, the number of employers increased by approximately 15% between 2017 and 2021 (371,432 vs. 436,817), and the number of employees increased by approximately 12% over the same period (9,777,751 vs. 11,172,844). Remarkably, despite the rise in employers and employees, the number of occupational injuries or illnesses on record at the Social Security Office decreased from 86,278 to 78,245 between 2017 and 2021. Thus, in the face of increasing industrial activities in the last two decades, which introduce new hazards and new health outcomes, one way to address the occupational health issues of the working population is to strengthen the provision of occupational health services (OHS), and in particular basic occupational health services (BOHS). The ILO OHS Convention 161 defines OHS as "services entrusted with essentially preventive functions and responsible for advising the employer, the workers, and their representatives in the undertaking on the requirements for establishing and maintaining a safe and healthy working environment which will facilitate optimal physical and mental health regarding work and the adaptation of work to the capabilities of workers in the light of their state of physical and mental health." ILO Convention C161 was thus ratified to protect workers against sickness, disease, and injury arising from their employment. Any OHS program is accordingly established to prevent work-related injuries and illnesses. As such, OHS has beneficial impacts on the workplace with respect to protecting worker health, promoting health, mental well-being, and workability, and preventing illness and accidents. Notwithstanding good intentions, OHS are unequally distributed and vary from country to country, depending on ILO C161 ratification. OHS coverage is high (75% to 97%) in countries that have ratified ILO C161, such as Croatia, Finland, and Macedonia. In comparison, OHS coverage in countries that have not ratified ILO C161 is low, ranging between 5% and 10% (i.e., China and India). The estimated global coverage of OHS is 18.8%, so roughly 80% of the world's working population does not have access to OHS. Thailand's OHS coverage is inconsistent, and systematic assessments are limited. One survey of OHS coverage at provincial public health offices and primary care units revealed that OHS activities ranged between 16 and 100 percent, but the results are not generalizable owing to the limited sample. OHS in Thailand is administered by three government ministries: the Ministry of Labor, the Ministry of Public Health, and the Ministry of Industry. At the intermediate level, Thailand has no agencies responsible for supporting occupational health services, including (1) educational and training programs for occupational health professionals' qualification, certification, and competencies; (2) occupational health service standards regulated by law; and (3) quality evaluation methods for occupational health services in enterprises. At the local level, no specific occupational health service organizations at the organizational level are mandated by Thai laws and regulations.
The Ministry of Public Health developed OHS standards to audit the occupational health professionals (OHP) that provide OHS to enterprises, but the Ministry of Labor does not enforce OHS standards, and no OHS recommendations exist for enterprises. Since Thailand has not ratified the ILO C161 Convention, Thai laws concerning OHS functions and OHP duties are incomplete. Developing a BOHS model would identify BOHS provision in the workplace and clarify the roles of OHP in Thailand. The future of OHS laws remains unpredictable, and an enterprise-level policy may provide an additional route to OHS provision in the workplace. Participatory action research (PAR) was deemed appropriate in this study because it helps participants understand the causes of problems, leading to the development of action plans for sustainable problem-solving, on-the-job BOHS provision, and identification of the functions of OHP. Although Thai law does not mandate the use of BOHS, applying the PAR model will be essential to fostering internal cooperation; once Thai law requires BOHS, the PAR model can proceed quickly alongside other Thai laws. The research question is therefore: 'What should an in-plant model of basic occupational health services in workplaces look like?' A single company was chosen to develop a BOHS model in Thailand and to identify and understand the underlying issues, with a view to later expanding the BOHS model in Thailand; other enterprises could generally modify the BOHS model to suit their occupational health concerns. The current study aimed to use PAR to establish an OHS model in a large-sized enterprise in northeastern Thailand.

2.1. Occupational Health Service Model

Occupational health service models at the enterprise level are classified by the place that provides the services, for instance, an in-plant model, an inter-enterprise model, an industry-oriented model, hospital outpatient clinics, private health centers, primary health care units, and a social security model. The in-plant model has been applied in large enterprises to provide occupational health services. Both the full range of occupational health services and non-occupational health services are provided by multidisciplinary staff such as occupational physicians, occupational nurses, occupational hygienists, ergonomists, toxicologists, occupational physiologists, laboratory and x-ray technicians, physiologists, social workers, health educators, counselors, and industrial psychologists. Smaller enterprises provide occupational health services through one or more full-time occupational health nurses and a part-time occupational physician who prepares standing orders for procedures, medication, and visits as necessary. In addition, enterprises contract with external service suppliers to provide in-plant specialized occupational health services (i.e., occupational hygiene, toxicology, and safety engineering). The other occupational health service models cannot provide a full range of high-quality services owing to a lack of familiarity with the workplace and the limitations of occupational health personnel. This research applies the in-plant model to developing occupational health services in the workplace because the aim is to develop occupational health services in the workplace as a model; the chosen model should therefore offer the greatest coverage and the highest quality among the occupational health service models.
2.2. Development of Occupational Health Services at the National Level

The occupational health service infrastructure system is divided into three levels: national, intermediate, and local. Each level has different objectives: the national level regulates laws and policies, the intermediate level supports services, and the local level follows the service provisions. At the national level, Finland's authority responsible for regulation and policy is the Ministry of Social Affairs and Health (MoSAH), Malaysia's is the Ministry of Human Resources (MoHR), and Vietnam's is the Ministry of Health (MoH). At the intermediate level, the agencies supporting services such as training, consultation, certification, research, and development are, in Finland, the Finnish Institute of Occupational Health (FIOH); in Malaysia, the National Institute of Occupational Safety and Health (NIOSH); and in Vietnam, the National Institute of Occupational and Environmental Health (NIOEH). Lastly, at the local level, the occupational health service infrastructure at the organizational level comprises laws or regulations, collective agreements between employers and employees, and organizational personnel. Depending on company size, the service provision agencies in Finland and Malaysia are specific occupational health providers, whereas in Vietnam there are no particular organizations. The organizations responsible for occupational health services in Finland, Malaysia, and Vietnam are compared in the corresponding table.

2.3. Development of Occupational Health Services in Thailand

At the national level, Thailand's occupational health service infrastructure includes the Ministry of Labor, the Ministry of Public Health, and the Ministry of Industry. The Ministry of Labor administers social security to provide compensation for work-related conditions (Workmen's Compensation Fund) and non-work-related conditions (social security). The Department of Labor Protection and Welfare promotes workplace safety, regulates companies, and supports academic institutions. The Ministry of Public Health oversees the Department of Disease Control, the Department of Medical Services, and the Office of the Permanent Secretary. The Department of Disease Control provides secondary occupational health services in hospitals; the Department of Medical Services provides tertiary occupational health services (Nopparatrajathanee Hospital); and the Office of the Permanent Secretary oversees the Provincial Public Health Offices and the Primary Care Units. The Ministry of Industry publishes Regulation of the Ministry of Industry No. 4409 (B.E. 2555), including the Guideline for Examination Due to Occupational Chemical and Physical Hazards in Workplaces. At the intermediate level, Thailand has no agency responsible for supporting occupational health services with respect to (1) educational and training programs for occupational health professionals' qualification, certification, and competencies; (2) occupational health service standards regulated by law; and (3) quality evaluation methods for occupational health services in enterprises. At the local level, there are no specific occupational health service bodies at the organizational level mandated by Thai laws and regulations. Instead, the Thai OHS model comprises a tertiary referral center, in-house OHS, and public occupational health clinics.
2.4. Occupational Health Professionals

Occupational health services may be organized by undertakings, public authorities or official services, social security institutions, or other competent bodies, based on national conditions and practice. In addition, some countries regulate occupational health services according to enterprise size, such as in-plant occupational health services in large enterprises and group services in small enterprises.

Occupational health physicians: management activities for employees, including preplacement medical examination, medical surveillance, medical removal, return to work, follow-up, investigation of occupational poisoning or occupational disease, health promotion, post-employment medical examination, and implementation of an occupational health program in the workplace, such as periodic education, advice on workplace health and safety issues, help with the audit/evaluation of the occupational health program, and maintenance of employees' medical records. The responsibilities of occupational physicians in the US, Malaysia, and Thailand are compared in the corresponding table.

Occupational health nurses: manage cases and provide treatment, follow-up, referrals, and emergency care for occupational injuries and illnesses; counsel workers regarding occupational injuries and illnesses, emotional problems, and substance abuse; promote health and health education; advise the employer on legal and regulatory compliance; and assist in risk management, such as collecting health and hazard data and using the data to prevent injuries and illnesses. According to the Thai Ministry of Health, doctors and nurses must be provided when there are 200 employees or more. For 200 or more employees, one or more nurses (not specifically occupational nurses) must be employed during all working hours, and one or more general practitioners (not specifically occupational physicians) must attend twice or more per week, for 6 hours or more per week. For 1000 or more employees, two or more nurses (not specifically occupational nurses) must be employed throughout the work period, and one or more general practitioners (not specifically occupational physicians) must attend three times or more per week, for 12 hours or more per week. The responsibilities of occupational health nurses in the US, Malaysia, and Thailand are compared in the corresponding table.

Safety officers: advise the employer on safety and health measures in the workplace; inspect the machinery, plant, equipment, substances, appliances, and processes used in the workplace that may affect employees' health; and investigate occupational injuries and illnesses. Under Thai regulation, safety officers are mandated, whereas other occupational health professionals are not. The number, types, and responsibilities of safety officers vary depending on the type of enterprise and the number of employees. Furthermore, in addition to the duties mentioned above, safety officers are responsible for hazard identification, risk assessment, advice on safety policies, education and training of employees for safe work, and data collection and analysis to report occupational injuries and illnesses. The responsibilities of safety officers in the US, Malaysia, and Thailand are compared in the corresponding table.
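The Thai staffing thresholds described above are simple headcount rules. A small sketch encoding them, where the field names are my own and the rule set is only as described in this section:

```python
def required_medical_staffing(num_employees):
    """Minimum nurse/physician provision implied by the Thai Ministry of
    Health thresholds summarized above (illustrative encoding only)."""
    if num_employees >= 1000:
        return {"nurses": 2, "gp_visits_per_week": 3, "gp_hours_per_week": 12}
    if num_employees >= 200:
        return {"nurses": 1, "gp_visits_per_week": 2, "gp_hours_per_week": 6}
    return None  # no doctor/nurse requirement stated below 200 employees

print(required_medical_staffing(450))   # 1 nurse, GP twice a week, >= 6 h
print(required_medical_staffing(1500))  # 2 nurses, GP 3x a week, >= 12 h
```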
2.5. Participatory Action Research

This study aims to establish and apply occupational health services in an enterprise. Because most of the data in this study were collected as qualitative data, in order to explore the problems by empowering people to share their opinions and find appropriate methods to address the difficulties, qualitative research is the most relevant approach. There are many types of qualitative research, but the most appropriate for this study is participatory action research, because participatory action research begins with fundamental problems or issues in societies where self-determination and self-development are limited, leading to participatory action; the research then develops to create political debate, discussion, and change. Similarly, in this research, the issues present at the start needed to be solved appropriately. Three types of action research can be distinguished by their objectives: technical/scientific/collaborative action research; practical/mutual collaborative/deliberative action research; and emancipating/enhancing/critical science/participatory action research. The three types differ in the source of concerns, methods, goals, and outcomes. The person who identifies the issues varies by type: the researchers in the first type, the researchers and participants in the second type, and the participants with the assistance of researchers in the third type. The goals also differ: the first type tests a particular intervention based on a pre-specified theoretical framework; the second type seeks to understand new common problems, their causes, and plans for changing processes; and the third type explains and resolves actual problems in a specific setting and assists participants in identifying and raising consciousness of their fundamental problems. The outcome of the first type is efficient, immediate, but unsustainable change; of the second type, immediate enthusiasm and short-lived interventions; and of the third type, achieved and sustained changes focused on personal and cultural norms. For the reasons mentioned above, and because the objectives of this study are to establish and apply occupational health services in an enterprise, participants should participate substantially in this study, identifying the problems in their enterprise until the planned change process is sustainable and the problems are resolved with participants' awareness. Therefore, the most appropriate form of action research for this study is participatory action research.
In smaller enterprises, occupational health services were provided by one or more full-time occupational health nurses and a part-time occupational physician who prepared standing orders for procedures, medication, and visits as necessary; in addition, enterprises contracted external suppliers for specialized in-plant occupational health services (i.e., occupational hygiene, toxicology, and safety engineering). Other occupational health service models cannot provide a full range of high-quality services, owing to a lack of familiarity with the workplace and the limited availability of occupational health personnel. This research applies the in-plant model because its aim is to develop occupational health services in the workplace as a model, and the in-plant model offers the greatest coverage and the highest quality of service.

The occupational health service infrastructure is divided into three levels: national, intermediate, and local. Each level has different objectives: the national level regulates laws and policies, the intermediate level supports services, and the local level carries out the service provision. At the national level, the authority responsible for regulation and policy is, in Finland, the Ministry of Social Affairs and Health (MoSAH); in Malaysia, the Ministry of Human Resources (MoHR); and in Vietnam, the Ministry of Health (MoH). At the intermediate level, the agencies that support services (for example, training, consultation, certification, research, and development) are, in Finland, the Finnish Institute of Occupational Health (FIOH); in Malaysia, the National Institute of Occupational Safety and Health (NIOSH); and in Vietnam, the National Institute of Occupational and Environmental Health (NIOEH). Lastly, at the local level, the occupational health service infrastructure at the organizational level comprises laws or regulations, collective agreements between employers and employees, and organizational personnel. Depending on company size, the service provision agencies in Finland and Malaysia are specific occupational health providers, whereas there are no particular organizations in Vietnam. The organizations responsible for occupational health services in Finland, Malaysia, and Vietnam are compared in the accompanying table.
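As a compact restatement of this three-level structure, the following illustrative Python dictionary collects the agencies named above; the data structure itself is only a sketch, not part of any cited framework.

```python
# Illustrative summary of the three-level OHS infrastructure described above.
# Agency names are taken from the text; the dictionary layout is hypothetical.
OHS_INFRASTRUCTURE = {
    "national": {
        "role": "regulates laws and policies",
        "Finland": "Ministry of Social Affairs and Health (MoSAH)",
        "Malaysia": "Ministry of Human Resources (MoHR)",
        "Vietnam": "Ministry of Health (MoH)",
    },
    "intermediate": {
        "role": "supports services (training, consultation, certification, R&D)",
        "Finland": "Finnish Institute of Occupational Health (FIOH)",
        "Malaysia": "National Institute of Occupational Safety and Health (NIOSH)",
        "Vietnam": "National Institute of Occupational and Environmental Health (NIOEH)",
    },
    "local": {
        "role": "carries out service provision",
        "Finland": "specific occupational health providers (size-dependent)",
        "Malaysia": "specific occupational health providers (size-dependent)",
        "Vietnam": "no particular organizations",
    },
}

for level, info in OHS_INFRASTRUCTURE.items():
    print(f"{level}: {info['role']}")
```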
At the national level, Thailand's occupational health service infrastructure includes the Ministry of Labor, the Ministry of Public Health, and the Ministry of Industry. The Ministry of Labor controls social security, providing compensation for work-related diseases (Workmen's Compensation Fund) and non-work-related diseases (social security); its Department of Labor Protection and Welfare promotes workplace safety, regulates companies, and supports academic institutions. The Ministry of Public Health oversees the Department of Disease Control, the Department of Medical Service, and the Office of the Permanent Secretary: the Department of Disease Control provides secondary occupational health services in hospitals; the Department of Medical Service provides tertiary occupational health services (Nopparatrajathanee Hospital); and the Office of the Permanent Secretary provides the Provincial Public Health Offices and Primary Care Units. The Ministry of Industry publishes the Regulation of the Ministry of Industry No. 4409 (B.E. 2555), including the Guideline for Examination Due to Occupational Chemical and Physical Hazards in Workplaces. At the intermediate level, Thailand has no agency responsible for supporting occupational health services by (1) providing educational and training programs for occupational health professionals' qualification, certification, and competencies; (2) setting occupational health service standards regulated by law; and (3) establishing methods for evaluating the quality of occupational health services in enterprises. At the local level, no specific occupational health service bodies at the organizational level are mandated by Thai laws and regulations. Instead, the Thai OHS model comprises a tertiary referral center, in-house OHS, and public occupational health clinics.
3.1. Study Design

The selection of the enterprise for the case study was made from among large enterprises with more than 200 employees.
A large enterprise was selected as the case study for developing occupational health services in the workplace for two reasons: (1) the employer's awareness of, and willingness to finance, OHS, whereas surveys of small-to-medium enterprises have focused more on safety injuries than on occupational health; and (2) the enterprise's previous experience with occupational health problems, which enables it to identify occupational health problems and needs. Under the Ministerial Regulation on the Specification of Occupational Safety, Hygiene, and Environment Management Standards (2006), this enterprise size falls into category 2, requiring one or more nurses and safety officers. We selected an aluminum production plant in the northeast region. The conceptual framework of the PAR cyclical activities is shown in the accompanying figure.

3.2. Participants and Recruitment

The thirty research participants in this study included:
- The employer, including a manager, who participated in analyzing problems and causes and, based on the focus group discussions, generated action plans to solve problems during the planning phase.
- Employees, including 20 sector heads and workers, who were the key informants providing most of the information in the focus group discussions on occupational health problems and on the occupational health needs of the occupational health service during the problem and cause analysis. They represented all career fields. The workers who participated in the focus group discussions (FGDs) as key informants were selected by snowball sampling. The inclusion criteria were workers who had a work-related illness, a work-related injury, or other health problems, or who were clients of the OHS. The researcher initially contacted the safety officers to inform them of the inclusion criteria; the safety officers then grouped and appointed sector heads and employees. Next, we recruited participants through in-person interviews at the enterprise. Sector heads and workers suggested additional participants meeting the inclusion criteria, up to 20.
- The occupational safety and health professionals, comprising an occupational physician, three occupational health nurses, and two safety officers. The occupational physician and the three occupational health nurses educated and counseled managers, sector heads, and worker representatives in each sector about the occupational health service. The two safety officers provided primary (FGD) and secondary (document) data for the problem and cause analysis and, based on the focus group discussions, generated action plans in the planning process; they also carried out the assigned action plans in the action and observation phase.
- The moderator, who motivated participants to express their opinions during the problem and cause analysis and the development of action plans. The inclusion criteria for the moderator were employment in the workplace, good communication skills, and the ability to communicate with others in the company. In this study, a safety officer served as the moderator.

3.3. Data Collection

Primary data were drawn from participant observations, walk-through surveys, field notes, focus group discussions, and meeting minutes.
Secondary data were extracted from documentation, including industrial hygiene data, safety data sheets, lists of preplacement and periodic medical examinations, reports of occupational injuries and illnesses, OHS provided in the first aid room, illness records, medical unit statistics, the fit-for-work system, return-to-work assessments, and medical surveillance. The researchers requested permission to observe and take notes during the focus group discussions; to protect the identity and privacy of the participants, no audio, video, or photographs were recorded. Each focus group discussion session took approximately 1 to 1.5 h. A researcher developed the PAR model in the aluminum production industry in northeastern Thailand between August 2021 and 2022. The PAR cyclical activities comprised two loops of cyclical movements, described in Section 3.5.

3.4. Data Analysis

The primary and secondary data described in Section 3.3 were analyzed using both inductive and deductive thematic analyses.

3.4.1. Situation Analysis

This study applied the five key components of occupational health services under the ILO C161 Convention for the situation analysis: (1) policy: the objectives of the occupational health service, adherence to national policy, coverage of all workers, action plans, and consulted organizations; (2) functions: the eleven occupational health service functions; (3) organization: provisions for establishment, organization, and cooperation; (4) operation: multidisciplinary personnel, informing workers of health hazards, the working environment, and the relationship between ill health and hazards; and (5) provision: the authority responsible for supervising and advising the occupational health service. The situation analysis compared occupational health service development before and after the participatory action research.

3.4.2. Problem and Cause Analysis and Development of Action Plans

This study applied thematic analysis to analyze the problems and causes in phase 2. The thematic analysis proceeded as follows: (1) starting from the collected data, a researcher identified the data selected for analysis; (2) inductive analysis was performed by reviewing, interpreting, and identifying relationships in the collected data and categorizing common codes into six main themes in loop 1 and three main themes in loop 2; and (3) deductive analysis was conducted by comparing these relationships with the OHS elements of the ILO C161 Convention.
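To illustrate how the two analysis steps fit together, here is a minimal Python sketch of inductive coding followed by a deductive check against the five C161 components; the example codes, themes, and mappings are hypothetical, not the study's actual data.

```python
from collections import defaultdict

# Step 2 (inductive): group common codes from FGD transcripts into main themes.
# These coded segments are invented examples for illustration only.
coded_segments = [
    ("same checkup for every department", "non-specific medical examination"),
    ("no doctor explains results", "no result management"),
    ("workers unaware of hazards", "lack of hazard education"),
]
themes = defaultdict(list)
for quote, code in coded_segments:
    themes[code].append(quote)

# Step 3 (deductive): compare each inductive theme against the ILO C161 components.
C161_COMPONENTS = {"policy", "functions", "organization", "operation", "provision"}
theme_to_component = {  # hypothetical assignments
    "non-specific medical examination": "functions",
    "no result management": "operation",
    "lack of hazard education": "operation",
}
assert set(theme_to_component.values()) <= C161_COMPONENTS
for theme, component in theme_to_component.items():
    print(f"{theme!r} maps to C161 component: {component}")
```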
3.5. Participatory Action Research

The first loop included four phases: (1) the preparation phase; (2) the planning phase; (3) the action and observation phase; and (4) the reflection phase.

3.5.1. Phase 1: Preparation Phase

We reviewed the company's industrial hygiene data, safety data sheets, preplacement and periodic medical examination lists, occupational injury and illness reports, and the OHS provided in the first aid room.

3.5.2. Phase 2: Planning Phase

A situational analysis was performed using the ILO C161 Convention. A problem and cause analysis was also conducted, comprising participant observations, a walk-through survey of occupational health risks and problems, a survey of occupational health illnesses, OHS education provided to workers by the OHP, and FGDs conducted by the moderator. During the research, the participants developed action plans while the researchers observed and took minutes. FGDs were conducted to create the action plans and to assess their feasibility. The researchers then reviewed the action plans of the managers, sector heads, workers, safety officers, and human resources workers with an OHP consultant.

3.5.3. Phase 3: Action and Observation Phase

A researcher supported the preparation of the responsible persons and of related documents, such as Thai laws, guidelines, and manuals, for the research participants before the action plans were implemented. The responsible persons carried out the implementation of the action plans. The results, and the problems that arose during implementation, were collected through participant observations and FGDs.

3.5.4. Phase 4: Reflection Phase

This phase consisted of summarizing and analyzing the data after the action plans had been implemented. The resulting information was prepared by one of the researchers. Finally, FGDs were conducted to assess the achievement of the goals and to plan second-loop improvements.

The second loop included three phases: (1) the planning phase, using FGDs after identifying the problems in action plan implementation to develop new action plans; (2) the action and observation phase, conducted in the same way as in the first loop; and (3) the reflection phase, summarizing and analyzing the data after implementing the action plans, with the resulting information prepared by a researcher. The development of the OHS model is summarized in the accompanying figure.
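The two-loop cycle described in Section 3.5 can be summarized in a few lines of Python; the phase names come from the text, while the runner function is a hypothetical illustration.

```python
# A minimal sketch of the two-loop PAR cycle (Section 3.5).
PAR_LOOPS = [
    ["preparation", "planning", "action and observation", "reflection"],  # loop 1
    ["planning", "action and observation", "reflection"],                 # loop 2
]

def run_par(loops):
    for i, phases in enumerate(loops, start=1):
        for phase in phases:
            print(f"Loop {i}: {phase} phase")
        # The reflection output of loop i feeds the planning phase of loop i + 1.

run_par(PAR_LOOPS)
```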
4.1. Participants' Demographic Characteristics

The mean age of the participants (n = 30) was 42.18 years; the oldest participant, an occupational health nurse, was 58. The participants were classified by job or disease criteria. The job criteria comprised: (1) the employer; (2) an occupational physician; (3) an occupational health nurse; and (4) a safety officer. The disease criteria comprised: (1) work-related illnesses, including lower back pain and myofascial pain syndrome; (2) work-related injuries, including accidents and foot fractures; and (3) other health problems, including anemia, asthma, clavicle fracture, diabetes mellitus, hypertension, kidney disease, migraine, and thyroid disease. The demographic characteristics of the participants are described in the accompanying table.

4.2. Situational Analysis

The main findings of the two loops of PAR are as follows. The research participants included managers, safety officers, human resources workers, heads of sectors, workers, and the OHP. During the planning phase, a situational analysis was conducted according to the ILO C161 Convention, covering the five components below.

4.2.1. Policy

The policy of this company is that safety, occupational health, and the environment are priorities. The policy includes six items, stating that the enterprise will: (1) strictly follow all relevant laws, regulations, and safety and environment standards; (2) continuously improve safe work processes in order to achieve a safe and healthy working environment; (3) establish a safety, environment, and energy committee (SEE committee); (4) promote, support, train, and assess the risks of safety, occupational health, and the environment, to encourage safety awareness among employees at all levels; (5) require all supervisors to supervise work, give advice, coach, and be role models; and (6) communicate company policies through activities involving employees, stakeholders, and nearby communities. The company's policy is consistent with all applicable laws and regulations and is comparable to ILO Convention C161. A previous study found a need to maintain and strengthen worker health and safety policy aimed at promoting and protecting workers' health and, consequently, at implementing strategies with a positive impact.

4.2.2. Functions

In 2022, most occupational health, safety, and environment activities followed policies related to safety activities, because these are regulated by law. OHS activities were organized to promote safety activities, including periodic examinations, medical treatment, and prevention. Compared with BOHS, the fit-for-work evaluation was changed from non-job-specific to specific to the jobs in each department. The list of preplacement examinations was changed from the same medical examination for all workers to a department-specific one. Return to work was newly developed. Medical surveillance was changed from a periodic examination to a medical surveillance program. The first aid room was changed from acute care to first aid, emergency treatment, and emergency preparedness.
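The before-and-after functional changes listed in Section 4.2.2 can be restated compactly; in the following sketch the wording is taken from the text, but the table-like structure is hypothetical.

```python
# Illustrative before/after summary of the functional changes in Section 4.2.2.
FUNCTION_CHANGES = {
    "fit for work": ("not job specific", "specific to the jobs in each department"),
    "preplacement examination": ("same examination for all workers", "department-specific list"),
    "return to work": ("absent", "newly developed"),
    "medical surveillance": ("periodic examination", "medical surveillance program"),
    "first aid room": ("acute care", "first aid, emergency treatment, and emergency preparedness"),
}

for function, (before, after) in FUNCTION_CHANGES.items():
    print(f"{function}: {before} -> {after}")
```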
4.2.3. Organization

The employers and employees were added to the legal and regulatory provisions for establishing OHS. Some hospital-based OHS functions were transferred to an in-plant OHS provider. The employers and employees collaborated in the PAR to develop the OHS.

4.2.4. Conditions of Operation

Responsibility for all activities was shifted from the safety officers alone to the other personnel involved in the working process. The workers were informed of the health risks and evaluated the relationship between the risks and their work. A new manual was developed for each department to identify the relationships between hazards and health effects. The OHP assessed the associations between workers, hazards, and health.

4.2.5. General Provisions

Because Thailand lacks an intermediate-level authority for supervising operations and advising on occupational health services, the company resolved these issues by consulting occupational health professionals working at the hospital's occupational medicine clinic to provide occupational health services, such as health promotion and consultation on abnormal medical examination results.

4.3. PAR Process to Develop Basic Occupational Health Services

4.3.1. PAR Process of Loop 1

The problem and cause analysis through FGDs and participant observation began with the construction of a research question tool, which passed a validity test by three occupational physicians before the FGDs. Inductive analysis was applied to the problems and causes by evaluating and interpreting the data, discovering links, and classifying common codes into the six key themes shown in the accompanying table. Action plans for resolving the problems were developed through the FGDs and participant observations, based on these six key problem themes. Implementation in conjunction with the medical surveillance program included: (1) a walk-through survey; (2) hazard identification; (3) industrial hygiene data; (4) significant-exposure and health risk assessment; (5) the design of medical surveillance, comprising history, physical examination, and biomarkers of exposure and effect; and (6) medical examination. Following the identification of the issues, action plans were developed. The problems, action plans, implementation, and results are shown in the accompanying table.

4.3.2. PAR Process of Loop 2

Limitations of the sub-branch, discomfort with the facility, and inadequacy of the documents were recognized through a focus group discussion during the evaluation phase. Action plans were then developed in response to the issues listed in the accompanying table.
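To show how the six implementation steps of Section 4.3.1 chain together, here is a minimal Python sketch; the step names come from the text, and the pipeline function is hypothetical.

```python
# A minimal sketch of the six implementation steps listed in Section 4.3.1.
SURVEILLANCE_STEPS = [
    "walk-through survey",
    "hazard identification",
    "industrial hygiene data",
    "significant-exposure and health risk assessment",
    "medical surveillance design (history, physical exam, biomarkers of exposure and effect)",
    "medical examination",
]

def run_surveillance_pipeline(department: str) -> None:
    # Each step feeds the next: hazards found in the survey determine which
    # biomarkers and examinations the program includes for this department.
    for i, step in enumerate(SURVEILLANCE_STEPS, start=1):
        print(f"[{department}] step {i}: {step}")

run_surveillance_pipeline("casting")
```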
4.4. Four Basic Occupational Health Service Activity Developments

4.4.1. Fit-for-Work Model Development

OHS activities related to the fit-for-work evaluation were developed along five dimensions: indication, medical evaluation, medical certificate, assessor, and medical evaluation result. The indication was extended from new workers to include job transfers. The medical evaluation was transformed from a single examination applied to every department into a new department-specific preplacement evaluation performed by an occupational physician, including a preplacement medical evaluation for specific tasks that may endanger workers' health (e.g., work in heat and hot environments), in line with the reasons for performing a fit-for-work evaluation (preplacement and job transfer). The general medical certificate was changed to a department-specific fit-for-work certificate form, including a review of the medical history, a general physical examination, and laboratory tests. Fitness for work was assessed by the occupational physician instead of by any doctor. Finally, the result of the medical evaluation was changed from no result being issued after the assessment to a differentiated fit-for-work opinion, including fit for duty, unfit, and fit subject to accommodation, the last being possible because the company could provide workers with recommended jobs in other departments. The development of the fit-for-work model is summarized in the accompanying figure.

4.4.2. Return to Work Model Development

As a result of legislation, the criteria for when a worker should undergo a return-to-work evaluation were changed from "those who had an injury or illness and were on sick leave for more than three days" to "those who had a chronic illness with medical restrictions (such as heart, lung, or brain disease), were admitted to the hospital after surgery, frequently took sick days, or were on sick leave for more than three days", together with those returning after a prolonged absence for health reasons, severe illness, or injury, in line with the reasons for performing a fit-for-work evaluation. No change was made to the employee's return-to-work assessment once the occupational physician detected the indication; the evaluation of returning to work is the responsibility of the occupational physician. A new return-to-work form was created to include additional documentation for medical review. After a return-to-work assessment, the fit-for-work designation was changed from "no management" to temporarily or permanently "fit", "unfit", "fit with restrictions", or "fit with limitations", together with recommending appropriate action to protect the worker and determining the worker's suitability for the job and needs for reassignment, using the fit-for-work opinions of fit for duty, unfit, and fit subject to accommodation. As in previous studies, the fit note should recommend fitness-for-work advice, work modifications, work solutions, adjustments to working hours and duties, and equipment. The return-to-work location was relocated from a hospital to the enterprise, with access to occupational health services, and the referral form was improved. The development of the return-to-work model is summarized in the accompanying figure.
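The fit-for-work opinions introduced in Sections 4.4.1 and 4.4.2 lend themselves to a small enumeration; in the sketch below the category names come from the text, while the helper function and its fields are hypothetical.

```python
from enum import Enum

# Fit-for-work opinions as described in Sections 4.4.1-4.4.2.
class FitForWork(Enum):
    FIT_FOR_DUTY = "fit for duty"
    UNFIT = "unfit"
    FIT_SUBJECT_TO_ACCOMMODATION = "fit subject to accommodation"

def record_opinion(worker: str, opinion: FitForWork, temporary: bool) -> str:
    # The designation can be temporary or permanent; "fit subject to
    # accommodation" implies reassignment to a recommended department.
    duration = "temporarily" if temporary else "permanently"
    return f"{worker}: {duration} {opinion.value}"

print(record_opinion("worker A", FitForWork.FIT_SUBJECT_TO_ACCOMMODATION, temporary=True))
```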
4.4.3. Medical Surveillance Model Development

Because medical evaluations had not improved, a new medical surveillance program was created. Occupational physicians conducted walk-through surveys, identified hazards, and assessed health risks. A manual of hazards and health effects was developed for each department and distributed to workers so that early signs of work-related or occupational diseases could be monitored. Since no changes had been made to the preplacement examination, a physician developed new baselines for each department. The new medical surveillance program comprised history taking, physical examination, and biological monitoring of both exposure and effect. An occupational physician and nurse now inform workers of their medical examination results, with interpretation and management, rather than recording them only in the medical record book. An occupational physician was responsible for confirming diagnoses, determining possible occupational causes, recommending appropriate action, and determining the worker's suitability for the job. The participation of the occupational physician was motivated by the implementation of workers' health surveillance. As in a previous study, medical surveillance contributed to the early identification of diseases, whether work-related or not, and was carried out by an occupational physician providing examinations for employees at specific times, such as periodic examinations and after leaves of absence or changes of function. The justifications for the transformation started with education and learning experience, which improved workers' knowledge and helped identify workers' occupational health needs. Moreover, feedback on the results was an important process for implementing BOHS, as in a previous study. The development of the medical surveillance model is summarized in the accompanying figure. Compared with the elements of a medical surveillance program, it comprises:
1. a walk-through survey;
2. known hazards;
3. area or personal sampling measurements;
4. an action level or health risk assessment;
5. the design of the medical surveillance program;
6. medical examinations at regular intervals;
7. the provision of information to employees;
8. the interpretation and ongoing analysis of the test data;
9. medical removal;
10. a written report;
11. re-evaluation of the employee's work environment as necessary;
12. medical record keeping;
13. audits; and
14. employer actions.
The new activities were developed according to these components of a medical monitoring program.

4.4.4. First Aid Room Model Development

The first aid room model was developed following the first aid and risk management process, consisting of: (1) identifying potential causes or needs assessment; (2) assessing the workplace risk; (3) fixing the problems of first aiders and first aid procedures; and (4) reviewing the effectiveness of first aid. The 31 first aiders were trained in first aid practices to respond to life-threatening emergencies through a basic life support (BLS) training program (approximately one first aider for every 47 employees). The registered nurse employed was responsible for supervising first aid and maintaining the first aid facilities. A new emergency-condition assessment was developed: previously there was no severity assessment in the first aid room and no identified conditions for hospital referral; now a triage system is applied in the first aid room procedure and referral conditions are identified. Similarly, whereas previously no assessment followed treatment, the nurse is now responsible for assessing clinical improvement after a worker's illness and treatment. The BLS training program was incorporated into the development of the new emergency plan. A health promotion program was developed after analyzing the medical examination results, as part of the problem-solving procedure. The development of the first aid room model is summarized in the accompanying figure.
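As a quick arithmetic check of the coverage figure just quoted: 31 first aiders at one per roughly 47 employees implies a workforce of about 31 × 47 ≈ 1457. A minimal sketch (the helper name is hypothetical):

```python
import math

def first_aiders_needed(num_employees: int, ratio: int = 47) -> int:
    """Minimum first aiders for one trained first aider per `ratio` employees."""
    return math.ceil(num_employees / ratio)

print(first_aiders_needed(1457))  # -> 31, consistent with the figure in the text
```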
4.5. The In-Plant Basic Occupational Health Service Model

The PAR process enabled the participants to share their experiences and collaborate on developing an organizational model for BOHS. The key enabling element of PAR in developing BOHS was that participants perceived the need to change and were willing to participate in change. The study used education and learning experience to improve occupational health service development, as well as workers' occupational health needs, the employer's experience, and feedback from occupational health service providers, to justify this process. A PAR cycle was developed to describe the process, which begins with a situation analysis and concludes with an evaluation of the replanning to ensure sustainability, as depicted in the accompanying figure. The education and learning experience enabled workers to help identify problems in the PAR process. Previous studies have found that developing an occupational health culture among workers, creating awareness, building on existing structures and procedures, and training informed by both needs assessment and evaluation are together crucial for successful training and long-term sustainable improvements. The PAR process was the tool for the key elements of BOHS development, which corresponded to the key elements of successful health and safety management: policy, organization, planning, implementation, feedback to enhance BOHS, and auditing. There is an urgent need for community-based strategies that build local agency in describing relevant issues and identifying acceptable solutions while building towards sustainable policy change over time. In the figure, the four rectangles in the middle represent the rationales for the transformation, while the outer rectangle represents the resulting OHS activities. From education and experience, a system for early detection and medical surveillance was developed. In response to occupational health needs, OHS activities were developed that included the management of medical examination results, return-to-work evaluation, and first aid and emergency treatment. The employer's experience led to OHS activities that included fit-for-work evaluation, medical evaluation, and emergency preparedness. In response to feedback from the OHS provider, OHS activities were developed that included health promotion, OHS in the plant, and recordkeeping. Following the steps of the BOHS activity process, the education and learning experience in the PAR process corresponded to information and education; medical surveillance comprised both work-environment surveillance and workers' health surveillance; the early detection of work-related or occupational diseases accompanied the diagnosis of occupational and work-related diseases; and emergency preparedness and first aid treatment were complemented by the development of medical records.
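The mapping between the four transformation rationales and the resulting OHS activities can be written out explicitly; in the sketch below the labels come from the text, while the dictionary encoding is hypothetical.

```python
# An illustrative encoding of the in-plant BOHS model of Section 4.5: each
# transformation rationale (inner rectangles in the figure) maps to the OHS
# activities it produced (outer rectangle).
BOHS_MODEL = {
    "education and learning experience": [
        "early detection system", "medical surveillance"],
    "workers' occupational health needs": [
        "management of medical examination results",
        "return-to-work evaluation", "first aid and emergency treatment"],
    "employer's experience": [
        "fit-for-work evaluation", "medical evaluation", "emergency preparedness"],
    "feedback from the OHS provider": [
        "health promotion", "OHS in the plant", "recordkeeping"],
}

for rationale, activities in BOHS_MODEL.items():
    print(f"{rationale} -> {', '.join(activities)}")
```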
4.6. Multidisciplinary Staff

Multidisciplinary teams should have clearly defined roles and collaborate on the various tasks required to provide BOHS in the workplace. Employee education and prevention are the responsibility of occupational health and safety professionals, such as industrial hygienists, industrial engineers, and safety professionals. Safety professionals are responsible for developing procedures, standards, and systems to control and reduce hazards and exposure. Only health professionals, on the other hand, can treat illness and injury beyond first aid. Occupational physicians and occupational health nurses certified in occupational medicine have the skills and competencies, through training in epidemiology, toxicology, industrial hygiene, the recognition and management of occupational illnesses and injuries, research, and general management, to run a comprehensive occupational health program. Occupational health professionals are responsible for occupational health activities; responsibility for safety activities is shared with the safety officer. Occupational health providers such as physicians, nurses, and safety officers are governed by law; by law, the occupational physician is responsible for the preplacement examination, the periodic examination, and return to work.

4.6.1. Occupational Physicians

Currently, occupational physicians are not regulated as workplace consultants or for workplace examinations, because the law states that an agreement can be made with a nearby hospital for treatment instead of workplace examinations. According to the findings of this study, unless an examination is conducted as required by law, the core competency of an occupational physician in BOHS lies specifically in preventive functions, such as fit-for-work assessment, return-to-work assessment, medical surveillance through the design of a medical surveillance program, and a first aid room for medical emergency preparedness and health promotion.

4.6.2. Occupational Health Nurses

The law regulates the number of nurses, not occupational health nurses specifically, in the workplace, and no one is made responsible for the OHS function. Nurses in the workplace are responsible for first aid and emergency treatment. However, they do not have a preventive function, owing to a lack of interaction with managers and the safety committee and a lack of authority to effectively recommend appropriate preventive measures, such as worker counseling and health education programs.

4.6.3. Safety Officers

Both the number of safety officers and their duties in the workplace are regulated by law, and the duties of safety officers have been defined in terms of both safety and occupational health. According to a study of 26 necessary competencies and the proficiency of safety officers in Thailand, employers expected safety officers to perform both safety and occupational health activities. According to the current study's findings, occupational health activities require the recommendations of both occupational health professionals and safety officers to achieve practical implementation.
The policy includes six items, for example that the enterprise will: (1) strictly follow all the relevant laws, regulations, and safety and environment standards; (2) continuously improve the safe work process in order to achieve a safe and healthy working environment; (3) establish a safety, environment, and energy committee (SEE committee); (4) promote, support, train, and assess the risk of safety, occupational health, and the environment to encourage all levels of employees to be aware of working; (5) require all supervisors to supervise work, give advice, coach, and be a role model; and (6) communicate company policies through activities involving employees, stakeholders, and nearby communities. The policy of this company is consistent with all applicable laws and regulations and is comparable to ILO Convention C161. The previous study found the need to maintain and strengthen the worker health and safety policy aiming at the promotion and protection of the worker’s health and consequently, the implementation of positive impact strategies . 4.2.2. Functions Most activities of occupational health, safety, and the environment in 2022 follow policies related to safety activities due to regulations through laws. OHS activities were organized to promote safety activities, including periodic examinations, medical treatment, and prevention. Compared to BOHS, fit for work was changed from not being job specific to a specific job in each department. The list of preplacement examinations was changed from the same medical examination for all workers to a specific department. Return to work was newly developed. Medical surveillance was changed from periodic examination to a medical surveillance program. The first aid room was changed from acute care to first aid, emergency treatment, and emergency preparedness. 4.2.3. Organization The employers and employees were added to the legal and regulatory provisions to establish OHS. Some hospital-based OHS functions were transferred to an in-plant OHS provider. The employers and employees collaborated in PAR to develop the OHS. 4.2.4. Conditions of Operation The responsibilities of the safety officers in all activities were changed to the responsibilities of other personnel in the working process. The workers were informed of the health risks and evaluated the relationship between the risks and their work. A new manual was developed in each department to identify the relationship between hazards and health effects. OHP assessed the association between workers, hazards, and health. 4.2.5. General Provisions Due to the lack of authority at the intermediate level for supervising operations and advising on occupational health services, the company resolved these issues by consulting occupational health professionals who worked at the hospital’s occupational medicine clinic to provide occupational health services, such as health promotion and consultation of abnormal medical examination results. The policy of this company is that safety, occupational health, and the environment are the priorities. 
The policy includes six items, for example that the enterprise will: (1) strictly follow all the relevant laws, regulations, and safety and environment standards; (2) continuously improve the safe work process in order to achieve a safe and healthy working environment; (3) establish a safety, environment, and energy committee (SEE committee); (4) promote, support, train, and assess the risk of safety, occupational health, and the environment to encourage all levels of employees to be aware of working; (5) require all supervisors to supervise work, give advice, coach, and be a role model; and (6) communicate company policies through activities involving employees, stakeholders, and nearby communities. The policy of this company is consistent with all applicable laws and regulations and is comparable to ILO Convention C161. The previous study found the need to maintain and strengthen the worker health and safety policy aiming at the promotion and protection of the worker’s health and consequently, the implementation of positive impact strategies . Most activities of occupational health, safety, and the environment in 2022 follow policies related to safety activities due to regulations through laws. OHS activities were organized to promote safety activities, including periodic examinations, medical treatment, and prevention. Compared to BOHS, fit for work was changed from not being job specific to a specific job in each department. The list of preplacement examinations was changed from the same medical examination for all workers to a specific department. Return to work was newly developed. Medical surveillance was changed from periodic examination to a medical surveillance program. The first aid room was changed from acute care to first aid, emergency treatment, and emergency preparedness. The employers and employees were added to the legal and regulatory provisions to establish OHS. Some hospital-based OHS functions were transferred to an in-plant OHS provider. The employers and employees collaborated in PAR to develop the OHS. The responsibilities of the safety officers in all activities were changed to the responsibilities of other personnel in the working process. The workers were informed of the health risks and evaluated the relationship between the risks and their work. A new manual was developed in each department to identify the relationship between hazards and health effects. OHP assessed the association between workers, hazards, and health. Due to the lack of authority at the intermediate level for supervising operations and advising on occupational health services, the company resolved these issues by consulting occupational health professionals who worked at the hospital’s occupational medicine clinic to provide occupational health services, such as health promotion and consultation of abnormal medical examination results. 4.3.1. PAR Process of Loop 1 The analysis of the problem and cause by the FGDs and participant observation consisted of constructing a research question tool and passing a validity test by three occupational physicians before FGDs. Inductive analysis was applied to the problems and causes by evaluating, interpreting, discovering links, and classifying common coding into six key themes in . The process of developing action plans for resolving problems was found through FGDs and the participant observations, based on six key problem themes in . 
Implementation in conjunction with the medical surveillance program includes: (1) a walk-through survey; (2) identifying hazards; (3) industrial hygiene data; (4) significant exposure and health risk assessment; (5) the design of medical surveillance comprising history, physical examination, and biomarkers of exposure and effect; and (6) medical examination . Following the identification of the issues, action plans were developed. The problems, action plans, implementation, and results are shown in .

4.3.2. PAR Process of Loop 2
The limitations of the sub-branch, the facility's discomfort, and the document's inadequacy were recognized through a focus group discussion during the evaluation phase. The development of action plans was carried out in response to the issues listed in .

4.4.1. Fit-for-Work Model Development
OHS activities related to fit-for-work evaluation were developed in five respects: indication, medical evaluation, medical certificate, assessor, and medical evaluation result. The indication was extended to include job transfers in addition to new workers. The medical evaluation was transformed from the same medical examination regardless of department to a new department-specific preplacement evaluation by an occupational physician, including a preplacement medical evaluation for specific tasks that may endanger workers' health, e.g., heat and a hot environment, in line with the reasons for performing a fit-for-work evaluation, including preplacement and job transfer . The general medical certificate was changed to a department-specific fit-for-work certificate form, including a review of the medical history, a general physical examination, and laboratory tests. Fitness for work was assessed by the occupational physician instead of any doctor. Finally, the result of the medical evaluation was altered from no result after assessment to differentiated fit-for-work opinions, including fit for duty, unfit, and fit subject to accommodation, because the company could provide workers with jobs in other departments as recommended . The development of the fit-for-work model is summarized in .
4.4.2. Return to Work Model Development
As a result of legislation, the criteria for when a worker should undergo a return-to-work evaluation were changed from "those who had an injury or illness and were on sick leave for more than three days" to "those who had a chronic illness with medical restrictions such as heart disease, lung disease, or brain disease , were admitted to the hospital after surgery, frequently took sick days, or were on sick leave for more than three days", along with after a prolonged absence for health reasons, severe illness, or injury, in line with the reasons for performing a fit-for-work evaluation . No change was made to the employee's return-to-work assessment after the occupational physician detected the indication. The evaluation of returning to work is the responsibility of the occupational physician. A new return-to-work form was created to include additional documentation for medical review. After a return-to-work assessment, the fit-for-work designation was altered from "no management" to temporarily or permanently "fit," "unfit," "fit with restrictions," and "fit with limitations," along with recommending appropriate action to protect the workers and determining the worker's suitability for the job and needs for reassignment, with the fit-for-work opinion including fit for duty, unfit, and fit subject to accommodation . The fit note should recommend fitness-for-work advice, work modifications, work solutions, adjustments to working hours and duties, and equipment, as in previous studies . The return-to-work location was relocated from the hospital to the enterprise with access to occupational health services. The referral form was improved. The development of the return-to-work model is summarized in .

4.4.3. Medical Surveillance Model Development
After medical evaluations did not improve, a new medical surveillance program was created. Occupational physicians conducted walk-through surveys, identified hazards, and assessed health risks. A manual of hazards and health effects in each department was developed and distributed to workers to monitor early signs of work-related or occupational diseases. No changes were made to the preplacement examination, so a physician developed new baselines for each department. The new medical surveillance program comprised history taking, physical examination, and biological monitoring of both exposure and effect . An occupational physician and nurse inform workers of their medical examination results with interpretation and management, rather than through the medical record book. An occupational physician was responsible for confirming diagnoses, determining possible occupational causes, recommending appropriate action, and determining the worker's suitability for the job . The participation of the occupational physician was motivated by the implementation of workers' health surveillance . As in a previous study, medical surveillance contributed to the early identification of diseases, whether related to work or not; it was carried out by an occupational physician who examined employees at specific times, such as periodic examinations and leaves of absence or changes of function . The justifications for transformation started with education and learning experience, which improved workers' knowledge for identifying workers' occupational health needs. Moreover, feedback on the results was an important process for implementing BOHS, as in a previous study . The development of the medical surveillance model is summarized in .
Compared with the elements of a medical surveillance program, it comprises: 1. a walk-through survey; 2. known hazards; 3. area measurement or personal sampling; 4. an action level or health risk assessment; 5. the design of medical surveillance programs; 6. medical examinations at regular intervals; 7. the provision of information to employees; 8. the interpretation of the ongoing data analysis of the test; 9. medical removal; 10. a written report; 11. re-evaluation of the employee's work environment as necessary; 12. medical record keeping; 13. audits; and 14. employer actions . The new activities were developed according to the components of a medical monitoring program listed in the brackets in .

4.4.4. First Aid Room Model Development
The steps in the development of the first aid room model followed first aid and the risk management process, consisting of: (1) identifying potential causes or needs assessment; (2) assessing the workplace risk; (3) fixing the problems of first aiders and first aid procedures; and (4) reviewing the effectiveness of first aid . The 31 first aiders were trained in first aid practices to respond to life-threatening emergencies through a basic life support (BLS) training program (approximately one first aider for every 47 employees) . The registered nurse employed was responsible for supervising first aid and maintaining the first aid facilities . By applying a triage system in the first aid room procedure and identifying conditions for hospital referral, a new emergency condition assessment was developed, where previously there had been no severity assessment in the first aid room and no identified conditions for hospital referral. The nurse also became responsible for assessing clinical improvement after a worker's illness and treatment, where previously there had been no assessment after treatment. The BLS training program was incorporated into the development of the new emergency plan. A health promotion program was developed after analyzing the results of the medical examinations as part of the problem-solving procedure. The development of the first aid room model is summarized in .
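The first-aider coverage reported here lends itself to a quick arithmetic check. The sketch below (Python) infers the approximate workforce size from the stated ratio, since the headcount itself is not given in the text and should be read as an illustrative assumption.

```python
# Back-of-the-envelope check of the first-aider coverage reported above.
# The workforce size is not stated in the text; it is inferred from the
# reported ratio (one first aider per ~47 employees).

FIRST_AIDERS = 31
EMPLOYEES_PER_FIRST_AIDER = 47

implied_workforce = FIRST_AIDERS * EMPLOYEES_PER_FIRST_AIDER
print(f"Implied workforce: ~{implied_workforce} employees")  # ~1457

def first_aiders_needed(workforce: int,
                        ratio: int = EMPLOYEES_PER_FIRST_AIDER) -> int:
    """Minimum first aiders keeping coverage at or below `ratio` employees each."""
    return -(-workforce // ratio)  # ceiling division

print(first_aiders_needed(implied_workforce))  # 31
```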
The PAR process enabled the participants to share their experiences and collaborate on developing an organizational model for BOHS . An important element of PAR enabling BOHS development was that participants perceived the need to change and were willing to participate in change . The study used education and learning experience to improve occupational health service development , as well as workers' occupational health needs, employers' experience, and feedback from occupational health service providers, to justify this process. A PAR cycle was developed to describe the process, which begins with a situation analysis and concludes with an evaluation of the replanning to ensure sustainability, as depicted in . The education and learning experience enabled workers to help identify problems in the PAR process . Previous studies found that developing an occupational health culture among workers, creating awareness, establishing existing structures and procedures, and training by both needs assessment and evaluation together are crucial for successful training and long-term sustainable improvements . The PAR process was the tool for the key elements of BOHS development, which corresponded to the key elements of successful health and safety management: policy, organization, planning, implementation, feedback to enhance BOHS, and auditing .
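As a rough illustration of the cycle just described, the sketch below (Python) walks the two PAR loops through the phases named in the text; the issue lists are paraphrased from the loop descriptions above, and the exact phase ordering is an assumption made for illustration.

```python
# Schematic walk-through of the two-loop PAR cycle described above.
# Phase names follow the narrative (situation analysis -> ... -> replanning);
# this ordering is an assumption, not a formal model from the study.

PAR_PHASES = (
    "situation analysis",
    "action planning",
    "implementation",
    "evaluation",
    "replanning",
)

def run_par_loop(loop_no: int, issues: list[str]) -> None:
    """Print the phases a PAR loop passes through for a given set of issues."""
    print(f"Loop {loop_no}: issues carried in -> {', '.join(issues)}")
    for phase in PAR_PHASES:
        print(f"  - {phase}")

run_par_loop(1, ["six key problem themes from FGDs and observation"])
run_par_loop(2, ["sub-branch limitations", "facility discomfort", "document inadequacy"])
```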
There is an urgent need for community-based strategies that build local agency in the process of describing relevant issues and identifying acceptable solutions, while building towards sustainable policy change over time . In the figure, the four rectangles in the middle represent the rationale for the transformation, while the outer rectangle represents the resulting OHS activities. After education and learning experience, a system for early detection and medical surveillance was developed. OHS activities, including management of medical examination results, return-to-work evaluation, and first aid and emergency treatment, were developed in response to occupational health needs. The employer's experience led to OHS activities that included fit-for-work evaluation, medical evaluation, and emergency preparedness. In response to feedback from the OHS provider, OHS activities were developed that included health promotion, OHS in the plant, and recordkeeping. Following the steps in the BOHS activity process, education and learning experience in the PAR process went along with information and education; medical surveillance consisted of both work environment surveillance and workers' health surveillance; early detection of work-related or occupational diseases went along with the diagnosis of occupational and work-related diseases; and emergency preparedness and first aid treatment were accompanied by the development of medical records . Multidisciplinary teams should have clearly defined roles and collaborate on various tasks to provide BOHS in the workplace. Employee education and prevention are the responsibility of occupational health and safety professionals, such as industrial hygienists, industrial engineers, and safety professionals. Safety professionals are responsible for developing procedures, standards, and systems to control and reduce hazards and exposure. Only health professionals, on the other hand, can treat illness and injury beyond first aid. Occupational physicians and occupational health nurses certified in occupational medicine have the skills and competencies from educational training in epidemiology, toxicology, industrial hygiene, recognition and management of occupational illnesses and injuries, research, and general management of a comprehensive occupational health program . Occupational health professionals are responsible for occupational health activities. The responsibility for safety activities is shared with the safety officer. Laws govern occupational health providers, such as physicians, nurses, and safety officers. According to law, the occupational physician is responsible for the preplacement examination, the periodic examination, and return to work .

4.6.1. Occupational Physicians
Currently, occupational physicians are not regulated as consultants at work or for workplace examinations, because the law states that an agreement can be made with a nearby hospital for treatment instead of workplace examinations . According to the findings of this study, unless an examination is conducted following the law, the core competency of an occupational physician in BOHS lies specifically in preventive functions such as fit-for-work assessment, return-to-work assessment, medical surveillance through the design of a medical surveillance program, and a first aid room for medical emergency preparedness and health promotion .

4.6.2. Occupational Health Nurses
The law regulates the number of nurses, not occupational health nurses, in the workplace, and no one is responsible for the OHS function.
Nurses in the workplace are responsible for first aid and emergency treatment. However, they do not have a preventive function due to a lack of interaction with managers and the safety committee, and a lack of authority to effectively recommend appropriate preventive measures such as worker counseling and health education programs .

4.6.3. Safety Officers
Both the number of safety officers and their duties in the workplace are regulated by law. The duties of safety officers have been defined in terms of safety and occupational health . According to a study of the 26 necessary competencies and the proficiency of safety officers in Thailand, employers expected safety officers to perform safety and occupational health activities . According to the current study's findings, occupational health activities require recommendations from occupational health professionals as well as safety officers to achieve practical implementation .

5.1. Conclusions
PAR improves the basic occupational health service model by bringing stakeholders together to identify needs and experiences, develop action plans, and implement solutions. The study's findings include a better understanding of the problem and its causes in Thai enterprises and suggestions for future development in similar settings. The limitations of creating OHS are shown in : there is (a) no ratification of ILO C161, (b) no responsible organization to provide educational and training programs for the qualification, certification, and competencies of occupational health professionals, (c) no law-regulated occupational health service standards, (d) no responsible organization that has implemented quality evaluation methods for OHS in enterprises, and (e) a misunderstanding that the provision of basic occupational health services is the responsibility of safety officers.
The organizational factors required for the sustainable development of OHS are shown in : (a) policy support: OHS activities were carried out according to OHS policy; (b) the employer's provision: the employer was aware of fundamental occupational health issues and addressed their root causes; (c) education and learning experiences: crucial tools to empower personnel in OHS development to identify occupational health problems and occupational health needs ; and (d) OHS development planning and continuity evaluation.

5.2. Recommendations
The recommendations address the limitations on developing OHS at the national and enterprise levels. Limitations to creating a national OHS include (a) no ILO C161 ratification, (b) no responsible organization to provide educational and training programs for occupational health professionals' qualification, certification, and competencies, (c) no occupational health service standards regulated by law, (d) no responsible organization that has implemented quality evaluation methods for OHS in enterprises, and (e) a misunderstanding that providing basic occupational health services is the responsibility of safety officers. However, the future of OHS laws remains unpredictable, and enterprise-level policy may be an additional OHS provision at work. BOHS can be developed through the organizational factors required for its sustainable development: (a) policy support: OHS activities are conducted according to OHS policy; (b) employer provision: the employer is consciously aware of fundamental occupational health issues and resolves their root causes; (c) education and learning experiences: crucial tools to empower personnel in the development of OHS for identifying occupational health problems and occupational health needs ; and (d) OHS development planning and continuity evaluation. Recommendations for the enterprise include: (a) developing BOHS following the ILO C161 Convention under the policy; (b) engaging the hospital's occupational medicine clinic to provide and counsel on the development of occupational health services; (c) conducting internal audits to ensure continuous development of OHS; and (d) identifying OHP duties in the working process. According to the stepwise development of occupational health services, step II is a BOHS infrastructure that varies according to local conditions and needs for developing BOHS content. The occupational physician and occupational health nurse provide BOHS with the support of a safety officer with knowledge and experience in accident prevention and basic safety . National recommendations include: (a) ratifying ILO C161; (b) establishing responsible organizations for training, qualification, and certification; (c) implementing national OHS laws and standards following the ILO C161 Convention, which clarifies occupational health service functions and occupational health professionals' duties; and (d) strengthening and standardizing auditing organizations, such as the Healthcare Accreditation Institute (a public organization) that audits hospital settings. According to the stepwise development of occupational health services, step III, international standard service, is the minimum objective for each nation as mandated by the ILO C161 Convention. The content of OHS is predominantly preventive, although curative services can also be provided appropriately.
Multidisciplinary personnel, especially occupational physicians, should have specialized training from specialized units (such as an institute of occupational health) .
Environmental Monitoring of
Legionella is an aerobic, non-spore-forming, and Gram-negative pathogen , which was discovered in 1976 in Philadelphia following an outbreak of cases of pneumonia in a hotel . Over 65 species belong to this genus , about 20 of which can cause disease in humans . Moreover, Legionella pneumophila includes 15 serogroups, and serogroup 1 is the most dangerous to humans. Thus, the attention focused on this serogroup is high . Furthermore, previous studies demonstrated human infections with serogroup 3 [ , , ], serogroup 9 , and serogroup 6 . Favourable factors to the growth of this bacterium are temperatures between 25 °C and 45 °C (but ranges between 5.7 °C and 63.0 °C can determine its survival); stagnant water , in which Legionella in amoebae copiously reproduces ; low flow ; pH values between 5.5 and 9.2; biofilm and protozoa ; inorganic elements (iron, zinc and potassium) and organic and inorganic compounds ; existence of water systems of old facilities ; dead branches in complex water structures ; low total chlorine levels ; and low free residual chlorine levels . In addition, man-made disasters , natural disasters, flooding , water system interruptions, changes in disinfection methods of water systems, and water network breaks are risk factors for Legionella growth. Construction activities (such as demolition, repressurization, excavation, underground utility connections, commissioning at building opening, and water efficiency challenges) have been associated with healthcare-associated Legionella infections and deaths . Hospitals need a water safety plan (WSP) to control Legionella proliferation, which includes the implementation of safety measures to ensure the water quality of facilities during demolition and construction activities and reduce the risk of exposure of patients . Furthermore, hospitals need a WSP not only during construction or demolition activities. In fact, the World Health Organization (WHO) in 2004 recommended the creation of WSPs by all water suppliers: each of them should engage a group of water experts to assess the risks associated with water exposure, develop and implement strategies to prevent damage to public health, and evaluate the strategies’ effectiveness . Such a plan aims to minimize colonization of Legionella from the source of water supply to the devices in contact with users . Chlorine-based disinfection is the most common active measure used worldwide, and in Italy, against Legionella growth and spread in building water systems . Levels of free residual oxidant (FRO), within 0.20 ppm and 4.0 ppm in potable water, have been correlated with reduced risks of growth and spread of disease outbreaks . The presence of Legionella can be detected in soil and water sources, such as showers, hot tubs, air-conditioning systems , cooling towers, whirlpools, baths, fountains, ice machines, medical equipment such as aerosols from respiratory devices, eye-wash stations, dental units , hot-water recirculation systems , hot- and cold-water systems, faucets , heater–cooler units, and heater units for cardiac procedures . Legionella infection occurs through inhalation or aspiration of droplets released from contaminated water . In addition, potting soil can also transmit the bacterium, but the mechanism still remains unknown . People who are most affected by Legionella infections are smokers , alcohol abusers , males, those of advanced age , and people with previous diseases, such as acquired immunodeficiency syndrome, hematologic malignancy , and diabetes mellitus . 
Consequently, it is necessary to monitor the presence of Legionella in hospitals as these are places where immunocompromised patients reside . In healthcare settings, the presence of Legionella has been linked to contaminated water reservoirs, cooling towers and air-conditioning systems [ , , ]. The occurrence of Legionella spp. in the water distribution systems of hospitals and healthcare facilities is a possible concern for hospital populations, due to the vulnerable nature of patients admitted to specific wards, including intensive care, hematology, cardiology, hemodialysis, and pulmonology . The objective to be achieved in healthcare facilities is to minimize colonization by Legionella or, in the case of facilities housing immunocompromised individuals, its total absence (not detectable with the analytical method used) . Legionella infection may have several consequences: it may cause Legionnaires’ disease (LD) (a severe pneumonia), Pontiac fever (a flu-like condition), or it may remain asymptomatic. Because of the lack of specific symptoms and the absence of severity associated with this entity, Pontiac fever is often undiagnosed and under-notified . There is a difference between patients who are hospitalized for LD and others who acquire the infection in hospital due to susceptible medical conditions . LD can be the cause of CAP (community-acquired pneumonia) and pneumonias that are acquired in situations other than travel, domestic environments, and hospitals, and it can be the cause of up to 30% of these types of pneumonias that require hospitalization . Legionella infection can also be acquired in hospitals (hospital-associated infection): individuals most at risk from such an infection are those with prior situations that render them vulnerable, such as immunodeficiency, stem cell or organ transplantation, and obstructive pulmonary disease . Legionella infections are becoming a public health problem due to their incidence and costs . In order to prevent and control Legionella infections sourced from the colonization of water systems, many countries have developed guidelines or regulations. Several organizations, including the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the WHO, the Centers for Medicare and Medicaid Services (CMS), and the Centers for Disease Control and Prevention (CDC), recommend the creation of water management programs aimed at preventing the growth and spread of Legionella . In the COVID-19 era, it was reported that 20% of patients had a Legionella co-infection during hospitalization . COVID-19 patients have an increased risk for both hospitalization and residual lung impairment . In 2021, the number of Italian individuals affected by Legionellosis was 2726, and its incidence was 46.0 cases per million population . The incidence increased compared to 2020, when the value was 34.3 cases per million population. Of the 2726 notified cases, 83.6% had a community origin, 9.4% were associated with travel, 3.7% had a nosocomial origin, 3.1% were associated with closed communities (nursing homes for elderly people, and healthcare or rehabilitation facilities), and 0.2% had another exposure (prison or communities). Infections of nosocomial origin increased from 68 to 102 cases from 2020 to 2021 . However, Legionella infections are underestimated, and it is reported that less than 5% are diagnosed . 
Legionellosis is a condition that can be avoided if the bacterium is not present in the environment, so monitoring and preventing its presence is important . Environmental monitoring of Legionella is an approach performed on several occasions in hospitals in Italian regions by Deiana et al. , Ditommaso et al. , Vincenti et al. , De Giglio et al. , Laganà et al. , Arrigo et al. , Pasquarella et al. , and Torre et al. ; in non-hospital facilities by Totaro et al. , Sabatini et al. , and De Filippis et al. ; and in both hospital and non-hospital facilities by Felice et al. , Leoni et al. , and Mazzotta et al. . Moreover, several studies were also conducted using air samples by Montagna et al. [ , , ]. To our knowledge, very few studies performed environmental monitoring of Legionella in the Campania region, Southern Italy. In particular, a study by Torre et al. monitored the bacterium in 50 hospitals in the Campania region during the period 2008–2014 . As the new Italian National Guidelines were published in 2015, it was decided to conduct this study to assess the effectiveness of the application of the new Guidelines in preventing Legionella colonization in water systems. Thus, the aim of this study was to analyze hospital environmental monitoring of Legionella in the Campania region over the 5-year study period. In detail, the purposes of the study were (i) to evaluate the presence of Legionella in tested water samples; (ii) to estimate the prevalence of species and of individual L. pneumophila serogroups; and (iii) to assess the influence of several parameters, such as temperature and free residual chlorine, on the presence of this bacterium.
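Before turning to the methods, the growth conditions summarized in this introduction can be expressed as a simple screening rule. The sketch below (Python) uses the ranges cited above (25–45 °C growth optimum, pH 5.5–9.2, and the 0.20–4.0 ppm free-chlorine band); the thresholds and sample readings are illustrative, not regulatory criteria.

```python
# A minimal screening sketch flagging water readings that fall in the
# Legionella-favourable ranges cited above (growth optimum 25-45 °C,
# pH 5.5-9.2, low free residual chlorine). Illustrative only.

def growth_favourable(temp_c: float, ph: float, free_cl_mg_l: float) -> bool:
    """True when all three parameters sit in ranges favourable to growth."""
    in_temp_window = 25.0 <= temp_c <= 45.0
    in_ph_window = 5.5 <= ph <= 9.2
    low_chlorine = free_cl_mg_l < 0.20   # below the 0.20-4.0 ppm band cited above
    return in_temp_window and in_ph_window and low_chlorine

# Hypothetical readings: (temperature °C, pH, free chlorine mg/L)
for reading in [(38.0, 7.4, 0.05), (60.0, 7.4, 0.05), (38.0, 7.4, 0.50)]:
    print(reading, "->", "flag" if growth_favourable(*reading) else "ok")
```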
2.1. Study Area and Hospital Characteristics
The water sample collection was conducted from January 2018 to December 2022 in two provinces of the Campania region, Southern Italy (Naples and Caserta) ( ). The samples were collected in hospitals that ask us to routinely perform Legionella testing every six months as part of WSPs, as required by the Italian regulatory system for validation of a well-maintained building water system. The water samples were collected from 26 hospitals, all of which possessed the following characteristics: a single building, construction between 1970 and 1980, a maximum of five floors, and a number of beds served ranging between 20 and 200. These hospitals all have WSPs in place to respond to positive samples. We monitored 26 hospitals compared to the 50 hospitals in the study by Torre et al. , because some of the 50 hospitals were excluded since monitoring had not been carried out for all the years of the study, or these facilities were not comparable to the others (in terms of number of beds served, number of buildings, number of floors, construction period, or water treatment disinfection methods). Water provided to the hospitals (which contains free chlorine as the residual drinking water disinfectant) comes from the public supply system of the cities of Naples and Caserta, and it reaches the hospitals via a single pipeline. The hospital sites within the present study included buildings hosting patients considered at increased risk (Medicine, Pneumology, Geriatrics, Surgeries, etc.) over the study period, inclusive of the time during the COVID-19 pandemic. Samples were collected every six months, as required by the Italian National Guidelines for hospitals with this type of ward. If a sample tested positive, according to the Italian National Guidelines, the sampled point was decontaminated and re-sampled 1, 3, and 6 months later (these mandatory follow-up samples and the corresponding data were excluded from the present study's analysis, as the purpose of the work was to monitor Legionella during routine monitoring activities). Therefore, we included the initial positive samples and excluded the follow-up samples. The water treatment disinfection methods used in these hospitals are based on the application of hypochlorite.

2.2. Sample Collection
We chose the sampling locations described in the Italian National Guidelines . In addition, this study used sampling locations (i.e., faucets, showers, and tank bottoms) within the building premise plumbing system similar to those of the study by Torre et al. . Beyond these locations, our study also analyzed samples collected from the cold-water circuit, as mentioned in the new Guidelines. In detail, for each hot-water system, as described in the Italian National Guidelines , sampling was carried out at the following locations: supply, recirculation, and tank bottoms, with at least 3 representative points (furthest in the water distribution and coldest). Tank bottom refers to a hot- or cold-water storage tank. For each cold-water system, sampling was carried out at tank bottoms, with at least 2 representative points (furthest in the water distribution and hottest). In addition, water was collected from air-treatment units (ATUs). Following the observation of the positivity rate for Legionella , it was decided to increase the sampled points.
The COVID-19 pandemic prompted us to further increase the number of samplings, and with its end, we decided to decrease the number of sampled points. Of the total of 3365 water samples, 2065 originated from manual taps and showers (1485 from hot water and 580 from cold water), 780 from tank bottoms (520 from hot water and 260 from cold water), and 520 from ATUs. The water samples were collected during the day (later in the morning, when the hospital was already in action). There were no point-of-use (POU) filters at any sampling point. According to the Italian National Guidelines, 1 L of water was collected by the laboratory staff in sterile polyethylene bottles enriched with 0.01% sodium thiosulfate to neutralize chlorine action . All samples were collected without flushing and without flaming or disinfecting the point of discharge, to simulate the common use of water, i.e., the exposure of a user. The water samples were univocally identified on a spreadsheet at the time of collection. At sampling, water temperature and residual chlorine were measured by our laboratory staff. Water temperature (expressed in °C) was obtained using a calibrated thermometer (TFA Digitales Einstichthermometer, TFA-Dostmann, GmbH & Co. KG, Wertheim-Reicholzheim, Germany), and free residual chlorine (expressed in mg/L) was monitored using a colorimetric diethyl-p-phenylenediamine (DPD) method (MQuant; Merck, Darmstadt, Germany). In detail, the test was based on a semi-quantitative measurement of free chlorine by visual comparison of the color of the measurement solution with a set of colors contained in a color card comparator. The samples were transported at room temperature, divided between hot- and cold-water samples, and protected from light. Calibrations (kits and equipment) and microbiological analyses were performed in our laboratory, which is accredited according to ISO 17025 and periodically performs proficiency tests. No samples were damaged during transport or showed a suspect appearance, such as unusual coloring or the presence of sediment or soil. Therefore, all collected samples were included and analyzed in the study.

2.3. Microbiological Analysis and Identification
The microbiological analysis was conducted within 2 hours after the collection of the samples, in accordance with UNI EN ISO 11731:2017 . Briefly, the samples were filtered on polycarbonate membrane filters with a pore size of 0.2 µm (Sartorius). The membrane was placed in 10 mL of the original water sample and vortexed. A total of 200 µL of water previously treated at (50 ± 1) °C for (30 ± 2) min and the same volume of untreated water were inoculated on Petri plates containing Legionella Agar Base (Oxoid) medium supplemented with Legionella Growth Supplement (BCYE) (Oxoid) and Legionella Selective Supplement (GVPC) (Oxoid). The plates were incubated at (36 ± 2) °C with 2.5% CO2 under a humid atmosphere. After 10 days, the presence or absence of colonies was evaluated. The presumed Legionella colonies were cultured both on BCYE agar and on BCYE agar without L-cysteine (BCYE-cys agar). Growth of colonies on the BCYE agar but not on the BCYE-cys agar indicated Legionella positivity of the samples. Determination of species and serogroups was conducted by the latex agglutination test (Oxoid) and the anti-Legionella pneumophila monovalent serum (Biogenetics). The colony counts were reported in terms of CFU (colony-forming units)/L. The test results were sent to the hospitals.
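The scaling arithmetic implied by these volumes (1 L filtered, membrane resuspended in 10 mL, 200 µL plated) can be made explicit. The sketch below (Python) assumes simple linear scaling from plate to sample and ignores the duplicate heat-treated/untreated plating, so it is an illustration rather than the ISO counting procedure; note that it reproduces the 1.70 Log10 CFU/L minimum reported in the Results.

```python
# A minimal sketch of the plate-count arithmetic implied by the volumes
# described above. The exact counting rules of UNI EN ISO 11731:2017 are
# not reproduced here; this is an illustration only.
import math

SAMPLE_VOLUME_L = 1.0   # volume filtered
CONCENTRATE_ML = 10.0   # membrane resuspension volume
PLATED_ML = 0.2         # 200 µL inoculated per plate

def cfu_per_litre(colonies_on_plate: int) -> float:
    """Scale a plate count back to the original 1 L sample."""
    fraction_plated = PLATED_ML / CONCENTRATE_ML   # 0.02 of the concentrate
    return colonies_on_plate / (fraction_plated * SAMPLE_VOLUME_L)

# One colony on a plate corresponds to 50 CFU/L, i.e. ~1.70 Log10 CFU/L,
# matching the minimum positive value reported in the Results.
print(cfu_per_litre(1))                         # 50.0
print(round(math.log10(cfu_per_litre(1)), 2))   # 1.7
```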
2.4. Statistical Analysis
A descriptive statistical analysis was performed as previously described . For the bacterial load values of the positive samples, it included the geometric mean (Log10 CFU/L), standard deviation, median, percentile range, and interquartile range . This analysis was conducted using Microsoft Excel. Normality tests were conducted using the Shapiro–Francia test to check the data distribution. The non-parametric Mann–Whitney U test was used to determine the associations between the presence of Legionella and residual chlorine (expressed in mg/L) and water temperature (expressed in °C) . The statistical results were interpreted at the significance level of p < 0.05. The χ² statistic was calculated using the Doornik–Hansen test. Multiple linear regression analysis (MLRA) was used to confirm the results of the Mann–Whitney U test. The statistical calculations were performed using the STATA MP v14.0 statistical software program (College Station, TX, USA).
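For readers who want to reproduce the flavour of this analysis outside STATA, the sketch below (Python with SciPy) runs the descriptive summary and the Mann–Whitney U test on synthetic data; the Shapiro–Francia and Doornik–Hansen tests have no direct SciPy equivalent and are omitted, and all values are illustrative.

```python
# A sketch of the descriptive and non-parametric analyses described above,
# on synthetic data (the study's raw data are not reproduced here).
import numpy as np
from scipy.stats import gmean, mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical residual chlorine values (mg/L) split by culture result.
chlorine_positive = rng.lognormal(mean=-1.6, sigma=0.4, size=120)
chlorine_negative = rng.lognormal(mean=-0.9, sigma=0.4, size=400)

# Descriptive statistics of a hypothetical positive-sample bacterial load,
# summarized on the Log10 CFU/L scale as in the paper.
cfu_per_l = rng.lognormal(mean=7.0, sigma=1.0, size=120)
log_loads = np.log10(cfu_per_l)
print(f"geometric mean: {np.log10(gmean(cfu_per_l)):.2f} Log10 CFU/L")
print(f"median: {np.median(log_loads):.2f}, IQR: "
      f"{np.percentile(log_loads, 25):.2f}-{np.percentile(log_loads, 75):.2f}")

# Mann-Whitney U: does chlorine differ between positive and negative samples?
stat, p = mannwhitneyu(chlorine_positive, chlorine_negative, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3g}")
```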
3.1. Positivity of Analyzed Samples
During the 5-year study period, the laboratory analyzed a total of 3365 water samples. Among these, positivity for Legionella was detected in 708 samples, representing 21.0% of the total. The number of samples per year, along with the number of positive samples per year, is shown in and . Specifically, the number of positive samples decreased from 164 (34.2%) to 111 (14.7%). shows the number of total samples, the number of positive samples, and the percentage of positivity for the different sampling locations. Of the total number of cold-water samples (840), 37 (4.4%) tested positive; 11 of these originated from tank bottoms and 26 from taps and showers. Of the 2005 hot-water samples, 652 (32.5%) were positive; 143 came from tank bottoms and 509 from taps and showers. In the ATU samples, a positivity rate of 3.7% was observed (19 positive samples out of a total of 520).

3.2. Distribution of Species and Serogroups among Positive Samples
exhibits the percentage of positivity of Legionella species and serogroups in the years 2018–2022. In detail, L. pneumophila was the most represented (98.6% versus 1.4% for non- pneumophila Legionella spp.). Serogroups 2–14 had a positivity percentage of 70.9%, and serogroup 1 had a positivity percentage of 27.7%. Among serogroups 2–14, out of the total number of isolated species and serogroups, the most represented were serogroups 6 (24.5%), 8 (23.3%), and 3 (18.9%). Serogroups 5 and 10, on the other hand, were detected at rates of 3.1% and 1.1%, respectively ( ). In addition, the simultaneous presence of two species or serogroups was found in 27 of the 708 positive samples. In particular, 22 of these samples tested positive for both serogroups 1 and 3 (3.1%), 1 for serogroups 1 and 6 (0.1%), 2 for serogroups 1 and 8 (0.3%), 1 for serogroups 1 and 10 (0.1%), and 1 for serogroup 5 together with non- pneumophila Legionella spp. (0.1%).

3.3. Water Temperature and Residual Chlorine
Of the 3365 analyzed samples, 520 were taken from ATUs, for which temperature and residual chlorine were not measured; the remaining 2845 samples were divided into hot-water samples (≥26.0 °C, n = 2005) and cold-water samples (≤25.9 °C, n = 840), with a mean temperature of 37.5 °C (range 11.0–91.0 °C) across all collected samples. shows the different temperature ranges and, for each interval, the number of analyzed samples, the number of positive samples, the minimum and maximum Legionella concentrations, and the geometric mean of the positive samples. The highest positivity rates were found in the temperature ranges of 26.0–30.9 °C, 31.0–35.9 °C, and 36.0–40.9 °C (57.4%, 46.9%, and 48.8%, respectively). The percentage of positive samples decreased with increasing temperature (from 48.8% for the 36.0–40.9 °C range to 19.4% for temperatures ≥56.0 °C). Regarding the bacterial concentration of the positive samples, a minimum value of 1.70 Log10 CFU/L was observed in all temperature ranges, while the highest maximum value was observed in the 46.0–50.9 °C range (4.36 Log10 CFU/L). In addition, the minimum geometric mean was 2.81 Log10 CFU/L at temperatures ≤20 °C, and it increased with increasing temperature up to the 26.0–30.9 °C range (3.10 Log10 CFU/L). From the 41.0–45.9 °C range upwards, the geometric mean decreased as temperature increased. At temperatures higher than 62 °C, no positivity for Legionella was observed.

3.4. Statistical Analysis
The results of the statistical analysis of the bacterial load values of the positive samples are shown in . This table reports the number of total samples, the number of positive samples and the percentage of positivity, the geometric mean (Log10 CFU/L), the standard deviation, the median, the percentile range, and the interquartile range. Moreover, the results show the normality of all collected data, and the Doornik–Hansen test for multivariate normality gives χ2 = 1.09 × 10⁵. The lowest Legionella concentration recorded during the analysis was 1.70 Log10 CFU/L in all years, while the highest was 4.36 Log10 CFU/L in 2019. The MLRA results ( ) indicated no statistically significant association for water temperature (p-value = 0.526), in contrast with residual chlorine (p-value < 0.05). Furthermore, the MLRA revealed a statistically significant correlation between residual chlorine and Legionella concentration, which was negative, as indicated by the t-value (−6.19) and p-value (<0.05). The boxplot regarding the influence of residual chlorine is shown in .
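The per-range summary reported above amounts to a groupby over temperature bins. A minimal pandas sketch of that tabulation (the data frame and bin edges are illustrative reconstructions of the intervals described here, not the study data):

```python
import numpy as np
import pandas as pd

# One row per non-ATU sample: temperature (°C), positivity flag, and
# Log10 CFU/L for positive samples (NaN otherwise) — illustrative values only.
df = pd.DataFrame({
    "temp_c":   [18.0, 28.5, 33.0, 38.0, 47.2, 58.0],
    "positive": [0, 1, 1, 1, 1, 0],
    "log_cfu":  [np.nan, 3.1, 2.9, 3.0, 4.36, np.nan],
})

# Interval edges approximating the table's ranges (≤20, 21–25.9, 26–30.9, ...)
edges = [-np.inf, 20.9, 25.9, 30.9, 35.9, 40.9, 45.9, 50.9, 55.9, np.inf]
df["t_range"] = pd.cut(df["temp_c"], bins=edges)

summary = df.groupby("t_range", observed=True).agg(
    n=("positive", "size"),
    n_pos=("positive", "sum"),
    pct_pos=("positive", lambda s: 100 * s.mean()),
    geo_mean_log_cfu=("log_cfu", "mean"),  # mean of Log10 values = geometric mean
)
print(summary)
```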
Legionella is a bacterium that colonizes soils, freshwater, and building water systems . The aim of this paper was to analyze the prevalence of Legionella and of its individual species and serogroups in hospitals of the Campania region over the period 2018–2022 and to assess the effectiveness of the 2015 Italian National Guidelines in preventing Legionella colonization of water systems. A further purpose was to test whether the presence of Legionella was related to two variables, residual chlorine and water temperature. In this study, we observed a 21.0% positivity rate. The positivity rate for Legionella decreased from 2018 (34.2%) to 2022 (14.7%). The number of positive samples gradually decreased, which explains the decrease in the positivity percentage for this microorganism ( ). Deiana et al. , on the other hand, monitored the presence of Legionella from 2010 to 2020 in a university hospital in the Sardinia region, Italy; for the period 2018–2020, they also observed a decrease in the percentage of Legionella positivity. The authors posit that the decrease in Legionella positivity over the course of the study is likely attributable to the hospitals receiving the study's results and performing environmental hazard controls. The COVID-19 pandemic in 2020 and 2021 decreased attention to environmental Legionella prevention, leading to an increase in Legionella positivity in 2021. This increase was not observed in 2020, because the rise in hospitalizations required the implementation of hygiene rules (e.g., hand washing) , which could have promoted water flow and prevented stagnation. Stagnation is known to be responsible for the loss and poor stability of residual disinfectant and for the microbial growth of Legionella . Moreover, the monitored hospitals remained in constant activity throughout the pandemic. The pandemic also directed our attention to new locations to monitor for Legionella : monitoring more locations in hospitals meant preventing Legionella infections in patients. The percentages of positivity observed at the different sampling locations were 19.7% for tank bottoms, 25.9% for taps and showers, and 3.7% for ATUs. This study used sampling locations (i.e., faucets, showers, and tank bottoms) within the building premise plumbing system similar to those of the study by Torre et al. , conducted in the years 2008–2014 in hospitals of the Campania region, which reported positivity percentages of 27.2% for tank bottoms, 31.9% for taps and showers, and 4.0% for ATUs. The authors supposed that this decrease could be explained by the publication of the Italian National Guidelines in 2015 and by the implementation of a WSP by hospitals, which brought together different personnel (laboratory staff, technical office, water disinfection staff, etc.) to ensure water safety. In addition, the highest percentage of Legionella detection was observed in taps and showers, which are a source of exposure for patients, since they can generate aerosols containing the bacterium that can be inhaled . Out of a total of 840 cold-water samples (temperature range 11.0–25.9 °C), 37 (4.4%) tested positive. The presence of Legionella in cold-water samples was also reported by Arvand et al. (2011), who detected a positivity percentage of 40% in cold-water samples (temperature range 7–29 °C) .
Our results are not surprising, since, of the total number of cold-water samples taken (840), 458 (54.5%) showed a temperature between 21.0 and 25.9 °C, values close to the range of temperatures that can create favorable conditions for Legionella multiplication. The results indicated that L. pneumophila was the most frequently isolated species. Serogroups 2–14 showed a positivity rate of 70.8% among the positive samples, similar to that found in Turkish hospitals (70.8%) by Yilmaz and Orhan , in Italian non-hospital facilities (71.7%) by Sabatini et al. , in hospital samples (68.7%) collected by Deiana et al. , and in Polish non-hospital samples (64.3%) collected by Stojek and Dutkiewicz . Regarding the individual serogroups, the highest percentage of positivity was found for serogroup 1. This finding is congruent with clinical diagnoses, according to which L. pneumophila serogroup 1 is the serogroup that most commonly affects humans [ , , ]. Amemura-Maekawa et al., in fact, found that 80.2% of Japanese clinical isolates collected from 1980 to 2008 were positive for serogroup 1 . Regarding serogroups 2–14, the presence of serogroups other than serogroup 1 (6, 8, and 3) was the same as found previously . Serogroups 3 and 6 were also among the serogroups isolated from humans in Italy between 1987 and 2009 , so these results correspond with what was previously reported in humans. These findings are also in agreement with the results of Perola et al., who, through a genotyping analysis, found an association between the isolates from two Legionella serogroup 5-positive patients residing for several days in a hospital and the environmental isolates from the hospital's water supply . Serogroups 3 and 6 were found in environmental water samples in several studies [ , , , , ]. In particular, serogroup 3 was isolated in hospitals in Iran , while Pignato et al. revealed the prevalence of this serogroup in Italian hospitals, sometimes in association with serogroup 6 . Isolation of serogroup 8 was reported by De Giglio et al. , Papadakis et al. , and Sakhaee et al. . The presence of serogroups other than serogroup 1 suggests that attention should also be focused on the clinical diagnosis of other serogroups, considering that the most widely used test for Legionella diagnosis in human specimens (the urinary antigen test) shows low sensitivity for serogroups other than L. pneumophila serogroup 1, resulting in the underreporting of Legionella infections . Regarding temperature, we observed that more hot-water samples than cold-water samples were positive (32.5% vs. 4.4%). This finding is consistent with the work of Stojek et al. conducted in Poland, who observed a higher frequency of Legionella isolation in hot-water samples (88.6%) . In addition, the highest percentages of positive samples were found at temperatures ranging from 26.0 to 40.9 °C, which correspond to the optimal growth range of Legionella . Above this range, the percentage decreased. With regard to the bacterium's concentrations, the highest values were found in the 46.0–50.9 °C temperature range, and above this range, the values decreased. In the study by Boppe et al., conducted in a hospital and investigating the correlation between the concentration of L. pneumophila and the maximum water temperature at the point of use, a significant decrease in the bacterium's concentrations was observed at temperatures ≥55 °C . Our analysis revealed a minimum Legionella concentration of 1.70 Log10 CFU/L in all years, while the highest value, 4.36 Log10 CFU/L, was observed in 2019. In the study by Torre et al., the minimum and maximum values observed were 2.00 and 7.45 Log10 CFU/L . On the other hand, in the study by Girolamini et al., the minimum and maximum concentration values observed in a hospital in the Emilia-Romagna region, Italy, whose water was treated with a biocide, were between <1.70 and 5.80 Log10 CFU/L . The Italian National Guidelines for the prevention and control of legionellosis indicate the types of interventions required in healthcare facilities, depending on the positivity rate, the Legionella concentration (expressed in CFU/L), and the presence or absence of clinical cases . In addition, procedures must be implemented to ensure the non-detection of Legionella in air-treatment systems and in the water of wards housing profoundly immunocompromised and very high-risk patients (transplant centers, oncology, and hematology) and in water used for birthing tanks. If a sample tests positive, according to the Italian National Guidelines, the sampled point is decontaminated and re-sampled one, three, and six months later (these mandatory follow-up samples and the corresponding data were excluded from the present study's analysis). Regarding the observed trends in Legionella concentration, this study revealed a statistically significant negative correlation with residual chlorine. This finding agrees with D'Alessandro et al. , who observed a statistically significant reduction in positive samples following an increase in free chlorine in water. Furthermore, Totaro et al. found a moderate relationship between the presence of Legionella and a decrease in total chlorine concentration . The same result was reported by Masaka et al., who found a weak but statistically significant negative correlation for both L. pneumophila serogroup 1 and serogroups 2–14 . A statistically significant negative correlation between Legionella concentration and total residual chlorine concentration was reported by Zhang et al. . Moreover, Rafiee et al. argued that there was proportionality between residual chlorine content and the presence of Legionella . The mechanism by which chlorine combats Legionella involves its interaction with the pathogen's cell membrane . This results in the dispersion of the cell's macromolecules and the subsequent modification of the cell's chemical, physical, and biological processes . Furthermore, no statistically significant correlation was found with water temperature, as also reported by Masaka et al. and Pierre et al. , and in contrast with the evidence of Rakić et al. , who found a correlation with L. pneumophila . In addition, De Giglio et al. observed a weak correlation between Legionella concentrations and water temperature . In summary, the results of the study confirm that chlorine disinfection is an effective method for controlling Legionella ; this is in accordance with the results of Paranjape et al., who found that the continuous application of chlorine inhibits the presence of Legionella in cooling towers, and of Orsi et al., who found that shock and continuous hyperchlorination significantly reduce the number of Legionella -positive samples .
One limitation of this study is that sampling covered only two provinces of the Campania region in southern Italy. In the future, it would therefore be interesting to expand the study to other provinces in order to obtain a complete overview of the situation in the study area. Moreover, the sampling covered only one type of facility. Consequently, a future goal could be to extend the analysis to other types of facilities, such as recreational centers, tourist sites, military bases, schools, and others, since the bacterium can also be identified in these structures . In addition, only two variables (water temperature and residual chlorine) were analyzed in relation to the presence of Legionella . Furthermore, we did not collect information on cases of Legionella infection in the monitored hospitals; it would be interesting to investigate this aspect in future studies and to assess whether there is a correlation between these two variables, as was done in the study by Perola et al. . Finally, the fact that Legionella was not detected by the culture method in a high percentage of samples (79.0%) does not mean that all of these samples were truly negative for Legionella . A proportion of samples may contain viable but non-culturable (VBNC) Legionella , which the culture method cannot detect. VBNC is a state that Legionella can enter under unfavorable environmental conditions while retaining virulence; under favorable conditions, and within free-living amoebae, the bacterium can resuscitate and become pathogenic again . For example, a study conducted from 2010 to 2015 in an Italian university hospital subjected to continuous monochloramine treatment determined that 34.5% of the samples that were negative by the culture method tested positive for the VBNC state. In addition, 18% of 22 tested samples were positive based on the resuscitation test . This situation could apply to some of our negative samples. Future goals will include testing for the VBNC state. This paper could be a starting point for future analyses, given the importance of environmental monitoring and the number of studies monitoring this bacterium in the Campania region of southern Italy.
Our environmental monitoring revealed bacterial contamination of water in this region, even though the percentage of positivity decreased from 2018 to 2022. Despite this decline, attention should not be relaxed, especially in facilities housing immunocompromised individuals, who are more susceptible to Legionella infection. The most represented species was confirmed to be L. pneumophila . Serogroups 2–14 were found at a high rate. Among the individual serogroups, the most represented was serogroup 1, followed by serogroups 6, 8, and 3. These results suggest the need for continuous environmental monitoring of Legionella and highlight the importance of also focusing on the clinical diagnosis of serogroups other than serogroup 1, since the most widely used test on human samples shows high sensitivity only for this serogroup. The highest proportion of positive samples was observed in the temperature range corresponding to the optimal growth temperature of the bacterium. In addition, the negative correlation between residual chlorine and the presence of Legionella confirmed that chlorine disinfection is an effective method for preventing Legionella contamination. The implementation of a WSP, through the collaboration of different groups of professionals, is one of the main approaches for controlling Legionella contamination in hospitals and preventing infections.
White Spots: Prevention in Orthodontics—Systematic Review of the Literature
White spots (WS) frequently occur during orthodontic therapy with fixed appliances [ , , , , ]. They usually appear on the gingival and buccal portions of teeth, and the teeth most affected by these lesions are the canines and the upper lateral incisors . In these areas, a loss of enamel mineralization occurs, presenting clinically as more or less extensive areas that are chalky white or brown, porous, and rough to the touch, a phenomenon related to light being diffused differently than by normally mineralized enamel . These irreversible enamel lesions, if left untreated, evolve into caries . The incidence of WS is closely related to oral hygiene practices, and WS should always be detected by orthodontists at an early stage. The recent pandemic may have reduced the capacity to manage these clinical situations because of the reduced number of appointments . In cases of extensive WS or decay, esthetic restorations are needed; modern restorative materials come in several colours and various translucencies, allowing them to mirror the optical behaviour of teeth and provide a natural appearance . Their impact on patients' oral health and smile aesthetics can be considerable, hence the importance of WS prevention, which is based primarily on the appropriate selection of candidates for orthodontic treatment. A patient in need of orthodontic therapy should first be educated in proper home oral hygiene techniques, should attain a good level of hygiene before even starting orthodontic therapy, and should know that the orthodontic appliance will hinder common hygiene manoeuvres by acting as a receptacle for plaque and bacteria . In addition to hygiene, other factors associated with the occurrence of WS include sex, age, length of therapy, type of treatment , characteristics of the oral bacterial flora, the patient's diet, and changes in the oral microbiota, all of which have been analyzed in several studies [ , , ] ( ).
2.1. Protocol and Registration
This systematic review was conducted according to the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) . The present systematic review was performed in accordance with the principles of PRISMA and the guidelines of the International Prospective Register of Systematic Reviews (ID 405569).

2.2. Search Processing
The keywords used in the databases (Scopus, Web of Science, and PubMed) for the selection of the publications under review were "White Spots" and "fixed orthodont*", combined with the Boolean operator "AND". The search focused exclusively on articles published in English in the past 5 years (January 2018–January 2023) ( ).

2.3. Eligibility Criteria
The reviewers worked in pairs, identifying works that met the following inclusion criteria: (1) studies performed only on human subjects; (2) clinical studies or case reports; (3) studies performed on subjects undergoing fixed orthodontic therapy; and (4) studies on WS prophylaxis in subjects undergoing fixed orthodontic therapy. The exclusion criteria were: (1) studies involving the therapy of WS after orthodontic therapy; (2) studies involving the treatment of WS unrelated to orthodontics; (3) in vitro studies; (4) animal studies; and (5) systematic reviews, narrative reviews, and meta-analyses.

2.4. Data Processing
The screening process, conducted by reading the titles and abstracts of the articles selected in the identification phase, allowed the exclusion of all publications that deviated from the topics examined. Subsequently, the full texts of publications deemed to meet the agreed inclusion criteria were read. Disagreements between reviewers on article selection were discussed and resolved.
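For reproducibility, the PubMed arm of this search strategy can be scripted. The sketch below uses Biopython's Entrez utilities; the e-mail address is a placeholder, and Scopus and Web of Science require their own APIs, which are not shown here.

```python
from Bio import Entrez  # Biopython

Entrez.email = "[email protected]"  # NCBI requires a contact address

# Query mirroring the review's strategy: "White Spots" AND "fixed orthodont*",
# restricted to the January 2018-January 2023 publication window.
handle = Entrez.esearch(
    db="pubmed",
    term='"White Spots" AND "fixed orthodont*"',
    datetype="pdat", mindate="2018/01", maxdate="2023/01",
    retmax=500,
)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])
```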
Keyword searches of the Web of Science (432), Scopus (309), and PubMed (274) databases yielded a total of 1015 articles. The subsequent elimination of duplicates (456) left 559 articles. Of these 559 studies, 483 were excluded—62 because they were reviews and 421 because they were off topic. The reviewers successfully retrieved the remaining 76 papers and evaluated their eligibility. The eligibility phase ended with the inclusion of 16 publications in this work ( ). The results of each study are reported in . The excluded articles are reported in ( ).
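The flow counts above are internally consistent, as a quick check shows (the 60 full-text exclusions are derived from 76 − 16 and are not stated explicitly in the text):

```python
# Flow-count check for the PRISMA diagram described above (figures from the text)
identified = 432 + 309 + 274   # Web of Science + Scopus + PubMed
after_dedup = identified - 456 # duplicates removed
screened_out = 62 + 421        # reviews + off-topic
full_text = after_dedup - screened_out
included = 16

assert identified == 1015 and after_dedup == 559 and full_text == 76
print(f"excluded at full-text stage: {full_text - included}")  # -> 60
```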
Among the most well-known and scientifically validated preventive measures is the use of fluoride in toothpastes, mouthwashes, varnishes, mousses, and cements for bonding brackets and other fixed orthodontic devices . Some strategies, such as antimicrobial toothpastes, casein phosphopeptide–amorphous calcium phosphate, sealants, lasers, and the inclusion of antimicrobial substances in orthodontic biomaterials, can effectively prevent WSL in orthodontics . The purpose of this work is to investigate the possible roles of fluoroprophylaxis and other preventive strategies that can help patients and clinicians reduce the occurrence of WS during orthodontic therapy [ , , ]. Fixed appliances can have negative repercussions on oral health, as they make home oral hygiene manoeuvres more difficult and act as receptacles for bacteria and food debris. This is associated with a higher incidence of WS, caries, and periodontal problems.

4.1. Fixed Orthodontics and Salivary Changes
In a clinical study published in 2019 by Jurela et al., 83 patients (52 men and 31 women) with a mean age of 15.14 ± 1.66 years receiving FOT were examined . The study's goal was to assess the patients' clinical and salivary changes and to see whether there were statistically significant variations according to the type of braces worn (conventional vs. self-ligating brackets) . The DMFT index is the most common population-based measure of caries experience; it counts a person's decayed, missing, and filled permanent teeth or surfaces. It was assessed at baseline and after six months of orthodontic treatment. The effects of treatment on salivary flow, WS features, and the plaque index were also considered. Six months after the start of therapy, the study found a significant rise in the DMFT index and in salivary flow in all patients, regardless of the type of fixed appliance used (different types of brackets or ligatures). The considerable drop in salivary pH and rise in the plaque index may partly explain the rise in the DMFT index. Because the increased salivary flow was accompanied by a rise in the plaque index, which lowers pH, it did not appear sufficient to reduce the likelihood of carious lesions .

4.2. Streptococcus mutans and Lactobacillus
In a 2019 comparative prospective study, Jin et al. examined the evolution of these two bacterial species in the saliva of people treated with fixed therapy . The saliva of 15 patients receiving FOT was examined at four separate time points: T1, before therapy; T2, 3 months after appliance fitting; T3, 6 months after fitting; and T4, 18 months after fitting. Lactobacillus increased slightly but not significantly over the 18 months of treatment, while total bacteria remained unchanged. The quantity of S. mutans differed markedly between the two types of brackets, remaining stable for the first six months and increasing dramatically at T4 ( p < 0.05) . Patients with conventional brackets had a higher amount of S. mutans than those with self-ligating brackets ( p < 0.05), in whom the concentration of S. mutans remained stable over this period. The levels of sIgA, MPO, and LDH did not change during orthodontic treatment, and there was no link between sIgA and bacterial quantity. In conclusion, S. mutans increased significantly in patients wearing traditional braces during the last treatment period, suggesting that WS may develop after prolonged orthodontic therapy .
4.3. Fixed Orthodontics and Caries
Pinto et al. examined the INSO (incidence of active caries lesions) in 135 people aged 10 to 30 years. They were split into two groups: the first included 70 people who received no orthodontic therapy (G0), and the second included 65 people who received FOT for one year (G1). The plaque, gingival, and caries indices were assessed at baseline and one year after treatment. One operator evaluated all teeth for caries, examining both active and inactive lesions and both early-stage and cavitated lesions. According to the work, the orthodontically treated group had a statistically higher incidence of active caries than the G0 group. In addition, the G1 group had a statistically greater mean increase in active caries. According to the results of this study, people who received FOT for one year had a significantly higher incidence and progression of active caries lesions than people who did not receive fixed orthodontic therapy.

4.4. Enamel Etching and WS
Enamel etching performed before bracket placement is also believed to contribute to the rise in caries in subjects undergoing fixed therapy. The 2019 study by Yagci et al. examined possible differences between partial and full etching . This was a double-blind randomized controlled trial of 20 patients with a mean age of 16.75 years, excellent dental hygiene, malocclusion, and fixed orthodontic therapy. Full or partial etching was randomly performed on 40 maxillary arches . Quantitative fluorescence images were taken at the start of orthodontic treatment, three (T1) and six (T2) months later, and at the conclusion of the debonding phase (T3). Using quantitative light fluorescence software, the presence of WS was assessed before and after debonding, and the results were rated with Student's t -test. The research showed that, in terms of Q and A scores at T2, the group with complete etching significantly outperformed the group with partial etching ( p < 0.05). F scores increased considerably at every time point in the TE group, but only at T1 and T3 in the PE group. There were, however, no differences between the TE and PE groups at T3 ( p > 0.05). Regardless of the etching approach, the study indicated that WS were seen primarily on the upper lateral incisors. Although PE performs better during the initial 6 months, in terms of long-term WS formation, there is no distinction between PE and TE .

4.5. Prevention of WS in Orthodontics
Numerous strategies are employed to prevent enamel demineralization during orthodontic treatment and to achieve remineralization in the post-orthodontic phase. Examples include the use of casein phosphopeptide-containing products, antibacterial products, and fluoride-containing products. Chlorhexidine is the most widely used antibacterial agent in dentistry because it is highly effective against Streptococcus mutans . A study by Shimpo et al. assessed the preventive impact of antimicrobial therapy in addition to fluoride application during FOT . It was found that, in addition to fluoride application and professional mechanical tooth cleaning, tooth surface disinfection therapy also helps reduce WS during FOT.

4.6. Prevention with Fluoride
Several studies have found fluoride toothpaste useful in the reduction of WS caused by orthodontic therapy [ , , , ].
In a prospective study by Kau et al. involving three groups of patients receiving orthodontic care , Clinpro 5000 was administered to 35 people, Clinpro Tooth Crème to 32 people, and MI Paste Plus to 33 people. For four months, the assigned product was used twice a day for two minutes, and subjects were examined once each month. At each visit, the Enamel Decalcification Index (EDI) was used to calculate the amount of WS per square. In line with previous research, the use of Clinpro 5000, Clinpro Tooth Crème, and MI Paste Plus all reduced WS lesions, with Clinpro 5000 slightly outperforming the other two test pastes in effectiveness. The clinical trial conducted in 2019 by Smyth et al. came to similar conclusions . A recently introduced fluoride varnish containing 1.5% ammonium fluoride was evaluated in a 2019 clinical study by Sonesson et al., who ascertained that regular varnish applications reduced the quantity of WS during fixed therapy . Sealants act as physical barriers against bacterial acids and plaque. While good at preventing WS, sealants do peel off over time, predominantly in the gingival area, leaving the enamel exposed to plaque and bacterial acids. Sealants such as ProSeal have been proven to completely prevent mineral loss from enamel as long as they remain on the tooth surface, but the application must be repeated every few months . With growing attention on the host's innate defence system, more minimally invasive and human-friendly therapies have been considered, such as formulas containing enzymes, probiotics, and plant extracts. Intrinsic defence factors in saliva include the enzymes peroxidase, lysozyme, and lactoferrin. These proteins can limit bacterial or fungal growth, interfere with bacterial glucose uptake or glucose metabolism, and promote bacterial aggregation and elimination . Cheng et al., in a 2019 clinical work, compared the effects of enzyme-containing and conventional toothpastes in orthodontic patients . According to the study, the WS-prevention and plaque-reduction effects in orthodontic patients during the first three months of treatment did not differ significantly between enzyme-containing and conventional toothpastes. In the first three months of treatment, neither gingival bleeding nor visible plaque increased significantly among orthodontic patients who used fluoride- and enzyme-containing toothpastes; on the contrary, gingival bleeding and visible plaque decreased significantly .

4.7. Active Oxygen-Containing Toothpaste
George et al., in a 2022 study, examined how Streptococcus mutans and WS responded to a toothpaste containing active oxygen . The active oxygen toothpaste produced a more pronounced reduction in WS than fluoride toothpaste, although its impact was limited, and both toothpaste varieties had minimal effects on WSLs. Toothpaste containing active oxygen is effective in much the same manner as toothpaste containing fluoride .

4.8. Prevention with CO2 Laser
By removing the organic matrix, improving fluoride absorption, and increasing the binding surface area for ions such as calcium and fluoride, fluoride and laser act synergistically to strengthen enamel resistance to acids. Fluoride promotes the formation of fluorohydroxyapatite crystals, modifies demineralization and remineralization, and affects bacterial plaque [ , , , ].
Mahmoudzadeh et al.'s 2019 RCT aimed to estimate the effect of a carbon dioxide (CO2) laser on the prophylaxis of WS associated with fixed therapy . In this work, 554 teeth from 95 patients were considered. The 95 patients were randomly divided into two groups: the laser group (278 teeth) and the control group (276 teeth from 47 patients). The anterior maxillary teeth in the laser group were exposed to a CO2 laser with the following characteristics: wavelength 10.6 µm, power 0.4 mW, frequency 5 Hz, spot diameter 0.2 mm, and pulse time 9 s. An operator applied the laser irradiation for 20 s while maintaining a 5 mm distance from the buccal surface and moving back and forth continuously . The control group received a similar placebo light exposure. Six months after irradiation, patients were recalled to have the incidence, extent, and severity of the lesions evaluated. Data were collected twice: immediately after bracket bonding, and six months later. Over the six months, improvement of lesions and a decrease in lesion incidence were seen with CO2 laser use . The laser is believed to cause a chemical change in enamel crystals, reversing incipient lesions through remineralization. According to the study's findings, gingival lesions were not affected by laser irradiation, even though it was effective in the incisal, mesial, and distal regions. Unlike the gingival area, where the CO2 laser had no noticeable impact, the extent of lesions in the incisal, mesial, and distal regions was drastically reduced after treatment. Additionally, while the mesial and incisal portions of the lesions showed a significant reduction in severity, the gingival and distal regions showed little improvement. In the gingival area, the laser was ineffective, most likely because of differences in the thickness and structure of the enamel. Since gingival regions are frequently affected by WSLs, laser settings at these locations should be modified to aid in the reduction of these lesions. Additionally, better oral hygiene can lower the incidence of gingival lesions (which are due to increased plaque accumulation) . The study by Belcheva et al., which began in September 2021 and whose follow-up phase will last until September 2023, is intriguing within this line of research on the encouraging effects of lasers . Its goal is to investigate how fluoride varnish and CO2 laser treatment can lessen the frequency, severity, and extent of WS lesions during fixed orthodontic therapy. The RCT will involve children between the ages of 12 and 18 who need fixed therapy and are at a high risk of developing cavities. In one group, the buccal surfaces of the patients' upper anterior teeth will receive fluoride therapy alone; in the other, fluoride therapy in addition to laser treatment at the bonding of the orthodontic brackets. Following irradiation, the patients will be reevaluated six and twelve months later .

4.9. Primer with Antibacterial Agents
Numerous studies on bonding products containing antibacterial substances exist in the literature, and all have shown encouraging results [ , , , ]. The aim of the study by Oz et al. was to clinically evaluate an antibacterial monomer-containing primer for the prophylaxis of WS during fixed therapy . The study's findings demonstrate that there was no discernible difference between the antibacterial monomer-containing primer group and the control group in their capacity to prevent demineralization during orthodontic treatment .
Degrazia et al. examined the demineralization-preventing and antibacterial properties of an experimental orthodontic adhesive containing triazine and niobium phosphate bioglass (TAT) around attachments placed on enamel surfaces . The results of this study showed that the triazine- and niobium phosphate bioglass-based adhesive inhibited the growth of S. mutans and of total streptococci and had an anti-demineralization effect. This product can therefore help prevent the loss of enamel minerals.
WS are a common and dreaded complication of fixed orthodontics, as they risk seriously compromising the aesthetic and functional outcomes. WS prophylaxis begins with the correct selection and motivation of the subject to maintain good hygiene. In this regard, good oral hygiene with a fluoride-containing toothpaste is the essential starting point for the effective removal of food debris and bacterial biofilm deposited on teeth and braces. In addition, fluoride administration with mouthwashes for home use, as well as gels, varnishes, and sealants for periodic professional use, may be considered, depending on the case. The use of lasers as an adjunct to fluoride is a readily available avenue for clinicians, effective in the prevention of demineralization but also in the repair of early-stage lesions. The hope is that international guidelines for the use of fluoride products, antibacterial agents, and lasers can be developed in the future. More research is required to establish precise and repeatable protocols for laser use. Countless studies in the literature have evaluated the efficacy of toothpastes and other products containing various antibacterial substances, many of which have yielded encouraging results that merit further study. The orthodontist must always remember that the resolution of malocclusions is a goal to be pursued hand-in-hand with the achievement and maintenance of the patient’s oral and dental health, and in this sense, it is hoped that caries prevention campaigns will reach an ever-wider audience.
|
The Clinical Application of Immunohistochemical Expression of Notch4 Protein in Patients with Colon Adenocarcinoma
|
df830e1e-bfa7-447d-ac61-199621e27a2c
|
10138794
|
Anatomy[mh]
|
Colorectal cancer (CRC) is the third most common cancer worldwide . Important factors associated with the development of this type of cancer include drinking alcohol, smoking, unhealthy dietary patterns and obesity . Ageing, genetic mutations and hereditary factors should also be mentioned among the factors clearly correlated with disease development . The development of colorectal cancer occurs gradually as a sequence of specific morphological and genetic changes. One of the most common types of colorectal cancer is adenocarcinoma (COAD). This type of cancer develops from colorectal glandular epithelial cells. Under pathological conditions, epithelial cells change shape and grow out of control, leading to the development of adenoma and adenocarcinoma . With the increase in colorectal cancer screening, a significant reduction in the incidence of COAD has been observed; however, mortality is expected to increase by 60% by 2035 . It should be noted that despite advances in currently available standard treatments, e.g., surgery, chemotherapy, radiotherapy and immunotherapy, the 5-year overall survival (OS) of patients with COAD remains poor . Therefore, there is an urgent need for the development of novel biomarkers to improve the outcome of patients, allowing earlier therapeutic intervention and reducing the increasing burden of COAD . Notch signalling is one of the conserved and well-characterised signalling systems involved in tissue homeostasis and the development of many diseases, including cancer . There are four Notch receptors (Notch1-4) and five Notch ligands (Delta-like (DLL) 1, 3, 4 and Jagged (JAG) 1–2) in mammals. The full-length Notch receptor is initially cleaved (S1 cleavage) and then undergoes post-translational modification (glycosylation) in the Golgi apparatus. From there, it is transported to the plasma membrane as a transmembrane heterodimeric protein . In the canonical Notch signalling pathway, the Notch receptors are proteolytically cleaved by the ADAM10 disintegrin and metalloprotease (ADAM10) domain (S2 cleavage) as well as the γ-secretase complex (S3 cleavage), leading to the release of the Notch intracellular domain (NICD). This domain enters the nucleus and binds to the DNA-binding protein CSL (CBF-1 (RBPJ)/suppressor of hairless/Lag1), which recruits mastermind-like protein (MAML) to activate transcription of Notch target genes such as the Hes and Hey families . Some studies have revealed that cancer stem cell self-renewal, epithelial to mesenchymal transition (EMT), and radio- or chemotherapy resistance might be related to mutations and amplifications of the Notch ligands and/or Notch receptors . Among them, the Notch4 receptor is worth noting. High expression of this protein has been found in cancers such as hepatocellular carcinoma (HCC) , intrahepatic cholangiocarcinoma , melanoma , oral squamous cell carcinoma (OSCC) , breast carcinoma (BC) , gastric carcinoma , non-small cell lung carcinoma (NSCLC) and acute myeloid leukaemia (AML) . In patients with colorectal cancer, Shaik et al. demonstrated that expression of Notch4 was significantly higher than in healthy individuals or patients with adenoma . Similar results have been obtained in the Chinese population . Nevertheless, there are few data on the clinical application of Notch4 protein expression in European patients with colorectal cancer, especially with colon adenocarcinoma.
With this in mind, we decided to investigate the expression of Notch4 protein in European (Polish) patients with colon adenocarcinoma who received no therapy prior to radical surgery. Moreover, we also investigated the association between Notch4 protein expression and clinicopathological factors. The prognostic activity of Notch4 protein was analysed in relation to the 5-year survival of patients, which is a very significant factor from the clinical oncology point of view. In view of the Notch signalling pathway and the stages of formation of both precursors and active Notch receptors, it can be hypothesised that Notch4 expression is associated with the cell membrane, the nuclear membrane and matrix, and the Golgi apparatus. Unfortunately, studies on the localisation of the Notch4 receptor within cells, especially cancerous cells, are rare. One of these studies assesses the intranuclear and intranucleolar localisation of Notch4 in breast cancer cells . Therefore, the aim of our research was also to determine the intracellular localisation of Notch4 within the cells of colon adenocarcinoma tissue by the use of the immunogold labelling method and TEM.
2.1. Patients’ Characteristics The study included 129 colon adenocarcinomas. Among the patients, there were 67 men and 62 women (mean age: 65 years; range: 56–77 years). In 66 (51.16%) cases, the tumours were situated in the right colon, and in 63 (48.84%) in the left colon. Three histological differentiation levels were used for classification, as follows: G1 (well-differentiated cancers, where cells are similar to healthy cells)—25 cases (19.38%), G2 (moderately differentiated, where cells are somewhat like healthy cells)—66 cases (51.16%) and G3 (poorly differentiated, where the cells are not similar to healthy colonocytes)—38 cases (29.46%). Among the adenocarcinomas, 15 (11.63%) were at T1 depth of invasion (the tumour has grown into the submucosa), 18 (13.95%) were at T2 (the tumour has grown into the muscularis propria), 73 (56.59%) were at T3 (the tumour has grown through the muscularis propria and into the subserosa), and 23 (17.83%) were at T4 (the tumour has grown into the surface of the visceral peritoneum). In samples of colon adenocarcinoma, a positive immunohistochemical reaction indicating the presence of Notch4 protein was observed in the cytoplasm and nucleus of cancer cells. A positive reaction was also detected within the cells of healthy colon tissue . It is important to mention that expression was strong in the vast majority of colorectal adenocarcinoma tissues, whereas expression in cells of healthy surgical margins was low. 2.2. Correlations between Notch4 Immunohistochemical Expression and Clinicopathological Parameters Among the study cohort, 101 (78.29%) colon adenocarcinoma samples showed a high level of immunohistochemical expression of Notch4 protein, whereas only 28 (21.71%) demonstrated a low level of immunoreactivity. The immunohistochemical status of Notch4 was correlated with the clinicopathological features of the patients and the 5-year survival rate. The level of Notch4 expression was significantly related to the histological grade of the tumour ( p < 0.001, Chi 2 test). Notch4 protein expression was high in 6 (24.00%), 58 (87.88%), and 37 (97.37%) of G1, G2, and G3 tumours, respectively. In contrast, a low level of immunohistochemical expression of Notch4 protein was found in 19 (76.00%), 8 (12.12%), and 1 (2.63%) of G1, G2, and G3 tumours, respectively. Furthermore, Notch4 expression was associated with the expression of the PCNA antigen ( p < 0.001, Chi 2 test with Yates’ correction). Notch4 protein was highly expressed in 5 (23.81%) and 96 (88.89%) samples with low and high levels of PCNA immunoreactivity, respectively . It is worth noting that Notch4 expression was also related to angioinvasion ( p < 0.001, Chi 2 test with Yates’ correction). High Notch4 immunohistochemical expression was found in 89 (89.90%) of the patients with no angioinvasion, while low immunoreactivity was found in 10 (10.10%) of these patients. In contrast, 12 (40.00%) patients with angioinvasion had high Notch4 expression, whereas 18 (60.00%) had low Notch4 immunoreactivity. Notch4 immunohistochemical expression was also related to the depth of invasion ( p < 0.001, Chi 2 test with Yates’ correction). For patients characterised as T1/T2, a high level of immunohistochemical reaction was noted in 18 (54.55%) and a low level of expression was detected in 15 (45.45%). For T3/T4 patients, a strong Notch4 immunohistochemical reaction was reported in 83 (86.46%), while low expression was detected in 13 (13.54%) .
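To illustrate the grade association reported above, the Chi 2 test can be recomputed from the contingency table given in the text. The sketch below uses Python’s scipy for this illustrative recomputation; the authors’ own analysis was performed in Statistica, so this is not their code.

```python
# Illustrative recomputation of the Notch4-by-grade Chi-squared test using the
# counts reported above (rows: high/low Notch4 expression; columns: G1, G2, G3).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[6, 58, 37],   # high Notch4 expression
                  [19, 8, 1]])   # low Notch4 expression

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p < 0.001, as reported
```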
2.3. Prognostic Role of Notch4 Expression in Colon Adenocarcinoma The prognostic significance of Notch4 expression in colon adenocarcinoma patients was analysed in relation to the 5-year survival rate. All samples were assessed by Kaplan–Meier survival curves. The 5-year survival rate was significantly higher in the group of patients with a low level of Notch4 expression (log-rank, p < 0.001) . Additionally, the value of Notch4 expression in the context of the 5-year survival rate was evaluated in patient subgroups stratified by grade of histological differentiation, depth of invasion, staging and PCNA expression . Interestingly, the expression of Notch4 was not related to the 5-year survival rate in patients stratified according to G1 (log-rank test, p = 0.412) and G2 (log-rank test, p = 0.181), whereas it was related in G3 (log-rank test, p = 0.007). In the group of patients with T1/T2 depth of invasion, patients with a low level of Notch4 immunohistochemical reaction showed significantly higher 5-year survival in comparison to patients with high expression of this protein (log-rank test, p = 0.011). Similar results were obtained in patients with T3/T4 depth of invasion (log-rank test, p = 0.001). Moreover, in patients with stage I disease, low expression of Notch4 was also associated with the 5-year survival rate (log-rank test, p < 0.001). In patients with stage III, low expression of Notch4 was also associated with the 5-year survival rate, although these results were not statistically significant (log-rank test, p = 0.052). Interestingly, the expression of Notch4 was associated with the 5-year survival rate of patients with a high level of PCNA expression. Patients with a high level of this antigen and a low level of Notch4 expression had a significantly higher 5-year survival rate (log-rank test, p = 0.006). Univariate Cox regression analyses revealed that Notch4 immunohistochemical expression, histological grade, depth of invasion, angioinvasion and expression of PCNA were significant prognostic factors. Multivariate analysis found that, in our cohort of patients, the degree of histological differentiation and Notch4 expression are independent predictors of 5-year survival in patients with colon adenocarcinoma . 2.4. Immunofluorescence Staining Based on the study by Frithiof et al. , we wanted to check the expression of Notch4 in colon adenocarcinomas using immunofluorescence. Therefore, we randomly selected 50 slides with tissue sections treated with anti-Notch4 antibody and Dako Liquid Permanent Red (10 controls, 25 previously classified as low expression by IHC and 25 previously classified as high expression by IHC). It is important to point out that we used this technique as a supplementary one. Nevertheless, the results obtained are very promising and suggest that tissue sections stained with anti-Notch4 antibody and treated with the LPR chromogen can be used for immunofluorescence analysis. The intensity of Notch4 expression in both non-neoplastic tissue and tumour tissue was determined using the Zen 2 software (blue edition). A fluorescent signal (red colour) of varying intensity was found in cells of the non-neoplastic mucosa and in cancer cells. In some cancer cells, the expression and fluorescence signal was found in the cytoplasm of the apical parts of the cells, while in others, intense fluorescence was found throughout the cytoplasm or in the cell nuclei .
2.5. Intracellular Localisation of Notch4 by the Method of Immunogold Labelling with the Use of Transmission Electron Microscopy (TEM) The immunogold labelling method was used to reveal the localisation of Notch4 protein at the cellular level within colorectal adenocarcinoma tissues and in non-neoplastic cells from surgical margins. In non-neoplastic cells, electron-dense granules were detected in close proximity to the cellular membrane and in the apical part of the cells. In cancer cells, black granules were found within the cisterns of the rough endoplasmic reticulum and in mitochondria. In some cancer cells, granules indicating the presence of Notch4 were visible within the nuclei . In the fibroblasts of non-pathological colon tissue, Notch4 expression was found in the cell membrane, the nuclear membrane and the endoplasmic reticulum. Images showing the immunocytochemical localisation of Notch4 in fibroblasts are also shown .
The Notch signalling pathway was first characterised as playing an oncogenic role in T-cell acute lymphoblastic leukaemia (T-ALL) ; however, a similar role of the Notch receptors and Notch ligands has been found in breast cancer , lung adenocarcinoma , hepatocellular carcinoma and ovarian cancer . The potential mechanisms of carcinogenesis associated with the oncogenic activity of Notch involve biological events such as the control of the phenotype of cancer-initiating cells, upregulation of tumour-associated signalling factors such as p53, facilitation of tumour angiogenesis and invasion, and cell cycle regulation . It should also be noted that Notch may function as a tumour suppressor in other cancers, such as squamous cell carcinoma (SCC) and neuroendocrine tumours . The anti-tumour activity is related to the regulation of malignant transcription factors, activation of downstream suppressor genes and suppression of the cell cycle . A great number of studies have addressed the role of Notch4 in cancer, particularly the molecular mechanisms associated with it. The majority of studies have suggested that Notch4 expression is upregulated during the development of cancer. Moreover, this receptor is also known to be involved in the regulation of stem cell-like self-renewal, epithelial-mesenchymal transition (EMT), radio/chemoresistance and angiogenesis. Interestingly, the expression level of Notch4 differs between tumour types . The results of our study demonstrated that expression of Notch4 in colon adenocarcinoma tissue was clearly upregulated in comparison to that observed in the healthy tissue of the surgical margin. The results that we obtained using immunohistochemistry and immunofluorescence techniques revealed that Notch4 expression is associated with the cytoplasm and cell nucleus. Furthermore, by the use of the immunogold labelling method, we confirmed the presence of Notch4 in the cytoplasm and nucleus of tumour cells. Nuclear and nucleolar localisation of Notch4 has been found by Saini et al. in breast cancer cells. It probably stabilises the DNA repair machinery, thus allowing cells to recover from genotoxic stress damage . In colon adenocarcinoma cells, the black granules indicating the presence of the Notch4 antigen were localised in the cytoplasm. They were detected mostly in the vicinity of membranous organelles, including the endoplasmic reticulum and mitochondria. Tumour cells are marked by the fact that different signalling pathways are altered in relation to healthy cells. Moreover, these cells reside in a tumour-specific microenvironment where a network of interactions exists. Perhaps as a result of a disrupted protein transport system, a feature that is quite common in cancer cells, this protein may have been displaced into the mitochondria, as can be seen in the images showing its immunocytochemical localisation within the cancer cells . It is also possible that the Notch signalling pathway itself has been disrupted, and these pathway proteins have been incorrectly directed to other organelles, including the endoplasmic reticulum or mitochondria . It should be mentioned that in our cohort of patients, approximately 78% of colon adenocarcinoma specimens demonstrated high Notch4 protein expression, while low levels of immunoreactivity were found in only 22% of cases.
High Notch4 expression was markedly correlated with the histological grade of the tumour ( p < 0.001, Chi 2 test), depth of invasion ( p < 0.001, Chi 2 test with Yates’ correction), angioinvasion ( p < 0.001, Chi 2 test with Yates’ correction) and PCNA immunohistochemical expression ( p < 0.001, Chi 2 test with Yates’ correction). Interestingly, strong expression of Notch4 protein was noted in 24% of G1 tumours, 88% of G2 tumours and 97% of G3 tumours. These results may indicate that Notch4 plays an important role in colon adenocarcinoma progression and may be an identification biomarker for patients with a more aggressive form of this malignancy. In this context, it is worth noting that Notch4 expression was also associated with PCNA immunohistochemical expression ( p < 0.001, Chi 2 test with Yates’ correction). A high level of Notch4 reactivity was revealed in 24% and 89% of samples with low and high PCNA expression, respectively. PCNA, a non-histone nuclear protein, has a molecular mass of 36 kDa and is a specific marker of cell division. Its action is associated with DNA polymerase, and it is synthesised shortly before the S-phase of the cell cycle. However, this protein is also connected with the machinery associated with the DNA repair mechanism . Nevertheless, this protein could be as important as Ki67 in terms of prognosis. When planning our research, we also decided to examine the value of Notch4 expression in terms of 5-year survival in a cohort of patients stratified according to low and high levels of PCNA expression. In patients with high levels of PCNA expression, there was a statistically significant difference in estimated survival time between those with high and low levels of Notch4 expression (log-rank, p = 0.006). For example, patients with low expression of Notch4 had a median survival of 45 months, whereas patients with high expression had a median survival of 24 months. Evaluation of PCNA and Notch4 expression may therefore have significant clinical relevance by identifying patients who may have a significantly worse prognosis. Similarly, patients in the T1/T2 group showed a statistically significant difference in survival time. Patients characterised by a high level of Notch4 expression had a significantly lower survival time than the group with a low level of Notch4 expression (log-rank, p < 0.001). Similar results were obtained in the T3/T4 group (log-rank, p = 0.010). In the context of our study, interesting results have been obtained by Ahn et al., who demonstrated that in patients with hepatocellular carcinoma, high expression of Notch4 is correlated with low Edmondson grade, low AJCC T-stage, lack of microvascular invasion, absence of intrahepatic metastases and low serum AFP levels . In contrast, in patients with intrahepatic cholangiocarcinoma, high Notch4 expression correlates with high serum CA125 levels . In patients with oral squamous cell carcinoma, high expression of Notch4 is correlated with poor differentiation, advanced clinical stage, periosteal invasion and lymph node metastasis . Qian et al. demonstrated that the activation of Notch4 was related to the induction of gastric cancer growth in vitro and in vivo, while Notch4 inhibition using Notch4 siRNA had opposite effects . In patients with non-small cell lung cancer (NSCLC), Notch4 expression was positively associated with tumour size, lymph node metastasis (LNM), distal metastasis (DM), and depth of invasion (T).
Patients with a high level of this protein had significantly lower OS than patients with a low level of Notch4 expression . The poor clinical outcome of cancer patients with high expression of Notch4 is probably associated with its role in the mechanism of EMT, which is a very significant molecular event leading to cancer metastasis. Zhang et al. revealed that activation of Notch4 signalling, which is dependent on the activation of NF-kB, promotes the growth, metastasis, and EMT of tumour cells in prostate cancer . In melanoma and head and neck squamous cell carcinoma, Notch4 signalling induces EMT by stimulating the expression of EMT markers such as Vimentin and Twist1 and downregulating the expression of E-cadherin . Here, it is worth noting the role of Notch4 in melanoma. In this cancer, upregulation of Notch4 expression promotes metastasis through the regulation of Twist1 expression, which indicates a poor prognosis. Nevertheless, others have reported that high Notch4 expression enhances the expression of E-cadherin and attenuates malignant melanoma behaviour. Importantly, Notch4 may induce suppression of Snail2 and Twist1 through the downstream targets Hey1 and Hey2, and is non-canonically mediated in the WM9 and WM164 melanoma cell lines . The poor clinical prognosis in patients with high Notch4 might also be related to vascular mimicry (VM), a tumour microcirculation system imitating the layout of the embryonic vascular network to provide oxygen and nutrients to tumour cells that, importantly, is epithelium independent . Notch4 expression and VM have shown a positive association in NSCLC and HCC patients . Bao et al. revealed that the oncogenic circular RNA 7 stimulated Notch4 expression in HCC, enhancing VM development and inhibiting miR-7-5p expression . Suppressing Notch4 signalling in cancer may be beneficial, because Notch4 is frequently recognised as a crucial participant in oncogenesis. The varied functions of Notch4 signalling in cancer, as well as the possible outcomes and clinical utility of applying multiple Notch4-targeting treatment techniques, are likely to be further clarified in future studies.
4.1. Patients and Tumour Samples Colon tissue material collected from patients undergoing colon resection at the Municipal Hospital in Jaworzno between January 2014 and December 2015 with histopathologically confirmed colon adenocarcinoma was used for the study. Patients who received preoperative radiotherapy or chemotherapy, patients with serious complications or distant metastasis, patients undergoing resection for tumour recurrence, patients with adenocarcinoma in the setting of inflammatory bowel disease and patients with a histopathologically confirmed subtype other than adenocarcinoma were excluded from the study. Based on an established protocol, histopathological sections containing tumour fragments and adjacent tissue sections without tumour lesions were taken from each surgical specimen. The collected samples were fixed in formalin and embedded in paraffin blocks. In the next step, the paraffin blocks were cut, and sections were routinely stained with H&E to confirm the histopathological diagnosis. Sections containing tissue margins were also assessed. If tumour cells were found, the material was excluded from the study. To determine whether Notch4 protein had prognostic significance, patients were followed up for 5 years to estimate the 5-year survival rate. 4.2. Immunohistochemical and Immunofluorescence Staining Paraffin-embedded tissue blocks with formalin-fixed colon adenocarcinoma specimens and resected margins were cut into 4-µm-thick sections, mounted on Polysine slides, deparaffinised in xylene and rehydrated through a graded series of alcohol. For antigen retrieval, the tissue sections were treated with microwaves in a 10 mM citrate buffer (pH 6.0) for 8 min each. Subsequently, sections were incubated with antibodies to Notch4 (GeneTex, polyclonal antibody, Cat. No. GTX03453, final dilution 1:600, Irvine, CA, USA), which targeted a cleaved N-terminal epitope, and PCNA (GeneTex, polyclonal antibody, Cat. No. GTX100539, final dilution 1:600, Irvine, CA, USA). For visualisation of protein expression, the sections were treated with the BrightVision detection system (Cat. No. DPVB55HRP, WellMed BV, ’t Holland 31, 6921 GX Duiven, The Netherlands) and Permanent AP Red Chromogen (Dako LPR from Agilent Technologies, Code K0640). Mayer’s haematoxylin was used to counterstain the nuclei. In addition, the expression of Notch4 and PCNA was studied in sections of healthy mucosa from patients undergoing screening colonoscopy with no inflammatory or cancerous lesions. For the analysis of the results of the immunohistochemical staining, we adapted the immunoreactive score on the basis of previous publications . The scoring of Notch4 and PCNA expression was based on both the intensity and the extent of the immunohistochemical reaction. The intensity was graded as follows: 0, no signal; 1, weak; 2, moderate; and 3, strong staining. The frequency of positive cells was determined semiquantitatively by assessing the whole section, and each sample was scored on a scale of 0 to 4: 0, negative; 1, positive staining in 10–25% of cells; 2, 26–50%; 3, 51–75%; and 4, 76–100%. A total score of 0–12 was finally calculated and graded as: I, score 0–1; II, 2–4; III, 5–8; IV, 9–12. Grade I was considered negative, and grades II, III and IV positive. Grades I and II represented no or weak staining (low expression), and grades III and IV represented strong staining (high expression).
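For clarity, the scoring rule described above can be expressed as a short function. In this sketch, the total score is taken as the product of the intensity and extent components; this combination rule is an assumption inferred from the stated 0–12 range (a 0–3 intensity and a 0–4 extent can only reach 12 by multiplication), while the grade cut-offs follow the text.

```python
def immunoreactive_score(intensity: int, extent: int):
    """Immunoreactive score (IRS) as described in Section 4.2.

    intensity: 0 none, 1 weak, 2 moderate, 3 strong
    extent:    0 negative, 1 (10-25% positive cells), 2 (26-50%),
               3 (51-75%), 4 (76-100%)
    """
    if intensity not in range(4) or extent not in range(5):
        raise ValueError("intensity must be 0-3 and extent 0-4")
    score = intensity * extent  # assumed combination rule; total range 0-12
    if score <= 1:
        grade = "I"
    elif score <= 4:
        grade = "II"
    elif score <= 8:
        grade = "III"
    else:
        grade = "IV"
    expression = "low" if grade in ("I", "II") else "high"
    return score, grade, expression

# Moderate staining (2) in 51-75% of cells (3) -> (6, 'III', 'high')
print(immunoreactive_score(2, 3))
```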
The evaluation was carried out by two independent pathologists. Discrepancies were reassessed until consensus was obtained. Additionally, tissue sections treated with anti-Notch4 antibody and Dako Liquid Permanent Red (LPR) were visualised with a confocal fluorescence microscope (Zeiss LSM 980 with Airyscan 2; Zeiss, Germany). LPR fluorescence representing Notch4 protein was visualised with 592 nm excitation and 574–735 nm emission using Texas Red filter sets. The intensity of Notch4 expression in both non-neoplastic tissue and tumour tissue was determined using the Zen 2 software (blue edition) (Zeiss, Germany). 4.3. Immunogold Electron Microscopy For the study with the use of the immunogold labelling method, selected areas of non-neoplastic colon tissue from surgical margins and samples of colon adenocarcinomas (10 patients) were fixed in 4% paraformaldehyde in 0.1 M phosphate-buffered saline (PBS) for 2 h at room temperature and then washed several times in PBS. After washing, the specimens were dehydrated in a graded ethanol series and infiltrated in 2:1 ( v : v ) and 1:2 ( v : v ) ethanol/LR White mixtures for 30 min each on ice. Afterwards, the samples were infiltrated in pure LR White acrylic resin (Sigma Aldrich, Cat. No. L9774). Ultra-thin sections (70 nm) were cut with an RMC Boeckeler Power Tomo PC ultramicrotome with a diamond knife (45°; Diatome AG, Biel, Switzerland). Ultrasections were mounted on 200-mesh nickel grids coated with Formvar and immunolabelled. Sections on the grids were preincubated first for 30 min by floating on drops of 50 mM NH 4 Cl in PBS and subsequently blocked for 30 min on drops of 1% BSA in PBS. The grids were then incubated overnight (16–18 h) at 4 °C with the primary anti-Notch4 antibody (GeneTex, polyclonal antibody, Cat. No. GTX03453) diluted 1:20 in BSA. The bound antibodies were localised by incubating the sections for 1 h on 15 nm immunogold-conjugated goat anti-mouse IgG (BBInternational BBI Solutions, Sittingbourne, UK) diluted 1:100. Lastly, the grids were washed on PBS drops (five changes, 5 min each) and water (three changes, 3 min each) before staining with 0.5% aqueous uranyl acetate. In controls, the primary antibody was omitted. The grids were then air-dried and analysed in a TECNAI 12 G2 Spirit BioTwin (FEI Company) transmission electron microscope at 120 kV. Images were captured using a Morada CCD camera (Gatan Rio 9, Pleasanton, CA, USA). 4.4. Statistical Analysis The associations between the IHC expression of Notch4 and clinical parameters were analysed statistically with Statistica 9.1 (StatSoft, Cracow, Poland). All quantitative variables were described as medians and ranges. The Chi 2 test and the Chi 2 test with Yates’ correction were used to compare the analysed groups. The Yates correction was applied to 2 × 2 tables when at least one of the cells had an expected count of less than 10. Kaplan–Meier analysis and the log-rank test were used to verify the relationship between the intensity of Notch4 expression and the 5-year survival rate of patients. The results were considered statistically significant when p < 0.05. Differences in the intensity of the signal indicating the presence of Notch4 protein between the groups, i.e., non-pathological tissue and tissues previously identified by IHC as showing low or high expression, were assessed with an ANOVA test.
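As an illustration of the survival analysis just described, the Kaplan–Meier comparison and log-rank test can be reproduced with the open-source lifelines package. The column names (`months`, `event`, `notch4_high`) and the input file are hypothetical; the authors’ actual analysis was performed in Statistica.

```python
# Sketch of the Kaplan-Meier / log-rank comparison of Notch4-high vs -low
# patients. Hypothetical per-patient table: follow-up in months, death event
# indicator (1 = deceased), and a binary Notch4 expression flag.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("notch4_cohort.csv")          # hypothetical file
high = df[df["notch4_high"] == 1]
low = df[df["notch4_high"] == 0]

kmf = KaplanMeierFitter()
ax = kmf.fit(high["months"], high["event"], label="Notch4 high").plot_survival_function()
kmf.fit(low["months"], low["event"], label="Notch4 low").plot_survival_function(ax=ax)

res = logrank_test(high["months"], low["months"],
                   event_observed_A=high["event"],
                   event_observed_B=low["event"])
print(f"log-rank p = {res.p_value:.4f}")
```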
Based on the results obtained in the Cox regression model, Notch4 has been identified as a protein connected with reduced 5-year survival of colon adenocarcinoma patients. The multivariate analysis revealed that the grade of histological differentiation and the immunohistochemical expression of Notch4 in colon adenocarcinoma tissue can be considered independent prognostic factors. It should be pointed out that our study is the first to demonstrate the immunohistochemical expression of Notch4 in colon adenocarcinoma tissues in patients from a European population. Furthermore, it also reveals the prognostic value of Notch4 expression in patients stratified by criteria that are relevant from a clinical oncology point of view; in this case, PCNA expression level, depth of invasion (T-value) and angioinvasion were taken into account. Our work is also the first to show the localisation of Notch4 in tumour tissue at the electron microscopic level using the immunogold labelling method and a confocal fluorescence microscope. Nevertheless, our study has some limitations that need to be mentioned. The size of the studied cohort was limited, and the patients came from a single hospital, which may introduce a selection bias into the study. Future studies with larger cohorts, as well as in vitro experiments, should be conducted to understand the mechanism of Notch4 activity.
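To make the multivariate analysis mentioned above concrete, a minimal Cox proportional-hazards sketch is given below using lifelines; the covariate names are hypothetical placeholders for the factors listed in the text, and this is an illustration rather than the authors’ Statistica workflow.

```python
# Sketch of a multivariate Cox proportional-hazards model with the prognostic
# factors discussed above (hypothetical column names).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("notch4_cohort.csv")  # hypothetical per-patient table
covariates = ["months", "event", "notch4_high", "grade",
              "depth_t34", "angioinvasion", "pcna_high"]

cph = CoxPHFitter()
cph.fit(df[covariates], duration_col="months", event_col="event")
cph.print_summary()  # hazard ratios and 95% CIs for each covariate
```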
|
A Low-Cost Modular Imaging System for Rapid, Multiplexed Immunofluorescence Detection in Clinical Tissues
|
24b0d162-5f46-4822-8466-e1805eae84cc
|
10138925
|
Anatomy[mh]
|
Current cancer diagnosis methods comprise clinical examination, radiological imaging, and histopathological analysis of tissue biopsies and surgical resections, which provide insight into a patient’s type and stage of cancer . Physicians have long depended upon histopathology for the visualization and pathological interpretation of tissue biopsies. Pathological analyses of tumor biopsies have broad utility in cancer diagnosis, prognosis, and treatment stratification. Hematoxylin and eosin (H&E)-stained histologic sections are considered the gold standard by pathologists and can be used for a variety of applications, such as identifying malignant tumors, segmentation of glands in the prostate, grading of breast cancer pathology, and classification of early pancreatic cancer . The immunohistochemistry (IHC) method of chromogenic immunohistochemistry (CIH) is used to complement H&E staining, which stains the tissue morphology, by detecting the presence of specific protein markers for accurate tumor classification and diagnosis. While H&E and CIH stains provide enough information for some applications, there are many cases, such as tumor differentiation and tumor immune microenvironment (TIME) profiling, where more data are needed. In addition, conventional CIH is limited to a few markers per tissue section, and the chromogenic systems used for the staining saturate easily, restricting quantitative analysis . In these cases, labeling the cells with antibodies for immunofluorescence imaging can allow for multiplexing, increase the sensitivity and dynamic range, and provide additional information for further characterization . Even though immunofluorescence provides clinical value, it currently requires expensive imaging hardware, and the acquisition of a large number of fields of view to generate sufficient data can be very time-intensive. The ability to multiplex immunofluorescence markers enables studies that investigate cellular co-expression , cellular spatial relationships , and tissue heterogeneity , to name a few. In the field of immunotherapy, understanding the cellular composition and spatial distribution within the sample, referred to as spatial biology, has become important . Cancer treatments have benefited from profiling immune checkpoint inhibitors, which reduce T-cell inhibition and allow T cells to fight cancer cells . Cutaneous T-cell lymphoma (CTCL) is a type of cancer that starts in white blood cells called T cells (T lymphocytes), which typically help fight pathogens as part of the immune system . In CTCL, T cells develop abnormalities, causing them to attack the skin and cause rash-like skin erythema, patches of raised or scaly skin, and sometimes skin tumors . Unfortunately, the exact cause of CTCL is still unknown. As CTCL tissue samples contain high levels of T cells, they are a good positive control for markers such as CD3, CD8, and CD14 . Hence, we selected CTCL tissue samples as our model system to demonstrate detection of these markers, which vary from low to high abundance, and thereby the sensitivity of our imaging platform. To take full advantage of the clinical value of immunofluorescence, a robust, inexpensive, high-throughput imaging platform that can be deployed immediately to any laboratory or clinic, including those in low-resource settings, to image clinical tissue samples with immunofluorescence is highly desired.
To address this need, we have developed a robust, inexpensive (<$9000), and portable imaging platform for tissue samples, the Tissue Imager, that can be placed on the benchtop of any basic laboratory. Our Tissue Imager uses a 3D-printable design and widely available components to excite fluorescence of fluorophore-conjugated secondary antibodies, detected with an inexpensive 20-megapixel CMOS camera module coupled to a long-working-distance 10× objective, with sufficient spatial resolution to resolve individual cells and sufficient sensitivity to detect a wide range of protein abundance levels. We demonstrate with clinical patient samples that this imaging platform can obtain image resolution on par with a commercial epifluorescence microscope that is >10 times more expensive while, at the same time, providing a larger field of view to increase imaging throughput. Our low-cost, high-throughput, and portable platform can immediately benefit the scientific community and, eventually, the healthcare community as well.
2.1. System Design While existing low-cost microscopy platforms designed for biological fluorescence utilize the camera of cellphones with various illumination schemes such as on-axis epi-illumination , off-axis inclined illumination , butt-coupling , and total internal reflection , there have been limitations regarding spatial resolution, field of view, and the maximum number of spectral channels. To obtain sub-cellular spatial resolution (~1 μm) and multiplexed fluorescence images of clinical tissue biopsy samples mounted on glass microscope slides (25 mm × 75 mm × 1 mm), a device would need to feature an objective lens of reasonably high numerical aperture (NA ~0.3) as well as multiple spectral windows for illumination and detection of several different fluorophores. For the imager to be inexpensive and able to image samples in a high-throughput manner, it should be portable, low-cost, and easy to use by technicians with minimal training. Our Tissue Imager meets all these requirements to image tissue samples with multiplexed immunofluorescence staining, as illustrated in A–C. The tissue samples are first stained with antibodies, then imaged with the Tissue Imager, followed by analysis. The overall design of the Tissue Imager can be seen in D, with overall dimensions of 25 cm × 25 cm × 42 cm. A photograph of the assembled device is shown in E. Clinical tissue sample sections from pathology centers are typically placed on glass microscope slides of 25 mm × 75 mm; hence, our Tissue Imager was designed to accommodate this format on the sample stage. The images obtained with a 20-megapixel CMOS camera (3648 × 5472 px) correspond to a 1.8 × 2.6 mm 2 field of view (FOV) with a pixel size at the sample of 0.48 μm , a FOV large enough to image a typical human biopsy section such as skin tissue. To validate and benchmark our Tissue Imager, we acquired reference images with a Nikon Ti-1000E widefield microscope using a 10× objective with a similar sample pixel size of 0.65 μm. Tissue sample images were acquired from the top with a 10× long-working-distance objective (Mitutoyo Plan Apochromat, NA 0.28), followed by a six-position motorized filter wheel (five bandpass filters currently used, with center wavelengths 460 nm, 530 nm, 577 nm, 645 nm, and 690 nm) to spectrally select the fluorescence emission from each fluorophore type on the sample. The fluorescence was then imaged with a tube lens (f = 100 mm) onto the chip of a 20-megapixel CMOS camera and read out via a USB 3 interface compatible with most computers and operating systems. Fluorescence excitation was achieved using a ring-like structure above the sample that held five LEDs, each coupled to a condenser lens (f = 20 mm) and a cleanup filter (center wavelengths 365 nm, 460 nm, 520 nm, 585 nm, and 630 nm). The sample was placed below onto an xyz sample stage allowing for field-of-view positioning and focus adjustments. The entire setup was enclosed in a box made from black ¼”-thick laser-cut acrylic boards. This light-tight enclosure prevented external light from contaminating the resulting images. All components, including the optics, camera, LEDs, and structural supports, were integrated into a CAD model that could be manufactured on a larger scale at a low cost. To adjust and evaluate the illumination homogeneity, images of a reference microscope glass slide were taken for each channel. The remaining variations could be easily corrected through software, as seen in the H&E image obtained as an RGYB image .
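As a consistency check on the stated numbers, the effective magnification and sampling follow directly from the listed optics. The sketch below assumes a 2.4 µm camera pixel pitch (typical of 20-megapixel 1″ CMOS sensors) and the 200 mm reference tube length of Mitutoyo objectives; neither value is stated explicitly above, so both are assumptions.

```python
# Back-of-envelope check of the stated pixel size and field of view.
# Assumptions (not stated explicitly in the text): the Mitutoyo objective is
# designed for a 200 mm reference tube length, and the 20 MP CMOS sensor has a
# 2.4 um pixel pitch.
f_tube_mm = 100.0        # tube lens focal length used in the Tissue Imager
f_reference_mm = 200.0   # Mitutoyo design tube length (assumption)
nominal_mag = 10.0       # objective nominal magnification

effective_mag = nominal_mag * f_tube_mm / f_reference_mm   # 5.0x
pixel_pitch_um = 2.4                                       # sensor pixel (assumption)
sample_px_um = pixel_pitch_um / effective_mag              # 0.48 um, as stated

h_px, w_px = 3648, 5472                                    # 20 MP sensor format
fov_mm = (h_px * sample_px_um / 1000, w_px * sample_px_um / 1000)
print(f"magnification {effective_mag:.1f}x, pixel {sample_px_um:.2f} um, "
      f"FOV {fov_mm[0]:.1f} x {fov_mm[1]:.1f} mm")         # ~1.8 x 2.6 mm
```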
In addition to fluorescence imaging, our device also allows for the acquisition of brightfield images, as required for IHC- or H&E-stained samples. For this purpose, separate images with red (630 nm), green (520 nm), yellow (577 nm), and blue (460 nm) illumination were taken and merged into a final RGYB color image. 2.2. Evaluation of Specificity and Sensitivity To evaluate the performance of the Tissue Imager, we imaged fluorescent beads of various emission/detection ranges to validate all five spectral channels. After vortexing and diluting each 1 µm FluoSpheres™ Polystyrene Microspheres sample (blue/green, yellow/green, orange, red, and crimson) 1:2000 in PBS, 10 µL of sample solution was pipetted into Countess™ Cell Counting Chamber Slides (Invitrogen, Waltham, MA, USA, C10228). As shown in , each microsphere population was detected in the expected spectral channel. To evaluate potential spectral crosstalk between channels, each microsphere population was imaged in all channels. We found the fluorescence to be specific to the respective channels, demonstrating the specificity of the Tissue Imager and its ability to resolve beads as small as 1 µm in diameter . To determine the sensitivity of the Tissue Imager, Dragon Green (DG) intensity standard beads (Bangs Laboratories, Fishers, IN, USA, DG06M) of five different intensities (DG1–DG5) were imaged. This standard bead kit is typically used for fluorescence microscopy and flow cytometry calibrations. The standard beads were vortexed and diluted 1:10 in PBS-T (0.025% Tween20), then 10 µL of sample solution was pipetted into Countess™ Cell Counting Chamber Slides (Invitrogen, Waltham, MA, USA, C10228) for imaging. As shown in A, the beads were excited with the 460 nm LED and detected in the 530 nm channel, with fluorescence intensities increasing from DG1 to DG5 as expected. In B, the fluorescence intensity for each bead population was quantified using ImageJ and plotted to characterize the sensitivity and wide dynamic range (0.24–100% intensity) of the Tissue Imager. 2.3. Evaluation of an Immune Panel on CTCL Tissue Samples The next step was to profile immune markers in clinical tissue samples to demonstrate rapid imaging for a 4-plex protein detection panel. Using our CTCL model, we profiled CD3e, CD8, and CD14 using antibodies. CD3e and CD8 are T-cell markers, while CD14 has been used as a marker for monocytes and macrophages . The nucleus was stained with DAPI. The images obtained from the Tissue Imager were compared to H&E and to CD3e and CD8 IHC stains from serial sections of the same FFPE block. The CD3e and CD8 stains from the same section imaged on the Tissue Imager were also imaged on a Nikon Ti-1000E microscope with a 10× objective lens as a benchmark for immunofluorescence imaging . The images shown in A,B are representative of a total of seven serial sections that were stained, imaged, and analyzed. CD14 was not imaged on the Nikon microscope due to the absence of a suitable spectral channel. We note that we focused on the T-cell markers CD3e and CD8, which are more commonly used in studies of cutaneous T-cell lymphoma (CTCL). The absence of CD14 staining in the Nikon images does not invalidate the results of the study, as we were still able to demonstrate the Tissue Imager’s ability to detect multiple markers simultaneously and to compare its performance to a conventional microscope for CD3e and CD8 staining.
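Referring back to the brightfield mode described at the start of this subsection, the four single-illumination frames can be merged into an RGYB color image by tinting each grayscale channel with its display color and summing. This additive rule is an assumption for illustration; the authors’ exact blending procedure is not specified.

```python
# Sketch of merging four single-illumination brightfield frames into one RGYB
# color image (assumed additive tinting; not the authors' exact procedure).
import numpy as np

DISPLAY_COLORS = {
    "red":    (1.0, 0.0, 0.0),
    "green":  (0.0, 1.0, 0.0),
    "yellow": (1.0, 1.0, 0.0),
    "blue":   (0.0, 0.0, 1.0),
}

def merge_rgyb(frames: dict) -> np.ndarray:
    """frames: channel name -> 2D array scaled to [0, 1]. Returns HxWx3 RGB."""
    shape = next(iter(frames.values())).shape
    out = np.zeros((*shape, 3), dtype=float)
    for name, img in frames.items():
        out += img[..., None] * np.array(DISPLAY_COLORS[name])
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
frames = {name: rng.random((4, 4)) for name in DISPLAY_COLORS}
print(merge_rgyb(frames).shape)  # (4, 4, 3)
```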
As confirmed by the IHC staining (kindly provided by the UCI Dermatology Center), CD3e is highly abundant, while CD8 is less abundant ( C). This allowed us to demonstrate the Tissue Imager’s ability to detect protein markers of various abundance levels. The DAPI- (405 nm), CD3e- (488 nm), CD14- (594 nm), and CD8- (647 nm) stained CTCL tissue section was imaged within six seconds on the Tissue Imager. As seen in D, the intensities of twelve randomly selected positive cells in each channel were measured along with the local background to compare the signal-to-noise ratio (SNR) between images acquired on the Tissue Imager and the Nikon. As a negative control, tissues were stained with the secondary antibody only. A Bland–Altman plot of the SNR differences between the Tissue Imager and the Nikon microscope is shown in . After acquisition, the images were processed using ImageJ and analyzed with a CellProfiler image analysis pipeline ( A). The CellProfiler pipeline was validated by manually counting six 700 × 700 px regions of interest randomly selected throughout the tissue section. The manual counting was used to obtain the percentage of cells positive for each marker, which was compared to the counts detected in the CellProfiler pipeline. As shown in , there were no significant differences between the manual counts and CellProfiler counts for all three markers (CD3e, CD8, and CD14). By detecting the DAPI-stained nuclei, 8238 cells were found in this image ( B). The cells positive for each marker were then detected and quantified, with 51% of cells expressing CD3e, 16% expressing CD8, and 18% expressing CD14 ( C). The percentages of cells positive for each marker were then plotted for all images (n = 7), resulting in an average of 49%, 15%, and 12% of cells positive for CD3e, CD8, and CD14, respectively ( D). The CellProfiler pipeline also detected cells co-expressing both CD3e and CD8. On average, 9% of cells were CD3e/CD8-positive, 40% were CD3e-positive/CD8-negative, and 5.7% were CD3e-negative/CD8-positive ( E).
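The SNR and percent-positive metrics used above reduce to simple ratios. The sketch below shows one way they might be computed from per-cell measurements; the function names are ours, and the marker counts are back-calculated from the reported percentages purely for illustration.

```python
import numpy as np

def snr(cell_means, background_means):
    """SNR as defined in the text: mean cell intensity divided by mean
    local background intensity, per field of view."""
    return float(np.mean(cell_means) / np.mean(background_means))

def percent_positive(n_positive, n_total):
    """Share of DAPI-detected cells positive for a marker."""
    return 100.0 * n_positive / n_total

# Illustrative check against the single-image result above (8238 nuclei);
# counts are back-calculated from the reported percentages, not raw data:
for marker, n_pos in [("CD3e", 4201), ("CD8", 1318), ("CD14", 1483)]:
    print(f"{marker}: {percent_positive(n_pos, 8238):.0f}% positive")
```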
Here we have demonstrated that our Tissue Imager can achieve imaging performance on par with commercial epifluorescence microscopes for a 4-plex immunology panel in human CTCL FFPE tissues. The ability to detect co-expression of multiple protein markers in the same cell at the single-cell level is of high relevance in clinical pathology, particularly for profiling both the presence of T cells and the abundance of immune checkpoint proteins for patient stratification. The compatibility of Tissue Imager data with an automated marker-counting pipeline underscores the capabilities of this device.

The main limitation of this study was the relatively small sample size. The study only used tissue samples from a CTCL model, which may not represent the full range of tissues and diseases; a larger and more diverse set of tissue samples may be needed to further validate the findings. Further, as an epifluorescence microscope, the Tissue Imager requires thin sectioning of tissue samples, meaning that samples thicker than 10 µm cannot be imaged effectively. This limitation arises because epifluorescence microscopy lacks optical sectioning capability, which leads to high background noise when thicker tissues are imaged. Also, in its current form, the Tissue Imager does not have an objective turret, so magnification cannot be changed during imaging. Another limitation of the current setup is the lack of a white light source and RGB filters needed to reproduce the spectral response of a color camera, which limits its use for imaging stains such as H&E and IHC. The current method is to sequentially image with excitation and detection settings E630 nm/D645 nm, E585 nm/D577 nm, E520 nm/D530 nm, and E460 nm/D460 nm to acquire the red, yellow, green, and blue portions of a brightfield image, merged as an RGYB image. This approach can emulate brightfield imaging, but some color variations are visible compared to a true brightfield image due to the narrow filter bandwidths. The microscope also has limited sensitivity due to its relatively low numerical aperture objective, which would make it hard to image targets with low copy numbers such as RNA. Additionally, the Tissue Imager is not yet automated, and a motorized stage would be required for high-throughput imaging applications. Finally, if the Tissue Imager is to be used as a medical device in clinical settings, it would require a comprehensive review and approval process to ensure that it meets the necessary regulatory requirements.

In the future, some of these limitations could be addressed by incorporating additional features such as more spectral channels and potentially hyperspectral detection, a motorized/automated sample stage and objective turret, and possibly even fluorescence lifetime detection with time-of-flight-resolving consumer cameras. With additional spectral channels, hyperspectral detection, and/or fluorescence lifetime imaging microscopy (FLIM), users would be able to multiplex more highly and remove autofluorescence from images with FLIM analysis. With oligo-conjugated antibodies, users could multiplex beyond 3–4-plex in a single round of staining and imaging, and possibly use combinatorial labeling and FLIM for decoding to improve detection accuracy. Additionally, the light path could be modified to include a broadband light source and (phase) masks to enable dark-field and phase-contrast imaging.
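To make the brightfield emulation concrete, the sketch below shows one plausible way to merge the four sequentially acquired monochrome channels into an RGB image; the even split of the yellow channel between red and green, and the file names, are our assumptions rather than the authors' stated procedure.

```python
import numpy as np
import tifffile  # assumed I/O dependency; any TIFF reader would work

def merge_rgyb(r, g, y, b):
    """Merge four monochrome illumination images (red 630 nm, yellow 577 nm,
    green 520 nm, blue 460 nm) into one RGB brightfield-like image.
    Yellow stimulates both the red and green primaries of a color display,
    so the yellow channel is split evenly between R and G here; the 0.5
    weight is our assumption, not a value stated in the text."""
    r, g, y, b = (im.astype(np.float32) for im in (r, g, y, b))
    rgb = np.stack([r + 0.5 * y, g + 0.5 * y, b], axis=-1)
    rgb /= max(float(rgb.max()), 1e-9)        # normalize to [0, 1]
    return (rgb * 65535).astype(np.uint16)    # back to 16-bit

# Usage with hypothetical file names:
# channels = [tifffile.imread(f"bf_{c}.tif") for c in ("r", "g", "y", "b")]
# tifffile.imwrite("brightfield_rgyb.tif", merge_rgyb(*channels))
```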
Fluorescence quenchers such as TrueBlack could be used to quench tissue autofluorescence, thus increasing sensitivity, and novel computational tools could be leveraged to maximize the information obtained from the images. Specifically, deep learning-based image analysis approaches could improve the accuracy of cell segmentation in microscopy images.
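For orientation, the sketch below shows the kind of classical nucleus segmentation that such deep-learning tools would replace: Otsu thresholding plus a distance-transform watershed in scikit-image. It is a generic baseline under our own assumptions, not the authors' CellProfiler pipeline and not a deep-learning method; the function and parameter names are ours.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.measure import label
from skimage.segmentation import watershed

def segment_nuclei(dapi, min_distance=7):
    """Classical nucleus segmentation: Gaussian smoothing, Otsu threshold,
    then a distance-transform watershed to split touching nuclei."""
    smoothed = gaussian(dapi.astype(np.float32), sigma=2)
    mask = smoothed > threshold_otsu(smoothed)
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_distance,
                           labels=label(mask))
    markers = np.zeros(distance.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)  # labeled nucleus image
```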
4.1. Tissue Imager Design

The sample slides were illuminated with five different LEDs (365 nm, 460 nm, 520 nm, 585 nm, and 630 nm; 120° angle of emission) using a custom-designed 5-channel LED ring mounted above the sample stage. The LEDs were driven with constant-current LED drivers, resulting in outputs of 150 lm (460 nm), 115 lm (630 nm), 500 lm (520 nm), and 500 lm (585 nm). After collimation with aspherical lenses of 20 mm focal length (Thorlabs, Newton, NJ, USA), the LED emissions were spectrally cleaned with bandpass filters (365/10 nm, 460/30 nm, 520/20 nm, 585/20 nm, 630/20 nm) (Chroma, Bellows Falls, VT, USA). The LEDs were driven by individual DC–DC driver circuits to adjust the current (max 1000 mA each). Fluorescence was collected with a long-working-distance (WD 34 mm) 10× Mitutoyo Plan Apochromat objective (Thorlabs, Newton, NJ, USA) coupled to a 1” diameter achromatic tube lens of 100 mm focal length (Thorlabs, Newton, NJ, USA). The custom six-position filter wheel was actuated with a servo motor controlled by an Arduino Nano microcontroller, which was also used to control the LED drivers. Before imaging with a 20 MP monochrome CMOS camera (FLIR Blackfly, FLIR Systems, Goleta, CA, USA), bandpass filters were used to block scattered excitation light (450/50 nm, 530/30 nm, 577/25 nm, 645/30 nm, 690/50 nm) (Chroma, Bellows Falls, VT, USA). All electronics were powered by a 5 V, 3.5 A power supply. After 3D printing of the model (Ultimaker S5, Ultimaker B.V., Utrecht, The Netherlands), all relevant optical components were inserted and attached.

4.2. Optical Resolution Characterization

We took cross sections of multiple 1 µm fluorescent beads to characterize the optical resolution of our imager.

4.3. Pixel Size Calibration Measurements

A 10 mm ruler (R1L3S1P, Thorlabs, Newton, NJ, USA) was imaged with RGB settings for a brightfield image. Since we used a monochrome camera with filters, we emulated RGB image acquisition by sequentially illuminating with blue (460 nm), green (530 nm), and red (630 nm) light and acquiring in the corresponding channels. The image was then quantified using ImageJ by measuring the distance in pixels across 1 division (50 µm) or 2 divisions (100 µm). The µm/pixel values were then calculated and averaged.
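This ruler-based calibration is a one-line computation once the pixel distances have been measured in ImageJ; a sketch is below, with the pixel counts as made-up placeholders for the values actually read off.

```python
# Pixel-size calibration from the stage ruler (Section 4.3).
# The pixel distances below are hypothetical placeholders for the
# values measured in ImageJ; the second entries are ruler divisions.
measurements = [
    (104.2, 50.0),   # 1 division  = 50 um
    (208.7, 100.0),  # 2 divisions = 100 um
]
um_per_px = sum(um / px for px, um in measurements) / len(measurements)
print(f"calibration: {um_per_px:.3f} um/px")  # expected near 0.48 um/px
```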
4.4. Fluorescent Beads

Then, 1 µm FluoSpheres™ Polystyrene Microspheres of various colors (Invitrogen, Waltham, MA, USA, F13080, F13081, F13082, F13083, F8816) were vortexed and diluted 1:2000 in PBS before being pipetted into a Countess™ Cell Counting Chamber Slide (Invitrogen, Waltham, MA, USA, C10228). The Dragon Green Intensity Standard Kit with 5 intensities (Bangs Laboratories, Fishers, IN, USA, DG06M) (DG1–5), around 8 µm in diameter, was vortexed, diluted 1:10 in PBS-T (0.025% Tween20), and pipetted into a Countess™ Cell Counting Chamber Slide (Invitrogen, Waltham, MA, USA, C10228).

4.5. Preparation of FFPE Tissues

The University of California Irvine IRB approved this study for IRB exemption under protocol number HS# 2019-5054. All methods were carried out in accordance with relevant guidelines and regulations. All human cutaneous T-cell lymphoma (CTCL) cases were de-identified to the research team at all points and therefore considered exempt from participation consent by the IRB. Fully characterized human patient skin CTCL FFPE tissues were archived samples obtained from the UCI Dermatopathology Center. They were sectioned into 5 µm-thick slices using a rotary microtome, collected in a water bath at 35 °C, and mounted on positively charged Fisher Superfrost coated slides (Fisher Scientific, Waltham, MA, USA, 12-550-15). The tissue sections were then baked at 60 °C for 1 h. For antigen unmasking, slides were deparaffinized and rehydrated, followed by target retrieval with citrate buffer.

4.6. Antibody Staining

The samples were blocked with 10% BSA in PBS for 2 h at room temperature. Antibody solutions containing Rabbit anti-Human CD3e (Abcam, Cambridge, UK, ab52959), Mouse anti-Human CD8 (Abcam, Cambridge, UK, ab75129), and Goat anti-Human CD14 (LifeSpan, Providence, RI, USA, LS-B3012-50) antibodies in 1% BSA in PBS were subsequently added to the samples and incubated overnight at 4 °C. Following a PBS wash, antibody solutions containing fluorescently labeled Donkey anti-Rabbit Alexa 488 (Thermo Fisher, Waltham, MA, USA, A-21206), Donkey anti-Mouse Alexa 647 (Thermo Fisher, Waltham, MA, USA, A-31571), and Donkey anti-Goat Alexa 594 (Thermo Fisher, Waltham, MA, USA, A32758) antibodies in 5% serum (from the secondary antibody host species) and 1% BSA in PBS were added at room temperature for 1 h. Three 5 min washes at room temperature with RNase-free PBS were then performed, with the second wash containing 1:1000 Hoechst stain.

4.7. Image Acquisition and Data Transfer

The 1 µm fluorescence beads were imaged with a camera exposure time of 1000 ms at 365/460 nm excitation/emission, and a camera exposure time of 100 ms was used for all remaining channels. The Dragon Green Intensity Standard Kit (Bangs Laboratories, Fishers, IN, USA, DG06M) was imaged with an exposure time of 1000 ms in the 460/530 nm channel (excitation/emission). Tissue sections were imaged with exposure times of 300 ms for DAPI staining (excitation 365 nm, detection 460 nm), 1500 ms for the 488 nm (excitation 460 nm, detection 530 nm) and 647 nm (excitation 630 nm, detection 690 nm) channels, and 2000 ms for the 594 nm (excitation 585 nm, detection 645 nm) channel. For all images taken, the camera gain was set to 26 dB. Images were saved in 16-bit TIFF format for further processing. For analysis, the tissue images were cropped to 2300 × 2300 px. Validation images were acquired with an inverted Nikon Ti-1000E epifluorescence microscope using a 10× plan apochromat oil objective with a numerical aperture of 0.45. Samples were excited with a Spectra-X (Lumencor, Beaverton, OR, USA) LED light source at 395 nm (200 ms; 5% power), 470 nm (200 ms; 25% power), and 640 nm (200 ms; 50% power). Images were acquired with an Andor Zyla 4.2 sCMOS camera. The H&E image in was taken on a Nikon Eclipse E400 with a Nikon Plan Fluor 10×/0.30 DIC objective lens and a QImaging MicroPublisher 6 camera.

4.8. ImageJ Image Processing

The open-source software ImageJ was used to pseudocolor the images from each channel acquired on the Tissue Imager (E365 nm/D460 nm: blue; E460 nm/D530 nm: green; E520 nm/D577 nm: yellow; E585 nm/D645 nm: magenta; E630 nm/D690 nm: red). The channels were then merged to generate a merged image. Scale bars were generated using the calibration of 0.48 μm/px. The lookup tables (LUTs) were also adjusted in ImageJ with the same settings across all images that were compared. The SNR (signal-to-noise ratio) was calculated by dividing the intensity value of the protein by that of the background. Protein and background intensity values were averaged for each FOV (field of view). The percentage of positive cells was calculated by dividing the number of cells positive for the protein of interest by the total number of cells detected in the FOV.
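The pseudocoloring step above amounts to assigning each monochrome channel an RGB tint and summing. The sketch below mirrors that in Python, using the channel-to-color mapping stated in the text; the min-max normalization strategy and the dictionary keys are our assumptions, not the ImageJ procedure verbatim.

```python
import numpy as np

# Channel-to-tint mapping from Section 4.8 (blue, green, yellow,
# magenta, red); the keys are our shorthand for excitation/detection.
TINTS = {
    "E365/D460": (0.0, 0.0, 1.0),  # blue
    "E460/D530": (0.0, 1.0, 0.0),  # green
    "E520/D577": (1.0, 1.0, 0.0),  # yellow
    "E585/D645": (1.0, 0.0, 1.0),  # magenta
    "E630/D690": (1.0, 0.0, 0.0),  # red
}

def pseudocolor_merge(channels):
    """channels: dict mapping the keys above to 2D grayscale arrays.
    Each channel is min-max normalized, tinted, and additively merged."""
    shape = next(iter(channels.values())).shape
    rgb = np.zeros((*shape, 3), dtype=np.float32)
    for name, img in channels.items():
        img = img.astype(np.float32)
        img = (img - img.min()) / max(float(np.ptp(img)), 1e-9)
        rgb += img[..., None] * np.asarray(TINTS[name], dtype=np.float32)
    return np.clip(rgb, 0.0, 1.0)
```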
4.9. CellProfiler

Fluorescence signal intensity was quantified using the open-source software CellProfiler. Raw *.nd2 images from the Nikon and composite Tissue Imager images created with another open-source software, ImageJ 1.53c, were fed into a CellProfiler pipeline. In the pipeline, the nuclei were identified using the “IdentifyPrimaryObjects” module and then expanded to represent the cell bodies. Protein fluorescence was also identified with the “IdentifyPrimaryObjects” module. Raw channel images were rescaled with the “RescaleIntensity” module for accurate protein and background intensity measurements, which were obtained using the “MeasureObjectIntensity” and “MeasureImageIntensity” modules, respectively. Positive cell determination was done using the “RelateObjects” module.

4.10. Manual Counting

To validate the CellProfiler pipeline, positive cells were manually counted with the Cell Counter plugin in the open-source software ImageJ. Tissue Imager images were cropped to 700 × 700 px, with a total of six fields of view in various regions of the sample. The DAPI channel and the fluorescent channel labeling the protein of interest were merged using ImageJ. Any cell with a fluorescent signal indicative of the presence of the protein was manually marked as a positive cell with a single dot in the image and counted by the Cell Counter. The percentage of positive cells obtained via manual counting was then compared to the percentage of positive cells from the CellProfiler pipeline.

4.11. Statistical Analysis

Two-sided Student’s t-tests were performed for the comparison between manual counts and CellProfiler counts. For , each fluorescent bead population had 13 beads selected randomly throughout the image for quantification in ImageJ. For and , a total of 7 serial sections of CTCL tissue were stained, imaged, and analyzed. A total of 12 cells in each channel were randomly selected for fluorescence quantification through ImageJ for D. In D, the p value was >0.11 for the 488 nm channel and classified as not significant (n.s.). For , the p values for the Student’s t-test between counting methods were >0.75, >0.82, and >0.22 for CD3e, CD8, and CD14, respectively.
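For reference, this comparison of manual and CellProfiler counts can be reproduced with SciPy; the sketch below uses hypothetical placeholder values, not the study's data.

```python
from scipy import stats

# Percent-positive values per region of interest (six ROIs each);
# the numbers are hypothetical placeholders, not the study's data.
manual       = [50.1, 52.3, 48.7, 51.0, 49.5, 53.2]
cellprofiler = [49.8, 51.9, 49.4, 50.6, 50.2, 52.5]

t_stat, p_value = stats.ttest_ind(manual, cellprofiler)  # two-sided default
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # p > 0.05 -> n.s., counts agree
```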
4.12. Workflow Overview

A schematic overview of the workflow described above is depicted in .

In summary, the Tissue Imager described here is a low-cost (<$9000), simple yet sensitive, and highly versatile instrument (five fluorescence channels plus RGYB brightfield) whose design can be easily reproduced, making it a useful tool in settings such as academic laboratories. This device provides a low-cost platform for scientists to rapidly image clinical samples on lab benchtops or in any location with little space available, as well as an opportunity for students to gain knowledge and experience in engineering, instrumentation, and software development. Basic analysis modules are also available in ImageJ, providing users with the opportunity to learn about these algorithms and create their own Tissue Imager workflows. This device could also be used for other applications, such as tissue microarray imaging, with minimal modifications to enable high-throughput batch sample analysis.
Hemolytic-uremic syndrome: 24 years’ experience of a pediatric nephrology unit
Hemolytic-uremic syndrome (HUS) is a thrombotic microangiopathy characterized by the classic triad of hemolytic anemia, thrombocytopenia, and acute kidney injury. In the last decade, there has been great progress in the understanding of HUS etiology and pathophysiology. The role of complement regulation was unveiled, and a new classification of HUS based on its pathogenic mechanisms, instead of the traditional classification into diarrhea-positive HUS (D+HUS) and diarrhea-negative HUS (D-HUS), was proposed. The 2016 International Hemolytic Uremic Syndrome Group classification is organized by etiology as: 1) infection-induced HUS (Shiga toxin-producing Escherichia coli, Streptococcus pneumoniae, Influenza A, human immunodeficiency virus); 2) HUS with coexisting diseases or conditions (bone marrow or solid organ transplantation, systemic malignancies, autoimmune conditions, drugs, malignant hypertension); 3) HUS due to cobalamin C disorder; and 4) HUS due to alternative complement pathway dysregulation and mutations in the diacylglycerol kinase ε (DGKE) gene. HUS associated with Shiga toxin-producing Escherichia coli (STEC) is the most frequent form, representing 85 to 90% of all pediatric cases. Invasive infections by Streptococcus pneumoniae account for approximately 5% of cases, and genetic mutations associated with dysregulation of the alternative complement pathway account for 5-10% of patients. While there are multiple triggers leading to HUS, all of them are responsible for the same pattern of endothelial cell damage in the microvasculature of multiple organs, mainly the kidney and the brain, and for similar clinical and biological abnormalities. HUS is rare, but it can be a severe illness with important morbidity and mortality. Our clinical practice follows the recommendations of the international consensus by Loirat et al. The introduction of eculizumab, the first drug to effectively block complement activation, has greatly changed the treatment and outcome of patients with HUS due to alternative complement pathway dysregulation. Early recognition of the disease presentation and prompt initiation of treatment are vital to minimize organ injury. The aim of this study was to characterize the clinical features, flares, etiology, management, morbidity, and mortality of HUS in pediatric patients admitted to our unit over the past 24 years.
This was a retrospective, descriptive study of all patients admitted with a diagnosis of HUS to the Pediatric Nephrology Unit of a Portuguese tertiary hospital during the 24-year period between January 1996 and March 2020. All medical files were reviewed, and demographic, clinical, and laboratory data concerning etiology, severity, management, and patient outcome were collected. Patients with no data available were excluded from the study. Clinical data from the last clinical visit were obtained. Minor sequelae were defined as the presence of high blood pressure (HBP) and/or non-nephrotic proteinuria with a glomerular filtration rate (GFR) ≥ 90 mL/min/1.73 m². Chronic kidney disease (CKD) was defined by a GFR lower than 90 mL/min/1.73 m². GFR was estimated using the Schwartz equation: GFR (mL/min/1.73 m²) = 0.413 × height (cm) / serum creatinine (mg/dL). STEC was identified by serological and/or microbiological studies of stool samples and by polymerase chain reaction (PCR) after 2012. Streptococcus pneumoniae was identified using microbiological studies or urine immunochromatography. Genetic tests have been performed since 2015 in the Laboratory of Molecular Hematology of Coimbra University Hospital Center and in two other private laboratories of molecular diagnostic testing. There was no uniformity in genetic testing or in the gene panels used. The variants were reported according to the American College of Medical Genetics and Genomics (ACMG) guidelines. In our institution, eculizumab has been available since 2015 and was started in the first 24-48 hours in cases of severe HUS and/or a high index of suspicion of alternative complement pathway dysregulation. Patients treated with eculizumab received the quadrivalent meningococcal conjugate and serogroup B meningococcal vaccines, and prophylactic antibiotics. Genetic testing and eculizumab were covered by the Portuguese National Health Service. Patients were divided into two historical cohorts for better characterization and evaluation of the follow-up period: Group A included patients admitted before 2015, when genetic testing and eculizumab were not available in our institution, and Group B included patients admitted since 2015. One patient was included in both groups because he had a first episode of HUS before 2015 and relapsed in the second time period, when genetic testing was performed. The study was submitted to and approved by the Ethics Committee of our institution.
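As a worked example of the Schwartz equation above (a minimal sketch; the patient values are invented for illustration):

```python
def schwartz_egfr(height_cm: float, creatinine_mg_dl: float) -> float:
    """Bedside Schwartz estimate of GFR, in mL/min/1.73 m^2."""
    return 0.413 * height_cm / creatinine_mg_dl

# Hypothetical toddler at admission: height 88 cm, creatinine 5.0 mg/dL
print(f"{schwartz_egfr(88, 5.0):.1f} mL/min/1.73 m^2")  # ~7.3 -> severe AKI
```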
Epidemiological Data

During the study period, there were 29 patients with HUS admitted to the Pediatric Intensive Care Unit of our hospital and followed by our Pediatric Nephrology Unit ( ). Four patients were excluded due to lack of data. Twenty-five patients with 26 events met the inclusion criteria, 64% of whom were male, with a median age at diagnosis of 2 years (2 months-17 years). Group A had 19 individuals and Group B had 7 patients ( ). None of the children had a family history of HUS. The median length of stay at the hospital was 28 days (4-191 days).

Clinical and Laboratory Presentation

As shown in , the most frequent clinical manifestations were diarrhea (76%), vomiting (68%), fever (48%), and edema (32%). Only three patients had bloody diarrhea. During the hospital stay, 96% developed HBP, 72% developed oligoanuria, and 16% had neurological impairment (n=2 seizures, n=2 somnolence). One patient died during the acute phase of the disease due to neurologic involvement, which was present from admission (no etiology was identified). Laboratory findings on admission were: average hemoglobin of 6.3 ± 1.35 g/dL [3.2-9.0 g/dL]; thrombocytopenia in all but one patient, with a mean platelet count of 58,188/µL ± 46,638/µL [11,000-193,000/µL]; and mean creatinine of 5.0 ± 3.8 mg/dL [0.7-15.7 mg/dL], corresponding to an average GFR (N=14) of 14.5 ± 10.5 mL/min/1.73 m² [3.6-35.2 mL/min/1.73 m²].

Treatment

During the acute phase, 14 patients (56%) required renal replacement therapy ( ): 11 patients from Group A and 3 patients from Group B. Seven (50%) had peritoneal dialysis, three (22%) had continuous venovenous hemodiafiltration (CVVHDF), three (22%) had both therapies, and one patient (7%) required CVVHDF and hemodialysis. The median duration of peritoneal dialysis was 22 days [6-180 days] and that of CVVHDF was five days [3-19 days]. Eculizumab was administered to six of the seven patients treated since 2015. Three of the patients who received eculizumab needed renal replacement therapy during the acute phase. In contrast, eleven (58%) of the Group A patients needed renal replacement therapy; these included six of the seven patients with infectious etiology (86%). As for other treatments, plasmapheresis was used in two (8%) patients (one had anti-factor H antibodies and the other had no identifiable etiology; one was from Group A and the other from Group B), and 15 (60%) required erythrocyte transfusion. None of the patients required cardiovascular support. Mechanical ventilation was required in five patients (20%), all belonging to Group A.

Etiologic Investigation

The etiology was identified in nine patients (36%) ( and ). Seven patients had an infection: five with Shiga toxin-producing E. coli and two with S. pneumoniae (one identified in pleural effusion and the other with S. pneumoniae antigens detected in urine). Six of the seven patients from Group B had genetic analysis performed, and five of these patients had an identifiable variant. Two patients were diagnosed as having complement dysregulation HUS: one patient with a complete homozygous deletion of complement factor H-related protein 1 (CFHR1) and complement factor H-related protein 3 (CFHR3), who had anti-factor H antibody production, and one patient with a likely pathogenic homozygous variant in the C3 gene.
The other three patients harbored variants of unknown significance (VUS): a complete homozygous deletion of CFHR1 and CFHR3 without anti-factor H antibody production, a heterozygous mutation in the gene encoding complement factor I (CFI), and a heterozygous variant in the complement regulatory gene factor H (CFH) ( ). None of the patients with neurologic involvement (n=4) had an identifiable etiology, but only two of them had a complete complement dysregulation investigation performed. One of these patients had a heterozygous mutation in the CFH gene, which is considered a VUS.

Follow-up

One patient died during the acute phase of the disease. There were no deaths during the follow-up period (24/24 patients). Data from the last clinical visit were available for 20 patients ( ). The median follow-up duration was 6.5 years (3 months-19.8 years). In Group A (N=19), follow-up data were available for 15 patients, with a median duration of 10.5 years (4.8-19.8 years). At the last clinical visit, 33% (n=5) of patients had no renal sequelae, 27% (n=4) had minor sequelae (three patients with HBP and non-nephrotic proteinuria, one with HBP only), 40% (n=6) had CKD, and none of the patients had neurologic sequelae. Only one patient from this cohort relapsed (7 and 9 years after diagnosis). This patient (also included in Group B) had a heterozygous mutation in CFI combined with risk haplotypes in CFH and MCP. Only this patient underwent kidney biopsy, which disclosed a membranoproliferative glomerulonephritis type I. From this group, 4/15 patients (27%) remained dependent on renal replacement therapy after the acute phase, and all of them underwent kidney transplantation (KT). Among the patients with CKD, 33% (n=2) are receiving conservative treatment and 67% (n=4) started peritoneal dialysis and underwent KT. In the latter group, three patients had STEC-HUS and one had an S. pneumoniae infection. The average time between diagnosis and KT was 6.4 ± 2.7 years [3.5-10 years]. There was no recurrence of disease in the renal graft of any patient. All patients achieved hematologic remission.

In Group B (N=7), follow-up data were available for six patients, with a median duration of 11 months (6.2 months-4.2 years). At the last follow-up evaluation, 33% (n=2) had no sequelae (one of these patients had anti-factor H antibodies and eculizumab was not administered), 67% (n=4) had HBP, 33% (n=2) had CKD, and none of the patients underwent KT. From this group, one patient (17%) remained dependent on renal replacement therapy (hemodialysis) after the acute phase but is currently under conservative treatment. Both patients with CKD (N=2) are under conservative therapy. There were no adverse drug reactions during the acute phase. The only adverse reaction associated with eculizumab use occurred during the follow-up period and was characterized by edema of the lower limbs during administration, which improved with antihistamines and a slower infusion. There were no HUS relapses in these patients.

In both groups, oligoanuria was part of the clinical presentation in six of the seven patients (86%) who developed CKD and in nine of thirteen patients (69%) who did not develop CKD. All patients achieved hematologic remission. In the group of patients with infection-induced HUS (N=7), all cases were diagnosed before 2015, all patients required renal replacement therapy during the acute phase of disease, all had sequelae, and four of the seven developed CKD and underwent KT.
Of the two patients with complement dysregulation HUS, one developed HBP and neither developed proteinuria or CKD. The patient who developed HBP was treated with eculizumab. Two patients continued to receive eculizumab periodically after the acute episode (the patient with a homozygous C3 variant and the patient with a complete homozygous deletion of CFHR1 and CFHR3 without anti-factor H antibody production but with complement consumption), with good response. The first patient has been receiving eculizumab for 30 months and the other for 6 months.
During the 24-year period of our study, there were 29 cases of HUS referred to our hospital. In Portugal, the incidence of HUS is unknown. However, our Pediatric Intensive Care Unit (PICU) has the highest admission rate in the country, which leads us to think that the low number of cases throughout the years probably reflects the low incidence of this disease. A Norwegian study reports an annual incidence of 0.5 cases per 100,000 children. The overall incidence of HUS in the United Kingdom and Ireland is 0.71 per 100,000 children under 16 years of age, and the incidence is similar across Europe, Australia, and North America. The classic triad combining thrombocytopenia, hemolytic anemia, and acute kidney injury is the typical hallmark of this condition. Only one patient did not present with thrombocytopenia, which may be absent at presentation in 15-20% of patients. The median age of HUS in our cohort was two years, in accordance with other studies indicating that HUS is more frequent in children of preschool age. However, this is mainly true for infection-induced HUS, while the onset of HUS associated with complement dysregulation occurs in children almost as frequently as in adults. In our study, mortality was 4%, which corresponds to the rates in developed countries, where HUS mortality is below 5%. In most cases, HUS presents abruptly and with nonspecific clinical symptoms, including reduced urine output and edema, usually related to a triggering infectious event. Some clinical manifestations may raise suspicion of the underlying etiology, but in the great majority of cases the clinical picture is not sufficient to identify the condition. STEC-HUS frequently follows prodromal bloody diarrhea, which was present in only one of the five cases with confirmed STEC infection. All the remaining cases had non-bloody diarrhea. Extrarenal manifestations are thought to occur due to multisystem thrombotic microangiopathy. Neurologic involvement is the most common life-threatening extrarenal manifestation (3-26%). In complement dysregulation-associated HUS, extrarenal manifestations occur in approximately 20% of cases, with neurologic involvement being the most common, estimated at 10%. Nevertheless, in our cohort, none of the four patients with CNS involvement had an identified etiology. However, only two of them had genetic testing performed, and while one of them had a heterozygous mutation in the CFH gene, this is considered a VUS. The development of HBP in the acute phase is frequent, as seen in our case series (96%). Only 36% of the patients had an identifiable etiology, with STEC-HUS being the most common infection-induced HUS (71%) and representing 56% of all cases with an identifiable etiology. On the other hand, 83% of the patients with a genetic test performed had an identified variant in complement regulatory genes, although only one was considered a pathogenic variant (the homozygous C3 variant). Recent functional studies suggest that the variant detected in the CFI gene [c.530T>A, p.(Asn177Ile)] is a causative variant, allowing its classification as a likely pathogenic variant instead of a VUS. Moreover, although a complete homozygous deletion of CFHR1 and CFHR3 is not by itself considered pathogenic, it can be considered a risk factor for the development of HUS, and in the majority of cases it is associated with the production of anti-factor H antibodies, as occurred in one of our patients.
Other cohorts have described STEC as causing 85-95% of cases in children and genetic dysregulation of the alternative complement pathway as causing 5-10% of cases . The low number of cases with infectious etiology in our sample is probably due in part to limited access to STEC identification techniques, namely the unavailability of polymerase chain reaction techniques for some patients. In addition, these low numbers also reflect the lack of STEC outbreaks in our country, making HUS an even rarer disease. In our cohort, the seven patients with HUS of identified infectious etiology all needed renal replacement therapy and presented sequelae. All of these patients were admitted before genetic testing and eculizumab were available. For this reason, some of them could have shared a genetic etiology and might therefore have benefited from treatment with eculizumab, which could have led to a better evolution and disease prognosis. Additionally, it is now known that the Shiga toxin contributes not only to a proinflammatory and prothrombotic status but also induces complement activation, suggesting that eculizumab may be useful for patients with STEC-HUS , - . Another factor to consider is that in some patients it was not possible to perform a complete etiologic investigation because of the methodological deficiencies mentioned earlier, so some patients with infectious etiology and good evolution may have been missed. Finally, as ours is a tertiary care hospital, the most serious cases are referred to us. In our cohort, there was no recurrence of disease after KT , , . We emphasize that all the kidney-transplanted patients had an identified infectious etiology for HUS, which is consistent with a lower likelihood of recurrence, although a complement dysregulation study was not completed for all patients. Before the use of eculizumab, outcomes of complement dysregulation-associated HUS were quite poor . Although none of the patients who received eculizumab in our study has required KT so far, we cannot draw any conclusions, as this is a very small series of cases. Genetic testing for complement pathway study was not available before 2015, making it difficult to understand whether the poor prognosis of cases without a known etiology was related to complement dysregulation. All patients were admitted to the Pediatric Intensive Care Unit, given the possible complications associated with this entity. Timely management of HUS is the most important factor in these cases, requiring prompt transfer to a referral treatment center to provide optimal treatment . Renal replacement therapy is needed in 50-70% of cases in the acute phase of HUS, as seen in our cohort (56%), but there is no clear benefit of a specific type of renal replacement therapy; it should therefore be chosen according to the center's experience and the patient's condition , , . In this cohort, most patients in both groups had sequelae (67%), before and after eculizumab availability, a proportion twice as high as that described in the literature. Available data reveal that most patients recover kidney function, but up to about 25% evolve with sequelae, most frequently hypertension, proteinuria, and CKD , . Furthermore, six of seven patients (86%) who developed CKD presented with oligoanuria, reflecting the poorer prognosis of these patients . HUS patients must have long-term follow-up by a pediatric nephrologist, since CKD can occur years after the renal insult .
Children have a large renal functional reserve, and the non-affected nephrons can compensate for those affected by the insult. It is highly recommended to follow these patients carefully in order to diagnose minor sequelae and implement renal protective strategies. Genetic testing has helped to increase the identification of HUS etiology. This has an impact on patient prognosis, as it can help to identify a greater proportion of patients with complement pathway dysregulation HUS who may benefit from prolonged treatment with eculizumab . Nevertheless, we must discriminate between pathogenic and likely pathogenic variants, as not all of them are known to be responsible for the development of HUS. In our study, only one of the five identified variants was pathogenic.
Over the last two decades there has been significant progress in the diagnostic and therapeutic approach to patients with HUS, which makes comparisons across eras difficult. Nonetheless, this study is relevant because it presents data from a representative series of patients over a long follow-up period. This series confirms the high morbidity of HUS. Given the severity of this disease, illustrated by the results of this cohort, it is essential to ensure early recognition of these patients and their transfer to a center with pediatric intensive care and nephrology units capable of performing a detailed etiological investigation, providing renal replacement therapy as necessary, and, when indicated, prompt treatment with eculizumab. Long-term follow-up is required, even in patients who seem to recover completely from the acute phase. The influence that many genetic variants have on HUS development is still an open issue, and further understanding in this area is needed. Limitations of the study This study is unique in that it had a long-term follow-up of 24 years and describes two different eras of HUS treatment - before and after eculizumab use. However, it has some limitations, most notably the lack of data for some of the early patients admitted to our unit, which is related to the retrospective nature of the study. Also, genetic testing and extended complement studies have only been available since 2015, which may have led to some diagnoses of HUS due to alternative complement pathway dysregulation being missed.
Human spinal cord tissue is an underutilised resource in degenerative cervical myelopathy: findings from a systematic review of human autopsies
Degenerative cervical myelopathy (DCM) is a disabling neurological condition in which degenerative changes to the cervical spine stress and injure the spinal cord . It is considered the most common spinal cord condition worldwide , with a recent meta-analysis of imaging studies estimating a prevalence of 2.3% . DCM is also known around the world by many different names. The term DCM was proposed to unify terminology , as an umbrella term for subtypes of pathology such as cervical spondylotic myelopathy (CSM) and ossification of the posterior longitudinal ligament (OPLL), and as a replacement for synonymous terms such as cervical stenosis or disc herniation with myelopathy. This has recently been endorsed in a global consensus process called AO Spine RECODE-DCM . Currently, the pathophysiology of DCM is poorly understood . In DCM, chronic compression of the spinal cord by degenerative and aberrant structures leads to both white and grey matter damage. This leads to progressive neurological dysfunction, such as sensory deficits including hypoaesthesia, paraesthesia and allodynia, loss of dexterity, incontinence and tetraplegia . However, the precise mechanisms through which compression causes this damage are unclear. Furthermore, although approximately 1 in 5 adults have asymptomatic spinal cord compression on MRI, only a proportion progress to DCM, indicating that in most cases spinal cord compression does not cause DCM . Most mechanistic insights have arisen from a small number of pre-clinical studies, including goat , rabbit , rodent or canine models . Generally, injury has been replicated by insertion of screws, balloons or expandable polymers. One exception is the twy/twy mouse model, in which hyperostosis causes a high cervical stenosis. In these studies, macroscopic findings include venous congestion, ischaemia and oedema . Cellular changes demonstrated in animal models include loss of motor- and interneurons, axon degeneration, gliosis and demyelination [ , , , , , , ]. Overall, however, it has been difficult to simulate a truly chronic injury with the diverse range of degenerative features (e.g. anterior and posterior compression) seen in DCM. Human autopsy therefore presents an important alternative. While a series by Ito et al. is well cited , the clinical literature on human DCM tissue has not been systematically searched or aggregated, and it is uncertain whether other sources exist. Therefore, the objective of this study was to systematically identify studies with histological findings of DCM from human spinal cord specimens, and to aggregate their findings. We also aimed to compare their findings to existing pre-clinical studies.
Search strategy A search strategy was developed which combined existing search filters for DCM , with synonyms for autopsy, cadaver and histopathology, with oversight from a medical librarian. The search was performed using OVID (Wolters Kluwer, Netherlands) from inception to 6 May 2022, and applied to MEDLINE and Embase. The search was prospectively registered with PROSPERO (CRD42021281462, Supplementary Data ). The search strategy as applied to MEDLINE and Embase can be found in Supplementary Data . Study selection The sensitive search strategy yielded 4127 records after removal of duplicates. Titles and abstracts were independently screened by at least two reviewers out of a group of eight (ED, SB, AC, KCM, LJ, UN, AS, AJT) using blinding via Rayyan . This was preceded by a pilot screen of 193 records (5% of the total), which were screened by all eight reviewers to ensure concordance and to resolve any potential misunderstandings over inclusion and exclusion criteria. Disagreements were resolved by consensus or discussion with a senior reviewer (BD). Inclusion and exclusion criteria Primary research studies which included findings from the spinal cord specimens of humans with DCM were included. This included cervical spondylotic myelopathy (CSM) and cervical myelopathy secondary to ossification of the posterior longitudinal ligament (OPLL). Articles published in a language other than English or without full text were excluded. Data extraction and analysis Data were extracted from included studies using a piloted proforma which included study details, study type, patient demographics, diagnosis, methods and pathological findings on autopsy. For the purpose of this review, ‘autopsy’ refers to a single human spinal cord examined histologically. As most included studies were case reports or case series, the Joanna Briggs Institute (JBI) critical appraisal tools were used to assess the quality of included studies. For analysis, pathological findings for autopsies in case reports and case series were categorised into demyelination, axon loss, necrosis, cavitation, haemorrhage, gliosis and neuronal loss. These overarching categories were taken from the literature and chosen to aid comparison with pre-clinical studies . Subsequently, each study was scored according to whether a finding within each subgroup was reported (see the sketch following this section). Graphs were produced in R using the ggplot2 package . Ninety-five percent confidence intervals were estimated using a binomial calculation. Schematics were created using Inkscape ( http://www.inkscape.org/ ). Public involvement This systematic review aligns with the AO Spine RECODE-DCM, Research Priority number 5, investigating the biological basis of DCM . This priority was established with people living with DCM . The conduct of this individual review did not involve members of the public.
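To make the scoring step concrete, the sketch below shows one way the per-study presence/absence coding described above could be represented, using the seven predefined categories. The study entries are invented for illustration and do not reproduce data extracted from any included study.

```python
# One way to encode the per-study scoring described above: each study is marked
# True/False for whether it reports a finding in each predefined category.
CATEGORIES = ["demyelination", "axon_loss", "necrosis", "cavitation",
              "haemorrhage", "gliosis", "neuronal_loss"]

# Illustrative (invented) entries -- not actual extracted data.
studies = {
    "study_01": {"demyelination": True, "axon_loss": True, "necrosis": False,
                 "cavitation": True, "haemorrhage": False, "gliosis": True,
                 "neuronal_loss": True},
    "study_02": {"demyelination": False, "axon_loss": False, "necrosis": True,
                 "cavitation": False, "haemorrhage": False, "gliosis": False,
                 "neuronal_loss": True},
}

# Tally how many studies report a finding in each category.
counts = {c: sum(s[c] for s in studies.values()) for c in CATEGORIES}
print(counts)
```

A tally of this form is all that is needed to drive the prevalence plots and confidence intervals reported in the results.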
Study summary Search results Our search identified a total of 5532 records (2308 in MEDLINE, 3224 in Embase, 5 from other sources), with 4127 remaining after deduplication. A total of 61 articles were selected for full-text screening, of which 19 were included in the final analysis. A full Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow chart is shown in Fig. . Study characteristics and patient demographics This systematic review included 13 case series and 6 case reports. A total of 150 autopsied patients were included, of whom 71% were male and 29% female, with an average age at death of 67.3 years. Patient diagnoses within the umbrella of DCM included cervical spondylotic myelopathy (CSM, 13 papers or 68%) and ossification of the posterior longitudinal ligament (OPLL, 6 papers or 32%). An overview of study properties and patient characteristics is shown in Table . Pathological findings Neuronal loss The most common pathological finding was loss of neuronal cell bodies, with 17 studies and a total of 71 autopsies reporting this (46% of autopsies on CSM and 59% of autopsies on OPLL). The primary location of neuronal loss was variable. A total of 12 studies reported that neuronal loss appeared to primarily affect the anterior horns, whereas 5 studies suggested the posterior horns, of which 2 studies also noted lateral horn involvement (Fig. ). There are some indications that the site of neuronal loss may be associated with disease severity. For instance, one case series which autopsied seven patients with cervical spondylotic myelopathy (CSM) found that the anterior horns were affected in all patients, but only in the most severe cases was neuronal loss also found in the posterior horn . Another study correspondingly indicated that the anterior horns are more immediately vulnerable to dural sac indentations . Only one study, however, compared histological findings of DCM in human spinal cord with healthy controls: this indicated that neuronal loss in the anterior horn is observed in patients with DCM but not in healthy controls . All findings were based on haematoxylin and eosin (H&E) staining, with some studies using the Kluver-Barrera method. Cavitation Another key finding on autopsy of DCM patients was cavitation, with 14 studies and a total of 33 autopsies reporting this (21% of autopsies on CSM and 29% of autopsies on OPLL). This was most commonly described as cystic and related to areas of degeneration. Several different studies were able to correlate the formation of a cystic cavity with the radiological finding of ‘snake-eyes appearance’ on MRI [ , , ]. In particular, Mizuno et al. (2003, 2005) reported that this snake-eyes appearance could be a result of cystic necrosis occurring secondary to mechanical compression and venous infarction. Pressure of this cystic cavity on remaining surrounding neurons was associated with destruction of the grey matter. Therefore, the radiological finding of snake-eyes appearance is likely to be an unfavourable prognostic factor, as it indicates damage visible histopathologically . Demyelination and axon loss White matter changes were also widespread in the included autopsies. Demyelination was reported in 15 studies and a total of 45 autopsies (27% of autopsies on CSM and 53% of autopsies on OPLL). Correspondingly, axon loss was also reported in 13 included studies and a total of 42 autopsies (25% of autopsies on CSM and 53% of autopsies on OPLL).
However, most studies simply reported ‘demyelination’, ‘myelin pallor’ or ‘reduced myelin’, meaning it could not be confidently assessed whether this reflects a process of primary demyelination or a general process of axon loss and degeneration. The location of white matter changes was variable, with some studies indicating pathology was most significant in the posterior and lateral funiculus , while most studies reported white matter degeneration was present throughout. Descriptions of axon loss were confined to white matter. All findings were based on haematoxylin and eosin (H&E) staining, with some studies using the Kluver-Barrera method with Luxol fast blue staining to visualise myelin. Gliosis Gliosis was reported in 11 included studies and a total of 39 autopsies (27% of autopsies on CSM and 18% of autopsies on OPLL). All findings were based on haematoxylin and eosin (H&E) staining, with some studies using the Kluver-Barrera method. Necrosis Necrosis was reported in 7 studies and a total of 19 autopsies (12% of autopsies with CSM and 25% of autopsies with OPLL). The strength of evidence for this was poor: no studies performed quantification, and either haematoxylin and eosin (H&E) or Kluver-Barrera staining methods were used to visualise necrosis across studies. An overview of included studies and their reported findings is shown in Table . Comparison between OPLL and CSM The relative prevalence of findings amongst CSM and OPLL autopsies is shown in Fig. . While demyelination and axon loss appeared more prevalent in OPLL, the estimated 95% confidence intervals overlap, indicating that this is not necessarily a significant difference (a worked reconstruction of these intervals follows below). Comparison to pre-clinical models of DCM A systematic review by Akter et al. investigated pathological findings of DCM and reported aggregate findings from animal models. When comparing this to our human autopsy findings, it is notable that although animal and human studies both commonly report neuronal loss, demyelination and axon loss, human studies tend to report necrosis whereas animal studies report apoptosis. Additionally, human studies frequently report cavitation. Furthermore, the role of glial cells in animal studies appears more variable, with reports of both glial cell proliferation and glial cell loss. In contrast, human studies commonly report gliosis (Fig. ).
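The overlap of the 95% confidence intervals can be reproduced approximately from the reported percentages. Group denominators are not stated explicitly in the text; the counts below are back-calculated from the percentages (roughly 133 CSM and 17 OPLL autopsies), and the exact Clopper-Pearson interval is only one common form of "binomial calculation", so treat this as an illustrative reconstruction rather than the authors' exact computation.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for k events out of n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Demyelination counts back-calculated from the reported 27% (CSM) and 53% (OPLL);
# the ~133/~17 split is an assumption inferred from the percentages, not stated data.
for group, (k, n) in {"CSM": (36, 133), "OPLL": (9, 17)}.items():
    lo, hi = clopper_pearson(k, n)
    print(f"{group}: {k}/{n} = {k/n:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")

# The intervals overlap (CSM upper bound ~35%, OPLL lower bound ~28%),
# consistent with the non-significant difference described above.
```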
This systematic review investigated the histological findings of degenerative cervical myelopathy in human spinal cord from autopsy. Few spinal cord specimens have been studied, the vast majority of which used only basic staining techniques such as H&E and Kluver-Barrera. Only one study also used more sophisticated immunohistochemical techniques involving antibodies such as anti-Fas and anti-CD68 . Of the seven predefined histological features of DCM, only haemorrhage was not observed. The most common finding was neuronal loss; cavitation, demyelination, axon loss, gliosis and necrosis were also observed. Most studies reported involvement of at least the anterior horn, with one study linking both anterior and posterior horn involvement to more severe disease. This, alongside the relatively increased reports of cavitation and necrosis, reflects differences when compared to animal models. Comparison with animal studies The significance of the observed differences between animal and human autopsy findings is uncertain. That many findings were consistent would certainly support the validity of pre-clinical models. For example, the most common finding in this systematic review, neuronal loss, has also been well documented in animal studies on DCM [ , , , ]. Similarly, axonal loss has been demonstrated in small-animal experimental studies [ , , ] as well as non-experimental equine studies . Furthermore, the observed differences could be a consequence of the qualitative comparison, which relied on reported findings. This could therefore be limited by the detail of the analysis, reporting biases and/or interpretation. For example, although gliosis was a common finding, there was limited further characterisation of this in autopsies [ , , , , , , , ]. Moreover, factors that could influence interpretation, such as the timing or duration of autopsy in relation to death, which are recognised to influence protein degradation and staining, were not reported. In contrast, animal studies explored gliosis in more detail, reporting more variable changes to glial cell types, such as oligodendrocytes and microglia . Pre-clinical experiments, too, are typically set up to investigate a specific hypothesis, and features relevant for this systematic review may be under-reported. Comparison could also be limited by interpretation. For example, it was difficult to assess whether reported findings such as ‘myelin loss’ or ‘myelin pallor’ reflect primary demyelination or global axon loss, although this issue was shared with the benchmarked animal review . However, these differences are noteworthy in the context of the known limitations of animal models and warrant further consideration. For example, most recent animal experiments have inserted prosthetics underneath the lamina, posterior to the spinal cord. This simulates solitary posterior compression, which is an unusual feature of DCM in isolation . Finite element analysis has typically shown maximal mechanical stress values in the area surrounding the compression site . In this context, it is potentially significant that most prominent pathobiology reviews on DCM describe the onset of disease in the posterior horn [ , , ], whereas human autopsies suggest the converse. This is particularly relevant given the prominence of motor dysfunction in clinical disease, a construct heavily weighted in the outcome measures of DCM . Cavitation is more commonly reported after traumatic spinal cord injury [ , , ].
This difference has been linked to the more significant destruction within the spinal cord from high-energy trauma, but is recognised to evolve over time. The clinical status of patients identified in this study is difficult to ascertain, but the more prevalent finding of cavitation amongst DCM autopsies compared to pre-clinical models is unlikely to be explained by disease severity alone. A more likely explanation is that this reflects a chronicity of injury less easily simulated with animals; after all, clinical studies report an average time to diagnosis and treatment from onset of symptoms of 2–5 years, but most DCM will currently go undiagnosed, and DCM in the short term is rarely fatal . Additionally, while necrosis was a commonly reported histological finding in the human studies included in this review, animal studies have overwhelmingly tended to report apoptosis. Whether this reflects experimental challenges of simulating chronic compression, real species differences or methodological differences in identifying the mechanism of cell loss is unclear. Abnormal autophagy, for example, has been linked to injury in DCM using human spinal cord specimens , but notably the only paper which used immunohistochemical techniques in this review (such as anti-Fas antibodies) specifically identified apoptosis . This may suggest that the necrosis reported in older case series and case reports reflects experimental differences (i.e. identification via H&E and KB staining only rather than immunohistochemistry). Indeed, the inability to distinguish necrosis and apoptosis on standard histopathology sections means that dead cells tend to be categorised as ‘necrotic’ regardless of the pathway by which the cells died . Certainly, combining the use of newer techniques which allow identification of specific histopathological processes with human tissue should provide invaluable insights. Even if these nuances are true differences, they should not undermine the value of pre-clinical models. Animal studies offer obvious benefits of standardisation, experimental freedom and large sample sizes. In contrast, as is evident in this systematic review, human autopsy studies are highly variable, less numerous and more inconsistent. They too, as noted above, are likely to reflect advanced disease. The ideal framework would therefore be a hybrid approach. This has greatly benefited other central nervous system diseases, particularly with the advent of more sophisticated molecular pathobiological techniques largely developed since most of the DCM autopsy studies were conducted [ , , ]. Illustrating this concept and potential, it is worth highlighting one study by Iwabuchi et al. (2004). Published in the Fukushima Journal of Medical Science, it has received just 2 citations. However, the study reports a detailed analysis of 68 autopsies, in which a histological diagnosis of DCM was made in 2 cases. This is interesting for several reasons. First, it offers limited corroboration, using a different modality, of the epidemiology estimates by Smith et al. (2021) . Due to widespread underdiagnosis, a true estimate of DCM prevalence has not been possible. Smith et al. (2021) aggregated healthy volunteer imaging studies and identified asymptomatic cervical cord compression in 24% of adults, and a 2.3% prevalence of DCM. Iwabuchi et al. (2004) identified evidence of cord compression in 12 cases (18%), but histological features of DCM in only 2 (2.9%).
However, more importantly, it provides a histological series more analogous to clinical practice: not all cases with spinal cord compression acquired spinal cord injury, but injury was more likely, and more severe, with a higher compression ratio. This study therefore indicates the potential for human spinal cord specimens to complement current research approaches in DCM. Limitations A clear limitation of this systematic review was the inconsistent methodologies, diagnosis coding and reporting of the included studies. Although this is a well-known issue in systematic reviews, inconsistent coding and reporting styles are particular problems in the DCM field. The AO Spine RECODE-DCM aims to create a research toolkit to accelerate research development and improve patient outcomes through more consistent nomenclature and the setting of research priorities (aospine.org/recode) [ , , , ]. Furthermore, due to the nature of the included studies, no quantitative data were presented in any of the autopsy findings. This complicates an assessment of the importance of pathological findings, or how they co-exist. Few studies compared findings with controls, meaning it remains unclear which of the reported findings contribute to disease progression in DCM and which may be incidental in an ageing population. It is also notable that although a total of 150 autopsies were reported in the included studies, only 29% of these were on female patients, despite DCM affecting both men and women. Because the included studies were case reports or series of autopsies, consistent clinical data were lacking. In particular, as many patients were selected for inclusion after dying of causes unrelated to DCM, most studies did not report individual data on duration of disease. Additionally, most papers were written and published before standard validated scoring systems for DCM symptom severity, such as the modified Japanese Orthopaedic Association (mJOA) score, were in use, limiting clinical correlation with severity. Integration with other study types is therefore essential. Finally, this article has focused on the potential role for human spinal cord tissue, and does not cover the complementary insights that could arise from other tissue sources. For example, Laliberte et al. (2021) combined analysis of plasma miRNA in patients with DCM with targeted experiments in animal and in vitro models to explore the significance of Mir21 expression in DCM outcomes. Additionally, research into the use of biomarkers to monitor DCM progression is emerging, with raised CSF neurofilament light subunit (NF-L) and glial fibrillary acidic protein (GFAP), as well as lower amyloid β peptide, being correlated with symptom duration . Overall, this would align with our findings on the value of using a hybrid approach integrating different study types.
Clearly, a knowledge gap exists in understanding the pathophysiology of DCM. While animal studies can offer experimental freedom and therefore key mechanistic insights, human autopsy studies offer the unique benefit of observing actual DCM histological changes. Integration and collaboration between pre-clinical and clinical research should therefore be a key priority towards understanding the pathophysiology of DCM and improving outcomes for patients.
Electronic supplementary material: Supplementary file 1 (DOCX 23 KB).
Evaluating Counseling for Choice in Malawi: A Client-Centered Approach to Contraceptive Counseling
A human rights–based approach to family planning (FP) programming addresses all levels of the health care system and the surrounding enabling environment to ensure the autonomy, agency, and satisfaction of FP clients. Access to high-quality information and counseling—in addition to affordable, voluntary, and nondiscriminatory contraceptive services and products—is a critical “lever” to pull to achieve quality of care within this system. , Information exchange and interpersonal relations that occur between FP providers and clients have long been recognized as fundamental aspects of quality of care. , Beyond the objective to uphold the client’s right to receive high-quality services, FP clients’ perceptions of quality have also been found to be associated with better contraceptive use dynamics, including increased voluntary method uptake, method satisfaction, and continuation in some settings, although evidence is mixed. – Similarly, anticipatory side effects counseling—counseling that prepares women for contraceptive-induced bleeding changes and other side effects linked with using specific methods that they may experience—has been shown to increase method satisfaction and decrease discontinuation. , However, evidence on structured counseling approaches that improve the quality of information sharing and interpersonal relations, as well as women’s experiences using contraception, remains weak. As a result, the quality of FP counseling remains poor globally. A recent analysis of Demographic and Health Survey data from 25 low- and middle-income countries found that the average country-level Method Information Index score was 34%—meaning that only one-third of current contraceptive users received counseling on more than 1 method, were told about side effects, and were told what to do if side effects occurred. Despite overwhelming evidence that fear and experience of adverse side effects and health concerns are major drivers of contraceptive nonuse and method-related discontinuation among women who wish to avoid pregnancy, – counseling approaches widely used by FP providers in low- and middle-income countries lack an adequate focus on anticipatory side effects counseling. Evidence-based approaches that focus on improving care across these domains—approaches that are tailored to the client’s unique needs, improve information sharing and the client-provider relationship, and strengthen anticipatory side effects counseling—are urgently needed to support informed method choice aligned with clients’ preferences and to reduce negative contraceptive use experiences. Counseling for Choice (C4C) is a new FP counseling approach developed by Population Services International, publicly available at https://www.psi.org/C4C . C4C, which comprises a provider training curriculum and job aid, replaces traditional tiered-effectiveness counseling with structured counseling based on the method attributes most valued by the individual client. C4C also provides a guided structure for comprehensive anticipatory side effects counseling, with a particular focus on menstrual bleeding changes. We used a quasi-experimental study design to evaluate the impact of the C4C intervention on the quality of counseling received, measured by clients’ experiences.
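As the text states, the Method Information Index (MII) is simply the proportion of current users answering "yes" to all three counseling questions. A minimal sketch of the calculation, with invented survey rows for illustration:

```python
# Method Information Index (MII): proportion of current contraceptive users who were
# (1) counseled on other methods, (2) told about side effects, and
# (3) told what to do if side effects occurred.
def mii(responses):
    """Each response is a tuple of three booleans; MII counts those with all three."""
    return sum(all(r) for r in responses) / len(responses)

# Invented example rows: (other_methods, side_effects, what_to_do)
survey = [(True, True, True), (True, False, False), (False, True, True)]
print(f"MII = {mii(survey):.0%}")  # 33% -- only the first respondent received all three
```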
Contraceptive counseling has evolved as contraceptive approaches and tools have been iteratively developed and updated to improve quality of care. To counsel patients thoroughly on their choices, many clinicians use the autonomous approach to counseling. This involves providing information on all available, medically appropriate methods, with the patient subsequently deciding on a method with minimal provider input. Another common approach is the tiered-effectiveness method. With an effectiveness framework, clinicians present the most effective options first, highlighting voluntary long-acting reversible contraceptive methods. One approach that bridges the gap between directive and autonomous approaches is the shared decision-making model, a method that recognizes the expertise of both the provider, who has comprehensive information about methods from a clinical perspective, and the client, who best understands their own needs and preferences. This and other common counseling approaches, such as Balanced Counseling Strategy Plus (BCS+), employ evidence-based best practices shown to improve quality of care and FP outcomes, such as increased uptake. These tools are widely used in FP programs globally; however, research on the effectiveness of specific approaches and tools to improve person-centered care and impact contraceptive use dynamics is limited. Among available counseling tools, the new C4C approach shares some common components with BCS+, the contraceptive counseling tool developed by Population Council and used across many low- and middle-income countries. BCS+ also prioritizes the demedicalization of provider language during counseling, uses client-centered and shared decision-making approaches, and incorporates specific job aids. From there, BCS+ and C4C diverge. BCS+ integrates tiered-effectiveness counseling into the approach, while C4C recognizes that clients may place a higher value on alternative method benefits—such as use on-demand, low frequency of provider visits required, or immediate return to fertility—and makes it easy for providers to compare contraceptives in relation to these other benefits. Where the algorithm, cards, and medical eligibility criteria information used by BCS+ are separate tools, C4C integrates the full suite of information and tools into a single, all-encompassing job aid. Different from BCS+ cards, the C4C job aid includes pages specifically meant to be viewed by lower-literacy clients. Finally, recognizing that experiencing side effects is a frequently cited reason for method discontinuation, C4C places a focus on anticipatory side effects counseling. While this comparison between C4C and BCS+ is meant to provide a well-known reference point for the community of practice familiar with this tool, our research does not seek to compare these 2 approaches. C4C Intervention and Tools Foundational to the C4C approach are 3 contraceptive counseling tenets: support the client to make an informed decision through clear and relevant information provision; provide high-quality, client-centered interpersonal care; and create a dialogue with clients about side effects, including what to expect and how to manage them.
The C4C approach has 2 key components: a 3-day training for providers and the Choice Book job aid for providers to use during counseling ( ). The training provides multiple tools and techniques to improve the counseling interaction by creating a dialogue about what matters to the client rather than using the counseling session as a didactic or rote lecture to impart the provider’s perspective (and potential bias) and a long list of facts. The Choice Book is a job aid for providers that includes both provider-facing and client-facing tools, including existing reference tools from the World Health Organization and other sources. An example book page illustrates how methods are compared across different attributes.
BOX. Components of the Counseling for Choice Approach
Provider training (3 days):
- Training modules and activities reviewing Counseling for Choice (C4C) counseling principles
- Role-play with the Choice Book
Choice Book (C4C provider job aid):
- Counseling matrix: a tool illustrating which contraceptive options offer various contraceptive and lifestyle benefits
- GATHER: demonstrating how C4C aligns with the GATHER (“Greet, Ask, Tell, Help, Explain, and Return”) approach
- Benefit-specific pages: comparing each method option relative to whether it offers a particular benefit
- Method-specific pages: in-depth information about each method, including the 3 Ws: what to do, what to expect, when to come back
Other resources and reference tools:
- NORMAL tool for counseling on contraceptive-induced menstrual bleeding changes
- Quick-start reference for breastfeeding and postabortion clients
- World Health Organization Medical Eligibility Criteria
- Job aid for DMPA reinjection
- DMPA-SC self-injection instructions
- Job aid for ruling out pregnancy
- Instructions for management of side effects
- Scale image of uterus
Study Design
We conducted a quasi-experimental evaluation with an intervention and concurrent comparison group in 50 public and 40 private facilities in 8 districts in Northern, Central, and Southern Malawi (Dwanga, Lilongwe, Mangochi, Mchinji, Mzuzu, Nkhata Bay, Nsanje, and Salima). Intervention facilities were sampled through stratified random sampling of a full roster of facilities offering FP services and counseling. Facilities were stratified first by district, then by public or private sector, and finally by client load. Our goal was, within the districts, to balance the number of public and private facilities with high, medium, and low client flows in the intervention and control groups. Of 30 public and 30 private facilities sampled, 25 and 20, respectively, consented to participate in the C4C intervention. We then selected matched comparison facilities based on FP client volumes and sector (private or public). Included facilities were primarily FP and reproductive health clinics, including franchises, and hospitals with FP and reproductive health services or wards. During the study period, providers in the comparison group continued using tools with which they were well versed and familiar, such as the flipchart approved by the Ministry of Health; the comparison group was not instructed to use a specific FP counseling tool or approach. Selected providers in intervention facilities received a 3-day training on the C4C approach using the Choice Book that would guide the counseling experience. This training included role-play and practice to achieve competency in the counseling approach, which was assessed via quizzes and observation by the lead trainer. Half of the providers who participated in the training were nurse midwife technicians, about one-quarter were medical assistants, and the remaining one-quarter were either clinical officers or nurse midwife assistants. A post hoc review of trainings that all providers in both groups had received in the past 3 years revealed little difference between comparison (standard-of-care) and intervention providers in terms of training received before the C4C intervention.
Study Population
Between October and December 2018, we enrolled clients seeking FP services at intervention or comparison facilities. All women of reproductive age (aged 18–49 years) seeking FP services—including those initiating contraception, switching methods, or continuing method use—were eligible to participate in in-person study procedures on the date of enrollment. No compensation was provided for participation in the study.
Data Collection
Data collection began 3 months after the training to allow providers time to become accustomed to the C4C approach. Participants completed 2 surveys on the date of enrollment: a pre-counseling survey before seeing a provider and a second post-counseling survey immediately after seeing a provider. Both the pre- and post-counseling surveys were administered in person in a private area of the clinic. The pre-counseling survey captured demographic information, contraceptive history, and acceptability of specific contraceptive side effects. The post-counseling survey collected information on the method chosen and reasons for selection (including reasons for selecting no method), content of information received during the counseling session, and satisfaction with the counseling experience.
Participants were asked in the post-counseling survey to identify their provider; in the final analysis sample, participants who visited an intervention facility but who received counseling from a provider not trained in C4C were excluded.
Ascertainment of Dependent Variables
We ascertained perceived quality of care using the validated 4-item Person-Centered Contraceptive Counseling scale, which includes individual items on clients’ perceptions of the respectfulness of care, whether they were allowed to voice their contraceptive method preferences, whether they felt their preferences were taken seriously, and whether they felt that they received adequate information to make a decision about a contraceptive method. Individual items are measured on a 5-point Likert scale (poor, fair, good, very good, or excellent). We report the items that comprise the Person-Centered Contraceptive Counseling scale individually and as a summative binary variable, equal to 1 if the highest rating (“excellent”) was given for all 4 items and 0 otherwise, according to published scale scoring guidance. Additional nonvalidated measures were developed to measure key C4C quality domains. For example, within the domain of information exchange and interpersonal relations, confidence using the chosen method was measured on a 5-point Likert scale (from “not at all” to “very confident”); in addition, binary (yes/no) variables were captured on whether the provider addressed all concerns about using contraception, whether the provider asked about prior contraceptive experience, whether the participant trusted the provider to keep the consultation private, and whether the provider helped make a plan for how to remember to use the method (among participants who selected short-term methods). In the side effects expectations and management domain, we captured 3 binary (yes/no) variables: whether the provider gave information on potential side effects, whether the provider helped plan to manage potential side effects, and whether the participant anticipated discontinuing her method immediately if she experienced side effects. A table in the Supplement provides further detail on how all independent variables are linked to each of our 3 quality of care domains of interest.
Statistical Analysis
To estimate the effect of C4C on quality received, we compared participants at intervention versus comparison facilities by fitting multilevel mixed effects models with robust standard errors, with individuals nested within facilities. For Likert scale outcomes, we fit multilevel logistic regression models with random intercepts for health facilities to estimate odds ratios (ORs), which can be interpreted as the odds that women in the intervention group gave the highest rating on the Likert scale compared to women in the comparison group. For binary outcome variables, we used analogous mixed effects logistic regression models. Adjusted models include covariates for age (specified as a continuous variable), marital status (modeled categorically as currently married, living with a man as if married, or not currently married or living with a male partner), highest level of educational attainment (none, primary, secondary, or higher), number of living children (none, 1–2, 3–4, or 5 or more), contraceptive method type received at consultation (including none, if no method was chosen after counseling), and facility sector (public or private). The analysis was conducted using Stata version 15.1.
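For readers implementing a similar analysis, the top-box scoring of the Person-Centered Contraceptive Counseling scale reduces to a simple all-items check. A minimal sketch in Python follows; the column names are illustrative placeholders, not variable names from this study:

```python
import pandas as pd

# Hypothetical item names for the 4 PCCC questions; each item is coded 1-5
# (1 = poor ... 5 = excellent). These names are illustrative only.
PCCC_ITEMS = ["respect", "voice_preferences", "preferences_serious", "enough_information"]

def pccc_top_box(responses: pd.DataFrame) -> pd.Series:
    """Summative binary PCCC variable: 1 only if all 4 items are rated 'excellent' (5)."""
    return (responses[PCCC_ITEMS] == 5).all(axis=1).astype(int)

# Example: the first client gives four 'excellent' ratings, the second does not.
clients = pd.DataFrame({
    "respect": [5, 5],
    "voice_preferences": [5, 4],
    "preferences_serious": [5, 5],
    "enough_information": [5, 5],
})
print(pccc_top_box(clients).tolist())  # -> [1, 0]
```

The multilevel logistic model described above can be written, for client $i$ in facility $j$, as

$$\operatorname{logit}\Pr(Y_{ij}=1) = \beta_0 + \beta_1\,\text{C4C}_j + \mathbf{x}_{ij}^{\top}\boldsymbol{\gamma} + u_j, \qquad u_j \sim \mathcal{N}(0, \sigma_u^2),$$

where $Y_{ij}=1$ indicates the top rating (or a “yes” on a binary item), $\text{C4C}_j$ indicates an intervention facility, $\mathbf{x}_{ij}$ holds the adjustment covariates, and $u_j$ is the facility random intercept; $\exp(\beta_1)$ is the reported (adjusted) odds ratio. This is a standard formulation consistent with the description above, not a reproduction of the authors’ exact code.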
Ethical Approval
The study was approved by the Research Ethics Board of Population Services International in Washington, DC, and by the National Committee on Research in the Social Sciences and Humanities in Malawi. The district health management team and the head/owner of each participating facility gave permission for data collection at study sites. Participants at both intervention and comparison sites were briefed on the study objectives and all requirements of the consent process, and all gave verbal informed consent before study procedures.
A total of 1,179 women were enrolled for the in-person study components (N=578 in the comparison group and N=601 in the intervention group). In the full baseline sample, participants were evenly distributed across age groups, with a slightly higher proportion of women aged 18–24 years and a slightly lower proportion of women aged 35 years and older ( ). Most women (520 [90%]) were married and had 1 or more children. In the intervention group, 391 women (69%) chose injectable contraception and 121 (21%) chose implants, while in the comparison group, injectable contraception was more common and implants were less so (401 [85%] and 31 [7%], respectively).
Client Satisfaction and Experience of Quality of Care
More women rated their overall counseling experience as poor in the comparison group (32%) than in the intervention group (8%), while more women in the intervention group rated their experience as excellent (35%) compared to women in the comparison group (8%) ( ). Receipt of care from C4C-trained providers was associated with statistically significant, positive odds of rating the provider as “excellent” (the highest score) on 4 questions ( ): respecting you as a person (adjusted odds ratio [aOR]=2.65; 95% confidence interval [CI]=1.40, 5.02); letting you say what matters to you about your contraceptive method (aOR=2.20; 95% CI=1.20, 4.00); taking your preferences about contraception seriously (aOR=2.60; 95% CI=1.42, 4.75); and giving enough information to make the best decision about a method (aOR=5.14; 95% CI=2.72, 9.71). Participants in the intervention group had 4.6 times the odds of rating their provider as “excellent” on all 4 questions relative to the comparison group: 140 participants (23.3%) in the intervention group rated their provider as “excellent” on all 4 questions, compared with just 36 (6.2%) in the comparison group. The person-centered contraceptive counseling measures described in are related to aspects of both domains of information exchange and interpersonal relations. In addition to these validated measures, we measured other aspects of counseling related to these domains with the additional variables in . Participants had 6-fold odds (aOR=6.4; 95% CI=3.08, 13.4) of rating their provider as excellent in addressing all concerns about their contraceptive method relative to those in the comparison group ( ). They were also more likely to report that their provider asked about their previous contraceptive experiences than clients in the comparison group, with 448 (74.5%) reporting “yes” versus 219 (37.9%) in the comparison group (OR=6.76; 95% CI=3, 12.92). Clients choosing short-acting methods in the intervention group were more likely to report being helped to make a plan to use their method correctly (453 [79.8%] versus 258 [54.6%], respectively; aOR=6.45; 95% CI=2.57, 16.2). Clients in the intervention group were also more likely to report that they trusted their provider to keep their discussion confidential (575 [96.7%] versus 501 [86.7%]; aOR=3.06; 95% CI=1.4, 6.67). Lastly, participants were more likely to rate that they were “very confident” in their choice of method in the intervention group (280 [49.6%]) versus the comparison group (176 [37.5%]) (OR=1.94; 95% CI=1.0, 3.4), although this difference was not significant at the P=.05 level (P=.057).
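As an arithmetic check on the all-4-items result, the unadjusted odds ratio implied by the raw counts is consistent with the reported estimate (the adjusted model estimate can differ slightly once covariates and facility clustering are accounted for):

$$\mathrm{OR}_{\text{crude}} = \frac{140/(601-140)}{36/(578-36)} = \frac{140/461}{36/542} \approx \frac{0.304}{0.066} \approx 4.6.$$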
Side Effects Expectations and Management
Women in the intervention group were more likely to report that the provider told them about possible side effects they might experience (412 [73%]) versus the comparison group (180 [38%]) (aOR=5.98; 95% CI=2.97, 12.03) ( ). Intervention group participants were also significantly more likely to report that their provider had helped them make a plan to manage side effects (441 [78%] versus 194 [41%] in the comparison group; aOR=8.79; 95% CI=3.68, 21.01). Fewer women in the intervention group reported that they would discontinue their method immediately if they experienced side effects (40 [7%]) versus the comparison group (52 [11%]), although this difference was not statistically significant.
The novel C4C approach to FP counseling was specifically designed to address common issues with the quality of contraceptive counseling. The approach aims to support the client to make an informed decision about a method that aligns with their self-identified needs and individual preferences for specific method attributes. Clients counseled by C4C providers were more likely to report better care received, with more than 4 times as many reporting their experience as “excellent” overall. We find that the C4C approach improved clients’ experience of care across multiple domains and measures of person-centered care, including information exchange, interpersonal relations, and anticipatory side effects counseling, relative to standard-of-care counseling provided in public and private participating health facilities in Malawi. The interpersonal relations quality of care domain in FP is critical to an overall high quality of care experience: a systematic review on the effects of person-centered quality of contraceptive care found that interventions to improve person-centeredness were consistently associated with improved client experience, perceptions of quality, and satisfaction. C4C addresses this domain of quality by anchoring the counseling approach in the core elements of respect, dignity, and empathy and in care that is nondiscriminatory and responsive to unique client needs. Participants in the intervention group of our study consistently rated their providers more positively across indicators of this client-provider relationship, reporting that their providers respected them as a person, let them say what mattered to them, took their preferences seriously, and were trusted to keep their conversation confidential compared to those in the comparison group. A principal tenet of the C4C approach is to enable informed decision-making through clear and relevant information provision, building on counseling approaches such as the World Health Organization’s Decision-Making Tool for Family Planning Clients and Providers and the Balanced Counseling Strategy tool. Participants who received the C4C intervention were more likely to report that they had enough information to select a method that fit their needs and had more confidence in their ability to use their chosen method than participants in the comparison group. Their providers were more likely to ask them about their previous contraceptive use and to address all of their concerns. This exchange of information is critical to ensuring that clients are well informed about contraceptive options that best suit them. It includes having appropriate information to prepare them for side effects they may experience with a chosen method, a factor that is directly correlated with contraceptive use experiences and method satisfaction over time. Clients counseled by C4C providers were more likely to report receiving this anticipatory side effects counseling and having discussed a plan with their provider for how to manage these side effects. Taken together, the findings from this evaluation suggest that the tailored counseling encouraged by the C4C approach, when compared to the standard of care, enables improved information exchange that helps clients make the best contraceptive choice for them.
This is consistent with existing literature that describes improved client experiences when counseling includes clear information tailored to one’s expressed needs and preferences. While the rating of overall counseling received was significantly higher among women in the intervention group, the finding that even women in the intervention group continue to report some dissatisfaction with their counseling experience (14.7% reporting their experience as “fair” or “poor”) indicates that more can be done to further improve counseling, even when using the C4C approach. This study adds to the growing evidence base on the impact of the quality of counseling on client experience. Several studies have found positive effects of interventions to improve client- or person-centeredness and quality of contraceptive counseling on contraceptive use dynamics, hypothesizing that improved perceptions of interpersonal connection with a provider during counseling, having enough information to make an informed choice, and feeling confident to understand and manage side effects may be associated with method initiation and improved method use experiences. However, evidence of the impact of counseling on contraceptive use dynamics is mixed. While we do not look here at the impact of the C4C approach on method use over time, we do observe that women counseled using C4C were less likely to report that they would discontinue their method immediately if they experienced side effects, relative to those counseled using the standard approach. Although the difference was not statistically significant, this finding suggests that the C4C approach may support women to select methods with side effect profiles that are more tolerable for their preferences or to better prepare women for what they may expect in terms of side effects. Exploring the impact of improved quality in counseling on contraceptive use dynamics and satisfaction with FP methods over time should be a priority for those in the field aiming to develop and use counseling approaches that truly meet client needs.
Strengths and Limitations
A primary strength of this study is its inclusion of a robust comparison group that allows for direct comparison of key areas of the counseling experience between women who were counseled by C4C-trained providers and women who were not, allowing for more direct conclusions to be drawn regarding the effect that the C4C approach may have on women’s experiences with a provider. There are also some limitations. Though unaware of the specific survey questions to clients or which clients would be surveyed, providers in the intervention group were aware that the new C4C approach on which they were trained would be studied, which may have affected adherence levels to the approach. The pre-counseling survey could have acted as an intervention itself or primed respondents to ask their provider about the topics being asked (e.g., about side effects). This may have improved the quality of counseling observed, but the effect would be expected to be nondifferential by treatment group since all participants received the same pre-counseling survey.
Lastly, while it was not within the scope of our project to design a separate training for our comparison group, it is possible that improvements in quality of care could have been seen across some of the same indicators studied here regardless of the specific approach used; the act of simply retraining providers in principles of quality counseling could result in better counseling. Further research could explore the comparative impact of the C4C approach against training in other counseling approaches.
This study strengthens the evidence base for the utility and effectiveness of client-centered contraceptive counseling. Among FP clients in Malawi, we found that the C4C approach improved the perceived quality of care across multiple domains relative to standard counseling approaches. Counseling that focuses on supporting clients’ fully informed choice in method selection, improving client-centeredness of the interaction, and strengthening the client’s understanding of the potential side effects of their chosen method is a promising approach to improving contraceptive counseling and use experiences.
Supplement: GHSP-D-22-00319-supplement.pdf

New insights into the occurrence of continuous cropping obstacles in pea (Pisum sativum L.)
Limited arable land area and increased market demand have led to the widespread practice of continuous cropping in modern agriculture, which can give rise to continuous cropping obstacles, resulting in crop growth inhibition and yield reduction. This can cause significant economic losses and seriously impede the sustainable development of modern agriculture. Therefore, researching continuous cropping obstacles is of great practical importance. Crop growth, development, and immunity are closely related to soil microorganisms, and the obstacles to continuous cropping result from the integrated plant-soil-microorganism interaction. It is known that changes in soil microbial community structure and diversity are key factors in the occurrence of continuous cropping obstacles. Although many studies have been reported, the results differ due to the influence of factors such as soil types, continuous cropping times, crop type, and genotype. Therefore, it is crucial to further study changes in soil microbial communities under specific conditions to better understand the obstacles to continuous cropping. Bacteria constitute the largest component of the soil microbial community and play a crucial role in soil structure and biological interactions. Previous work has found that bacterial community composition is susceptible to crop root secretions, and changes in the rhizosphere bacterial community inevitably affect crop growth. Previous studies investigating continuous cropping of peanuts found that it reduced soil bacterial diversity, down-regulated auxin and cytokinin synthesis genes in peanut roots, and up-regulated genes related to abscisic acid, jasmonic acid, salicylic acid, and the ethylene signal transduction pathway. However, it remains to be confirmed whether differential expression of these genes leads to changes in metabolite abundance, thus affecting phenotypes. Moreover, the growth and development of crops under continuous cropping conditions are influenced by multiple metabolic pathways and a variety of complex metabolites, and it is worth investigating whether these pathways and metabolites respond to changes in soil bacteria. The pea crop (Pisum sativum L.) is the fourth largest legume crop in the world and a strategic commodity for global food security, but continuous cropping obstacles occur in its cultivation. Pea yield and soil microbial biomass are known to decrease during continuous cropping compared with rotation. However, it has not yet been reported whether pea plants respond to changes in the soil bacterial communities in ways that alter plant phenotypes, or whether this is affected by the pea genotype. Root systems are in direct contact with the soil and are the first organs to perceive changes in the soil environment that can subsequently affect plant growth. Therefore, it is crucial to study the response of pea root systems to continuous cropping. We hypothesized that changes in the soil bacterial community under continuous cropping might cause changes in the expression of some metabolic pathways and related genes in pea roots, leading to continuous cropping obstacles, and that the degree of harm would vary according to pea genotype and continuous cropping times.
In this study, we selected two pea genotypes and used soil bacterial 16S rDNA sequencing, pea root transcriptomics, and metabolomics to investigate the effects of continuous cropping on the soil bacterial community and its relationships with changes in pea root phenotype. The findings of this study provide a new theoretical basis for elucidating the mechanism of continuous cropping obstacles in pea plants.
Continuous cropping soils inhibited the growth of pea plants
Pot experiments showed that the growth of peas under continuous cropping was inhibited (Fig. a-c, g-i). Morphological observations and index analysis of the material collected from the second pot experiment revealed that the continuous cropping treatments reduced shoot height, shoot (or root) fresh weight, and root length of Ding wan 10, and the degree of inhibition increased under the continuous cropping twice treatment (CC2) (Fig. m, n). Compared with the rotation treatment (RT), the continuous cropping once treatment (CC1) did not significantly affect the growth of Yun wan 8 (except root length), but all indexes were significantly decreased under CC2 (Fig. o, p). This indicated that Ding wan 10 was more sensitive to continuous cropping treatments, with growth being inhibited under CC1 and the degree of inhibition being aggravated under CC2. Yun wan 8 was more tolerant of continuous cropping: its growth was not significantly inhibited under CC1, but was significantly inhibited by CC2. Phenotypic analysis showed that the root system of pea plants changed (Fig. d-f, j-l); in particular, after CC2 the root systems of both pea genotypes aged early, and the degree of aging was greater in Ding wan 10 (Fig. d-f).
Continuous cropping led to changes in the metabolic level of pea roots
Plant phenotypes undergo changes that are inevitably influenced by their metabolite levels. To better comprehend how continuous cropping affects the root metabolism of the two pea genotypes, we performed a metabolomic analysis using LC-MS/MS. The score plots, based on the PLS-DA model, demonstrated spatial differences between the data of the different treatments, indicating differences between the groups (Fig. a, b). We applied thresholds (VIP > 1, |FC| > 1, P < 0.05) to screen for DAMs. In Ding wan 10, we found 131, 432, and 215 DAMs between CC1 and RT, CC2 and RT, and CC2 and CC1, respectively (Fig. c). In Yun wan 8, we found 206, 337, and 137 DAMs between CC1 and RT, CC2 and RT, and CC2 and CC1, respectively (Fig. c). Under CC2, we observed that the number of DAMs for Ding wan 10 was higher than that of Yun wan 8, indicating that continuous cropping had a greater impact on Ding wan 10 than on Yun wan 8. Autotoxins have been identified as a primary cause of continuous cropping obstacles. Our analyses revealed no change in the levels of cinnamic acid and its derivatives, ferulic acid, and coumaric acid analogs in CC1 compared to the RT of Ding wan 10. However, in CC2, the levels of these metabolites increased (Table ). The levels of cinnamic acid derivatives, ferulic acid, and coumaric acid derivatives were higher in CC2 than in CC1 and RT of Ding wan 10 (Table ). In Yun wan 8, there was no change in the levels of ferulic acid and coumaric acid compared with the RT (Table ). The levels of 4-methoxy cinnamic acid and 3-[(1-carboxy vinyl) oxy] benzoic acid were higher in CC2 than in CC1 of Yun wan 8, but the difference was not significant (Table ). These substances have proven autotoxicity in many crops and may be potential autotoxins in pea roots. Our results indicate that continuous cropping increased levels of potential autotoxins in pea roots; these compounds are secreted into the rhizosphere and could also affect plant growth by influencing rhizosphere microorganisms. We also found differences between the two pea genotypes, with fewer differential potential autotoxin species in Yun wan 8 than in Ding wan 10.
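The DAM screen described above is a straightforward three-way filter over per-metabolite statistics. A minimal sketch follows, assuming a table with VIP, fold-change, and p-value columns (column names are illustrative; applying the |FC| > 1 cutoff on the log2 scale is the common convention and is assumed here, since the paper does not spell it out):

```python
import numpy as np
import pandas as pd

def screen_dams(stats: pd.DataFrame,
                vip_min: float = 1.0,
                log2fc_min: float = 1.0,
                p_max: float = 0.05) -> pd.DataFrame:
    """Return the rows flagged as differentially abundant metabolites (DAMs).

    Expects columns 'VIP' (from the PLS-DA model), 'fold_change'
    (treatment/control ratio), and 'p_value'. Thresholds mirror the
    paper's criteria: VIP > 1, |log2 FC| > 1, P < 0.05.
    """
    log2fc = np.log2(stats["fold_change"])
    keep = (
        (stats["VIP"] > vip_min)
        & (log2fc.abs() > log2fc_min)
        & (stats["p_value"] < p_max)
    )
    return stats.loc[keep]
```

The analogous DEG screen reported below swaps the VIP criterion for an FDR threshold (FDR < 0.05, |FC| > 1.5).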
We performed a Kyoto Encyclopedia of Genes and Genomes (KEGG) classification analysis to understand the metabolic pathways involved in the DAMs. In Ding wan 10, compared with the RT, 41 identical metabolic pathways were involved in DAMs under both CC1 and CC2, and 33 metabolic pathways were specific to CC2 (Fig. a, b). Compared with CC1, CC2 had a total of 55 metabolic pathways, 18 of which were identical to the specific metabolic pathways between CC2 and RT (Fig. c). These 18 unique metabolic pathways involved plant hormone signal transduction, amino acid metabolism, biosynthesis of other secondary metabolites, carbohydrate metabolism, lipid metabolism, cofactor and vitamin metabolism, and terpenoid and polyketide metabolism (Fig. b, c). In Yun wan 8, compared with the RT, 49 identical metabolic pathways were involved in DAMs under both CC1 and CC2, and 14 metabolic pathways were specific to CC2 (Fig. d, e). Compared with CC1, CC2 had a total of 53 metabolic pathways, four of which were identical to the specific metabolic pathways between CC2 and RT (Fig. f). Among them, the four unique metabolic pathways involved plant hormone signal transduction, biosynthesis of other secondary metabolites, lipid metabolism, and cofactor and vitamin metabolism (Fig. e, f). The KEGG enrichment analysis revealed that under CC2 of Ding wan 10, flavonoid biosynthesis, nitrogen metabolism, and amino sugar and nucleotide sugar metabolism were enriched (Fig. b). Under CC1, the biosynthetic pathways of flavones and flavonols were significantly enriched (Fig. a). Fatty acid metabolic pathways were significantly enriched in CC2 compared to CC1 (Fig. c). In Yun wan 8, isoflavonoid biosynthesis, flavonoid biosynthesis, and riboflavin metabolism were enriched under CC1 (Fig. d). The metabolic pathways enriched under CC2 were linoleic acid metabolism, alpha-linolenic acid metabolism, and sulfur metabolism (Fig. e). Carbon metabolism and glycerolipid metabolism were significantly enriched in CC2 compared to CC1 (Fig. f). These results indicate that the different durations of continuous cropping led to responses in different metabolic pathways in pea roots. Additionally, the metabolic pathways enriched only under CC2 may represent the specific response of peas to severe continuous cropping stress.
Response characteristics of pea root transcription level to continuous cropping
The expression of related genes regulates metabolite changes. Therefore, we compared the transcriptomes of pea roots under different continuous cropping conditions, verified the data quality through principal component analysis (Fig. a, b), and performed DEG screening (FDR < 0.05, |FC| > 1.5). In Ding wan 10, compared with the RT, there were 705 DEGs in CC1, 7316 DEGs in CC2, and 2921 DEGs in CC2 compared to CC1 (Fig. c). In Yun wan 8, there were 203 DEGs in CC1, 1374 DEGs in CC2, and 73 DEGs in CC2 compared to CC1 (Fig. c). Numerous genes were differentially expressed in the two pea genotypes under CC2, and the number of DEGs in Ding wan 10 was higher than in Yun wan 8.
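Enrichment results like those reported above for DAMs, and those that follow for DEGs via GO, are typically derived from a one-sided hypergeometric (Fisher) test of over-representation, followed by multiple-testing correction across pathways. The paper does not name its enrichment engine, so the following is an illustrative sketch only:

```python
from scipy.stats import hypergeom

def enrichment_p(n_universe: int, n_pathway: int,
                 n_selected: int, n_overlap: int) -> float:
    """One-sided over-representation p-value for a single pathway.

    n_universe: all annotated genes (or metabolites) in the background;
    n_pathway:  members of the pathway within that background;
    n_selected: DEGs (or DAMs) drawn from the background;
    n_overlap:  selected features that fall inside the pathway.
    """
    # P(X >= n_overlap) under the hypergeometric null of random draws.
    return hypergeom.sf(n_overlap - 1, n_universe, n_pathway, n_selected)

# Illustrative numbers only: 20 of 700 DEGs landing in a 150-gene pathway
# out of a 20,000-gene background would be strongly enriched.
print(enrichment_p(n_universe=20_000, n_pathway=150, n_selected=700, n_overlap=20))
```

In practice, the resulting p-values would be adjusted (e.g., Benjamini-Hochberg) before a pathway is declared significantly enriched.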
The gene ontology (GO) functional enrichment analysis of DEGs revealed that, within the biological process category, both continuous cropping treatments in Ding wan 10 significantly induced pea root defense and oxidative stress response processes; compared with CC1, the CC2 treatment further induced the oxidative stress response process (Fig. a, b, c). In Yun wan 8, the defense response was not induced under CC1, but CC2 significantly induced both the defense and oxidative stress response processes (Fig. d, e, f). We performed a KEGG classification analysis to understand the metabolic pathways involved in the DEGs. The largest numbers of DEGs between CC1 and RT, CC2 and RT, and CC2 and CC1 in Ding wan 10, and between CC2 and RT in Yun wan 8, were found in the plant-pathogen interaction pathway (Fig. a-c, e). A large number of DEGs involved in plant-pathogen interactions were also found between CC1 and RT, and between CC2 and CC1, in Yun wan 8 (Fig. d, f). In this metabolic pathway, FLS2 and BAK1 were differentially expressed between CC2 and RT in both Ding wan 10 and Yun wan 8, with more DEGs in Ding wan 10 (Fig. ). Additionally, the calcium-dependent and MAPK signaling pathways involved in this metabolic pathway were activated to varying degrees. CNGCS and Rboh were differentially expressed only in Ding wan 10, with many DEGs between CC2 and RT and between CC2 and CC1 (Fig. ). CaMCML was differentially expressed between the different treatments of Ding wan 10, but only between CC2 and RT in Yun wan 8 (Fig. ). NOS and MKK4/5 were differentially expressed between CC1 and RT, CC2 and RT, and CC2 and CC1 in Ding wan 10, and between CC2 and RT in Yun wan 8 (Fig. ). Moreover, the transcription factors WRKY22 and WRKY33 were differentially expressed in the continuous cropping treatments of both Ding wan 10 and Yun wan 8 (Fig. ). The number of DEGs involved in the plant-pathogen interaction pathway and in the calcium-dependent and MAPK signaling pathways was higher in Ding wan 10 than in Yun wan 8, and higher under CC2 than under CC1. These results suggest that the roots of Ding wan 10 under both continuous cropping treatments, and of Yun wan 8 under CC2, may have been attacked by pathogens and initiated pathogen-associated molecular pattern-triggered immunity. However, there were differences in the degree of infestation between the two pea genotypes. Interestingly, we observed that ACS6 was only up-regulated between CC2 and RT in Ding wan 10 (Fig. ), suggesting that ethylene synthesis may be one of the reasons why pea plants are sensitive to continuous cropping. In Ding wan 10, we found that DEGs (ETR, EBF1/2, ERF1/2) in the ethylene signaling pathway were up-regulated between treatments (Fig. a). However, EIN2 and EIN3 were only up-regulated between CC2 and RT. In Yun wan 8, only ETR was up-regulated between CC2 and RT (Fig. a). In the jasmonic acid signaling pathway of Ding wan 10 (Fig. b), JAZ and MYC2 were differentially expressed between the continuous cropping treatments and RT, with JAZ up-regulated between CC2 and CC1 and between CC2 and RT. In Yun wan 8, JAZ and MYC2 were only down-regulated between CC2 and RT (Fig. b). These results indicate that the ethylene and jasmonic acid signal transduction pathways respond to continuous cropping, but the degree of response differs according to pea genotype. Phenylpropanoid biosynthesis and flavonoid biosynthesis are involved in the plant immune response. In this study, we found that most of the genes encoding lignin synthesis enzymes in the phenylpropanoid biosynthesis pathway were up-regulated (Fig. a-c, Fig. a-c). There were more DEGs related to lignin synthesis in CC2 than in CC1 of Ding wan 10 (Fig. a-c). The changes in DEGs associated with lignin synthesis in Yun wan 8 were similar to those in Ding wan 10, although the differences in the number and variety of DEGs were less pronounced.
The changes in DEGs involved in flavonoid biosynthesis were similar to those in lignin synthesis (Fig. a-c, Fig. a-c). These results suggest that pea plants could resist the immune response induced by continuous cropping by increasing their content of flavonoids and lignin.
Integrative transcriptomic and metabolomic analyses
To clarify the metabolic pathways and metabolites related to pea roots and continuous cropping at the gene and metabolite levels, we analyzed the metabolic pathways co-annotated by DEGs and DAMs. We obtained 32 co-annotated pathways of DEGs and DAMs in CC1 of Ding wan 10 (Fig. a), compared to the RT, and 70 co-annotated pathways in CC2 (Fig. b). Similarly, we obtained 33 co-annotated pathways of DEGs and DAMs in CC1 of Yun wan 8 (Fig. a), compared to the RT, and 56 co-annotated pathways in CC2 (Fig. b). When comparing CC1 to CC2, we obtained 53 co-annotated pathways of DEGs and DAMs in Ding wan 10 (Fig. c) and only four co-annotated pathways in Yun wan 8 (Fig. c). Plants rely on signal transduction pathways and secondary metabolites to cope with stress. Our analysis revealed that several pathways, such as flavonoid biosynthesis, flavone and flavonol biosynthesis, glutathione metabolism, cysteine and methionine metabolism, phenylpropanoid biosynthesis, and the fatty acid and linoleic acid metabolic pathways, were co-annotated in CC1 and RT, CC2 and RT, and CC2 and CC1 of Ding wan 10 (Fig. a-c). Additionally, alpha-linolenic acid metabolism, biosynthesis of unsaturated fatty acids, diterpenoid biosynthesis, plant hormone signal transduction, and amino sugar and nucleotide sugar metabolism were co-annotated in CC2 and RT, and CC2 and CC1, of Ding wan 10 (Fig. b, c). However, sphingolipid metabolism and terpenoid backbone biosynthesis were only co-annotated between CC2 and RT (Fig. b). In Yun wan 8, pathways for flavonoid biosynthesis, glutathione metabolism, cysteine and methionine metabolism, starch and sucrose metabolism, and linoleic acid metabolism were co-annotated between CC1 and RT, and CC2 and RT (Fig. a, b). Glutathione metabolism was also co-annotated between CC2 and CC1 (Fig. c). However, alpha-linolenic acid metabolism, phenylpropanoid biosynthesis, terpenoid backbone biosynthesis, plant hormone signal transduction, cutin, suberine and wax biosynthesis, amino sugar and nucleotide sugar metabolism, and fatty acid metabolism were only co-annotated in CC2 (Fig. b). Plant hormone signal transduction was also co-annotated between CC2 and CC1 (Fig. c). These results indicate that the pathways co-annotated in CC1 and CC2 could be the core metabolic pathways of pea roots in response to continuous cropping. On the other hand, pathways annotated only under CC2 could be unique metabolic pathways of pea roots in response to severe continuous cropping.
Analysis of soil bacterial diversity and richness under different continuous cropping conditions
Bacterial 16S rDNA sequencing was used to analyze the cultivation soil of the two pea genotypes under different continuous cropping conditions. The dilution curves tended to be flat (Fig. a, b), and the coverage ranged from 93.60 to 95.72% (Table ), indicating that the sequencing depth covered the vast majority of species in the samples. Alpha diversity indexes were calculated at the OTU level to quantify the diversity and richness of the microbial community (Table ).
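The alpha-diversity indices reported in this and the following paragraph can be computed directly from per-sample OTU count vectors. A minimal sketch follows; exact conventions vary between pipelines (e.g., log base for Shannon, classic versus bias-corrected Chao1), and the Good's coverage formula shown is a common definition of the coverage statistic, not one stated in the paper:

```python
import numpy as np

def shannon(counts) -> float:
    """Shannon index H' = -sum(p_i * ln p_i); natural log is assumed here."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts) -> float:
    """Simpson diversity, expressed here as 1 - sum(p_i^2)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(1.0 - np.square(p).sum())

def chao1(counts) -> float:
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1))."""
    c = np.asarray(counts)
    s_obs = int((c > 0).sum())
    f1, f2 = int((c == 1).sum()), int((c == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def goods_coverage(counts) -> float:
    """Good's coverage C = 1 - F1/N, with F1 singletons and N total reads."""
    c = np.asarray(counts)
    return float(1.0 - (c == 1).sum() / c.sum())
```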
The ACE, Chao1, Shannon and Simpson indices of Ding wan 10 under the continuous cropping treatments were lower than under RT, but there was no significant difference among the treatments. The ACE, Chao1, Shannon and Simpson indices of the continuous cropping treatments in Yun wan 8 were higher than under RT, but again there was no significant difference among the treatments. These results showed that continuous cropping did not significantly affect the diversity and richness of bacteria in pea rhizosphere soil.
Differences in the distribution of soil bacteria at the phylum level under continuous cropping conditions
We performed 16S rDNA sequencing on soil samples from Ding wan 10 and Yun wan 8 and identified 26 and 25 phyla at this classification level, respectively. In Ding wan 10, CC2 increased the number of dominant bacteria (> 1%) and reduced the number of rare bacteria (< 0.1%), while the number of common bacteria (0.1-1%) remained unchanged. Similarly, in Yun wan 8, CC2 increased the number of dominant and common bacteria but reduced the number of rare bacteria. The relative abundance of Uncultured_bacterium_k_Bacteria in Ding wan 10 soil increased with continuous cropping times and was significantly higher under CC2 than under RT and CC1. However, the abundance of other bacteria did not change significantly. In contrast, the relative abundance of the various bacteria did not change significantly in Yun wan 8.
Differences in the distribution of soil bacteria at the genus level under continuous cropping conditions
We also analyzed the abundance of soil bacteria at the genus level for the two pea genotypes under the different continuous cropping treatments. Analysis of variance among groups on the abundance of bacteria at the genus level revealed that, among the top 10 taxa with the smallest p-values (Fig. c, d), four bacteria in Ding wan 10 soil increased or decreased regularly among the treatments. The abundances of Uncultured_bacterium_o_Azospirillales and Polycyclovorans were lowest under CC2 and highest under RT. The abundances of Uncultured_bacterium_f_A21b and Uncultured_bacterium_k_Bacteria increased with continuous cropping times and were highest under CC2 (Fig. c). In Yun wan 8 soil, the genus Turicibacter increased regularly with continuous cropping times and was highest under CC2 (Fig. d). These results suggest that the changes in these bacteria may be related to continuous cropping obstacles.
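The dominant/common/rare classification used above reduces to simple threshold binning of relative abundances. A minimal sketch (how abundances are averaged across replicates, and which bin the exact boundary values fall into, are assumptions not spelled out in the paper):

```python
import pandas as pd

def classify_abundance(rel_abund: pd.Series) -> pd.Series:
    """Bin taxa into rare (<0.1%), common (0.1-1%), and dominant (>1%) groups.

    rel_abund holds per-taxon relative abundances as fractions (0-1),
    e.g., mean abundance across replicate samples.
    """
    return pd.cut(
        rel_abund,
        bins=[0.0, 0.001, 0.01, 1.0],
        labels=["rare", "common", "dominant"],
        include_lowest=True,
    )

# Example: abundances of 0.05%, 0.5%, and 5% map to rare, common, dominant.
print(classify_abundance(pd.Series([0.0005, 0.005, 0.05])).tolist())
```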
Integrative soil bacteria and pea roots metabolomics analyses
Based on the experiments above, a certain degree of interaction between soil bacteria and the metabolites of pea roots can be inferred. To explore this relationship, we performed a correlation analysis between the metabolites involved in continuous-cropping-related metabolic pathways and the top 10 soil bacterial genera by relative abundance. The analysis revealed significant correlations between certain bacteria and some pea root metabolites. In Ding wan 10 CC1, soil Polycyclovorans bacteria were significantly negatively correlated with liquiritigenin and L-glutamic acid, two metabolites involved in the flavonoid biosynthesis and glutathione metabolism pathways, respectively (Fig. a). Uncultured_bacterium_f_A21b was significantly correlated with metabolites involved in phenylpropanoid biosynthesis, flavonoid biosynthesis, linoleic acid metabolism, cysteine and methionine metabolism, and flavone and flavonol metabolism (Fig. a). In Ding wan 10 CC2, Polycyclovorans was significantly correlated with metabolites involved in phenylpropanoid biosynthesis, amino sugar and nucleotide sugar metabolism, flavonoid biosynthesis, cysteine and methionine metabolism, fatty acid biosynthesis, terpenoid backbone biosynthesis, and glutathione metabolism (Fig. b). Uncultured_bacterium_f_A21b was significantly correlated with metabolites involved in glutathione metabolism, amino sugar and nucleotide sugar metabolism, flavonoid biosynthesis, phenylpropanoid biosynthesis, and linoleic acid metabolism (Fig. b). Uncultured_bacterium_o_Azospirillales was significantly associated with metabolites involved in glutathione metabolism, terpenoid backbone biosynthesis, phenylpropanoid biosynthesis, amino sugar and nucleotide sugar metabolism, cysteine and methionine metabolism, fatty acid biosynthesis, flavonoid biosynthesis, linoleic acid metabolism, and biosynthesis of unsaturated fatty acids (Fig. b). In Yun wan 8, Turicibacter in CC1 soil was significantly associated with glutathione metabolism, linoleic acid metabolism, flavonoid metabolism, and starch and sucrose metabolism (Fig. c), while Turicibacter in CC2 soil was significantly associated with metabolites involved in cysteine and methionine metabolism, terpenoid backbone biosynthesis, phenylpropanoid biosynthesis, alpha-linolenic acid metabolism, fatty acid biosynthesis, linoleic acid metabolism, starch and sucrose metabolism, and amino sugar and nucleotide sugar metabolism (Fig. d). These results suggest that key bacteria (those with significant changes in relative abundance compared to RT) in the soil may influence these metabolic pathways. Additionally, in CC1 of both pea genotypes, microorganisms with significant changes in relative abundance were significantly related to glutathione metabolism, flavonoid metabolism, and linoleic acid metabolism, whereas in CC2 of both genotypes they were significantly related to cysteine and methionine metabolism, phenylpropanoid biosynthesis, amino sugar and nucleotide sugar metabolism, fatty acid biosynthesis, linoleic acid metabolism, and terpenoid backbone biosynthesis.
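A minimal sketch of the correlation analysis above, using Spearman correlation as stated in the Methods. The matrices genus_abund (samples × top-10 genera) and metab (samples × pathway-related metabolites) are hypothetical stand-ins with matched sample rows.

```r
# Spearman correlations between genus abundances and root metabolite levels
pairs <- expand.grid(genus = colnames(genus_abund),
                     metabolite = colnames(metab),
                     stringsAsFactors = FALSE)

stats <- t(apply(pairs, 1, function(p) {
  ct <- cor.test(genus_abund[, p["genus"]], metab[, p["metabolite"]],
                 method = "spearman")
  c(rho = unname(ct$estimate), pval = ct$p.value)
}))
pairs <- cbind(pairs, stats)

# Significant pairs, e.g., Polycyclovorans vs. liquiritigenin (negative rho)
subset(pairs, pval < 0.05)
```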
Discussion
Many studies on continuous cropping obstacles have been reported, and with the development of modern biotechnology, related work has shifted from the phenotypic and physiological levels to the molecular level, which is of great value for further revealing how continuous cropping obstacles arise. However, the causes of continuous cropping obstacles are complex. Previous studies have focused on the structure and diversity of soil microbial communities, but few have addressed how crops respond to changes in soil microorganisms [ – ]. Other studies have correlated crop gene expression with the soil metagenome under continuous cropping, but the key role of metabolites has been overlooked. Therefore, building on previous research, this study combined 16S rDNA sequencing of soil bacteria with transcriptomics and metabolomics to characterize the relationship between pea plants and soil bacteria under continuous cropping and to compare the responses of different pea genotypes. Duplicate pot experiments showed that the growth of Ding wan 10 was inhibited under continuous cropping, with particularly obvious phenotypic changes under CC2. Yun wan 8 showed no significant change under CC1, but its growth was significantly inhibited under CC2. This indicated that pea plants already suffered severe continuous cropping obstacles under CC2 and that the two pea genotypes differed in their tolerance to continuous cropping. The metabolome forms the biochemical basis of the plant phenotype, and its remodeling under stress largely reflects the response and defense of plants [ – ]. Metabolomics revealed that potential autotoxin species in pea roots were abundant and that autotoxin levels increased with the number of continuous cropping cycles. Among them, the autotoxicity of cinnamic acid to pea plants has been demonstrated. However, that validation relied on the artificial addition of chemical reagents; whether autotoxins synthesized by pea plants themselves negatively regulate growth still needs to be verified with molecular techniques. Autotoxins secreted by plant roots are believed to cause continuous cropping obstacles by producing autotoxicity and by affecting soil microbial community structure [ – ]. However, whether autotoxin concentrations in the soil reach autotoxic levels, and how autotoxins regulate microbial community structure, remain to be investigated. This study found that with increasing continuous cropping cycles, the number of differential metabolites in pea roots increased, as did the number of metabolic pathways involved. This means that pea roots adapt to changes in the surrounding environment by changing the composition and content of their metabolites. Flavonoid biosynthesis and the linoleic acid metabolism pathway were significantly enriched under the different continuous cropping treatments. These pathways play a key role in the plant immune response, suggesting that the response of pea roots to continuous cropping is achieved by regulating metabolic pathways associated with the immune system. We compared the expression of pea root genes under each treatment using high-throughput sequencing and found that a large number of DEGs in the roots of the two pea genotypes under the different continuous cropping treatments were associated with plant-pathogen interactions.
Wyrsch et al. discovered that FLS2 expression could trigger an immune response in roots. Upon induction by flg22, FLS2 binds BAK1 to form a heterodimeric complex that triggers a signaling cascade including reactive oxygen species generation, calcium signaling, MAPK phosphorylation, and gene transcription, ultimately leading to the defense response [ – ]. In this study, FLS2, BAK1, CaMCML, and MKK4/5 were activated under CC1 in Ding wan 10 and under CC2 in Yun wan 8. This indicates that, under continuous cropping conditions, pea roots initiate an immune response that triggers calcium signaling and MAPK signaling cascades and activates the transcription factors (WRKY22, WRKY33) involved in defense gene expression. The number of DEGs involved in the calcium signaling and MAPK signaling cascades increased under CC2 in Ding wan 10, and most were up-regulated, suggesting that the strong response of these genes under CC2 may be responsible for the inhibition of pea plant growth. The phenylpropanoid and flavonoid biosynthetic pathways are involved in plant immune and defense responses [ , , ], and the accumulation of lignin and flavonoids can resist the invasion of pathogenic bacteria [ – ]. In this study, continuous cropping activated the expression of key genes involved in lignin and flavonoid biosynthesis, especially under CC2. These results indicate that lignin and flavonoid synthesis were the main defense measures of pea roots during the immune response. Furthermore, the ethylene and jasmonic acid signaling pathways have been confirmed as important pathways in plant pattern-triggered immunity (PTI), and increased levels of ethylene and jasmonic acid inhibit plant growth. In this study, key genes in the ethylene and jasmonate signaling pathways (ETR, EIN2, EIN3, EBF1/2, ERF1/2) were up-regulated in CC2 of Ding wan 10, and ACS6, which is involved in ethylene synthesis, was also up-regulated. This indicates that phytohormone signaling may be a specific response pathway of peas under severe continuous cropping and a possible reason for the differences in tolerance among pea genotypes. A combined transcriptome and metabolome analysis showed that metabolic pathways related to antioxidant synthesis (flavonoid biosynthesis, cysteine and methionine metabolism, and glutathione metabolism) were co-annotated in the different treatments of the two pea genotypes, similar to the findings of Huang et al. on continuous cropping obstacles in sugar beet. In addition, linoleic acid metabolism was co-annotated across the different continuous cropping treatments. Linoleic acid metabolism has been reported to be a typical PTI response, indicating that continuous cropping triggered an immune response in pea roots and caused oxidative stress. Furthermore, we found that alpha-linolenic acid metabolism, which resembles linoleic acid metabolism, was co-annotated only under CC2 in the two pea genotypes. We speculate that although linoleic acid and alpha-linolenic acid metabolism are both typical PTI reactions, the conditions under which they are initiated may depend on the degree of continuous cropping; this needs to be verified in future studies.
Plant hormone signal transduction, fatty acid biosynthesis, terpenoid backbone biosynthesis, and amino sugar and nucleotide sugar metabolism have been linked to continuous cropping disorders in peanut, soybean, and melon [ , , ]. In this study, these metabolic pathways were co-annotated only under CC2 in the two pea genotypes, indicating that alterations in these pathways are key to the stunted growth of pea plants and represent a specific response mechanism of pea roots under severe continuous cropping. In this experiment, the cultivation substrates differed between treatments while all other management measures were the same; therefore, the changes in plant phenotypes could only be attributed to differences in the soil. The soil nutrient content of each treatment was measured before planting (Table ), and combined with the phenotypic changes of the peas, this showed that soil nutrients were not the cause of the differences in pea growth. The presence of autotoxins in the soil is also an important factor in continuous cropping obstacles. However, in most cases the concentration of autotoxins in the soil is lower than the concentration that causes autotoxic stress in plants, and soil autotoxins are easily degraded by microorganisms. Some studies have reported that pea autotoxins are mainly secreted during the vegetative growth stage, whereas the soil in this experiment was left fallow for one month at the end of the pea growth period and then left for another month until it dried naturally. Therefore, autotoxins are not the main cause of growth inhibition in pea plants. These considerations indicate that soil microorganisms play an irreplaceable role in the occurrence of continuous cropping obstacles. Soil microorganisms are crucial for crop growth, development, and health [ – ] and have garnered considerable attention in the study of continuous cropping obstacles. In a study on the continuous cropping of peanuts, Li et al. found that bacterial suspensions obtained from continuous cropping soils significantly inhibited the growth of peanut plants, indicating the importance of soil bacteria in continuous cropping. Building on this, we analyzed the bacterial communities in rhizosphere soils and found that continuous cropping did not affect bacterial diversity in pea rhizosphere soil, consistent with the results of Yuan et al. in soybeans but different from the findings of Li et al. in peanuts. This difference may be attributable to crop type and continuous cropping duration. However, continuous cropping significantly altered the relative abundance of four bacterial taxa in Ding wan 10 soil and only one in Yun wan 8 soil. This variation may be attributed to differences between the pea genotypes, whose different root secretions recruit distinct bacterial communities to the rhizosphere [ – ]. Additionally, continuous cropping led to the repeated release of the same types of root exudates into the soil, which stimulated the colonization of certain microorganisms in the pea rhizosphere and increased their relative abundance, ultimately triggering an immune response in the pea roots. Future studies should confirm the vital role of these bacteria in the continuous cropping obstacles of peas through the isolation, purification, and biological verification of the strains.
Furthermore, the disparity in the number of significantly changed bacteria in the rhizosphere soil of the two pea genotypes may be key to the difference in their sensitivity to continuous cropping. Crop growth is influenced by soil microorganisms such as bacteria, while changes in crop phenotypes are primarily regulated by the levels of the plants' own metabolites. Therefore, understanding the relationship between soil bacteria and metabolites related to continuous cropping is crucial for determining how pea continuous cropping obstacles arise. This study found that the significantly altered bacteria under CC1 were significantly associated with immune-response and antioxidant-synthesis pathways in pea roots, such as linoleic acid metabolism, glutathione metabolism, and flavonoid metabolism. These results suggest that key soil bacteria may induce a mild immune response in pea roots under mild continuous cropping, leading to oxidative stress; this response may be a pathway common to both pea genotypes under mild continuous cropping (Fig. ). The number of root metabolic pathways significantly associated with bacteria increased under CC2, involving cysteine and methionine metabolism, fatty acid metabolism, phenylpropanoid biosynthesis, terpenoid backbone biosynthesis, linoleic acid metabolism, and amino sugar and nucleotide sugar metabolism. These pathways are specific response strategies of pea plants to severe continuous cropping environments, and CC1 did not induce pea plants to modulate them. Moreover, these pathways are involved in plant oxidative stress and defense responses. For instance, cysteine and methionine metabolism is related to antioxidant biosynthesis, while fatty acids are essential components of cell membranes and important for remodeling membrane fluidity, affecting plant resistance to adversity stress. This suggests that the severe continuous cropping environment may have caused oxidative damage to pea roots and induced membrane lipid peroxidation. Linoleic acid metabolism can affect the accumulation of callose, which is biosynthesized from UDP-glucose, a downstream product of amino sugar and nucleotide sugar metabolism [ – ]. Under stress, callose is deposited instantaneously and reversibly on specific cell walls. The metabolism of amino sugars and nucleotide sugars is related to maintaining and repairing the cell wall, and lignin synthesis depends on the phenylpropanoid metabolic pathway. The accumulation of lignin and callose can enhance the mechanical strength of plant cell walls and block the diffusion channels between cells. Moreover, terpenoids can improve plant stress resistance [ – ]. This indicates that the pea root system responds to severe continuous cropping environments by regulating multiple defense-related metabolic pathways (Fig. ). These results further illustrate the relationship between changes in soil bacterial abundance and phenotypic changes in pea plants under continuous cropping. These key bacteria and metabolic pathways warrant further investigation in future studies of continuous cropping obstacles in peas.
Conclusion
In this study, phenotypic observations revealed that continuous cropping inhibited the growth of pea plants. Transcriptomics and metabolomics were used to analyze changes in the pea root system at the gene and metabolite levels, and the numbers of DEGs and DAMs increased with the number of continuous cropping cycles. The different continuous cropping treatments induced common changes in genes and metabolites of flavonoid metabolism, glutathione metabolism, linoleic acid metabolism, and other metabolic pathways in pea roots, whereas severe continuous cropping induced common changes in plant hormone signal transduction, fatty acid biosynthesis, terpenoid backbone biosynthesis, and amino sugar and nucleotide sugar metabolism. To investigate the effects of soil bacteria on pea growth, 16S rDNA sequencing was used to analyze changes in the soil bacterial community structure. Soil bacterial diversity remained unchanged after continuous cropping, while the relative abundances of some bacteria changed. Under mild continuous cropping, bacteria with significant changes in relative abundance affected flavonoid metabolism, glutathione metabolism, and linoleic acid metabolism in pea roots. With increasing continuous cropping, these bacteria affected pea growth through cysteine and methionine metabolism, fatty acid metabolism, phenylpropanoid biosynthesis, terpenoid backbone biosynthesis, linoleic acid metabolism, and amino sugar and nucleotide sugar metabolism. Furthermore, the two pea genotypes exhibited different sensitivities to continuous cropping, with the roots of the sensitive genotype showing more DEGs, more DAMs, and more significantly changed bacteria. These results provide a deeper understanding of the defense strategies of pea roots against continuous cropping environments and offer new insights into the occurrence of continuous cropping obstacles in peas.
Methods
Acquisition and treatment of test soil
The field experiments were performed at Yuhe village, Anding District, Dingxi City, Gansu Province, an arid and semi-arid rain-fed agricultural area in western China. The area belongs to the loess hilly and gully region and spans N35°17′54″ to 36°02′40″ and E104°12′48″ to 105°01′06″. The average annual sunshine is 2500.1 h; the average annual temperature is 6.3 °C; the frost-free period is 141 days; normal annual precipitation is about 400 mm, falling mostly in autumn; and evaporation is as high as 1500 mm. The trial site had not been planted with legume crops during the previous 10 years. Seeds of two pea genotypes were selected for the trial: Ding wan 10 (common leaf pea) and Yun wan 8 (semi-leafless pea), both provided by the Dingxi Institute of Agricultural Science in Gansu Province, China. The field experiments were conducted over two years (2020 and 2021), with corn as the preceding crop in 2019. The site was divided into 18 plots (each 18 m²), which were used to cultivate two types of crops: (1) oil flax was planted in 12 plots, and (2) pea crops were planted in six plots (three plots per genotype). In the second year of the field experiments (2021), three treatments were applied: (1) six plots that had grown oil flax in 2020 were used to grow potatoes; (2) six plots that had grown oil flax in 2020 were used to grow peas (three plots per genotype); (3) six plots where peas had been planted in 2020 continued to grow peas. Planting, weeding, and harvesting were carried out manually, with planting in April and harvesting in July. The soil pH and the content of soil nutrient elements are presented in Table .
Greenhouse pot experiments simulating continuous pea cropping
On August 18, 2021, soil samples were collected from soil never planted with peas, soil planted with each pea genotype for one year, and soil planted with each pea genotype for two years. Each treatment had three replicate plots, and 25 kg of soil (0-20 cm layer) was randomly collected from each plot and brought back to the laboratory. After removing visible plant residues, the samples were allowed to dry naturally for the greenhouse pot experiment. The pot experiment was a two-factor experiment combining pea genotype and continuous cropping duration. Each pea genotype had three treatments: RT, CC1, and CC2, with nine biological replicates per treatment (three biological replicates per plot × three plots = nine). Each pot contained 4 kg of soil, and 15 surface-sterilized pea seeds were sown per pot, giving 54 pots in total (two genotypes × three treatments × nine replicates = 54 pots). The pot experiment was repeated twice, starting on August 24 and September 22, 2021, to ensure the reproducibility and authenticity of the experiment. After 16 days of cultivation, plants were carefully removed from the pots, and rhizosphere samples were collected by brushing off the soil adhering to the roots. At the same time, pea roots were rinsed with sterile water. The collected soil and root samples were frozen in liquid nitrogen and stored at −80 °C. Three soil and three pea root samples were collected per treatment for subsequent analysis. In the following text, we refer to this experiment's three treatments as RT, CC1, and CC2.
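For clarity, the factorial layout of the pot experiment can be enumerated directly; the short sketch below simply writes out the 2 genotypes × 3 treatments × 9 replicates = 54 pots using the labels defined above.

```r
# Enumerate the two-factor pot design described above
design <- expand.grid(genotype  = c("Ding wan 10", "Yun wan 8"),
                      treatment = c("RT", "CC1", "CC2"),
                      replicate = 1:9)
nrow(design)  # 54 pots
```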
Determination of shoot height, root length, and shoot (or root) fresh weight
Roots and shoots were separated, the water on the root surface was gently blotted away, and shoot and root fresh weights were measured immediately. Plant height and root length were measured at the same time.
Metabolite extraction and metabolomics analysis
Pea root metabolites were determined by liquid chromatography with tandem mass spectrometry (LC-MS/MS). Samples were freeze-dried, ground, and dissolved in 1.2 mL methanol with 2 µL internal standard (2-chloro-L-phenylalanine), then vortexed for 30 s every 30 min, six times in total. The samples were then sonicated for 5 min (in ice water) and incubated for 1 h at −20 °C to precipitate proteins. Thereafter, the samples were centrifuged at 4 °C for 15 min (12,000 × g). The supernatant was collected, filtered through a microporous membrane, and stored in an injection vial for LC-MS/MS analysis. LC-MS/MS analysis was performed on a Waters ACQUITY UPLC I-Class PLUS system. Raw MS data were collected using MassLynx (v4.2, Waters) software, and data processing was carried out with Progenesis QI. The METLIN database and a Biomark self-built library were used for peak annotation and identification. DAMs were extracted based on variable importance in projection (VIP > 1), t-test (P < 0.05), and |fold change (FC)| > 1. Metabolite analysis was performed using BMK Cloud ( http://www.biocloud.net/ ).
Transcriptomics sequencing and bioinformatics analysis
RNA was extracted from root tissues of the different treatments, with three biological replicates per treatment, using the DP441 test kit. RNA concentration and integrity were checked using a NanoDrop 2000 and an Agilent 2100, respectively. mRNA was enriched and randomly fragmented. Using the mRNA as template, first-strand cDNA was synthesized with random hexamer primers, and second-strand cDNA was synthesized with dNTPs, RNase H, and DNA polymerase. cDNA was purified with AMPure XP beads, followed by end repair and sequencing adapter ligation. Fragment size selection was performed with AMPure XP beads, and cDNA libraries were obtained by PCR enrichment. The effective library concentration (> 2 nM) was accurately quantified by qPCR, and the cDNA libraries were sequenced on the Illumina platform. Raw RNA reads were filtered to obtain high-quality clean reads by removing adapter sequences and low-quality bases. Clean reads were aligned to the pea reference genome ( https://urgi.versailles.inra.fr/Species/Pisum ) using HISAT2. StringTie was used to assemble transcripts, and gene expression was calculated. The DESeq2 (v1.6.3) package was used to select DEGs, with fold change (FC) and false discovery rate (FDR) as screening criteria; the FDR was obtained by correcting the p-values of the significance tests, and the thresholds were |FC| > 1.5 and FDR < 0.05. GO and KEGG pathway enrichment analyses of DEGs were implemented on the BMK Cloud platform ( http://www.biocloud.net/ ): GO term enrichment was performed with the GOseq R package, and statistical enrichment of DEGs in KEGG pathways was assessed with KOBAS.
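The stated screening thresholds translate directly into simple filters. The sketch below assumes hypothetical result tables dam_tab (with VIP, FC, and t-test p-value columns) and deg_tab (with FC and FDR columns); it mirrors the stated criteria rather than reproducing the authors' pipeline.

```r
# DAM screening: VIP > 1, |FC| > 1, t-test P < 0.05 (thresholds as stated)
dams <- subset(dam_tab, VIP > 1 & abs(FC) > 1 & pvalue < 0.05)

# DEG screening: |FC| > 1.5 and FDR < 0.05 (thresholds as stated)
degs <- subset(deg_tab, abs(FC) > 1.5 & FDR < 0.05)

nrow(dams)  # e.g., 432 DAMs for CC2 vs. RT in Ding wan 10
nrow(degs)  # e.g., 7316 DEGs for CC2 vs. RT in Ding wan 10
```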
Microbial DNA extraction, 16S rDNA gene sequencing, and data analysis
Nucleic acids were extracted using a DNA test kit (Model: DP812), and the nucleic acid concentration was measured with a microplate reader (Model: Synergy HTX). Full-length 16S rDNA was amplified with the primers 27F (5'-AGRGTTTGATYNTGGCTCAG-3') and 1492R (5'-TASGGHTACCTTGTTASGACTT-3'), and libraries were constructed. Library concentration and size were checked with Qubit and an Agilent 2100, respectively, before sequencing on a Sequel II (PacBio, USA) sequencer. The raw PacBio Sequel data were in BAM format; circular consensus sequencing (CCS) files were exported with the SMRT Link analysis software, barcodes were identified on the CCS sequences, length filtering was performed, and chimeras were removed to obtain effective CCS reads. The UCHIME algorithm (v8.1) was used to detect and remove chimeric sequences and obtain clean reads. The unique sequence set was clustered into OTUs at a 97% identity threshold using USEARCH (v10.0). Taxonomy annotation of the OTUs was then performed with the Naive Bayes classifier in QIIME2 against the SILVA database. Alpha diversity was calculated using QIIME2. Group comparisons were performed by ANOVA with the BH-FDR multiple comparisons correction.
Integrated analysis of transcriptome and metabolome
The correlation between DEGs and DAMs was assessed by Pearson correlation (|PCC| > 0.8, P < 0.05). All DEGs and DAMs were simultaneously mapped to the KEGG pathway database to determine the common metabolic pathways in which both DEGs and DAMs participated.
Integrated analysis of metabolome and bacteria
Correlation analyses between soil bacteria and DAMs were performed using Spearman correlation.
Statistical analysis
Shoot height, root length, shoot (or root) fresh weight, soil pH, and soil nutrient content were measured and analyzed with at least three biological replicates. Statistical analysis, tabulation, and plotting were carried out using SPSS 22.0, Microsoft Excel 2010, and Origin 9.
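As an illustration of the integrated transcriptome-metabolome step above, the sketch below computes all pairwise Pearson correlations and applies the stated |PCC| > 0.8, P < 0.05 filter. deg_expr and dam_expr are hypothetical samples × features matrices with matched sample rows; the p-value is derived from the usual t statistic for a Pearson correlation.

```r
# Pairwise Pearson correlations between DEG expression and DAM intensity
pcc <- cor(deg_expr, dam_expr, method = "pearson")  # genes x metabolites

# Two-sided p-value from the t statistic of a Pearson correlation
n     <- nrow(deg_expr)                      # number of matched samples
tstat <- pcc * sqrt((n - 2) / (1 - pcc^2))
pval  <- 2 * pt(-abs(tstat), df = n - 2)

# Gene-metabolite pairs passing the stated thresholds
sig <- which(abs(pcc) > 0.8 & pval < 0.05, arr.ind = TRUE)
```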
Below are the links to the electronic supplementary material.
Supplementary Material 1
Supplementary Material 2
Supplementary Material 3
Supplementary Material 4
Supplementary Material 5
Supplementary Material 6
Supplementary Material 7
Supplementary Material 8
Supplementary Material 9
Supplementary Material 10
Supplementary Material 11
Supplementary Material 12
Supplementary Material 13
Single-cell sequencing reveals the immune microenvironment landscape related to anti-PD-1 resistance in metastatic colorectal cancer with high microsatellite instability
Colorectal cancer (CRC) is one of the most common malignant digestive tract tumors, with the second highest mortality rate and the third highest incidence rate among all malignant tumors. Although diagnosis and treatment strategies for CRC have improved rapidly in recent years, the prognosis remains unfavorable for many CRC patients, and many are not diagnosed until the disease is already advanced, which is a significant obstacle to effective treatment. The expression of mismatch repair deficiency (dMMR)-related proteins and microsatellite instability-high (MSI-H) status have been widely recognized as valuable predictors of the efficacy of immunotherapy in CRC patients. Compared with traditional cancer therapies, immunotherapy has improved the objective response rate (ORR) of MSI-H metastatic CRC (mCRC) to some extent. However, the ORR of patients with MSI-H/dMMR mCRC under first-line anti-programmed cell death protein-1 (PD-1) monotherapy was only 43.8%. Thus, because of the MSI-H phenotype, more than half of patients benefit from neither immunotherapy nor chemotherapy ± targeted therapy. In view of this, the present study aims to explore the mechanisms of anti-PD-1 resistance in patients with MSI-H mCRC. Single-cell RNA sequencing (scRNA-seq) has contributed to a better understanding of the immune landscape in MSI-H patients, which in turn has offered novel insights into the mechanisms of immunotherapy. Previous studies have mainly focused on the mechanisms of immune resistance in microsatellite-stable (MSS) mCRC; several have revealed the molecular mechanisms of CD40 and CD73 antagonist therapy at the single-cell level, providing a possible theoretical basis for combining these agents with immune checkpoint inhibitors in MSS mCRC. However, the mechanism of PD-1 resistance in patients with MSI-H mCRC is still unclear. Therefore, comparing the immune microenvironment between anti-PD-1-resistant and anti-PD-1-sensitive groups by scRNA-seq could help elucidate the mechanisms underlying immunotherapy resistance in MSI-H mCRC patients. In the present study, tissue samples were collected during colonoscopy from treatment-sensitive and treatment-resistant groups of MSI-H mCRC patients treated with a PD-1 blocker (tislelizumab). We then comprehensively analyzed the cell subtypes and key genes of the two groups using scRNA-seq. Follow-up experiments were performed with clinical samples and mouse models to verify the potential mechanisms of anti-PD-1 resistance involving the key cell types and genes indicated by scRNA-seq.
Methods

Sample collection
A total of 23 MSI-H/dMMR mCRC patients were treated with anti-PD-1 monotherapy at Yunnan Cancer Hospital (The Third Affiliated Hospital of Kunming Medical University) between August 1, 2020, and May 31, 2022. A PD-1 blocker (200 mg, tislelizumab, BeiGene Ltd., China) was injected intravenously on the 1st day of each 21-day cycle, and efficacy was evaluated radiologically after every third cycle of treatment. Patients were considered sensitive to anti-PD-1 treatment if they showed a complete response (CR) or partial response (PR) and resistant if they had progressive disease (PD) or stable disease (SD). Fresh intestinal tumor tissues (2–4 mm) were taken during colonoscopy. Tissue samples from all 23 patients (10 sensitive and 13 resistant) were analyzed by immunohistochemistry (IHC) and immunofluorescence (IF), and six patients (3 PR and 3 PD) were randomly selected for scRNA-seq. All participants provided written informed consent before the study began, and the ethics committee of Yunnan Cancer Hospital approved all study protocols involving human subjects, in accordance with the ethical principles of the Declaration of Helsinki. The inclusion criteria were: primary colorectal adenocarcinoma confirmed by colonoscopy biopsy and distant metastasis confirmed by CT/MR; MSI-H/dMMR status confirmed by multiplex qPCR or IHC; and receipt of first-line anti-PD-1 monotherapy. Patients were excluded if a sample could not be obtained by colonoscopy because of contraindications or if single-cell sequencing could not be performed because of insufficient cell viability or quantity.
Radiology and colonoscopy
A Siemens (SOMATOM Definition AS+) 128-slice spiral CT was used for plain and enhanced scanning. Patients were fasting and had performed bowel preparation as directed. The scanning range covered the entire abdominal cavity, with a slice thickness of 1.0 mm and an interval of 0.6 mm. Iohexol (300 mg/ml, 100 ml) was used as the contrast agent; the delay time was 35–40 s for arterial phase scanning and 70–80 s for venous phase scanning. MRI was performed on a Philips Elision 3.0 T, and the scanning sequence included transverse T1WI_Tse, sagittal and coronal T2WI_Tse, high-resolution T2WI_Tse, diffusion-weighted imaging, and multiphase dynamic enhancement, with gadodiamide as the contrast agent. Colonoscopy was performed with the Olympus CV-290, and tissues were collected by doctors with at least five years of experience.
Single-cell preparation
Fresh tumor tissues obtained from CRC patients were immediately transferred to MACS C-tubes (Miltenyi Biotec) with digestive enzymes, and digestion was performed on a gentleMACS Octo Dissociator (Miltenyi Biotec) for 30 min at 37 °C. Single cells were processed on a Chromium Controller (10X Genomics) according to the manufacturer's protocol. Briefly, cells were washed with 20 mL of RPMI 1640 (Gibco), filtered through a 70-μm nylon strainer (BD Falcon), collected by centrifugation (330 × g, 10 min, 4 °C), and resuspended in a basic solution containing 0.2% fetal bovine serum (FBS; Gibco).
Single-cell capture and library preparation
A 10× Chromium system (10× Genomics) with library preparation by LC Sciences was used to run the single cells according to the recommended protocol for the Chromium Single Cell 3′ Reagent Kit (v2 chemistry). The Illumina HiSeq4000 was used for sequencing, and the Cell Ranger package (v1.2.0; 10× Genomics) was used for postprocessing and quality control of the libraries.
Quantitative analysis of the single-cell sequencing data
All single-cell sequencing data were analyzed with Cell Ranger v6.1.1; the results are shown in Additional file: Table S1. CRC samples from 3 resistant patients (named R1, R2, R3) and 3 sensitive patients (named S1, S2, S3) contained a total of 56,092 cells; the number of genes detected per sample ranged from 17,564 to 18,093; the median number of unique molecular identifiers per cell ranged from 3,403 to 6,915; and the sequencing saturation ranged from 47% to 63%. These results indicate that the overall sequencing quality was sufficient for the subsequent analyses.
Processing of CRC single-cell sequencing data
In total, this analysis included 56,092 cells from the sensitive and resistant groups. The Seurat package in R (version 4.0.5) was used for analysis and quality control. During quality control, genes detected in fewer than 3 cells were excluded, and cells with fewer than 200 detected genes in the gene-cell matrices were removed as low quality (n = 3,071). Next, the global-scaling LogNormalize method was used to normalize the gene expression values of the remaining 53,021 cells. The FindVariableFeatures function with the vst method in RStudio was then employed to identify the most variable genes; 2000 highly variable genes were identified and used for dimensionality reduction by principal component analysis. The JackStraw and ScoreJackStraw functions were applied to determine the most significant principal components. Finally, graph-based unsupervised clustering was conducted with the FindNeighbors and FindClusters functions and visualized on a nonlinear t-distributed stochastic neighbor embedding (t-SNE) plot.
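The steps above follow the standard Seurat v4 workflow; a condensed sketch is shown below. Only the stated thresholds (genes in ≥ 3 cells, ≥ 200 genes per cell, 2000 variable genes, 20 principal components) come from the text; the object names, input path, and clustering resolution are assumptions.

```r
library(Seurat)

# Load the Cell Ranger output (path is an assumption)
counts <- Read10X("filtered_feature_bc_matrix/")
seu <- CreateSeuratObject(counts, min.cells = 3, min.features = 200)

# LogNormalize, then select the 2000 most variable genes with the vst method
seu <- NormalizeData(seu, normalization.method = "LogNormalize")
seu <- FindVariableFeatures(seu, selection.method = "vst", nfeatures = 2000)

# Scale, run PCA, and assess component significance with JackStraw
seu <- ScaleData(seu)
seu <- RunPCA(seu, features = VariableFeatures(seu))
seu <- JackStraw(seu, num.replicate = 100)
seu <- ScoreJackStraw(seu, dims = 1:20)

# Graph-based clustering on 20 PCs, visualized with t-SNE
seu <- FindNeighbors(seu, dims = 1:20)
seu <- FindClusters(seu, resolution = 0.8)  # resolution is an assumption
seu <- RunTSNE(seu, dims = 1:20)
DimPlot(seu, reduction = "tsne", label = TRUE)
```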
Identification and characterization of cell subtypes
Cell type identities were characterized using the SingleR package (v1.6.1) with reference data from the celldex package. The FindMarkers function in the R package Seurat was used to list the markers of each cell cluster with min.pct = 0.5, logfc.threshold = 1, min.diff.pct = 0.3, and P < 0.05; the markers used in this pipeline are listed in Additional file: Table S2. To investigate the molecular mechanisms involved in each cell subtype, GO enrichment analysis of biological process (BP), cellular component (CC), and molecular function (MF) terms and KEGG pathway analysis were performed using the R package clusterProfiler (version 3.14.3), with the significance threshold set to adjusted (adj.) P < 0.05.
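A minimal sketch of the annotation step is given below, under the assumption that a celldex reference such as the Human Primary Cell Atlas was used; the text does not name the specific reference dataset.

```r
library(SingleR)
library(celldex)

# Example celldex reference (an assumption; the paper only states that
# SingleR was run against a celldex reference)
ref <- HumanPrimaryCellAtlasData()

# Annotate the Seurat clusters using the log-normalized expression matrix
pred <- SingleR(test = GetAssayData(seu, slot = "data"),
                ref = ref, labels = ref$label.main,
                clusters = seu$seurat_clusters)

# Transfer the per-cluster labels back onto the cells
seu$celltype <- pred$labels[match(seu$seurat_clusters, rownames(pred))]
```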
Screening of differentially expressed genes (DEGs)
The top 2000 highly variable genes were screened using the FindVariableFeatures function with the vst method. The FindMarkers function in the R package Seurat was used both to find the marker genes of the different cell subtypes (screening thresholds: min.pct = 0.5, logfc.threshold = 1, min.diff.pct = 0.3, P < 0.05) and to identify DEGs of each cell subtype between anti-PD-1-sensitive and anti-PD-1-resistant samples (thresholds: |avg_log2FC| ≥ 0.5, P ≤ 0.05). Genes overlapping between the pseudotime-related genes and the PD-1 resistance-related DEGs were considered candidate key genes.
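A sketch of the two FindMarkers passes and the overlap step follows; the metadata column `response`, the subtype shown, and the `pseudotime_genes` vector (defined in the pseudotime sketch below) are assumptions.

```r
# 1) Cluster markers for each annotated subtype
Idents(seu) <- seu$celltype
markers <- FindAllMarkers(seu, min.pct = 0.5, logfc.threshold = 1,
                          min.diff.pct = 0.3, only.pos = TRUE)
markers <- subset(markers, p_val < 0.05)

# 2) Sensitive-vs-resistant DEGs within one subtype
#    (seu$response holds "sensitive"/"resistant"; the name is an assumption)
cd8 <- subset(seu, celltype == "CD8+ T cells")
degs <- FindMarkers(cd8, ident.1 = "resistant", ident.2 = "sensitive",
                    group.by = "response", logfc.threshold = 0.5)
degs <- subset(degs, p_val <= 0.05 & abs(avg_log2FC) >= 0.5)

# 3) Candidate key genes = DEGs that are also pseudotime-related
candidates <- intersect(rownames(degs), pseudotime_genes)
```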
Pseudotime analysis
To reveal differences in immune cells between the sensitive and resistant groups, Monocle (version 2.20.0) was used to analyze sample trajectories and explore the differentiation process. First, ordering genes were selected with Monocle's dpFeature procedure, which builds on the clusters rather than on custom developmental marker genes; signature genes with a high degree of dispersion (q < 0.01) were identified among the cell subtypes selected by dpFeature. Next, DDRTree was applied for dimensionality reduction and pseudotemporal alignment of the cells along the trajectory, and the trajectories were visualized as 2D t-SNE maps. Pseudotime-related genes were defined as the abovementioned signature genes with q < 0.05.
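A condensed Monocle 2 sketch of this trajectory analysis is given below; the bridge from the Seurat object and the use of `~sm.ns(Pseudotime)` for the pseudotime-related gene test are standard practice rather than details stated in the text.

```r
library(monocle)  # Monocle 2

# Build a CellDataSet from the Seurat counts (illustrative bridge;
# object names are assumptions)
cds <- newCellDataSet(as(GetAssayData(seu, slot = "counts"), "sparseMatrix"),
                      phenoData = new("AnnotatedDataFrame", data = seu@meta.data),
                      expressionFamily = negbinomial.size())
cds <- estimateSizeFactors(cds)
cds <- estimateDispersions(cds)

# dpFeature-style selection: genes that differ between annotated clusters
diff <- differentialGeneTest(cds, fullModelFormulaStr = "~celltype")
ordering_genes <- rownames(subset(diff, qval < 0.01))
cds <- setOrderingFilter(cds, ordering_genes)

# DDRTree dimensionality reduction and pseudotemporal ordering
cds <- reduceDimension(cds, max_components = 2, method = "DDRTree")
cds <- orderCells(cds)
plot_cell_trajectory(cds, color_by = "celltype")

# Pseudotime-related genes among the signature genes (q < 0.05)
pt <- differentialGeneTest(cds[ordering_genes, ],
                           fullModelFormulaStr = "~sm.ns(Pseudotime)")
pseudotime_genes <- rownames(subset(pt, qval < 0.05))
```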
Protein‒protein interaction analysis of candidate key genes
The Search Tool for the Retrieval of Interacting Genes (STRING; http://string.embl.de/ ) was used to perform protein‒protein interaction (PPI) analysis of the candidate key genes; the STRING database assesses both direct (physical) and indirect (functional) associations of proteins. The PPI network of the candidate key genes was constructed with a medium confidence score (0.4), and Cytoscape 3.6.1 was used to build a network model of the PPI results. The most highly connected genes were selected as the key genes of this study based on the connectivity (degree) of each node in the PPI network.
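Node degree can be ranked from the STRING edge list, for example a TSV exported from the STRING website; a sketch under that assumption (the export format and file name are not from the text):

```r
library(igraph)

# edges: STRING export whose first two columns hold the interacting node
# names (exact column headers vary with the STRING version)
edges <- read.delim("string_interactions.tsv")
g <- graph_from_data_frame(edges[, 1:2], directed = FALSE)

# Rank candidate key genes by connectivity (degree)
deg <- sort(degree(g), decreasing = TRUE)
head(deg, 4)  # the most highly connected hub genes
```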
Cell culture and transfection
The colorectal cancer cell line CT26 was used in this study, and the plasmid used for cell transfection was synthesized by Ono Company. Eighteen to twenty-four hours before lentivirus infection, 3 × 10⁵ adherent cells per well were seeded in 6-well plates; approximately 6 × 10⁵ cells per well were transfected with lentivirus. When the cells had adhered and reached 70% confluence, the original culture medium was replaced with 2 ml of fresh medium containing 8 μg/ml polybrene and an appropriate amount of viral suspension. Cells were incubated at 37 °C for 8 h, after which the virus-containing medium was replaced with fresh medium. Successful transfection was indicated by visible fluorescent protein after 48–72 h; if no fluorescence was observed, the infection protocol was repeated. Puromycin selection was then applied for one month to establish stable overexpression lines.
Animal experiments
All animal experiments were approved by the Animal Ethics Committee of Kunming Medical University, and all procedures strictly followed the ARRIVE guidelines. The experiment was blinded throughout: the researchers removing animals from their cages did not know which group each animal would be assigned to, the animal caretakers and experimenters did not know the allocation sequence, and the researchers evaluating, testing, or quantifying the results did not know which intervention each group had received. Male BALB/c mice (6–8 weeks old, 20–30 g) were purchased from Beijing Sipeifu Biotechnology Co., Ltd. All mice were maintained in an SPF room in the animal-housing facilities of Kunming Medical University with food and water available ad libitum, and the experiment began after 1 week of acclimatization. The sample size was estimated with Mead's resource equation, based on the error degrees of freedom (E) of the analysis of variance. The total sample size was 25 mice, randomly divided by simple random sampling into 5 groups of 5: OE-NC, OE-IL-1β, OE-IL-1β + Diacerein, OE-IL-1β + Nivolumab, and OE-IL-1β + Diacerein + Nivolumab.

For the control model (OE-NC group), 1 × 10⁶ empty-vector stably transfected cells in 100 μl of PBS were injected subcutaneously into the flank of each mouse; for the treatment models, 1 × 10⁶ IL-1β-overexpressing stably transfected cells in 100 μl of PBS were injected at the same site. When tumor volume reached 40 mm³ (i.e., day 7), the OE-IL-1β + Diacerein group began intraperitoneal (i.p.) injections of the IL-1β antagonist diacerein (0.07 mg/kg), the OE-IL-1β + Nivolumab group received i.p. injections of the anti-PD-1 antibody nivolumab (2 mg/kg), and the OE-IL-1β + Diacerein + Nivolumab group received both drugs i.p. Mice in the OE-NC and OE-IL-1β groups were administered the same volume of PBS. Diacerein and nivolumab were injected every 3 days from day 7 after tumor cell inoculation. Mouse weight and tumor volume were measured in each group on days 0, 7, 10, 12, 14, and 16; tumor volume was calculated as ½ × (length × width²). On day 16 after tumor cell inoculation, chosen as the endpoint on the basis of pre-experiment results, all mice were sacrificed by intraperitoneal injection of sodium pentobarbital (200 mg/kg), and death was confirmed by the disappearance of the corneal reflex and pupil dilation. Tumor tissues were excised and weighed, and subsequent molecular biology experiments were performed. Throughout the experiment, tumor length never exceeded 20 mm and tumor weight never exceeded 10% of body weight.
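As a quick check of this design, the sketch below applies Mead's resource equation (error degrees of freedom E = (N − 1) − (t − 1) for a one-way layout, conventionally kept between 10 and 20) and the tumor volume formula; this is illustrative arithmetic, not code from the study.

```r
# Mead's resource equation for a one-way design with N animals and t groups
mead_E <- function(N, t) (N - 1) - (t - 1)
mead_E(N = 25, t = 5)  # 20, at the upper end of the recommended 10-20 range

# Tumor volume: half of length times width squared (mm^3)
tumor_volume <- function(length_mm, width_mm) 0.5 * length_mm * width_mm^2
tumor_volume(10, 8)  # e.g., 320 mm^3
```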
Quantitative real-time PCR
Total RNA was extracted from cultured CT26 cells or tissues using TRIzol reagent (Ambion) and reverse transcribed into cDNA with the SureScript first-strand cDNA synthesis kit (Servicebio) according to the manufacturer's instructions. qPCR was performed with 2× Universal Blue SYBR Green qPCR Master Mix (Servicebio) on a CFX96 sequence detection system (Bio-Rad, Hercules, CA, USA) using the following primers:
IL-1β (human), forward: 5'-AATCTCCGACCACCACTACA-3', reverse: 5'-GACAAATCGCTTTTCCATCT-3';
MMP9 (human), forward: 5'-ATGAGCCTCTGGCAGCCCCTGGTCC-3', reverse: 5'-GGACCAGGGGCTGCCAGAGGCTCAT-3';
GAPDH (human), forward: 5'-CCCATCACCATCTTCCAGG-3', reverse: 5'-CATCACGCCACAGTTTCCC-3';
IL-1β (mouse), forward: 5'-CCTATGTCTTGCCCGTGG-3', reverse: 5'-GTGGGTGTGCCGTCTTTC-3';
MMP9 (mouse), forward: 5'-GTGTGTTCCCGTTCATCTTT-3', reverse: 5'-GCCGTCTATGTCGTCTTTAT-3';
GAPDH (mouse), forward: 5'-CCTTCCGTGTTCCTACCCC-3', reverse: 5'-GCCCAAGATGCCCTTCAGT-3'.
GAPDH was used as the endogenous control, and relative mRNA expression was calculated with the 2^−ΔΔCT method.
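A minimal sketch of the 2^−ΔΔCT calculation with invented Ct values:

```r
# Relative expression by the 2^-ddCt method (example Ct values are invented;
# Ct values are cycle thresholds from the CFX96 run)
ddct <- function(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl) {
  dct_sample  <- ct_target - ct_ref            # normalize to GAPDH
  dct_control <- ct_target_ctrl - ct_ref_ctrl  # normalize the control group
  2^-(dct_sample - dct_control)                # fold change vs. control
}
ddct(ct_target = 22.1, ct_ref = 18.0, ct_target_ctrl = 25.3, ct_ref_ctrl = 18.2)
```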
Western blotting
Protein samples were isolated from tissues or cells using RIPA lysis buffer (Servicebio, Wuhan, China) containing 1% protease and phosphatase inhibitors (PMSF; Servicebio), and protein was quantified with a BCA protein assay kit (Beyotime Biotechnology). Sodium dodecyl sulfate‒polyacrylamide gel electrophoresis was used to separate proteins of different molecular weights, and proteins were transferred to polyvinylidene fluoride membranes (Servicebio). Membranes were blocked with 5% skim milk for 90 min at room temperature, incubated with primary antibodies (Proteintech; IL-1β 1:1000; MMP9 1:1000; β-actin 1:25,000) at 4 °C overnight, and then incubated with HRP-conjugated secondary antibodies for 2 h at room temperature.
Immunohistochemistry and immunofluorescence
Paraffin sections of tissues were deparaffinized and rehydrated, and heat-induced antigen retrieval was performed in sodium citrate buffer. Sections were incubated with 3% H₂O₂ for 15 min to block endogenous peroxidase activity (this step was omitted for IF) and then blocked with PBS containing 5% fetal bovine serum for 30 min. Tissues were incubated with primary antibodies overnight at 4 °C, followed by incubation in the dark with conjugated secondary antibodies at room temperature for 2 h. DAB was used as the chromogen for IHC. For IF, DAPI (Ex: 330–380 nm, Em: 420 nm) was used to stain cell nuclei (blue), Alexa Fluor 488 (Ex: 495 nm, Em: 519 nm) to stain CD11b (green), Alexa Fluor 555 (Ex: 555 nm, Em: 565 nm) to stain CD14, CD15, and CD8 (red), and Alexa Fluor 594 (Ex: 590 nm, Em: 617 nm) to stain CD33 (orange).
Flow cytometry
Polymorphonuclear (PMN)-MDSCs and monocytic (M)-MDSCs were stained with CD11b-FITC (BioLegend, 101205, USA), Ly-6G-PE (BioLegend, 127607, USA), and Ly-6C-APC (BioLegend, 128016, USA), and CD8+ T cells were stained with CD3-FITC (BioLegend, 100203, USA) and CD8-PE (BioLegend, 100707, USA) according to the manufacturer's instructions. Samples were run on a Guava easyCyte 8HT flow cytometer (Millipore), and forward and side scatter gating were performed in FlowJo v10.
Statistical analysis
Bioinformatics analyses were performed in R. The VarElect online tool was used to assess the correlation between genes in the PPI network and CRC, with higher scores indicating stronger correlations; the significance threshold was P < 0.05 unless otherwise stated. All data are presented as the mean ± standard error (SE) of independent experiments. Two-tailed one-way analysis of variance (ANOVA) with post hoc multiple-comparison analysis was used, and significance levels are indicated as P < 0.05, P < 0.01, P < 0.001, and P < 0.0001. Statistical analysis was performed with GraphPad Prism 9.0.
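The study used GraphPad Prism; an equivalent R sketch of a one-way ANOVA with post hoc multiple comparisons, using invented group data shaped like the tumor volume results, is:

```r
# Invented example data for the five mouse groups (n = 5 each); values are
# drawn around the reported group means purely for illustration
set.seed(1)
df <- data.frame(
  group  = rep(c("OE-NC", "OE-IL-1b", "Diacerein", "Nivolumab", "Combo"),
               each = 5),
  volume = c(rnorm(5, 300, 40), rnorm(5, 745, 190), rnorm(5, 620, 130),
             rnorm(5, 540, 90), rnorm(5, 355, 40))
)
fit <- aov(volume ~ group, data = df)  # one-way ANOVA
summary(fit)
TukeyHSD(fit)                          # post hoc multiple comparisons
```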
Results

Efficacy evaluation for MSI-H/dMMR mCRC after anti-PD-1 monotherapy
A total of 23 MSI-H/dMMR mCRC patients were treated with first-line anti-PD-1 monotherapy between August 1, 2020, and May 31, 2022, and treatment response was evaluated radiologically after every 3 cycles of the PD-1 inhibitor. PR was recorded for seven patients, CR for three, SD for six, and PD for seven; the ORR was 43.48% (10/23), and the disease control rate (DCR) was 69.57% (16/23) (Fig. A). The radiological findings for primary and metastatic lesions in six patients pre- and post-immunotherapy are shown in Fig. B and C. Figure B shows the changes in primary and metastatic lesions in the 3 patients in the resistant group (R1, R2, and R3) after anti-PD-1 treatment. The length of the primary lesion of R1 increased from 2.1 cm to 3.2 cm, while that of the metastatic lesion increased from 1.6 cm to 2.7 cm; the primary and metastatic lesions of R2 increased from 0.5 cm to 1.9 cm and from 0.3 cm to 0.8 cm, respectively; and the primary lesion of R3 increased from 1.5 cm to 1.9 cm, while the number of metastatic lesions increased from 3 to more than 20. The responses of R1, R2, and R3 were evaluated as PD. Similarly, Fig. C shows the changes in primary and metastatic lesions in the 3 patients in the sensitive group (S1, S2, and S3) after anti-PD-1 monotherapy. The primary tumor of S1 decreased from 2.0 cm to 1.5 cm and the metastatic tumor from 2.1 cm to 1.2 cm; the primary and metastatic lesions of S2 decreased from 1.2 cm to 0.7 cm and from 5.3 cm to 4.4 cm, respectively; and the primary and metastatic lesions of S3 decreased from 1.3 cm to 0.8 cm and from 2.1 cm to 0.5 cm, respectively. The responses of S1, S2, and S3 were evaluated as PR.
Identification of 23 cell clusters based on single-cell sequencing data from CRC samples
A total of six patients (three PR and three PD) were randomly selected for scRNA-seq. 10× Genomics scRNA-seq datasets were obtained from fresh CRC tissues from three resistant (R1, R2, R3) and three sensitive (S1, S2, S3) patients. After removing 3071 low-quality cells, a total of 53,021 cells were used in the final analysis (Fig. A): 7679 cells from R1, 10,797 from R2, 9020 from R3, 7880 from S1, 9369 from S2, and 8276 from S3. Figure B shows the top 2000 highly variable genes, with the ten most variable genes labeled. PCA of the 2000 highly variable genes demonstrated no clear separation of CRC cells between the resistant and sensitive groups (Fig. C). Twenty principal components were selected based on linear dimensionality reduction analysis (Fig. D), and nonlinear dimensionality reduction on these 20 dimensions with the RunUMAP function revealed a relatively uniform distribution of cells across samples (Fig. E). These cells were then classified into 23 cell clusters based on gene expression levels by t-SNE, and the distribution of these cell clusters was nearly identical between the sensitive and resistant groups (Fig. F).
Characterization of the nine cell subtypes
Using the R packages SingleR and celldex, the 23 cell clusters were annotated into nine cell subtypes: CD8+ T cells, epithelial cells, B cells, dendritic cells (DCs), hematopoietic stem cells, monocytes, fibroblasts, myocytes, and endothelial cells (Fig. A). Taken together with Fig. F, the aggregation of CD8+ T cells and monocytes was significantly higher in the sensitive group than in the resistant group. The number of cells of each subtype in each sample is shown in Additional file: Table S3. A total of 679 marker genes were obtained for the nine cell subtypes (Additional file: Table S2): 176 markers for fibroblasts, 143 for endothelial cells, 131 for myocytes, 67 for DCs, 65 for monocytes, 40 for hematopoietic stem cells, 27 for epithelial cells, 17 for CD8+ T cells, and 13 for B cells (Fig. B). The top gene for each cell subtype is shown in Fig. C, and the heatmap of these genes illustrates that each cluster displayed distinct gene expression features (Fig. D). GO analysis showed that these marker genes were mainly enriched in immune-related biological processes such as lymphocyte and monocyte differentiation and positive regulation of leukocytes, and they were also significantly associated with the "cytokine-mediated signaling pathway" (Additional file: Figure S1A-C). KEGG enrichment analysis (Additional file: Figure S1D) indicated that the marker genes were significantly correlated with immune/inflammatory responses (e.g., "cytokine‒cytokine receptor interaction", "Fc epsilon RI signaling pathway", "AGE-RAGE signaling pathway", "cell adhesion molecules"). Cancer-related pathways, including the "PI3K-Akt signaling pathway" and "proteoglycans in cancer", were also significantly enriched.
Simulation of cell cluster developmental trajectories and screening of key genes
Developmental trajectories were simulated for the nine cell subtypes, and 1623 feature genes were found to be differentially expressed among the cell subpopulations (Additional file: Table S4). Figure A illustrates the relatively high dispersion of these feature genes. Individual cells were then classified by these feature genes using the R package Monocle, and a tree structure of the entire spectrum of differentiation trajectories was constructed (Fig. B). From a cell-typing perspective, the overall proposed temporal evolutionary trend of the cell subtypes was a gradual transition from epithelial cells to intermediate cells (e.g., fibroblasts, hematopoietic stem cells, monocytes, B cells) and eventually to CD8+ T cells (Fig. C). A total of 1454 pseudotime-related genes were identified from the 1623 feature genes (P < 0.05; Additional file: Table S5). KEGG analysis (Additional file: Figure S2A) revealed that these genes were involved in "cytokine‒cytokine receptor interaction", "viral protein interaction with cytokine and cytokine receptor", and "cell cycle"; they were also closely linked to the tumor-associated pathways "NF-kappa B signaling pathway", "p53 signaling pathway", and "PPAR signaling pathway". GO-BP analysis revealed that the pseudotime-related genes were related to the immune response, inflammatory response, and immune cell differentiation, proliferation, migration, and chemotaxis (Additional file: Figure S2B). Moreover, these genes performed molecular functions such as "antigen binding", "immunoglobulin receptor binding", and "extracellular matrix structural constituent" (Additional file: Figure S2C) in cellular components such as "immunoglobulin complex", "external side of plasma membrane", and "immunoglobulin complex, circulating" (Additional file: Figure S2D).

To examine differences in gene expression for each cell subtype between the sensitive and resistant groups, we performed differential analysis using the FindMarkers function in the Seurat package. A total of 195 DEG records were obtained across the cell subtypes (97 upregulated and 98 downregulated), corresponding to 155 unique genes after deduplication (Additional file: Table S6). Functional enrichment analysis indicated that these genes were related to immune and inflammatory responses. DEGs in the CD8+ T-cell subtype were mainly involved in the IFN-related response and oxygen transport, while DEGs in the DC and fibroblast subtypes were associated with the chemokine-related response and neutrophil migration (Additional file: Figure S3A and B). Enrichment results for GO-CC and GO-MF are presented in Additional file: Figure S3B and C. KEGG enrichment analysis showed that DEGs in the B-cell and monocyte subtypes were involved in similar pathways and were enriched in antigen processing and presentation, while DEGs in the DC and fibroblast subtypes were involved in cancer-related pathways, including chemokine and IL-17 signaling (Additional file: Figure S3D). Through overlap analysis, 130 common genes were identified between the 1454 pseudotime-related genes and the 155 deduplicated PD-1 resistance-related genes (Fig. D; Additional file: Table S7). A PPI network containing 109 nodes and 435 edges was drawn from the 130 common genes using the STRING database (Fig. E). VarElect analysis of the 130 common genes showed that the two genes with the highest connectivity in the PPI network, IL-1β (score = 17.72) and MMP9 (score = 13.45), were the genes most closely associated with CRC (Additional file: Table S8; Additional file: Figure S4). KEGG pathway enrichment analysis of these two key genes showed that they are involved in cytokine‒cytokine receptor interaction (IL-1β) and the MAPK and PI3K-Akt signaling pathways (IL-1β and MMP9); the specific signaling pathway maps are shown in Additional file: Figure S5.
Relationship between IL-1β and MDSCs or CD8+ T cells in CRC patients
IL-1β and MMP9 were identified by scRNA-seq as the top two genes most strongly correlated with anti-PD-1 resistance. We used IHC to detect the expression of IL-1β and MMP9 in 10 sensitive and 13 resistant tumor tissues from MSI-H/dMMR patients, and all 23 patients were categorized into IL-1β-negative or IL-1β-positive groups based on the IHC grade of IL-1β, which was significantly higher in IL-1β-positive than in IL-1β-negative tumor tissues (P < 0.001; Fig. A). Of the 13 tissues from resistant patients, 11 had high expression of IL-1β; of the 10 tissues from sensitive patients, only one did (Fig. B). A similar expression trend was observed for the other key gene, MMP9 (P < 0.0001; Fig. C), and MMP9 expression was positively correlated with IL-1β expression (R = 0.6945, P < 0.0001; Fig. D). MDSCs are heterogeneous bone marrow-derived cells that lead to the inactivation of CD8+ T cells and immune resistance, and multiple studies have shown that IL-1β might play crucial roles in the aggregation and differentiation of MDSCs. We therefore examined the relationship between IL-1β expression and MDSCs using IF. The fluorescence intensity of PMN-MDSC and M-MDSC markers was significantly higher in tissues from IL-1β-positive patients than in those from IL-1β-negative patients (P < 0.0001; Fig. E-F). CD8+ T cells were identified by scRNA-seq as one of the cell clusters differing most significantly between the sensitive and resistant groups, and CD8+ T cells in the tumor microenvironment (TME) are essential for the antitumor effects of immunotherapy. We therefore used IF to detect the CD8+ T-cell marker in tissues and assessed the correlation between IL-1β and CD8+ T cells. The fluorescence intensity of CD8 was significantly lower in IL-1β-positive than in IL-1β-negative tissues (P < 0.0001; Fig. G). There were significant correlations between IL-1β and PMN-MDSCs (R = 0.8168, P < 0.0001; Fig. H), M-MDSCs (R = 0.7604, P < 0.0001; Fig. I), and CD8+ T cells (R = 0.7684, P < 0.0001; Fig. J).
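Correlations such as R = 0.8168 between IL-1β and PMN-MDSC staining are standard bivariate correlation tests; a sketch with invented per-patient fluorescence intensities (the variable names are assumptions):

```r
# il1b and pmn: per-patient mean fluorescence intensities (invented data;
# the study correlated IL-1b staining with PMN-MDSC, M-MDSC, and CD8 signals)
set.seed(1)
il1b <- runif(23, 0, 100)
pmn  <- 0.8 * il1b + rnorm(23, sd = 15)
ct <- cor.test(il1b, pmn)  # Pearson R and P value, as reported in Fig. H-J
ct$estimate; ct$p.value
```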
To further demonstrate the influence of IL-1β on PD-1 resistance in colorectal cancer, an in vivo xenograft model was used. The human IL-1β gene was stably transfected into CT26 cell lines to increase IL-1β expression, as verified by qPCR and western blotting (Additional file : Figure S6). IL-1β-overexpressing or untransfected control CT26 cells were then subcutaneously injected into male BALB/c mice ( n = 5 per group; Fig. A and B). Progressive tumor growth was observed from day 7 after tumor cell inoculation. As shown in Fig. C-D, IL-1β upregulation resulted in a substantial increase in tumor volume, whereas mouse body weight did not differ significantly among the five groups. To determine whether the IL-1β antagonist cooperates with the PD-1 inhibitor to restrain tumor growth, three groups were treated with diacerein, nivolumab, or both. The IL-1β-induced increases in CRC tumor volume (745.74 ± 188.34) and weight (0.73 ± 0.16) were greatly attenuated by diacerein (619.59 ± 127.03, 0.49 ± 0.09) or nivolumab (540.47 ± 90.92, 0.49 ± 0.10, P < 0.05), and treatment with both drugs led to the greatest attenuation of tumor growth (355.49 ± 39.89, 0.32 ± 0.08, P < 0.001); diacerein and nivolumab thus showed marked synergistic effects. Tumor sections from the inoculated mice are shown in Fig. E.
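Group comparisons of this kind (n = 5 mice per arm) are typically two-sample t-tests on endpoint tumor volumes. The sketch below simulates plausible volumes around the reported means and SDs purely to show the calculation; the simulated values are not the study's measurements.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Simulated endpoint tumor volumes for n = 5 mice per group, drawn around
# the reported means/SDs. In practice, volume is often estimated from
# caliper measurements as length * width^2 / 2.
il1b_only   = rng.normal(745.74, 188.34, size=5)
combination = rng.normal(355.49, 39.89, size=5)

# Welch's t-test (unequal variances), common for small animal groups.
t, p = ttest_ind(il1b_only, combination, equal_var=False)
print(f"t = {t:.2f}, P = {p:.4f}")
```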
Diacerein cooperates with nivolumab to suppress MDSCs and increase CD8+ T cells

To examine whether the effectiveness of diacerein treatment against IL-1β-induced nivolumab resistance was mediated by MDSCs, flow cytometry was performed using biomarkers for PMN-MDSCs, M-MDSCs, and CD8+ T cells. As shown in Fig. , IL-1β resulted in a significant increase in the number of PMN-MDSCs (17.62 ± 5.56 vs. 4.64 ± 0.84, P < 0.0001) and M-MDSCs (5.61 ± 1.35 vs. 1.26 ± 0.33, P < 0.0001) in the TME, decreased the proportion of CD8+ T cells (2.94 ± 1.17 vs. 4.35 ± 1.37, P < 0.01) in the tumor tissues of mice, and enhanced immunosuppression. Next, we further assessed whether diacerein and nivolumab showed similar synergistic effects in reversing the IL-1β-mediated effects on PMN-MDSCs, M-MDSCs, and CD8+ T cells. Monotherapy with either diacerein or nivolumab suppressed the differentiation of PMN-MDSCs (11.15 ± 2.90 vs. 17.62 ± 5.56, 12.0 ± 1.88 vs. 17.62 ± 5.56, P < 0.01) and M-MDSCs (3.21 ± 0.77 vs. 1.26 ± 0.33, 4.09 ± 0.90 vs. 1.26 ± 0.33, P < 0.01) and increased the number of CD8+ T cells (4.16 ± 0.76 vs. 2.94 ± 1.17, 4.81 ± 1.07 vs. 2.94 ± 1.17, P < 0.01). Figure B and C indicate that, compared with nivolumab alone, the combined treatment significantly reduced the number of PMN-MDSCs (8.79 ± 2.98 vs. 12.0 ± 1.89, P < 0.01) and M-MDSCs (2.26 ± 0.72 vs. 4.09 ± 0.90, P < 0.001). Figure D shows that CD8+ T-cell aggregation was substantially lower with nivolumab monotherapy (4.81 ± 1.07) than with the combination therapy (5.70 ± 0.90, P < 0.05).
As one of the key resistance-related genes identified by single-cell sequencing, MMP9 is considered to have strong antivascular and immunosuppressive functions. However, the correlation between MMP9 and IL-1β remained unclear. MMP9 was detected in mouse CRC tissue using qPCR, western blotting, and IHC. As shown in Fig. , IL-1β treatment significantly increased MMP9 expression in mouse tumor tissues according to qPCR (9.57 ± 0.26 vs. 1.08 ± 0.44, P < 0.0001) and western blotting (1.13 ± 0.07 vs. 0.21 ± 0.04, P < 0.0001). Next, we assessed whether diacerein could cooperate with nivolumab to reduce the expression of MMP9. By qPCR analysis, monotherapy with either diacerein (4.21 ± 0.60 vs. 9.57 ± 0.26, P < 0.01) or nivolumab (6.36 ± 1.10 vs. 9.57 ± 0.26, P < 0.01) attenuated the IL-1β-induced upregulation of MMP9. Combination therapy reduced MMP9 expression more potently than monotherapy with diacerein (1.62 ± 0.59 vs. 4.21 ± 0.60, P < 0.0001) or nivolumab (1.62 ± 0.59 vs. 6.36 ± 1.10, P < 0.0001). The expression of MMP9 in tumor tissue was positively correlated with that of IL-1β (R² values are shown in Fig. I).
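Relative qPCR fold changes such as the 9.57 vs. 1.08 quoted above are conventionally computed with the 2^(−ΔΔCt) method. The sketch below walks through that arithmetic with hypothetical Ct values; the paper does not report raw Ct data, so the inputs here are illustrative only.

```python
# 2^(-ddCt) relative quantification with hypothetical Ct values.
# dCt  = Ct(target) - Ct(reference gene)
# ddCt = dCt(treated) - dCt(control)
ct_mmp9_treated, ct_ref_treated = 22.1, 17.0
ct_mmp9_control, ct_ref_control = 25.4, 17.1

d_ct_treated = ct_mmp9_treated - ct_ref_treated   # 5.1
d_ct_control = ct_mmp9_control - ct_ref_control   # 8.3
dd_ct = d_ct_treated - d_ct_control               # -3.2

fold_change = 2 ** (-dd_ct)
print(f"MMP9 fold change ~ {fold_change:.2f}")    # ~9.2-fold upregulation
```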
Compared with traditional chemotherapy or targeted therapy, immunotherapy has a unique molecular mechanism by which it shrinks tumors. It is generally considered that immune checkpoint inhibitors enhance the body’s general immunity by inducing T-cell activation and “releasing the immune brake” in the TME . In a previous study from our center, 29 patients with MSI-H/dMMR locally advanced colorectal cancer received neoadjuvant immunotherapy with a single-agent PD-1 inhibitor, and the ORR was 100% (29/29), consistent with the results of the NICHE study . In the present study, 23 patients with MSI-H/dMMR mCRC received first-line PD-1 inhibitor treatment, but the ORR was only 43.5% (10/23). These divergent responses of MSI-H mCRC to anti-PD-1 monotherapy are likely caused by differences in TME composition .

In view of this, through scRNA-seq, we defined various cellular subtypes, screened key genes, and revealed the TME signaling pathways that differ between the anti-PD-1 treatment-sensitive and -resistant groups. Together, the results suggested that the proportions of CD8 + T-cell subsets in the sensitive group were significantly higher than those in the resistant group . Multiple CD8 + T-cell subsets in the TME may affect the response rate of new therapies targeting the immune system .

MDSCs are heterogeneous cells derived from bone marrow that promote angiogenesis and immunosuppression . Recent studies have shown that decreasing the aggregation of MDSCs in the TME can significantly increase the infiltration of CD8 + T cells and enhance the antitumor effects of PD-1 inhibition . The immunosuppressive effect of MDSCs might be related to the secretion of various cytokines, such as inducible nitric oxide synthase, arginase 1, interleukin-6, and transforming growth factor-β .

In this study, IL-1β ranked first in terms of connectivity in the overlap of pseudotime-related genes and PD-1 resistance-related genes. Monocyte-derived macrophages are a major source of IL-1β production during the immune response to pathogen infection . Previous studies have suggested that IL-1β could promote the invasion and growth of human colon cancer cells , and IL-1β polymorphisms are associated with CRC recurrence . IL-1β blockade has been shown to reverse immunosuppression and has been shown to exhibit synergy with PD-1 inhibitors to promote the elimination of several tumor types, including breast , pancreatic , and renal tumors. Nevertheless, the precise role of IL-1β in CRC immunotherapy has yet to be elucidated. Several studies have suggested that IL-1β can induce the infiltration of MDSCs by producing granulocyte colony stimulating factor, various CXC chemokines, and vascular adhesion molecules . However, the role of IL-1β-mediated MDSC aggregation in CRC immunotherapy remains unclear.

MMP9 is a matrix metalloproteinase that has been identified as a component of the angiogenic switch during carcinogenesis . MMP9 mediates tumor invasion, metastasis, and immune escape through a pro-oncogenic signaling pathway and is associated with relapse and poor prognosis in patients with CRC . The combination of MMP9 inhibition with immune checkpoint inhibition enhanced the efficacy of immunotherapy in mouse models of melanoma and breast cancer . MMP9 has also shown promise in the stratification of prognosis and immune checkpoint treatment responsiveness in patients with hepatocellular carcinoma .
The present KEGG analysis results for 130 key genes revealed that IL-1β and MMP9 were highly correlated with the MAPK and PI3K-Akt signaling pathways. Recent studies have also indicated that IL-1β upregulates MMP9 expression through different signaling pathways in different diseases . The infiltration of MDSCs could further stimulate the secretion of MMP9 , and IL-1β-driven accumulation of MDSCs might be one of the main sources of MMP9. However, the function of MMP9 upregulation mediated by IL-1β in CRC immunotherapy needs further study.

In the present study, we described the cellular landscape of immunotherapy-resistant and immunotherapy-sensitive groups at the single-cell level. The degree of aggregation of CD8 + T cells and monocytes in the sensitive group was significantly higher than that in the resistant group. IL-1β and MMP9 were identified as the two genes with the highest correlation with anti-PD-1 resistance. Moreover, IL-1β-driven infiltration of MDSCs enhanced anti-PD-1 resistance in MSI-H/dMMR CRC.
In the present study, the ORR and DCR of MSI-H/dMMR mCRC treated with first-line PD-1 monotherapy were 43.75% (7/16) and 68.75% (11/16), respectively. IL-1β and CD8 + T cells were found to have the highest correlation with anti-PD-1 resistance among genes and cell types, respectively. Mouse experiments demonstrated that IL-1β-driven MDSC infiltration suppressed the accumulation of CD8 + T cells, which enhanced the anti-PD-1 resistance of MSI-H/dMMR CRC. Together, these findings suggest that IL-1β antagonists may prove promising as new drugs to reverse resistance to PD-1 inhibitors.
Additional file 1: Table S1. Single-cell sequencing data of clinical samples from 6 patients with MSI-H/dMMR.
Additional file 2: Table S2. Markers used in the cell subtype analysis pipeline.
Additional file 3: Table S3. Number of cells of 9 cell subtypes in each sample.
Additional file 4: Figure S1. GO and KEGG analysis of marker genes.
Additional file 5: Table S4. 1623 differentially expressed genes in 9 cell subsets.
Additional file 6: Table S5. 1454 pseudotime-related genes identified from the 1623 feature genes.
Additional file 7: Figure S2. KEGG and GO analysis of pseudotime-related genes.
Additional file 8: Table S6. 155 DEGs in gene expression of each cell subtype between the sensitive and resistant groups.
Additional file 9: Figure S3. GO and KEGG analysis of PD-1 resistance-related DEGs.
Additional file 10: Table S7. 130 common genes among the 1454 pseudotime-related genes and the 155 (de-duplicated) PD-1 resistance-related genes.
Additional file 11: Table S8. VarElect analysis of the 130 common genes.
Additional file 12: Figure S4. VarElect analysis results for the 130 common genes among the 1454 pseudotime-related genes and the 155 (de-duplicated) PD-1 resistance-related DEGs. Figure S5. (A) The cytokine–cytokine receptor interaction and (B) the MAPK and PI3K-Akt signaling pathway maps of IL-1β and MMP9. Figure S6. Construction of a stable CT26 cell line overexpressing IL-1β. (A) Colorectal cancer cell line CT26 used in this experiment. (B) Plasmid map of the IL-1β overexpression vector. (C) Screening of positive colonies by ampicillin after plasmid transformation. (D) Gel map of plasmid electrophoresis. (E) Western blotting was applied to detect the expression of IL-1β in stably transformed cell lines. (F) qPCR was used to detect the expression of IL-1β in stably transformed cell lines.
Additional file 13. Images of the original blots.
Senkyunolide I: A Review of Its Phytochemistry, Pharmacology, Pharmacokinetics, and Drug-Likeness
Phthalides are a group of structurally distinctive constituents naturally distributed in several important medicinal herbs in Asia, Europe, and North Africa . Accumulating evidence demonstrates that natural phthalides have various pharmacological activities, including analgesic , anti-inflammatory , antithrombotic , and antiplatelet activities, mostly consistent with the traditional medicinal uses of their natural plant sources. For example, Ligusticum chuanxiong Hort. ( L. chuanxiong ) and Angelica sinensis (Oliv.) Diels ( A. sinensis ), frequently used in traditional Chinese medicine (TCM) to invigorate the circulation of qi and the blood, both contain a high level of phthalide components, typically exceeding 1% in their rhizome or root . One of the most broadly studied phthalides is ligustilide (LIG) ( a), which displays analgesic, anti-inflammatory, antihypertensive, and neuroprotective activities against brain injury . However, LIG is not a promising drug candidate because of its instability, strong lipophilicity, poor water solubility, and low bioavailability. Druggability was improved, to a certain degree, by preparing LIG as a nano-emulsion or a hydroxypropyl-β-cyclodextrin complex , but a specific technique is required and the manufacturing cost is high. N-butylphthalide (NBP), first isolated from celery seed, has been licensed in China for the indication of mild and moderate acute ischemic stroke , and clinical trials of its effects on vascular cognitive disorder as well as amyotrophic lateral sclerosis are ongoing . Still, extensive application of NBP is limited owing to its hepatotoxicity, poor solubility, and unsatisfactory bioavailability . Therefore, discovering natural phthalides with improved druggability from traditional medicinal herbs is both intriguing and meaningful. SI ( b) is also a natural phthalide, present in L. chuanxiong and A. sinensis at relatively low levels, and is generally considered an oxidation product of LIG. It has similar pharmacological activities but significantly superior stability, solubility, safety, bioavailability, and brain accessibility compared with LIG, thus meriting further druggability research and evaluation. In this paper, the physicochemical characteristics, isolation and purification methods, and pharmacological and pharmacokinetic properties of SI are reviewed. An illustrated summary is described in .

2.1. Distribution in Nature

SI was first discovered as a natural phthalide from Ligusticum wallichii Franch in 1983, under the name (Z)-ligustidiol . Subsequently, SI was found in the rhizome of Cnidium officinale Makino in 1984 . According to the published literature to date, SI has been found mainly in Umbelliferae plants, including Angelica sinensis (Oliv.) Diels , Ligusticum chuanxiong Hort , Lomatium californicum (Nutt) , Cryptotaenia japonica Hassk , and so on. In general, natural phthalides are distributed mainly in plants belonging to the Umbelliferae family, and are also occasionally found in the Cactaceae, Compositae, Lamiaceae, Gentianaceae, and Loganiaceae families. In addition, natural phthalides obtained as fungal and lichen metabolites have been reported .

2.2. Production

2.2.1. Chemical Transformation from LIG

Only a trace amount of SI can be found in fresh rhizomes of L. chuanxiong , while more SI is produced by the degradation of LIG during processing and storage.
Li and colleagues investigated the chemical changes induced by different processing methods, and the results indicated that the main phthalides in rhizomes of L. chuanxiong , such as LIG and senkyunolide A (SA), decreased significantly, while levistolide A, SI, and its isomer senkyunolide H (SH) increased correspondingly. According to the report, the highest level of SI (0.32 mg/g) was found when fresh rhizomes of L. chuanxiong were dried at 60 °C for 24 h. In addition, the chemical changes of rhizomes of L. chuanxiong during storage were assayed. The contents of LIG, coniferyl ferulate, and SA decreased significantly after 2 years of storage at room temperature, resulting in increases in the quantities of SI, SH, ferulic acid, levistilide A, and vanillin. SI increased by 37.6% during the period of storage and was presumed to be the dominant oxidative product of LIG . Duric and co-workers found that LIG is relatively stable in plant oil cells; however, purified LIG became very unstable and inclined to form dimers or trimers under light, whereas when heated in the dark, it mainly transformed into SI and its isomer SH . These results are consistent with those reported by Lin et al. . Duan and colleagues studied the reaction products of LIG in an electrochemical reactor. Five products were separated and identified, including the two dihydroxyl products SI and SH, as well as the epoxide 6,7-epoxyligustilide, a key intermediate in the transformation of LIG into SI and SH. Processing conditions also influence SI production in the rhizome of L. chuanxiong : a steaming process with or without rice wine resulted in higher SI levels than a stir-frying process . A simple mechanism for the transformation of LIG to SI is illustrated in .

2.2.2. Metabolic Transformation of LIG

SI is the major metabolite of LIG in vivo and in vitro. Yan et al. found that SI was one of the main metabolites when LIG was injected intravenously in SD rats. Similarly, LIG can be transformed into SI when incubated with small intestinal homogenates or liver microsomes of rats . When LIG was incubated with human or rat hepatocytes at 37 °C, SI was found to be the main metabolite, with proportions of 42% and 70%, respectively . Furthermore, research on the enzyme kinetics of LIG incubated with rat liver microsomes demonstrated that CYP3A4, CYP2C9, and CYP1A2 are the main metabolic enzymes involved in LIG metabolism . However, the key enzyme catalyzing LIG into SI in vivo has not been identified.

Pure SI is a yellowish amorphous powder or sticky oil with a celery-like smell. Unlike most natural phthalides, SI is soluble in water as well as in some organic solvents, such as ethanol, ethyl acetate, and chloroform. Several studies suggested that SI has better drug-like properties than LIG.

3.1. Stability

The degradation of SI in aqueous solution conforms to first-order degradation kinetics, and the energy of activation (Ea) was 194.86 kJ/mol. SI in weakly acidic solution showed better stability, while its degradation accelerated significantly under alkalescent conditions . It was reported that oxygen is the dominant factor that accelerates the degradation of SI and SA induced by light and temperature. At room temperature with daylight, SA was completely converted into butylphthalide within 2 months, while only about 20% of SI was converted into its cis-trans isomer after 5 months of storage, indicating that SI is more stable than SA . Peihua Zhang et al. introduced a methanol extract of L. chuanxiong into boiling water and evaluated the content changes during decoction. The content of LIG decreased from 14 mg/g to 0.4 mg/g after 20 min, while the SI content increased from 1.4 mg/g to 1.7 mg/g during 60 min of heating. Formula granules are a type of dried decoction of a prepared herbal medicine. In the characteristic chromatograms of both A. sinensis and L. chuanxiong formula granules issued by the National Pharmacopoeia Commission of China, SI is marked as a dominant and characteristic peak, suggesting that SI is stable during the decoction, concentration, and drying processes. On the contrary, as the most abundant phthalide in both L. chuanxiong and A. sinensis slices, LIG is almost undetectable in these formula granules .
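The stability data above rest on two standard relations: first-order decay, C(t) = C0·exp(−kt), and the Arrhenius equation, k = A·exp(−Ea/RT). Given the reported Ea, a rate constant measured at one temperature can be extrapolated to another; the sketch below shows that arithmetic, with an illustrative (not measured) rate constant at 60 °C.

```python
import math

R  = 8.314        # gas constant, J/(mol*K)
Ea = 194_860.0    # J/mol, activation energy reported for SI degradation

# Illustrative first-order rate constant at 60 C (assumed, not from the paper).
k_333 = 1.0e-3    # 1/h at T1 = 333.15 K

def k_at(T2, k1, T1):
    """Arrhenius extrapolation: ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)."""
    return k1 * math.exp(-(Ea / R) * (1.0 / T2 - 1.0 / T1))

k_298 = k_at(298.15, k_333, 333.15)
half_life = math.log(2) / k_298          # first-order half-life, t1/2 = ln2/k
print(f"k(25 C) ~ {k_298:.2e} 1/h")
print(f"t1/2(25 C) ~ {half_life:.2e} h")
```

The very large Ea implies a steep temperature dependence, which is consistent with the observation that drying temperature strongly affects how much SI accumulates.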
3.2. Permeability

SI has satisfactory permeability and solubility. Yuan and co-workers screened the potential transitional components in L. chuanxiong extract by a serum pharmacochemical method and high-performance liquid chromatography-diode array detection tandem mass spectrometry/mass spectrometry (HPLC-DAD-MS/MS) analysis. SI was identified as a transitional component both in plasma and in cerebrospinal fluid, while ferulic acid was detected only in plasma. SI can pass through the BBB easily, and the AUC of SI in the brain accounted for 77.9% of that in plasma . The water solubility of SI was measured to be 34.3 mg/mL, and the lipid–water partition coefficient was 13.43 . Previous studies revealed that SI exhibits good absorption in the rat gastrointestinal tract, including the jejunum, colon, ileum, and duodenum, with no significant differences in the absorption rate constant and apparent absorption coefficient among segments .
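Since the review is partly about drug-likeness, the usual rule-of-five descriptors are easy to approximate in silico. The sketch below uses RDKit; the SMILES string is our own rendering of SI's systematic name, 3-butylidene-6,7-dihydroxy-4,5,6,7-tetrahydroisobenzofuran-1(3H)-one, with the C6/C7 stereochemistry omitted, and should be checked against an authoritative database before reuse.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

# SMILES drawn from the systematic name of senkyunolide I;
# stereochemistry omitted -- verify before reuse.
si = Chem.MolFromSmiles(r"CCC/C=C1\OC(=O)C2=C1CCC(O)C2O")

print("MW   :", round(Descriptors.MolWt(si), 2))   # ~224.25 (C12H16O4)
print("cLogP:", round(Crippen.MolLogP(si), 2))     # calculated partition coeff.
print("HBD  :", Descriptors.NumHDonors(si))        # two hydroxyl donors
print("HBA  :", Descriptors.NumHAcceptors(si))
```

All four values fall comfortably within Lipinski's limits, in line with the favorable solubility and permeability reported above; note that the experimental partition coefficient of 13.43 corresponds to a logP of about 1.13.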
4.1. Analytical Methods

The reported analytical methods of SI in herbs and prescriptions, as well as the corresponding parameters, are shown in . The SI analyses were generally performed by high-performance liquid chromatography (HPLC) combined with an ultraviolet (UV) or diode array detection (DAD) detector. Most separations were carried out on a C18 column using a mixture of acetonitrile and acidic aqueous solution as the mobile phase. In addition, other detection devices, such as electrospray ionization tandem mass spectrometry (ESI-MS) and time-of-flight mass spectrometry (TOF-MS), were used for the structure elucidation and metabolite analysis of SI.

4.2. Content in Medicinal Material and Preparation

The contents of SI in medicinal materials and preparations are shown in and , respectively. Among the commonly used TCM, SI occurs to a limited extent in A. sinensis and L. chuanxiong : the maximum content of SI in A. sinensis is 1 mg/g, while it reaches more than 10 mg/g in L. chuanxiong . The reason is presumably that LIG is present at a higher level in L. chuanxiong and may produce more SI than in A. sinensis . In addition, SI concentrations in Chuanxiong dispensing granules range from 2.08 to 6.07 mg/g; this relatively high content might be attributed to its good water solubility or to accelerated transformation from LIG during the decocting, concentrating, or drying processes. Quantitative analyses of SI in multiple compound preparations containing L. chuanxiong rhizome and A. sinensis root show a large fluctuation, from 0.02 to 2.206 mg/g, suggesting that SI content is most likely influenced by material quality, formulation, and preparation technology.
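Content determinations of this kind rest on an external-standard calibration curve: peak area is regressed on standard concentration, an unknown is back-calculated, and the result is converted to mg/g of material. A minimal sketch follows; the peak areas, concentrations, extraction volume, and sample mass are all invented for illustration.

```python
import numpy as np

# Hypothetical calibration standards: SI concentration (ug/mL) vs. peak area.
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([41e3, 83e3, 205e3, 411e3, 824e3])

slope, intercept = np.polyfit(conc, area, 1)    # linear least-squares fit

# Back-calculate an unknown sample, then convert to content (mg/g).
sample_area = 260e3
c_sample = (sample_area - intercept) / slope    # ug/mL in the test solution
dilution_ml, weighed_g = 25.0, 0.5              # extraction volume / sample mass
content_mg_per_g = c_sample * dilution_ml / 1000.0 / weighed_g
print(f"SI content ~ {content_mg_per_g:.2f} mg/g")
```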
The rhizomes of L. chuanxiong and the roots of A. sinensis are commonly used materials for SI extraction, isolation, and purification. Ethanol of high concentration was the most used extraction solvent, followed by methanol and water. Besides conventional extraction methods, such as reflux, immersion, and ultrasonication, supercritical fluid extraction and ultra-high-pressure ultrasonic-assisted extraction have been carried out to improve the effect and efficiency. SI separation and purification were mainly performed by different column chromatographic methods, including flash column chromatography, counter-current chromatography, borate gel affinity column chromatography, and preparative HPLC. The packing materials used were silica gel, RP-C18, and macroporous resin. The details of SI extraction and isolation are shown in .

The reported pharmacological activities of SI are summarized in and .

6.1. Protection of the Brain

6.1.1. Neuroprotection of Cerebral Ischemia/Hemorrhage

Due to the high risks of disability and mortality, cerebral hemorrhage and ischemia remain intractable diseases, resulting in neurologic impairment, tissue necrosis, cell apoptosis, and subsequent complications . Previous studies demonstrated that SI provides significant neuroprotection mainly through antioxidant and anti-apoptotic pathways. Hu et al. investigated the protective effect and possible mechanism of SI (36 and 72 mg/kg, i.v.) on cerebral ischemia–reperfusion (I/R) impairment using the rat transient middle cerebral artery occlusion (tMCAO) model. The results indicated that SI could ameliorate neurological injury, reduce cerebral infarct volume, decrease the malonaldehyde (MDA) content, and increase the superoxide dismutase (SOD) activity of brain tissue. The mechanism involves promoting the expression of p-Erk1/2/t-Erk1/2, c-Nrf2, n-Nrf2, HO-1, and NQO1, and deregulating the expression of Bcl-2, Bax, caspase 3, and caspase 9. The protective effects of compounds (SI, SH, SA, LIG, and ferulic acid) isolated from L. chuanxiong were evaluated in an oxygen–glucose deprivation–reoxygenation (OGD/R) model using cultured SH-SY5Y cells. Both SI and LIG improved cell viability and reduced reactive oxygen species (ROS) and lactate dehydrogenase (LDH) levels, with SI showing more potent inhibition of LDH than LIG . LIG and its metabolites SI and SH have protective effects in the intracerebral hemorrhage (ICH) model induced by autologous blood injection in CD-1 mice. SI could ameliorate neurological deficit, brain edema, and neuronal injury; alleviate microglial and astrocyte activation; and reduce peripheral immune cell infiltration caused by ICH, although SI was less effective than SH. Inhibition of the Prx1/TLR4/NF-κB signaling pathway and anti-neuroinflammatory effects are involved in the potential mechanism of LIG and SH .

6.1.2. Protection against Septic Encephalopathy

Sepsis is a systemic inflammatory response syndrome caused by microbial infection. Septic encephalopathy (SE), with cerebrovascular dysfunction and neuron growth inhibition, is a common complication. SI (36 and 144 mg/kg, i.p.) ameliorates injury in SE rats by increasing Ngb expression, upregulating the p38 MAPK signaling pathway, and consequently promoting neuron growth . Sleep quality impairment in septic rats accelerates inflammatory factor release, and the prognosis of sepsis may benefit from sleep improvement . SI has demonstrated sleep-improving sedative effects, but its role in sepsis was unclear; thus, a cecal ligation and puncture (CLP)-induced sepsis model using C57BL/6J mice was established.
The results showed that SI (36 mg/kg, i.p.) improved the survival rate and cognitive dysfunction of septic mice, ameliorated the systemic inflammatory response, reduced apoptotic cells in the hippocampus, and inhibited the inflammatory signaling pathway. Notably, the hypothesis that alleviating sleep deprivation could ameliorate SE injury was further supported by the reversal of the expression of the sleep deprivation-related markers BDNF and c-FOS after SI administration .

6.2. Protection of the Liver, Kidneys, and Lungs

Blood supply is critical for ameliorating tissue and organ damage caused by persistent ischemia. SI can attenuate hepatic and renal I/R injury through antioxidant, anti-inflammatory, and anti-apoptotic effects. SI (50, 100, and 200 mg/kg) was injected intraperitoneally in a modified liver I/R murine model. SI (200 mg/kg) decreased TNF-α, IL-1β, and IL-6 in serum; inhibited the phosphorylation of p65 NF-κB and MAPK kinases; and reduced the expression of Bax and Bcl-2. Furthermore, SI can alleviate H 2 O 2 -induced oxidative damage in HuCCT1 cells, promote the nuclear translocation of Nrf-2, and reduce the levels of ROS and MDA . Administration to renal I/R injury mice confirmed that SI can protect renal function and structural integrity, reverse ischemia-induced increases in blood urea nitrogen (BUN) and serum creatinine (SCr), ameliorate pathological renal damage, and inhibit TNF-α and IL-6 secretion. Reductions in ROS production as well as in endoplasmic reticulum stress-related protein expression are involved in the potential protective mechanism . It was reported that SI (36 mg/kg, i.p.) could ameliorate sepsis-related lung injury in cecal ligation and puncture-induced septic C57BL/6 mice. SI exerted its effects by decreasing protein levels and neutrophil infiltration, inhibiting the phosphorylation of JNK, ERK, P38, and p65, and downregulating TNF-α, IL-1β, and IL-6 in plasma and lung tissue. CD42d/GP5 staining indicated that platelet activation was decreased after SI administration. Moreover, SI could significantly reduce MPO-DNA levels stimulated by phorbol 12-myristate 13-acetate (PMA) .

6.3. Protection of Blood and Vascular Systems

6.3.1. Effects on the Blood System

The rhizome of L. chuanxiong , a herb commonly used to promote blood circulation and remove blood clots, has drawn interest due to its anticoagulant and antiplatelet activities. Anticoagulant activity was screened by measuring the binding rates between components from herbal extracts and thrombin (THR) in vitro. Preliminary results showed that SI and isochlorogenic acid C could inhibit THR activity, and molecular docking revealed that both could bind to the catalytic active site of THR . Similarly, L. chuanxiong extracts were screened for possible inhibitory effects on THR and Factor Xa (FXa) using an on-line dual-enzyme immobilization microreactor based on capillary electrophoresis. SI, SA, LIG, and ferulic acid exhibited vigorous THR inhibitory activities, while isochlorogenic acid A could effectively inhibit FXa activity . A study eliminated SI from Siwu decoction (SWD) to explore its contribution to the antiplatelet and anticoagulant activities of the formula. The absence of SI resulted in a significantly shortened activated partial thromboplastin time of SWD, while the active sequence of prothrombin time (PT) was inhibited, indicating that SI plays an important role in the activities of SWD .
6.3.2. Effects on the Vascular System

SI can promote angiogenesis and exerts vasodilating and antithrombotic effects, thereby protecting the vascular system. SI in Guanxinning tablets could ameliorate endogenous thrombus injury in zebrafish through various pathways, including oxidative stress, platelet activation, and the coagulation cascade . In addition, it was reported that SI prevents microthrombus formation by attenuating Con A-induced erythrocyte metamorphic damage and reducing erythrocyte aggregation . Suxiao Jiuxin Pill (SX) is a Chinese patent medicine containing extracts of L. chuanxiong and is usually used for coronary heart disease treatment. The potential active components of SX were screened for cellular Ca 2+ regulation activity, which is critical for vascular resistance and pressure handling. SI isolated from SX can amplify cardiovascular diastolic activity through calcium antagonistic activity . Additionally, a study using an endothelial vascular cell model confirmed that SI might promote the formation of the luminal structure of human microvascular endothelial cells and induce endothelial angiogenesis by upregulating placental growth factor .

6.4. Other Pharmacological Effects

The analgesic effect of SI was evaluated by an acetic acid-induced writhing test in Kunming mice (8, 16, and 32 mg/kg, i.g.), and the anti-migraine activity was tested by nitroglycerin-induced headaches in SD rats (18, 36, and 72 mg/kg, i.g.). SI (32 mg/kg) significantly elevated the pain threshold and reduced the number of acetic acid-induced writhing reactions in mice. SI (72 mg/kg) in rats remarkably reduced NO levels in plasma and brain tissue and increased 5-HT levels in plasma . In another study, in which rats were dosed with SI (144, 72, and 36 mg/kg, i.p.) in a cortical spreading depression model of migraine, plasma NO and calcitonin gene-related peptide (CGRP) decreased significantly after SI (144 mg/kg) treatment . It was reported that SI inhibited NF-κB expression in a dose-dependent manner in HEK293 cells stimulated by the pro-inflammatory factors TNF-α, IL-1β, and IL-6. Similarly, SI reduced the pro-inflammatory factors IL-6 and IL-8 in lipopolysaccharide-induced THP-1 cells . In OGD/R-treated microglial cells, which are often used to model stroke and the consequent inflammatory injury, SI could inhibit proinflammatory cytokines and enzymes, attenuate the nuclear translocation of the NF-κB pathway in BV-2 microglia, and restrain the TLR4/NF-κB pathway or upregulate extracellular heat shock protein 70, indicating that SI can effectively inhibit stroke-induced neuroinflammation . Moreover, SI could attenuate oxidative stress damage by activating the HO-1 pathway and enhancing cellular resistance to hydrogen peroxide-induced oxidative damage . Surprisingly, SI might also serve as a potential antitumor agent. Good affinity between SI and C-X-C chemokine receptor type 4 (CXCR4) was observed by affinity detection and SPR ligand screening; the measured affinity constant was 2.94 ± 0.36 μM, indicating that SI might be a CXCR4 antagonist that can inhibit the CXCR4-mediated migration of human breast cancer cells . SI also showed inhibitory capability against cell proliferation: phthalides from the rhizome of Cnidium chinensis were evaluated on smooth muscle cells from a mouse aorta.
The order of proliferation-inhibiting efficacy was as follows: senkyunolide L > SH > senkyunolide J > SI > LIG = senkyunolide A > butylidenephthalide, suggesting that SI had an effect to some extent, although the underlying mechanism is unclear . The BBB permeability of SI was investigated in MDCK-MDR1 cells. The results indicated that SI could enhance cellular transport by downregulating the expression of claudin-5 and zonula occludens-1, two main tight junction proteins closely associated with BBB tightness . Additionally, SI decreased the expression of P-glycoprotein (P-gp), which acts as a drug-efflux pump, via the paracellular route to enhance xenobiotic transport .

Up to now, the pharmacokinetic parameters of SI in rats, mice, rabbits, dogs, and humans have been studied with different administration routes, including intravenous injection, intraperitoneal injection, and gavage. The reported pharmacokinetic parameters are summarized in .

7.1. Pharmacokinetic Properties of SI

The pharmacokinetic properties of SI have been studied in animals (mice, rats, and dogs) via different administration routes . The results indicated that SI is absorbed rapidly, followed by short half-life (<1 h) elimination, with acceptable oral bioavailability (>35%) after intragastric administration. SI is widely distributed in tissues and organs in vivo, and the AUC values in descending order were as follows: kidneys > liver > lungs > muscle > brain > heart > thymus > spleen . The pharmacokinetic differences between normal and migrainous rats have also been investigated . The results demonstrated that migraines caused some significant changes.
For example, decreased clearance and an increased volume of distribution resulted in a several-fold increase in t1/2 and AUC. The pharmacokinetic parameters of SI were significantly different in normal and migrainous rats, which should be taken into consideration when designing a clinical dosage regimen for SI. Similarly, the pharmacokinetic differences of SI and SH in normal and migrainous rats were studied after gavage administration of a 70% ethanol extract of L. chuanxiong. Compared with normal rats, the absorption of SI and SH in migrainous rats increased significantly: the Cmax and AUC(0–t) of SI increased by 192% and 184%, while those of SH increased by 266% and 213%, respectively . Furthermore, the effects of warfarin on the pharmacokinetics of SI were investigated in a rat model of biliary drainage following administration of the extract of L. chuanxiong. It was reported that warfarin could significantly increase the t1/2, Tmax, and Cmax of SI. This result highlights the importance of drug–herb interactions .

The metabolic pathways of SI in vivo involve methylation, hydrolysis, and epoxidation in phase I metabolism, as well as glucuronidation and glutathionylation in phase II metabolism. The main metabolic pathways in vivo are shown in . It was reported that, after administration of SI to rats, a total of 18 metabolites were identified in bile, 6 in plasma, and 5 in urine . Ma et al. identified four metabolites of SI in bile, namely SI-6S-O-β-D-glucuronide, SI-7S-O-β-D-glucuronide, SI-7S-S-glutathione, and SI-7R-S-glutathione. He and colleagues found nine metabolites in rat bile and proposed their metabolic pathways.

7.2. Pharmacokinetic Properties of SI-Containing Herbal Preparations

To date, the pharmacokinetics and metabolism of SI have been studied in animals administered not only the pure SI compound but also SI-containing herbal preparations. A total of 25 compounds were detected in plasma after SD rats were gavaged with L. chuanxiong decoction, among which 13 were absorbed as prototypes. LIG, the main alkyl phthalide in L. chuanxiong, was rapidly absorbed and converted into hydroxyphthalides by phase I metabolism, including SI, SH, senkyunolide F, and senkyunolide G. The absorbed, as well as the generated, hydroxyphthalides were further conjugated with glutathione or glucuronic acid through phase II metabolism . A sequential metabolism approach was developed to study the absorption and metabolism of multiple components of L. chuanxiong decoction at the successive stages of intestinal bacterial, intestinal wall enzyme, and liver metabolism. After enema administration, SI was quickly absorbed as a prototype and remained stable at each stage of sequential metabolism .

SI has been used as an index component of several herbal preparations, such as Dachuanxiong Pills , Shaofu Zhuyu Decoction, and Yigan Powder . SI has been detected as one of the main components in plasma and tissues after administration to normal and model animals. The results confirmed that SI is easily released from herbal preparations, followed by rapid absorption, a short elimination half-life, and acceptable oral bioavailability in vivo. Previous studies suggested that there are remarkable differences in SI pharmacokinetics between normal and model animals administered SI-containing herbal preparations. For example, multi-component pharmacokinetics of the Naomaitong formula was performed in normal and stroke rats.
The results indicated that the stroke rats had higher values of AUC(0–t), AUC(0–∞), t1/2, and MRT(0–∞); the AUC(0–∞) values of SI and LIG were both five times higher than those of the normal rats . Moreover, pharmacokinetic differences were compared after oral administration of Xinshenghua Granules to normal and blood-deficient rats. In total, 15 components were detected in plasma; however, most of them were eliminated within six hours. The Cmax, AUC(0–t), and AUC(0–∞) values of SI in the blood-deficient rat model were 23%, 32.6%, and 31.6% higher than those of normal rats, respectively . Based on pharmacokinetic experiments in humans and rats, the active phthalides of Xuebijing injection in the treatment of sepsis were determined. A variety of phthalides (SI, SH, senkyunolide G, senkyunolide N, 3-hydroxy-3-N-butylphthalide, etc.) were detected in human and rat plasma, among which both SI and senkyunolide G showed significant plasma exposure .
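The non-compartmental parameters quoted throughout this section (Cmax, Tmax, AUC(0–t), AUC(0–∞), t1/2) can be reproduced from raw concentration–time data. The following minimal Python sketch uses entirely hypothetical concentrations, not any values reported above, and applies the linear trapezoidal rule plus a log-linear terminal-slope fit:

```python
import numpy as np

# Hypothetical plasma concentration-time profile (illustrative only)
t = np.array([0.083, 0.25, 0.5, 1.0, 2.0, 4.0, 6.0])       # time, h
c = np.array([1.80, 2.40, 1.95, 1.10, 0.42, 0.07, 0.012])  # conc, ug/mL

cmax, tmax = c.max(), t[c.argmax()]

# AUC(0-t) by the linear trapezoidal rule
auc_0_t = np.trapz(c, t)

# Terminal elimination rate: fit ln(C) vs t over the last points,
# assuming the terminal phase is log-linear
k_el = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]  # 1/h
t_half = np.log(2) / k_el

# Extrapolation to infinity: AUC(0-inf) = AUC(0-t) + C_last / k_el
auc_0_inf = auc_0_t + c[-1] / k_el

print(f"Cmax = {cmax:.2f} ug/mL at Tmax = {tmax:.2f} h")
print(f"AUC(0-t) = {auc_0_t:.2f} ug*h/mL, AUC(0-inf) = {auc_0_inf:.2f} ug*h/mL")
print(f"t1/2 = {t_half:.2f} h")
```

Percentage changes such as the 192% increase in Cmax reported for migrainous rats then follow directly as 100 × (Cmax,model − Cmax,normal)/Cmax,normal.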
The structural variety and biological relevance of natural products have long provided valuable inspiration for new drug discovery and development. A proven strategy is to screen potential candidates from traditional herbal medicines with historically documented effects, such as morphine from poppy, artemisinin from sweet wormwood, and salicylic acid from willow bark. Unfortunately, many natural products, despite their significant bioactivity, fail to meet the requirements for qualified drug candidates due to unsatisfactory safety, stability, solubility, bioavailability, or other deficiencies in druggability. In such cases, their natural or modified derivatives are often investigated to discover potential substitutes with superior druggable properties and comparable bioactivities. Despite their low bioavailability, LIG and NBP present outstanding neuroprotective effects. SI is an oxidation product and an in vivo metabolite of LIG. Compared with LIG, SI is more chemically stable, readily soluble in water, and presents significantly better bioavailability.
Furthermore, SI can permeate the BBB, meaning it can directly reach disease foci in the brain. These properties make SI a potentially useful medicinal compound. Nevertheless, further studies are needed to comprehensively assess the druggability of SI before it can be considered a drug candidate. First, it is necessary to develop a preparation method that can supply large quantities of SI at low cost, providing sufficient material for efficacy assessment, safety studies, and new drug development. Second, the efficacy evaluation and mechanistic clarification of SI are still insufficient compared with LIG. In particular, in vivo studies comparing SI with similar drugs or components, such as NBP and LIG, are needed to establish the effectiveness and potential advantages of SI. Third, a structure–activity comparison between SI and similar phthalides would be useful. SI is the product of dihydroxylation of the C6–C7 double bond of LIG. The introduction of the ortho-dihydroxyl group significantly improves the water solubility of the molecule while leaving its BBB permeability unchanged. The structural basis and mechanisms of the transport of SI across the BBB deserve further investigation, which may provide valuable references for subsequent structural modifications and the design of other drug molecules.
Markers of Chemical and Microbiological Contamination of the Air in the Sport Centers
In the modern world, great attention is paid to a healthy lifestyle that includes regular sporting activities, which contribute to maintaining a healthy body weight, feeling good, and sustaining energy and a youthful appearance . Physical activity can also prevent hypertension and non-communicable diseases (e.g., heart disease, stroke, diabetes, and site-specific cancers) . According to the World Health Organization (WHO) guidelines, adults need at least 2.5 h of moderate-intensity physical activity weekly . The Deloitte report “Sports Retail Study 2020” notes that almost 65% of Europeans practice at least one sport discipline, devoting 8.6 h a week to physical activity . Although physical activity is documented as beneficial to human health, the use of sports facilities raised concerns during the COVID-19 pandemic as potentially contributing to the spread of SARS-CoV-2.

Various factors influence air quality in sports facilities (e.g., building construction, materials used, ventilation, air humidity and temperature, number of users, and type of physical activity) . Many of these factors can favor the spread and multiplication of microorganisms (i.e., high air humidity from the intense sweating of users, high particulate matter concentrations from the resuspension of particles sedimented on surfaces, and regular contact between users and sports equipment) . During physical activity, most air is inhaled through the mouth, bypassing the normal nasal filtration mechanisms, and the increased airflow velocity carries airborne contaminants deeper into the respiratory tract. Thus, increased concentrations of microorganisms, their fragments, and their metabolites can be introduced into the respiratory tract of exercising individuals and pose a considerable health risk . Moreover, research shows that physical activity increases aerosol emission due to elevated ventilation and dehydration of the airways, further raising bioaerosol concentrations.

Furthermore, the air quality in sports facilities depends on the concentrations of CO2, other gases, and volatile organic compounds (VOCs) . As the intensity of physical activity (and thus breathing) increases, the concentration of CO2 in sports halls rises. According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the maximum level of CO2 in sports facilities is 1000 ppm. Higher CO2 concentrations indicate poor air quality and are associated with acute health symptoms in room users (e.g., headaches and irritation of mucous membranes) and reduced work efficiency . Volatile organic compounds are gaseous and can originate from building materials and equipment (furniture, installations, electronics) . They are also associated with cleaning, disinfection, and the use of chemicals and cosmetics. Monocyclic aromatic hydrocarbons (MAH) are particularly important within the VOC group. VOCs can cause serious health effects, as many of them exhibit toxic, carcinogenic, mutagenic, or neurotoxic properties, and many are odorous .

Bioaerosols are one of the main transmission routes for infectious diseases . Moreover, human exposure to bioaerosols is associated with a wide range of acute and chronic health problems, such as asthma, hay fever, bronchitis, chronic lung failure, diseases of the cardiovascular system, catarrh of the gastrointestinal tract, tuberculosis, legionellosis, and allergic reactions, as well as sinusitis and conjunctivitis .
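To connect occupancy and ventilation to the CO2 levels just discussed, a well-mixed steady-state mass balance is often used: C_ss = C_out + 10⁶ · G/Q (ppm), where G is the occupants' total CO2 generation rate and Q the outdoor-air ventilation rate (both in m³/s). The Python sketch below is a rough illustration; the per-person generation rates and the ventilation rate are assumed values typical of the literature, not measurements from any study cited here:

```python
def steady_state_co2(n_people: int, gen_per_person_m3_s: float,
                     ventilation_m3_s: float, outdoor_ppm: float = 420.0) -> float:
    """Steady-state indoor CO2 (ppm) from a well-mixed mass balance:
    C_ss = C_out + 1e6 * G / Q, with G and Q in m^3/s."""
    return outdoor_ppm + 1e6 * n_people * gen_per_person_m3_s / ventilation_m3_s

# Assumed per-person CO2 generation: ~4.3e-6 m^3/s at rest,
# roughly four times higher during vigorous exercise
for label, gen in [("rest", 4.3e-6), ("vigorous exercise", 1.7e-5)]:
    ppm = steady_state_co2(n_people=20, gen_per_person_m3_s=gen,
                           ventilation_m3_s=0.5)  # assumed ventilation rate
    status = "exceeds" if ppm > 1000 else "within"
    print(f"{label}: {ppm:.0f} ppm ({status} the 1000 ppm ASHRAE guideline)")
```

Under these assumptions, the same room that stays near 600 ppm at rest approaches 1100 ppm during vigorous exercise, which mirrors the elevated CO2 readings reported for sports halls later in this article.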
Toxins of microbial origin (endotoxins and mycotoxins) play a significant role in inflammatory responses, contribute to the deterioration of lung function, and can cause other infections . It has been found that over 80 types of fungi (mostly belonging to the Cladosporium, Alternaria, Aspergillus, and Fusarium genera) can cause respiratory allergy symptoms, over 100 severe human and animal infections, and plant diseases . Because humans carry 10¹² microorganisms in their epidermis and 10¹⁴ microorganisms in the digestive tract, they can be the primary source of microorganisms in fitness facilities . Therefore, surfaces in sports facilities can also be a source of pathogenic microorganisms such as methicillin-resistant Staphylococcus aureus (MRSA). It was found that skin-to-skin contact is a primary route of MRSA transmission between athletes, especially in football, wrestling, rugby, and soccer players. Moreover, poor equipment hygiene has also been implicated in the spread of contagious diseases . Infections caused by MRSA are often aggressive, necrotizing, antibiotic-resistant, and sometimes fatal .

Sports facilities have already been the subject of microbiological research. The literature indicates varying levels of microbiological air contamination in these types of facilities, ranging from 5.80 × 10¹ to 1.02 × 10³ CFU m⁻³ for bacteria and from 2.10 × 10¹ to 1.44 × 10² CFU m⁻³ for fungi . Conversely, in the case of the bacterial contamination of surfaces, concentrations from 3.9 × 10² CFU cm⁻² to 3.7 × 10³ CFU cm⁻² have been observed . Bacteria of the genera Bacillus, Corynebacterium, Kocuria, Micrococcus, Pseudomonas, and Staphylococcus were characteristic of bioaerosols in sports facilities, while on surfaces, the dominance of Staphylococcus, Bacillus, Klebsiella, Escherichia, Enterococcus, Serratia, Aerococcus, and Erwinia has been described . The environment of fitness centers, however, has not yet been comprehensively investigated in terms of the number and species of microorganisms present.

Therefore, this study aimed to assess the markers of chemical and microbiological contamination of the air in sports centers. The research included the evaluation of microclimate parameters, particulate matter concentration, selected chemical contaminants, the number of microorganisms in the air and on surfaces, the diversity of microorganisms, and the presence of SARS-CoV-2 in fitness center environments. This is the first study to assess the biodiversity of sports facilities using the metagenome analysis of settled dust. The results are discussed in the context of pathogen transmission and the overall health effects of exposure to the detected contaminants. Moreover, guidelines for maintaining good air quality in sports facilities are proposed.
2.1. Microclimate and Particulate Matter Concentration

The mean values of the microclimatic conditions and PM concentrations are presented in . The microclimate parameters were also analyzed as daily averages ( ), and the measurements were additionally broken down by the time of day ( ) and the sampling location ( ). The airflow velocity measured during the experiments ranged from 0 m s⁻¹ (no ventilation or air conditioning, no windows open, minimal number of people present at the sampling site) to 0.69 m s⁻¹ (near an open window). The temperature was between 12.8 °C and 29.8 °C, and the relative humidity was between 44.3% and 78.3%.

Microclimatic conditions are essential for achieving optimal performance and comfort during exercise. The literature shows that an effective temperature below 22 °C degrades exercise performance among women, while an air temperature of 24 °C, with moderate RH, low air velocity, and weak radiation, is recommended at gyms to support exercise, comfort, and energy conservation . The International Fitness Association sets different recommendations for temperature and humidity at commercial gyms : a room temperature below 20 °C and 50% humidity are recommended for aerobic classes, while for aerobics, cardio, weight training, and Pilates areas, temperatures should be between 18 and 20 °C with a humidity between 40% and 60%. The relation between thermal comfort and air movement at elevated activity levels has also been investigated : air movement at higher temperatures produced comfort and perceived air quality equal to or better than the reference condition for every temperature up to 26 °C.

In our study, the air velocities and temperatures did not differ between days; the only difference was observed in the average daily humidity between Friday and the rest of the week ( ). Moreover, the air velocities did not depend on the time of day; over the measurement period, the temperature rose hour by hour, while the relative humidity first dropped (while the air conditioning was running) and then increased ( ). The temperature conditions at the different locations were similar (no statistical differences in average temperature were detected; ). Statistically significant differences were detected for the average air velocity and average humidity ( ), corresponding to the number of windows open, the air conditioning running, and the number of people in the facility. The varied values of these microclimatic parameters might create different growth conditions for microorganisms at the tested locations. The fitness center environment is very unstable, and many factors affect the temperature, relative humidity, and airflow velocity. In the present study, significantly different air velocity and humidity conditions were noted, which implies that the microclimate parameters strongly depend on the specific location. However, there was no correlation between the individual microclimate parameters and the number of windows open, the air conditioning running, or the number of people in the facility.

Previous research has shown that the high number of people exercising in closed sports facilities can contribute to air quality problems inside them, and the factors determining exercise intensity also affect air quality . The concentration of fine particles in indoor air fluctuates depending on the weather conditions. Furthermore, ventilation and air filtration systems in such facilities are essential for proper air exchange and purification.
The present study confirmed these results, as significantly higher PM concentrations were observed indoors than outdoors ( ). The size distributions of airborne particles for each sampling variant are presented in . PM2.5 constituted almost all of the measured dust at the tested locations: its share in the total quantity of measured airborne particles was between 99.65% and 99.99%, and the number of particles per size class dropped with increasing particle size. The total suspended PM concentration varied between 0.0445 mg m⁻³ and 0.0841 mg m⁻³ and differed significantly between all of the tested locations ( ). The daily averages were higher in the first three days of the experiment and significantly lower in the last two ( ). Considerably higher concentrations were observed at the beginning of each day and just before the facility was closed ( ). Based on a full-factorial ANOVA, the main effects and all interactions were confirmed at a significance level of 0.05.

According to EU legislation, the annual average concentration of dust with dimensions below 2.5 µm (i.e., the fraction containing the PM1 fraction) should not exceed 0.025 mg m⁻³ . In our case, the measured PM2.5 concentration was more than twice this environmental threshold, independent of the sampling site . This agrees with the literature suggesting that the air quality inside training facilities is often worse than that outdoors . The presence of airborne particulate matter can affect users' health and decrease their physical performance by around 5% . Studies show that exposure to high PM concentrations can increase the risk of various respiratory and circulatory diseases . Moreover, some studies have suggested that people who regularly exercise are more prone to the effects of air pollutants than those who do not participate in sports .

Carbon dioxide (CO2), a natural product of human respiration, is the main gaseous air pollutant in sports facilities . The CO2 concentration in the fitness club ranged from 800 ppm (reception desk) to 2198 ppm (gym) and, for most rooms, was statistically significantly higher than in atmospheric air. In previous studies, the CO2 concentrations in the air of sports halls were lower, ranging from 294.8 to 1529 ppm . However, the current studies did not show any exceedance of the CO2 concentration limits developed by the WHO and the U.S. Environmental Protection Agency . The formaldehyde concentration in the tested sports facility ranged from 0.005 mg m⁻³ to 0.049 mg m⁻³ and, for two out of five rooms, was statistically significantly lower than in the control (atmospheric) air. This is probably due to the heavy traffic in the parking lot adjacent to the building and on the busy street, which elevates formaldehyde levels in the outdoor air. According to EPA guidelines, the detected formaldehyde values do not exceed the limits for this compound in the air .

2.2. Volatile Compounds Contamination

The volatile compounds were extracted from the air sample by SPME, followed by desorption and analysis with GC-MS. A total of 85 compounds were identified, of which 84 were present in the air collected from the gym, while only 47 were identified in the control sample (background) ( ). The detected compounds were divided into ten groups based on their chemical structures: hydrocarbons (30), terpenes and terpenoids (20), alcohols (10), aldehydes (8), ketones (6), esters (6), furans (2), phenols (1), ethers (1), and acids (1).
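The relative amounts (%) discussed in the next paragraphs are conventionally obtained by normalizing each compound's chromatographic peak area to the summed area of all identified peaks. A minimal sketch with made-up peak areas, purely for illustration:

```python
# Hypothetical GC-MS peak areas (arbitrary units), not measured values
peak_areas = {
    "phenol": 5.2e6,
    "D-limonene": 1.1e6,
    "toluene": 9.0e5,
    "2-ethyl-1-hexanol": 7.5e5,
    "other compounds (summed)": 4.0e7,
}

# Relative amount (%) = 100 * peak area / total area of identified peaks
total = sum(peak_areas.values())
for compound, area in peak_areas.items():
    print(f"{compound}: {100 * area / total:.2f}%")
```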
The compounds identified in the indoor air of the gym can come from different sources: for example, the air exhaled by people in the gym (e.g., acetone, ethanol, 1-propanol, butyl acetate, acetic acid, acetoin, and 2,3-butanedione), alcohol-based hand disinfectants and equipment cleaning agents , air fresheners and cosmetics (e.g., ethanol, phenol, benzaldehyde, 2-propanol, α-pinene, eucalyptol, linalool, 3-carene, D-limonene, γ-terpinene, α-thujene, β-myrcene, camphene, butane, pentane, acetone, furfural, 3-methyl-1-butanol, 2-methyl-1-butanol, dihydromyrcenol, citronellal, verbenone, menthol), building and room finishing materials , and gym equipment (e.g., furfural, toluene, benzene, styrene, xylenes, ethylbenzene, heptane, decane, benzaldehyde, hexanal).

The relative amount (%) results showed that phenol was the dominant compound in both the control air sample and the air sample from the gym. The presence of phenol results from the widespread use of phenol and its derivatives in, among others, the production of resins, detergents, medicinal products, disinfectants, and dyes; it is thus found in many common materials, including antiseptics, medical preparations, plastics, cosmetics, and health care products . Phenol also enters the air through car exhaust. Phenol is not classified as a carcinogen but as a toxic substance . Due to its hydrophilic and lipophilic properties, phenol easily penetrates cell membranes and dissolves in cell fractions, interacting with specific cellular and tissue structures .

Three other identified compounds, D-limonene, toluene, and 2-ethyl-1-hexanol, each accounted for over 1–2% of the relative amount in the indoor air from the gym. Terpenes are a common group among indoor air VOCs; cleaning agents and cosmetics contain essential oils rich in terpenes . Toluene, along with benzene, ethylbenzene, and xylenes from the BTEX group, is classified as a toxic compound, while benzene is also classified as a carcinogenic substance (Group 1). Due to their use for various purposes, such as the production of plastics, synthetic fibers, floor coverings, chipboard, oils, greases, and paint, BTEX compounds are common in indoor air. Long-term exposure to BTEX increases the risk of adverse health consequences . In turn, 2-ethyl-1-hexanol is a common component of fragrances. It is also commonly used in the production of plasticizers (e.g., diethylhexyl phthalate for polyvinyl chloride resins) as well as in coating products, greases, fillers, and putties. The presence of 2-ethyl-1-hexanol may irritate the mucous membranes of the eyes and nose in humans .

2.3. Determination of Airborne Microorganism Number

The average daily number of bacteria in the facilities during the working week ranged from 7.17 × 10² CFU m⁻³ (Wednesday) to 1.68 × 10³ CFU m⁻³ (Friday), while the number of fungi ranged from 3.03 × 10³ CFU m⁻³ (Wednesday) to 7.34 × 10³ CFU m⁻³ (Friday) ( , a). The lowest number of bacteria was recorded at 08:00 (4.48 × 10² CFU m⁻³) and the highest at 20:00 (2.39 × 10³ CFU m⁻³). In turn, the concentration of fungi was lowest at 16:00 (4.53 × 10³ CFU m⁻³) and highest at 08:00 (7.42 × 10³ CFU m⁻³) ( , b). The most contaminated air was observed in Room no. 2 (the gym), where the bacteria count was 1.66 × 10³ CFU m⁻³, and in Room no. 1 (the reception), where the number of fungi was 7.30 × 10³ CFU m⁻³ (daily mean). It is noteworthy that, at the same time, Room no. 2 showed the lowest number of fungi (1.90 × 10³ CFU m⁻³) among the analyzed rooms. In turn, the lowest number of bacteria was found in Room no. 4 (fitness room on the second floor) ( , c).

No statistically significant differences were found in the fitness club's mean daily numbers of bacteria. In the case of fungi, significant differences among days were observed, with the lowest concentration on Tuesday and the highest on Thursday ( a). Considering the influence of the sampling hour on the number of microorganisms in the air, it can be concluded that the number of fungi is roughly constant while the number of bacteria changes, which is most likely related to the activity of people in these facilities. Statistically higher numbers of bacteria were recorded at the end of the day, at 20:00 ( b). It was also shown that the number of fungi in the atmospheric air was statistically significantly higher than in the samples collected at the fitness club ( c). The aggregate results from the week of air quality monitoring in the fitness club were subjected to detailed statistical analysis, which showed a very weak correlation between the average numbers of bacteria and fungi in the air and the airflow, temperature, relative humidity, and number of particles in the air ( a–d). Moreover, the correlations between the number of microorganisms in the air, the number of persons present at the sampling location, and the number of open windows were also very weak ( e,f).

It is worth noting that the present research showed higher microbiological air contamination (bacteria: 7.17 × 10²–1.68 × 10³ CFU m⁻³; fungi: 3.03 × 10³–7.34 × 10³ CFU m⁻³) than previously published studies, which focused primarily on assessing microbial contamination in sports facilities in schools and universities. Brągoszewska et al. recorded bacterial counts in the air of a Polish high school gym from 4.20 × 10² to 8.75 × 10² CFU m⁻³, depending on the activity of the students . Other studies conducted in Europe (gyms, fitness rooms, and various facilities in academic sports centers) have also shown lower microbial contamination (i.e., 5.80 × 10¹ to 2.00 × 10⁴ CFU m⁻³ for bacteria and 2.10 × 10¹ to 3.75 × 10² CFU m⁻³ for fungi) . Recently, Boonrattanakij et al. investigated microbial contamination in a bicycle room at a fitness center in Taiwan using the same type of air sampler and culture media as the current study . The authors obtained lower numbers of bacteria (4.01 × 10²–7.61 × 10² CFU m⁻³) and fungi (2.26 × 10²–8.37 × 10² CFU m⁻³). It should be noted that many factors can be responsible for the differences between current and previous studies, such as building construction and materials, ventilation systems, and environmental factors (season, air humidity, temperature) . Unfortunately, there are no legal limits on the number of microorganisms in indoor air to which the results obtained in the present study could be referred. The WHO suggests that the total number of microorganisms should not exceed 1.0 × 10³ CFU m⁻³ . At the same time, the Polish Commission for Maximum Admissible Concentrations and Intensities for Agents Harmful to Health in the Working Environment has developed limits of 5.0 × 10³ CFU m⁻³ for both the total number of mesophilic bacteria and the total number of fungi in residential and public utility facilities .
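For context, airborne concentrations such as those above are derived from impactor plate counts as CFU m⁻³ = N × 1000/V, where N is the colony count and V the sampled air volume in litres (Andersen-type samplers usually also apply a positive-hole correction to N). A minimal sketch that converts hypothetical counts, assuming a 100 L sample, and flags exceedance of the WHO and Polish limits cited above:

```python
def cfu_per_m3(colonies: int, sampled_litres: float) -> float:
    """Convert a plate colony count to CFU per cubic metre of air."""
    return colonies * 1000.0 / sampled_litres

# Hypothetical plate counts from a 100 L air sample (not the study's data)
samples = {"bacteria, gym": 180, "fungi, reception": 620}
LIMITS = {"WHO suggestion": 1.0e3, "Polish limit": 5.0e3}

for name, colonies in samples.items():
    conc = cfu_per_m3(colonies, sampled_litres=100.0)
    verdicts = ", ".join(
        f"{'exceeds' if conc > lim else 'within'} {label} ({lim:.0e} CFU/m3)"
        for label, lim in LIMITS.items()
    )
    print(f"{name}: {conc:.2e} CFU/m3 -> {verdicts}")
```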
Considering the average daily values obtained in the present study, the WHO-suggested limit for bacteria was exceeded in Rooms no. 2 and 5, and the number of fungi in all rooms exceeded the WHO recommendations . Referring to the Polish guidelines, only the number of fungi in Room no. 1 (reception) was exceeded; in this room, a window was most often open, allowing the inflow of atmospheric air, which was highly contaminated with fungi during the research period. Statistical analysis showed that atmospheric air could be the source of fungi in the tested fitness club rooms. This hypothesis could be tested by comparing the fungal composition of the indoor air to that of the outdoor air. In contrast, the source of bacteria in the indoor air was probably human, as suggested by the highest values being observed in the room where the most intense exercises were performed. This conclusion is supported by previous studies showing that bacteria can be over two times more abundant indoors than outdoors, especially in poorly ventilated and heavily occupied premises . Although microclimate conditions, especially temperature and relative air humidity, are generally known to correlate with the number of microorganisms in indoor air , this was not observed in the current research. This is probably because the environment of fitness clubs is very specific and unstable, mainly due to the varying number of people, who are carriers of specific microbiota, perform exercises of varying intensity, enter and leave rooms, open and close doors and windows, and turn fans and air-conditioners on and off. These overlapping factors have unpredictable results; future research should introduce systems for continuously monitoring the microbiological air quality in sports facilities.

2.4. Determination of Surface Microbial Contamination

The highest number of bacteria was found in the shoe cabinet and on the table in the reception area used by waiting visitors (3.8 CFU cm⁻²). No bacteria were found on the exercise bike saddle or at the bottom of the locker used to store personal belongings (0 CFU cm⁻²). The highest number of fungi was found on the MMA training bag (6.2 CFU cm⁻²), while no fungi were present at the bottom of the storage locker ( ). Significant differences were observed in the concentrations of bacteria and fungi between the tested surfaces (p < 0.05). Few studies have presented a quantitative assessment of microbial surface contamination in sports facilities. In the present study, surface microbiological contamination was lower than in previously published studies. Boonrattanakij et al. conducted microbiological tests of sports equipment; the number of bacteria on the examined surfaces (bicycle handle, dumbbell, and sit-up bench) ranged from 3.9 × 10² CFU cm⁻² to 3.7 × 10³ CFU cm⁻² . Notably, guidance was posted in the cloakroom and gym instructing users to disinfect the exercise equipment and cabinets for personal belongings to prevent the spread of COVID-19. Based on the obtained results, it can be concluded that the users did not follow the recommendations in all cases and/or the disinfection was ineffective.

2.5. Diversity of Microorganisms in the Fitness Center Environment

The results of the high-throughput DNA sequencing of the settled dust sample collected at the fitness club revealed a high diversity of microorganisms ( a).
In total, four hundred and twenty-two (422) genera of bacteria representing 21 phyla were detected in the dust. Although the number of phyla was high, most accounted for a minimal number of classified reads. The most abundant phyla were Cyanobacteria (46%), Proteobacteria (30%), Actinobacteriota (14%), Firmicutes (6%), and Bacteroidota (2%) ( a). The high number of reads from the Cyanobacteria phylum was surprising. A closer analysis based on a sequence similarity search against the NCBI Nucleotide collection database revealed that these sequences were mainly derived from pine (Pinus spp.) chloroplast DNA, which suggests that the dust sample was substantially contaminated with pollen.

The most abundant bacteria identified in the settled dust from the gym in question belonged to the genera Paracoccus (5.8%), Sphingomonas (3.9%), Micrococcus (3.8%), Escherichia-Shigella (2%), Acinetobacter (1.5%), Enhydrobacter (1.5%), Corynebacterium (1.5%), Kocuria (1.5%), 1174-901-12 (Rhizobiales; 1.2%), Bacillus (1.1%), and Rubellimicrobium (1.1%). Likewise, the presence of plant mitochondrial DNA was most probably due to the contamination of the dust sample with pine pollen. The classified reads of the dust sample were then checked for sequences of potentially hazardous bacterial genera according to Directive 2019/1833/EC . Twenty-eight hazardous genera (Groups 2 or 3) were identified; however, their share in the total was low (<7.5% of all classified reads). Of these, the most abundant genera were Escherichia-Shigella (2%), Corynebacterium (1.4%), Bacillus (1%), and Staphylococcus (0.8%) ( and ).

So far, the bacteria Bacillus, Corynebacterium, Kocuria, Micrococcus, Pseudomonas, and Staphylococcus have been reported as characteristic of the environment of sports facilities, identified by classical (culture) methods . Turkskani et al. isolated bacteria from two Saudi Arabian gyms and identified them based on their 16S rRNA gene sequences . The authors assigned the detected bacteria to the following genera: Bacillus, Brachybacterium, Geobacillus, Microbacterium, Micrococcus, and Staphylococcus. Moreover, Haghverdian et al. demonstrated the prevalence and transmissibility of S. aureus on surfaces (floor, balls, hands) in sports facilities . The authors observed the viability of S. aureus on sequestered sports balls for 72 h, while another work demonstrated the survival of S. aureus strains for up to 12 days on inanimate surfaces . Recently, Szulc et al. (2023) published the results of the first metagenomic analysis of a bioaerosol from a sports center (a room with a climbing wall), identifying bacteria mainly of the genera Cellulosimicrobium, Stenotrophomonas, Acinetobacter, Escherichia, and Lactobacillus .

The present study detected bacteria of the genera Paracoccus, Sphingomonas, Enhydrobacter, Rubellimicrobium, and 1174-901-12, with a combined share of more than 15%, which have never previously been identified in sports facilities. Paracoccus has been isolated from various environments, including soils, salines, marine sediments, wastewater, and biofilters. Most species are saprophytes, but P. yeei is known to be associated with opportunistic infections in humans . Additionally, Rubellimicrobium are environmental bacteria observed in soil, air, and slime on industrial machines ; therefore, their presence in a fitness club is unsurprising.
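The genus-level percentages quoted in this section are relative abundances of classified reads. A minimal sketch of that computation, with invented read counts standing in for the real classifier output:

```python
from collections import Counter

# Hypothetical read counts per classified genus (not the study's data)
reads = Counter({"Paracoccus": 5800, "Sphingomonas": 3900,
                 "Micrococcus": 3800, "Escherichia-Shigella": 2000,
                 "other": 84500})

# Relative abundance (%) = 100 * genus reads / total classified reads
total = sum(reads.values())
for genus, n in reads.most_common():
    print(f"{genus}: {100 * n / total:.1f}% of classified reads")
```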
Sphingomonas has also been isolated from many environmental samples (soil, sediment, water), including samples chemically contaminated with azo dyes, phenols, dibenzofurans, insecticides, and herbicides . Many Sphingomonas strains have been isolated from human clinical specimens and hospital environments, where Sphingomonas paucimobilis, S. mucosissima, and S. adhesiva are most often associated with human infections . The genus 1174-901-12 has previously been isolated from soil, ceramic roofs, and photovoltaic panels , indicating that its source may be the external environment or the building materials of the fitness club building. So far, only one species of Enhydrobacter is known (E. aerosaccus), which was isolated from a eutrophic lake. These bacteria are rare and poorly described in the literature; it is therefore difficult to determine their source in the studied fitness club or their potential effects .

The ITS-based analysis revealed that, in total, four hundred and eight (408) genera representing 11 eukaryotic phyla were detected in the dust. The most abundant phyla were Ascomycota (36.4%), Basidiomycota (28.4%), Arthropoda (11.4%), and Anthophyta (8.7%) ( a). The most abundant genera identified in the settled dust from the gym in question were Mycosphaerella (13.2%), Citellophilus (11.3%), Fusarium (4.1%), Cladosporium (3.7%), Sporobolomyces (1.9%), Mycena (1.9%), Alternaria (1.7%), Trametes (1.6%), Xylodon (1.4%), Itersonilia (1.2%), Vishniacozyma (1.2%), Epicoccum (1.1%), and Filobasidium (1.0%) ( b). Seven fungal genera of the hazardous category (Groups 2 or 3) according to Directive 2019/1833/EC were found, and their share in the total number of classified reads was very low (less than 1.5%). Of these, the most abundant genera were Cladosporium (3.7%), Aspergillus (0.9%), and Penicillium (0.4%) ( ).

Żyrek et al. indicated the presence of the yeast Candida sp. . Małecka-Adamowicz et al. found fungi of the genus Cladosporium and, to a lesser extent, Penicillium, Fusarium, Acremonium, Alternaria, and Aureobasidium . The occurrence of potentially allergenic molds of the genera Aspergillus and Cladosporium in Czech sports facilities was described in . Viegas et al. identified 25 species of fungi occurring in ten gymnasia, mainly molds of the genera Cladosporium, Penicillium, Aspergillus, Mucor, Phoma, and Chrysonilia, as well as the yeasts Rhodotorula spp., Trichosporon mucoides, and Cryptococcus uniguttulatus . Szulc et al. indicated the dominance of the fungi Mycosphaerella, Botrytis, Chalastospora, Cladosporium, Itersonilia, Malassezia, Naganishia, Saccharomyces, Sporobolomyces, Trichosporon, and Udeniomyces in sports facilities for climbing activities .

The results obtained in the present study differed from the literature data. The fungal genera Acremonium, Aureobasidium, Penicillium, Aspergillus, Candida, Mucor, Phoma, Chrysonilia, Rhodotorula, Trichosporon, and Cryptococcus, which dominated in earlier studies, were found here in low quantities, from 0.01% to 0.8% , which may be related to the seasonal variability of the fungal genera dominating in the atmospheric air that shapes the qualitative composition of indoor fungi. Moreover, in the present research, the genera Citellophilus, Mycena, Trametes, Xylodon, Vishniacozyma, Epicoccum, and Filobasidium were identified for the first time in gym facilities. These are common genera and likely come from the outdoor air.
They are known as plant parasites but can also be allergenic to humans, are often linked to decreased pulmonary function and asthma admissions, and may cause infections, particularly in immunosuppressed patients . It should be mentioned that the use of high-throughput sequencing on the Illumina platform made it possible to identify a greater variety of microorganisms in sports facilities than previously described in the literature. Metagenomic analysis is increasingly used to study various environmental samples, such as soil, water, technical materials (e.g., cardboard, cellulosic materials, collagen), settled dust, and many others . The advantage of this method is that microorganisms are identified directly from the test sample, skipping the cultivation stage, which prevents the loss of species that cannot grow under laboratory conditions .

2.6. Assessment of SARS-CoV-2 Virus Presence in the Fitness Center Environment

Selected surfaces in Room no. 2 (gym) were tested for the SARS-CoV-2 virus. In the case of the treadmill touch panel, the result was positive ( ). It is worth mentioning that between 74 and 164 cases of COVID-19 per day were recorded in Poland between 26 and 30 July 2021. In the province where the fitness center in question was located, cases ranged between 1 and 13 per day . The detection of the SARS-CoV-2 virus suggests a real risk of the spread of COVID-19 in gyms and fitness clubs; however, studies involving a larger number of tested samples are needed to confirm this hypothesis. The risk of COVID-19 transmission may arise from close contact, the emission of droplets, or fomites. Intensive physical activity in a fitness center favors these factors, mainly due to increased physical contact, an increased concentration of exhaled respiratory droplets in a confined space because of vigorous breathing, and shared communal space and equipment . No SARS-CoV-2 RNA was detected in previously performed air and surface studies at a fitness center in the U.S. . Conversely, SARS-CoV-2 transmission in sports facilities has previously been demonstrated through positive PCR tests of infected users and workers . In Norway, Helsingen et al. tested 3764 individuals divided into two groups (with and without access to training at a fitness center) . They found a difference of 0.05% (one versus zero cases) in SARS-CoV-2 RNA test positivity between training and non-training individuals. The authors concluded that, with good hygiene and physical distancing, training at fitness centers did not increase the risk of SARS-CoV-2 infection for individuals without COVID-19-relevant comorbidities. Therefore, it is essential to make the users and employees of these facilities aware of the principles of sanitary safety and the proper disinfection of hands and sports equipment.

2.7. Directions for Minimizing Microbiological and Chemical Threats in the Sports Facilities

The benefits of physical activity should be strengthened by reducing exposure to physicochemical and microbiological contamination and, consequently, by minimizing the risk of possible adverse health effects for the users of sports facilities . In the tested fitness club, we found high concentrations of dust and microorganisms, as well as the SARS-CoV-2 virus.
It is worth mentioning that the performed studies had some limitations resulting from (a) the uniqueness of the sample (only one fitness club was tested); (b) the season in which the samples were taken, given the influence of external bioaerosols on the amount and composition of internal bioaerosols; (c) the holiday/vacation season, which meant that the number of users was lower than during the rest of the year; and (d) the small number of samples taken for SARS-CoV-2 detection and metagenomic analysis.

Nevertheless, the results suggest that air purification systems with proven effectiveness, operating continuously during opening hours, are needed in sports facilities. Various chemical and physical methods of air disinfection are currently known and tested, including filtration, ozonation, exposure to ultraviolet radiation, photocatalysis, and cold plasma . Recently, the use of strong electric fields, in which the destruction or electroporation of microorganisms occurs, has also been proposed . Among these techniques, chemical fogging, ozonation, and UV irradiation of the air are the main solutions available on the market . These methods are currently used in clinical and pharmaceutical facilities; however, they also seem suitable for sports facilities. One way to prevent the spread of viruses and pathogenic microorganisms in sports facilities is to use floors with antibacterial properties and other materials (e.g., clothing and towels) with biostatic properties.

It is worth noting that one practice aimed at preventing the spread of the COVID-19 pandemic was the introduction of spray bottles filled with a disinfectant solution in sports centers for wiping exercise equipment after use. These practices, however, have their weaknesses. Surface disinfectants at sports facilities are rarely kept in their original packaging, which would allow the facility to control their composition and the concentration of active substances. Therefore, it is crucial to use EPA-approved disinfectants, consider the type of surface being disinfected (metal, plastic, leather, etc.), prepare working solutions following the manufacturer's guidelines, label them properly, and provide detailed instructions for use to the end users. This is important because the effectiveness of disinfection depends on the contact time between the preparation and the surface. A common mistake is spraying sports equipment and immediately wiping it off; such disinfection is ineffective and can even become dangerous for the user and the environment. Therefore, the staff of sports clubs must be properly trained to use appropriate safety procedures and personal protective equipment (if necessary) during disinfection. An alternative to sprayed disinfectants is disinfectant-impregnated wipes, consisting of towels saturated with diluted disinfectant and other compounds (i.e., surfactants, preservatives, enzymes, and perfumes) .

Staff and users of exercise facilities should wash their hands with water and plain soap before entering and leaving, and before and after any contact with other people and equipment, and should avoid sharing towels (preferably using disposable paper towels) and other personal items. Wounds, cuts, scrapes, etc., should be covered with a clean, dry dressing to prevent contamination.
The World Health Organization (WHO) recommends alcohol-based formulations for hand disinfection; such formulations have been shown to inactivate SARS-CoV-2 efficiently. Moreover, hydrogen peroxide, povidone-iodine, and other biocides possess antiviral properties and can be used to disinfect biological surfaces . The sharing of exercise equipment should be avoided if possible. If this is not possible, the use of a towel is recommended, or, for example, gloves that provide a barrier between the skin and the equipment. At the end of each working day, the facility staff should wash and disinfect all common exercise equipment used on that day. Moreover, objects inside a sports facility that require special attention include countertops, light switches, faucet handles, and doorknobs. Damaged equipment (e.g., with torn upholstery) that cannot be properly disinfected should be withdrawn from use. Future research should aim at introducing Internet of Things (IoT) systems for the continuous monitoring of air quality in sports facilities (e.g., using multiple sensors, including microfluidic chips) and at developing warning systems that alert users when the concentration of suspended dust or the recommended number of microorganisms in the air is exceeded.
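As a rough illustration of such a warning system, the sketch below checks sensor readings against guideline values cited in this article (1000 ppm CO2 per ASHRAE; 0.025 mg m⁻³ for PM2.5, used here as an alert level although it is formally an annual average; 1.0 × 10³ CFU m⁻³ total microorganisms per the WHO suggestion). The sensor node and its readings are hypothetical:

```python
# Guideline values cited in the text; the PM2.5 figure is formally an
# annual average, used here only as an illustrative alert threshold.
THRESHOLDS = {
    "co2_ppm": 1000.0,          # ASHRAE guideline for sports facilities
    "pm25_mg_m3": 0.025,        # EU annual-average PM2.5 limit
    "microbes_cfu_m3": 1.0e3,   # WHO-suggested total microorganism count
}

def check_readings(readings: dict[str, float]) -> list[str]:
    """Return a warning message for every reading above its threshold."""
    return [
        f"WARNING: {key} = {value} exceeds limit {THRESHOLDS[key]}"
        for key, value in readings.items()
        if key in THRESHOLDS and value > THRESHOLDS[key]
    ]

# Hypothetical readings from a gym sensor node
for msg in check_readings({"co2_ppm": 2198.0,
                           "pm25_mg_m3": 0.0841,
                           "microbes_cfu_m3": 1.66e3}):
    print(msg)
```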
The mean values of microclimatic conditions and PM concentrations are presented in . The microclimate parameters were also analyzed as daily averages ( ). The measurements were also taken depending on the time of day ( ) and the sampling location ( ). Airflow velocity measured during the experiments ranged from 0 m s −1 (no ventilation or air conditioning, no windows open, minimal number of people present at the sampling site) to 0.69 m s −1 (near an open window). The temperature was between 12.8 °C and 29.8 °C, and the relative humidity was between 44.3% and 78.3%. Microclimatic conditions are essential for achieving the optimal performance and comfort during exercise. The literature shows that an effective temperature below 22 °C degrades exercise performance among women, while an air temperature of 24 °C, with moderate RH, low air velocity, and weak radiation, is recommended at gyms to support exercise, comfort, and energy conservation . The International Fitness Association sets different recommendations for temperature and humidity at commercial gyms . A room temperature below 20 °C degrees and 50% humidity are recommended for aerobic classes, while for aerobics, cardio, weight training and Pilates areas, temperatures should be between 18 and 20 °C with a humidity between 40% and 60%. The relation between thermal comfort and air movement at elevated activity levels was also investigated . Air movement with higher temperatures produced equal or better comfort and perceived air quality below the reference condition for every temperature up to 26 °C. In our study, the air velocities and temperatures did not differ daily; the only difference was observed in the average daily humidity between Friday and the rest of the week ( ). Moreover, the air velocities did not depend on the time of day; the measurement was carried out while the temperature rose by the hour and the relative humidity first dropped (while the air conditioning was running) and then increased ( ). The temperature conditions at different locations were similar (no statistical differences in average temperature were detected; ). Statistically significant differences were detected for the average air velocity and average humidity ( ), corresponding to the number of windows open, the air conditioning running, and the number of people in the facility. The diversified values of these microclimatic parameters might lead to different development conditions for microorganisms between the tested locations. The fitness center environment is very unstable, and many factors affect the parameters of temperature, relative humidity and airflow velocity. In the present study, significantly different air velocity and air humidity conditions were noted, which implies that the microclimate parameters strongly depend on the specific location. There was no correlation between the individual microclimate parameters and the number of windows open, air conditioning running, and the number of people in the facility. Previous research has shown that the high number of people who exercise at closed sports facilities can contribute to the air quality issues inside them. The factors determining exercise intensity also affect air quality . The concentration of fine particles in the indoor air fluctuates depending on the weather conditions. Furthermore, ventilation and air filtration systems at such facilities are essential for proper air exchange and purification. 
The present study confirmed these results as significantly higher PM concentrations were observed indoors than outdoors ( ). The size distributions of airborne particles for each separate sampling variant are presented in . The PM 2.5 constituted almost all measured dust at the tested locations. Its share in the total quantity of the measured airborne particles was between 99.65% and 99.99%, and the range of the number of particles per size dropped with an increasing particle size. The total suspended PM concentration varied between 0.0445 mg m −3 and 0.0841 mg m −3 and differed significantly between all of the tested locations ( ). The daily averages were higher in the first three days of the experiment and significantly lower in the last two ( ). Considerably higher concentrations were observed at the beginning of each day and just before the facility was closed ( ). Based on full-factorial ANOVA, the main effects and all interactions were confirmed at a significance level of 0.05. According to EU legislation, the annual average concentration of dust with dimensions below 2.5 µm (i.e., the fraction containing the PM 1 fraction) should not exceed 0.025 mg m −3 . In our case, the measured PM 2.5 concentration was more than twice as high as the environmental threshold, independent of the sampling site . This agrees with the literature suggesting that the air quality inside training facilities is often worse than that outdoors . The presence of airborne particulate matter can affect the users’ health and decrease their physical performance by around 5% . Studies show that exposure to high PM concentrations can increase the risk of suffering from various respiratory and circulatory diseases . Moreover, some studies have suggested that people who regularly exercise are more prone to experiencing the effects of air pollutants than those who do not participate in sports . Carbon dioxide (CO 2 ) is the main gaseous air pollutant in sports facilities, connected to a natural product of human respiration . The CO 2 concentration in the fitness club ranged from 800 ppm (reception desk) to 2198 ppm (gym), and for most rooms, it was statistically significantly higher than in atmospheric air. In previous studies, the CO 2 concentrations in the air of sports halls were lower and ranged from 294.8 to 1529 ppm . However, the current studies have not shown any exceedance of the CO 2 concentration limits in the air developed by the WHO and the U.S. Environmental Protection Agency . Formaldehyde concentration in the tested sports facilities ranged from 0.005 mg/m 3 to 0.049 mg/m 3 , and for two out of five rooms, it was statistically significantly lower than in the control (atmospheric) air. This is probably due to the heavy traffic in the parking lot adjacent to the building and the busy street. According to EPA guidelines, the detected values of formaldehyde do not exceed the limits for this compound in the air .
The volatile compounds were extracted from the air sample by SPME, followed by desorption and analysis with GC-MS. A total of 85 compounds were identified, of which 84 were present in the air collected from the gym, while only 47 were identified in the control sample (background) ( ). The detected compounds were divided into ten groups based on their chemical structures: hydrocarbons (30), terpenes and terpenoids (20), alcohols (10), aldehydes (8), ketones (6), esters (6), furanes (2), phenols (1), ethers (1), and acids (1). The compounds identified in the indoor air from the gym can have various sources; for example, the air exhaled by people in the gym (e.g., acetone, ethanol, 1-propanol, butyl acetate, acetic acid, acetoin, and 2,3-butanedione), alcohol-based hand disinfectants and equipment cleaning agents , air fresheners and cosmetics (e.g., ethanol, phenol, benzaldehyde, 2-propanol, α-pinene, eucalyptol, linalool, 3-carene, D-limonene, γ-terpinene, α-thujene, β-myrcene, camphene, butane, pentane, acetone, furfural, 3-methyl-1-butanol, 2-methyl-1-butanol, dihydromyrcenol, citronellal, verbenone, menthol), building materials and room finishing materials , and gym equipment (e.g., furfural, toluene, benzene, styrene, xylenes, ethylbenzene, heptane, decane, benzaldehyde, hexanal).

The relative amount (%) results showed that phenol was the dominant compound in both the control air sample and the air sample from the gym. The presence of phenol reflects the widespread use of phenol and its derivatives in, among others, the production of resins, detergents, medicinal products, disinfectants, and dyes; they are thus found in many common materials including antiseptics, medical preparations, plastics, cosmetics, and health care products . Phenol also enters the air through car exhaust. Phenol is not classified as a carcinogen but as a toxic substance . Due to its hydrophilic and lipophilic properties, phenol easily penetrates cell membranes and dissolves in cell fractions, interacting with specific cellular and tissue structures . Three other identified compounds, D-limonene, toluene, and 2-ethyl-1-hexanol, each accounted for more than 1–2% of the relative amount in the indoor air from the gym. Among indoor air VOCs, terpenes are a common group; cleaning agents and cosmetics contain essential oils rich in terpenes . Toluene, along with benzene, ethylbenzene, and xylenes from the BTEX group, is classified as a toxic compound, while benzene is also classified as a carcinogenic substance (Group 1). Due to their application for various purposes such as the production of plastics, synthetic fibers, floor coverings, chipboard, oils, greases, and paint, the presence of BTEX is common in indoor air. Long-term exposure to BTEX increases the risk of adverse health consequences . In turn, 2-ethyl-1-hexanol is a common component of fragrances. Moreover, it is commonly used in the production of plasticizers (e.g., diethylhexyl phthalate for polyvinyl chloride resins) as well as in coating products, greases, fillers, and putties. The presence of 2-ethyl-1-hexanol may irritate the mucous membranes of the eyes and nose in humans .
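The relative amounts discussed above follow directly from the calculation stated in the Methods: each compound's share is its GC-MS peak area divided by the sum of all peak areas. The Python sketch below shows that arithmetic; the peak-area values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: relative amount (%) of each VOC as its peak area
# divided by the total peak area of all detected compounds.
peak_areas = {                 # hypothetical GC-MS peak areas (arb. units)
    "phenol": 1.25e7,
    "D-limonene": 4.1e5,
    "toluene": 3.6e5,
    "2-ethyl-1-hexanol": 3.2e5,
}

total_area = sum(peak_areas.values())
for compound, area in peak_areas.items():
    rel = 100.0 * area / total_area
    print(f"{compound}: {rel:.2f}% of total peak area")
```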
The average daily number of bacteria in the facilities during the working week ranged from 7.17 × 10 2 CFU m −3 (Wednesday) to 1.68 × 10 3 CFU m −3 (Friday), while the number of fungi ranged from 3.03 × 10 3 CFU m −3 (Wednesday) to 7.34 × 10 3 CFU m −3 (Friday) ( , a). The lowest number of bacteria was recorded at 08:00 (4.48 × 10 2 CFU m −3 ) and the highest at 20:00 (2.39 × 10 3 CFU m −3 ). In turn, the concentration of fungi was the lowest at 16:00 (4.53 × 10 3 CFU m −3 ) and the highest at 08:00 (7.42 × 10 3 CFU m −3 ) ( , b). The most contaminated air was observed in Room no. 2 (the gym), where the bacteria count was 1.66 × 10 3 CFU m −3 , and in Room no. 1 (the reception), where the number of fungi was 7.30 × 10 3 CFU m −3 (daily mean). It is noteworthy that, at the same time, Room no. 2 showed the lowest number of fungi (1.90 × 10 3 CFU m −3 ) among the analyzed rooms. In turn, the lowest number of bacteria was found in Room no. 4 (fitness room on the second floor) ( , c).

No statistically significant differences were found in the fitness club's mean daily numbers of bacteria. In the case of fungi, significant differences among days were observed, with the lowest concentration on Tuesday and the highest on Thursday ( a). Considering the influence of the sampling hour on the number of microorganisms in the air, it can be concluded that the number of fungi is constant while the number of bacteria changes, which is most likely related to the activity of people in these facilities. Statistically higher numbers of bacteria were recorded at the end of the day, at 20:00 ( b). It was also shown that the number of fungi in the atmospheric air was statistically significantly higher than in the samples collected at the fitness club ( c). The aggregate results from the week of air quality monitoring in the fitness club were subjected to detailed statistical analysis, which showed a very weak correlation between the average numbers of bacteria and fungi in the air and the airflow, temperature, relative humidity, and the number of particles in the air ( a–d). Moreover, the correlations between the number of microorganisms in the air, the number of persons present at the sampling location, and the number of open windows were also very weak ( e,f).

It is worth noting that the present research showed higher microbiological air contamination (bacteria: 7.17 × 10 2 –1.68 × 10 3 CFU m −3 ; fungi: 3.03 × 10 3 –7.34 × 10 3 CFU m −3 ) than that in previously published studies. In addition, previous research focused primarily on assessing microbial contamination in sports facilities in schools and universities. Brągoszewska et al. recorded between 4.20 × 10 2 and 8.75 × 10 2 CFU m −3 of bacteria in the air of a Polish high school gym, depending on the activity of the students . Additionally, other studies conducted in Europe (gyms, fitness rooms, and different facilities in academic sport centers) have shown lower microbial contamination (i.e., 5.80 × 10 1 to 2.00 × 10 4 CFU m −3 for the number of bacteria in the air, and 2.10 × 10 1 to 3.75 × 10 2 CFU m −3 for the number of fungi in sports facilities) . Recently, Boonrattanakij et al. investigated microbial contamination in a bicycle room at a fitness center in Taiwan using the same type of air sampler and culture media as the current study . The authors obtained a lower number of bacteria (4.01 × 10 2 –7.61 × 10 2 CFU m −3 ) and fungi (2.26 × 10 2 –8.37 × 10 2 CFU m −3 ).
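The CFU m −3 values above come from colony counts on impactor plates, corrected with Feller's table (Section 3.4) and scaled by the sampled air volume. The Python sketch below shows that conversion, using the positive-hole form of the Feller correction. The 400-hole plate is an assumption made here for MAS-100-type samplers and the colony count is hypothetical; neither is taken from the study.

```python
# Minimal sketch: impactor colony counts -> CFU per cubic metre of air.
N_HOLES = 400  # assumed number of impactor holes (not stated in the study)

def feller_correction(r: int, n_holes: int = N_HOLES) -> float:
    """Positive-hole correction: expected microorganisms given r positive holes."""
    return n_holes * sum(1.0 / (n_holes - k) for k in range(r))

def cfu_per_m3(colonies: int, sampled_litres: float) -> float:
    corrected = feller_correction(colonies)
    return corrected * 1000.0 / sampled_litres  # 1 m^3 = 1000 L

# e.g. 85 colonies counted after sampling 100 L of air
print(f"{cfu_per_m3(85, 100):.0f} CFU m^-3")
```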
It should be noted that many factors can be responsible for the differences in microbial contamination between the current and previous studies, such as building construction and materials, ventilation systems, and environmental factors (season, air humidity, temperature), among others . Unfortunately, there are no legal limits on the number of microorganisms in indoor air to which the results obtained in the present study could be referred. The WHO suggests that the total number of microorganisms should not exceed 1.0 × 10 3 CFU m −3 . At the same time, the Polish Commission for Maximum Admissible Concentrations and Intensities for Agents Harmful to Health in the Working Environment has developed limits of 5.0 × 10 3 CFU m −3 for the total number of mesophilic bacteria and for the total number of fungi in residential and public utility facilities . Considering the average daily values obtained in the present study, the WHO limit for bacteria was exceeded in Rooms no. 2 and 5, and the number of fungi exceeded the WHO recommendation in all rooms . Referring to the Polish guidelines, only the number of fungi in Room no. 1 (reception) exceeded the limit; in this room, there was most often an open window allowing the inflow of atmospheric air, which was highly contaminated with fungi during the research period.

Statistical analysis showed that atmospheric air could be the source of the fungi in the tested fitness club rooms. This hypothesis could be tested by comparing the fungal composition of the indoor air to that of the outdoor air. In contrast, the source of bacteria in the indoor air was probably human, as suggested by the highest values being observed in the room where the most intense exercises were performed. This conclusion is supported by previous studies showing that bacteria can be over two times more abundant indoors than outdoors, especially in poorly ventilated and heavily occupied premises . Although microclimate conditions, especially temperature and relative air humidity, as a rule correlate with the number of microorganisms in indoor air , this was not observed in the current research. This is probably because the environment of fitness clubs is very specific and unstable, which is mainly related to the varying number of people, who are carriers of specific microbiota, perform exercises of varying intensity, enter/leave rooms, open/close doors, open/close windows, turn fans and air-conditioners on and off, etc. These overlapping factors have unpredictable results; future research should introduce systems for continuously monitoring the microbiological air quality in sports facilities.
The highest number of bacteria was found in the shoe cabinet and on the table in the reception area used by waiting visitors (3.8 CFU cm −2 ). No bacteria were found on the exercise bike saddle or at the bottom of the locker used to store personal belongings (0 CFU cm −2 ). The highest number of fungi was found on the MMA training bag (6.2 CFU cm −2 ), while no fungi were present at the bottom of the storage locker ( ). Significant differences were observed in the concentration of bacteria and fungi between the tested surfaces ( p < 0.05). Few studies have presented a quantitative assessment of microbial surface contamination in sports facilities. In the present study, surface microbiological contamination was lower than that in previously published studies. Boonrattanakij et al. conducted microbiological tests on sports equipment (i.e., a bicycle handle, a dumbbell, and a sit-up bench) : the number of bacteria on the examined surfaces ranged from 3.9 × 10 2 CFU cm −2 to 3.7 × 10 3 CFU cm −2 . Notably, guidance was posted in the cloakroom and gym instructing users to disinfect the exercise equipment and the cabinets for personal belongings to prevent the spread of COVID-19. Based on the obtained results, it can be concluded that the users did not follow the recommendations in all cases and/or the disinfection was ineffective.
The results from the high-throughput DNA sequencing of the settled dust sample collected at the fitness club revealed a high diversity of microorganisms ( a). In total, four hundred and twenty-two (422) genera of bacteria representing 21 phyla were detected in the dust. Although the number of phyla was high, most accounted for a minimal number of classified reads. The most abundant phyla were Cyanobacteria (46%), Proteobacteria (30%), Actinobacteriota (14%), Firmicutes (6%), and Bacteroidota (2%) ( a). The high number of reads from the Cyanobacteria phylum was surprising. A closer analysis of the reads, based on a sequence similarity search employing the NCBI Nucleotide collection database, revealed that these sequences were mainly derived from pine ( Pinus spp.) chloroplast DNA, which suggests that the dust sample was primarily contaminated with pollen. The most abundant bacteria identified in the settled dust from the gym in question belonged to the genera Paracoccus (5.8%), Sphingomonas (3.9%), Micrococcus (3.8%), Escherichia-Shigella (2%), Acinetobacter (1.5%), Enhydrobacter (1.5%), Corynebacterium (1.5%), Kocuria (1.5%), 1174-901-12 ( Rhizobiales ; 1.2%), Bacillus (1.1%), and Rubellimicrobium (1.1%). Similarly, the presence of plant mitochondrial DNA was most probably due to the contamination of the dust sample with pine pollen.

Following this, the classified reads of the dust sample were checked for sequences of potentially hazardous bacterial genera according to Directive 2019/1833/EC . Twenty-eight hazardous genera (Groups 2 or 3) were identified; however, their share in the total number was very low (<7.5% of all classified reads). Of these, the most abundant genera were Escherichia-Shigella (2%), Corynebacterium (1.4%), Bacillus (1%), and Staphylococcus (0.8%) ( and ). So far, the bacteria Bacillus , Corynebacterium , Kocuria , Micrococcus , Pseudomonas , and Staphylococcus have been reported as characteristic of the environment of sports facilities, identified by classical (culture) methods [ , , ]. Turkskani et al. isolated bacteria from two Saudi Arabian gyms and identified them based on their 16S rRNA gene sequences . The authors assigned the detected bacteria to the following genera: Bacillus , Brachybacterium , Geobacillus , Microbacterium , Micrococcus , and Staphylococcus . Moreover, Haghverdian et al. demonstrated the prevalence and transmissibility of S. aureus on surfaces (floor, balls, hands) in sports facilities . The authors observed the viability of S. aureus on sequestered sports balls for 72 h, while another work demonstrated the survival of S. aureus strains for up to 12 days on inanimate surfaces . Recently, Szulc et al. (2023) published the results of the first metagenomic analysis of a bioaerosol from a sports center (a room with a climbing wall); the authors identified bacteria mainly belonging to the genera Cellulosimicrobium , Stenotrophomonas , Acinetobacter , Escherichia , and Lactobacillus in these environments . The present study detected bacteria of the genera Paracoccus , Sphingomonas , Enhydrobacter , Rubellimicrobium and 1174-901-12, with a share of more than 15%, which have never previously been identified in sports facilities. Paracoccus has been isolated from various environments including soils, salines, marine sediments, wastewater, and biofilters. Most species are saprophytes, but one, P. yeei , is known to be associated with opportunistic infections in humans .
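The genus-level percentages reported above are relative abundances: each genus's classified reads divided by the total classified reads. The Python sketch below shows this computation; the read counts are hypothetical and merely chosen to mirror a few of the dominant genera in the text.

```python
# Minimal sketch: relative abundance (%) of genera from classified
# amplicon read counts.
classified_reads = {               # hypothetical read counts per genus
    "Paracoccus": 5800,
    "Sphingomonas": 3900,
    "Micrococcus": 3800,
    "Escherichia-Shigella": 2000,
    "other": 84500,
}

total = sum(classified_reads.values())
abundance = {g: 100.0 * n / total for g, n in classified_reads.items()}
for genus, pct in sorted(abundance.items(), key=lambda kv: -kv[1]):
    print(f"{genus}: {pct:.1f}%")
```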
Additionally, Rubellimicrobium are environmental bacteria observed in soil, air, and slime on industrial machines ; therefore, their presence in a fitness club is unsurprising. Sphingomonas has also been isolated from many environmental samples (soil, sediment, water), including samples chemically contaminated with azo dyes, phenols, dibenzofurans, insecticides, and herbicides . Many Sphingomonas strains have been isolated from human clinical specimens and hospital environments, where Sphingomonas paucimobilis , S. mucosissima , and S. adhesiva are most associated with human infections . The genus 1174-901-12 has previously been isolated from soil, ceramic roofs, and photovoltaic panels , indicating that its source may be the external environment or the building materials of the fitness club building. So far, only one species of Enhydrobacter is known ( E. aerosaccus ), which was isolated from a eutrophic lake. These bacteria are rare and poorly described in the literature; therefore, it is challenging to draw conclusions about their source in the studied fitness club and their potential effects .

The ITS-based analysis revealed that, in total, four hundred and eight (408) genera representing 11 phyla were detected in the dust. The most abundant phyla were Ascomycota (36.4%), Basidiomycota (28.4%), Arthropoda (11.4%), and Anthophyta (8.7%) ( a); the latter two are non-fungal eukaryotes whose DNA is also amplified by ITS primers. The most abundant taxa identified in the settled dust from the gym in question belonged to the genera Mycosphaerella (13.2%), Citellophilus (11.3%), Fusarium (4.1%), Cladosporium (3.7%), Sporobolomyces (1.9%), Mycena (1.9%), Alternaria (1.7%), Trametes (1.6%), Xylodon (1.4%), Itersonilia (1.2%), Vishniacozyma (1.2%), Epicoccum (1.1%), and Filobasidium (1.0%) ( b). Seven genera of the hazardous category (Groups 2 or 3), according to Directive 2019/1833/EC , were found, and their share in the total number was very low (less than 1.5% of all classified reads). Of these, the most abundant genera were Cladosporium (3.7%), Aspergillus (0.9%), and Penicillium (0.4%) ( ). Żyrek et al. indicated the presence of the yeast Candida sp. . Małecka-Adamowicz et al. found fungi from the genus Cladosporium and, to a lesser extent, Penicillium , Fusarium , Acremonium , Alternaria , and Aureobasidium . The occurrence of potentially allergenic molds of the genera Aspergillus and Cladosporium in Czech sports facilities was described in . Viegas et al. identified 25 species of fungi occurring in ten gymnasia, mainly molds of the genera Cladosporium , Penicillium , Aspergillus , Mucor , Phoma and Chrysonilia , as well as the yeasts Rhodotorula spp., Trichosporon mucoides and Cryptococcus uniguttulattus . Szulc et al. indicated the dominance of the fungi Mycosphaerella , Botrytis , Chalastospora , Cladosporium , Itersonilia , Malassezia , Naganishia , Saccharomyces , Sporobolomyces , Trichosporon , and Udeniomyces in sports facilities for climbing activities . The results obtained in the present study differed from the literature data: the fungal genera Acremonium , Aureobasidium , Penicillium , Aspergillus , Candida , Mucor , Phoma , Chrysonilia , Rhodotorula , Trichosporon and Cryptococcus , which dominated in earlier studies, were found here in low quantities, from 0.01% to 0.8% , which may be related to the seasonal variability of the fungi dominating the atmospheric air, which shapes the qualitative composition of indoor fungi.
Moreover, in the present research, the following fungi were identified for the first time in gym facilities: Citellophilus , Mycena , Trametes , Xylodon , Vishniacozyma , Epicoccum , and Filobasidium . These are common genera and likely come from the outdoor air. They are known as plant parasites but can also be allergenic to humans, are often linked to decreased pulmonary function and asthma admissions, and may cause infections, particularly in immunosuppressed patients [ , , , , , , , ]. It should be mentioned that the use of high-throughput sequencing on the Illumina platform made it possible to identify a greater variety of microorganisms in sports facilities than previously described in the literature. Metagenomic analysis is increasingly used to study various environmental samples such as soil, water, technical materials (e.g., cardboard, cellulosic materials, collagen), settled dust, and many others [ , , ]. The advantage of this method is that microorganisms are identified directly from the test sample, skipping the cultivation stage, which prevents the loss of species that cannot grow under laboratory conditions .
Selected surfaces in Room no. 2 (gym) were tested for the SARS-CoV-2 virus. In the case of the treadmill touch panel, the result was positive ( ). It is worth mentioning that between 74 and 164 cases of COVID-19 per day were recorded in Poland between 26 and 30 July 2021, and in the province where the fitness center in question was located, cases ranged between 1 and 13 per day . The detection of the SARS-CoV-2 virus suggests a real risk of the spread of COVID-19 in gyms and fitness clubs; however, studies involving a larger number of tested samples are needed to confirm this hypothesis. The risk of COVID-19 transmission may arise from close contact, the emission of droplets, or fomites. Intensive physical activity at a fitness center favors these factors, mainly due to the increased physical contact, the increased concentration of exhaled respiratory droplets in a confined space because of vigorous breathing, and the shared communal space and equipment . No RNA of SARS-CoV-2 was detected in previously performed air and surface studies at a fitness center in the U.S. . SARS-CoV-2 transmission in sports facilities has previously been demonstrated by positive PCR tests of infected users and workers . Conversely, in Norway, Helsingen et al. tested 3764 individuals divided into two groups (with and without access to training at a fitness center) . They found a difference of 0.05% (one versus zero cases) in SARS-CoV-2 RNA test positivity between training and non-training individuals. The authors stated that, with good hygiene and physical distancing, fitness centers did not increase the risk of SARS-CoV-2 infection for individuals without COVID-19-relevant comorbidities. Therefore, it is essential to make the users and employees of these facilities aware of the principles of sanitary safety and the proper disinfection of hands and sports equipment.
The benefits of physical activity should be reinforced by reducing exposure to physicochemical and microbiological contamination in sports facilities and consequently minimizing the risk of possible adverse health effects for users . In the tested fitness club, we found high concentrations of dust and microorganisms, as well as the SARS-CoV-2 virus. It is worth mentioning that the performed studies had some limitations resulting from (a) the uniqueness of the sample (only one fitness club was tested); (b) the season in which the samples were taken, given the influence of external bioaerosols on the amount and composition of internal bioaerosols; (c) the holiday/vacation season, which meant that the number of users was lower than during the rest of the year; and (d) the small number of samples taken for SARS-CoV-2 detection and metagenomic analysis.

Nevertheless, the results suggest that air purification systems with proven effectiveness, operating continuously during opening hours, are needed in sports facilities. Various chemical and physical methods are currently known and tested for air disinfection, including filtration, ozonation, exposure to ultraviolet radiation, photocatalysis, and cold plasma . Recently, it has been proposed to use strong electric fields, in which the destruction or electroporation of microorganisms occurs . Among these disinfection techniques, chemical fogging, ozonation, and UV irradiation of the air are the main solutions available on the market . These methods are currently used in clinical and pharmaceutical settings; however, they also seem suitable for sports facilities. One way to prevent the spread of viruses and pathogenic microorganisms in sports facilities is to use floors with antibacterial properties and other materials (e.g., clothing and towels) with biostatic properties.

It is worth noting that one of the practices introduced to limit the spread of the COVID-19 pandemic was providing spray bottles filled with a disinfectant solution at sports centers for wiping the exercise equipment after use. These practices have their weaknesses. Surface disinfectants at sports facilities are rarely kept in their original packaging, which would allow the center to verify their composition and the concentration of active substances. Therefore, it is crucial to use EPA-approved disinfectants, to consider the type of surface being disinfected (metal, plastic, leather, etc.), and to prepare working solutions of disinfectants following the manufacturer's guidelines, labeling them properly and providing detailed instructions for the end users. This is important because the effectiveness of disinfection depends on the contact time of the preparation with the surface. A common error is spraying sports equipment and immediately wiping it off; such disinfection is not effective and may even become dangerous for the user and the environment. Therefore, the staff at sports clubs must be properly trained to use appropriate safety procedures and personal protective equipment (if necessary) during disinfection. An alternative to sprayed disinfectants can be disinfectant-impregnated wipes, consisting of towels saturated with diluted disinfectant and other compounds (i.e., surfactants, preservatives, enzymes, and perfumes) .
Staff and users of exercise facilities should wash their hands with water and plain soap before entering and leaving, and before and after any contact with other people and equipment in sports facilities, and should avoid sharing towels (preferably using disposable paper towels) and other personal items. Wounds, cuts, scrapes, etc., should be covered with a clean, dry dressing to prevent contamination. The World Health Organization (WHO) recommends alcohol-based formulations for hand disinfection; such formulations have been shown to efficiently inactivate SARS-CoV-2. Moreover, hydrogen peroxide, povidone-iodine and other biocides possess antiviral properties and can be used to disinfect biological surfaces . The sharing of exercise equipment should be avoided if possible. If this is not possible, the use of a towel is recommended, or, for example, gloves that provide a barrier between the skin and the equipment. After the entire working day, the facility staff should wash and disinfect all common exercise equipment used on a given day. Moreover, objects inside a sports facility that require special attention include countertops, light switches, faucet handles, and doorknobs. Staff should withdraw from use any damaged equipment (e.g., with torn upholstery) that cannot be properly disinfected. Future research should aim at introducing Internet of Things (IoT) systems for the constant monitoring of air quality in sports facilities (e.g., using multiple sensors, including microfluidic chips, and developing warning systems triggered when the concentration of suspended dust or the recommended number of microorganisms in the air is exceeded).
3.1. Tested Fitness Center and Sampling Strategy

The research was conducted in a fitness club in Zduńska Wola (central Poland). The tested fitness center is located in a service and commercial building built in the 1990s and operates from Monday to Friday from 8:00 to 22:00 and on the weekends from 9:00 to 15:00. The characteristics of the rooms under study are presented in . Air samples were collected from five fitness center rooms equipped with occasionally operated air conditioning. Moreover, control samples (atmospheric air) were collected simultaneously in front of the building. Samples were collected during the entire working week (Monday–Friday) at 8:00, 12:00, 16:00 and 20:00 under normal operating conditions. At the same time, the microclimate and particulate matter concentrations were analyzed. The microbial contamination of 20 surfaces in the fitness center was also assessed ( ). The chemical contamination of the air was checked in the gym (Room no. 2) in comparison to the control atmospheric air (Room no. 6). Additionally, three samples were taken from surfaces in Room no. 2 (gym) to verify the presence of the SARS-CoV-2 virus. A pooled sample of settled dust was also collected to determine the biodiversity of the microorganisms.

3.2. Microclimate, Particulate Matter, Carbon Dioxide, and Formaldehyde Analysis

A VelociCalc ® Multi-Function Velocity Meter 9545 (TSI, Dallas, TX, USA) thermo-anemometer was used to establish the temperature, relative humidity, and airflow rate at the selected workstations. The measurements were taken over 2 min at 1 s intervals; averages were logged for each sampling variant (day/hour/location). The concentration of particulate matter (PM 1 ; PM 2.5 ; PM 4 ; PM 10 ; PM total ) was measured using a DustTrak™ DRX Aerosol Monitor 8533 portable laser photometer (TSI, USA). The detection range for particles with diameters ranging from 0.1 to 15 μm was between 0.001 and 150 mg m −3 . The measurements were carried out in triplicate for each location at 1.5 m above ground level. The sampling rate was set to 3 L min −1 and the sampling interval to 5 s. The total sampling time was 3 min. The carbon dioxide and formaldehyde concentrations were measured using an M200 Multi-functional Air Quality Detector (Temtop, China).

3.3. Volatile Compounds Analysis

Detailed analysis of the volatile compounds was carried out using headspace solid-phase microextraction coupled to gas chromatography-mass spectrometry (HS-SPME-GC-MS). Tedlar bags (5 L) were used for the collection of air samples from Rooms no. 2 and 6 (gym and external background). For the extraction of volatile compounds from the air samples, the solid-phase microextraction technique was used with a fiber coated with a 50/30 μm divinylbenzene/carboxen/polydimethylsiloxane (DVB/CAR/PDMS) phase (length 1 cm). The SPME fiber was inserted via the sampling port, followed by exposure for 60 min at 20 °C. After the adsorption of volatiles, the fiber was retracted into the needle and transferred to the inlet of the GC apparatus for the desorption of analytes. Desorption was carried out for 5 min at 250 °C. Before each extraction, the fiber was conditioned for 10 min in the GC inlet at 260 °C for cleaning. A GC-MS system was used for the volatile compound analysis (GC Agilent 7890A and MS Agilent MSD 5975C, Agilent Technologies, Santa Clara, CA, USA). The compounds were separated on a DB-1ms capillary column, 60 m × 0.25 mm × 0.25 µm (Agilent Technologies, Santa Clara, CA, USA). All injections were performed in splitless mode. Helium was used as the carrier gas at a flow rate of 1.1 mL/min. The GC oven temperature was programmed to increase from 30 °C (held for 10 min) to 70 °C at a rate of 2 °C/min (held for 2 min), then to 235 °C at a rate of 10 °C/min (held for 3.5 min). The MS ion source, transfer line, and quadrupole analyzer temperatures were 230, 250, and 150 °C, respectively. The electron impact energy was set at 70 eV. The mass spectrometer was operated in full scan mode (SCAN). The identification of volatiles was performed by comparing the obtained spectra with the reference mass spectra from the NIST/EPA/NIH mass spectral library (2012; Version 2.0 g) or with mass spectra obtained from GC standards, and confirmed with the use of a deconvolution procedure. Retention indices (RI) were then calculated according to the formula proposed by van den Dool and Kratz, relative to a homologous series of n-alkanes from C5 to C20 (a worked sketch of this interpolation follows this subsection), and compared with literature data . Data processing was conducted with Mass Hunter Workstation Software (Agilent, Santa Clara, CA, USA). The relative amounts of volatile compounds were calculated as the individual peak area relative to the total peak area.
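The van den Dool and Kratz index is a linear interpolation of the analyte's retention time between the two bracketing n-alkanes. The minimal Python sketch below implements that interpolation; the retention times used are hypothetical and only illustrate the calculation.

```python
# Minimal sketch of the van den Dool and Kratz retention index for
# temperature-programmed GC: RI = 100 * (n + (t_x - t_n) / (t_{n+1} - t_n)),
# where n is the carbon number of the alkane eluting just before the analyte.
import bisect

def retention_index(t_x: float, alkane_rt: dict) -> float:
    """alkane_rt maps carbon number -> retention time; t_x must lie
    within the alkane retention-time range."""
    carbons = sorted(alkane_rt)
    times = [alkane_rt[c] for c in carbons]
    i = bisect.bisect_right(times, t_x) - 1   # index of bracketing alkane
    n, t_n = carbons[i], times[i]
    t_next = times[i + 1]
    return 100.0 * (n + (t_x - t_n) / (t_next - t_n))

alkanes = {9: 18.4, 10: 21.7, 11: 24.9}  # hypothetical C9-C11 times (min)
print(f"RI = {retention_index(22.5, alkanes):.0f}")  # analyte at 22.5 min -> 1025
```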
3.4. Determination of Airborne Microorganism Numbers

Air samples (20–100 L) were collected in triplicate per sampling site at a height of about 1.5 m above ground level, with an airflow rate of 100 L min −1 , using a MAS-100 Eco Air Sampler (Merck, Darmstadt, Germany), according to EN 13098 . The microbiological contamination of the air was determined using TSA (tryptic soy agar, Merck, Germany) with nystatin (0.2%) to determine the number of bacteria, and MEA (malt extract agar, Merck, Germany) with chloramphenicol (0.1%) to determine the number of fungi. The samples were incubated at either 25 ± 2 °C for 5–7 days (fungi) or 30 ± 2 °C for 48 h (bacteria). After incubation, the colonies were counted and corrected based on Feller's statistical correction table. The results were calculated as the arithmetic mean of three independent repetitions and expressed in CFU m −3 .

3.5. Determination of Surface Microbial Contamination

Samples from 20 different surfaces throughout the facility (two independent repetitions) were collected on the first day of testing (Monday) between 8:00 and 10:00 using Hygicult ® TPC (Orion Diagnostica Oy, Espoo, Finland) with the Total Plate Count medium. The collected samples were incubated at 30 ± 2 °C for 3–5 days. Next, the colonies were counted, and the results (arithmetic mean of two independent repetitions) were expressed in CFU cm −2 .

3.6. Detection of SARS-CoV-2

Swabs were taken from areas of approximately 100 cm 2 on three surfaces (treadmill touch panel, panel and grips of an elliptical cross trainer, and multi-gym grips) located in the gym (Room no. 2) using R9F buffer (A&A Biotechnology, Gdańsk, Poland). The surfaces were selected based on their high frequency of use, direct contact with the users' hands, and vicinity to the breathing zone. RNA isolation was performed with the CoV RNA Kit (A&A Biotechnology, Poland). The presence of SARS-CoV-2 RNA in the tested samples was confirmed by real-time PCR with TaqMan probes, using the MediPAN-2G+ FAST COVID test (Medicofarma, Warsaw, Poland) kit by A&A Biotechnology (Poland) according to the manufacturer's instructions. The test detects fragments of two SARS-CoV-2 genes (i.e., ORF1ab (nsp2) and gene S). A synthetic fragment of a plant virus genome was used as a control.
3.7. Determination of Biodiversity

Dust deposited on the surfaces of the gym equipment (10–12 devices located 0.5–2 m above the ground) was collected with sterile, dry swabs and refrigerated overnight (4 °C). The samples were then combined into one and used for DNA extraction. Genomic DNA was extracted using the Soil DNA Purification Kit (EURX, Poland) according to the manufacturer's instructions. The presence of genomic DNA in the tested samples was confirmed by fluorimetry (Qubit). The extracted DNA concentration was 1 µg mL −1 . Universal primers amplifying a fragment of the bacterial 16S rRNA gene and the fungal ITS regions were used in the reaction [ , , ]. Q5 Hot Start High-Fidelity 2X Master Mix (NEB, Ipswich, MA, USA) was used for PCR according to the manufacturer's instructions. The libraries were prepared and sequenced by Genomed (Warsaw, Poland) using paired-end technology on the Illumina MiSeq (2 × 300 nt) platform with a v3 kit (Illumina, San Diego, CA, USA). Automatic initial analysis was performed on the MiSeq sequencer using MiSeq Reporter (MSR) v2.6. The obtained results were then subjected to bioinformatic analysis. Adapter sequences were removed from the reads, which were next subjected to quality control with the Cutadapt program using quality (<20) and minimal length (30 nt) thresholds . The 16S library reads were further processed using the DADA2 package to separate sequences of biological origin from those generated during the sequencing process; this package was also used to select unique sequences of biological origin, the so-called amplicon sequence variants (ASVs). Bioinformatic analysis of the reads for species-level classification was performed using the QIIME 2 program based on the Silva 138 database, using a hybrid approach . First, ASV sequences were compared with the database to find identical reference sequences using the VSEARCH algorithm . Next, the atypical sequences left over from the previous step were classified based on machine learning, performed using SKLearn. Classification of the ITS library reads at the species level was performed using QIIME based on the UNITE v8 reference database . After filtering, as described above, the reads were clustered against the reference database using the UCLUST algorithm. Chimeric sequences were removed using the USEARCH (usearch61) algorithm. Finally, taxonomy was assigned based on the reference database using the BLAST algorithm. Sequencing data files in FASTQ format were deposited in the NCBI Sequence Read Archive (SRA) under BioProject accession number PRJNA818521 (BioSample Acc. SAMN26866224 and Run Acc. SRR18428312 and SRR18428311).
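The study used Cutadapt for the quality (<20) and minimum-length (30 nt) filtering step. As an illustration of what such a filter does, the sketch below re-implements a simplified equivalent in Python with Biopython: a 3′ quality trim followed by a length cut-off. The file names are hypothetical and the code is not part of the published pipeline.

```python
# Minimal sketch: trim low-quality 3' ends (Phred < 20) and discard
# reads shorter than 30 nt, roughly mirroring the Cutadapt settings above.
from Bio import SeqIO

kept = []
for rec in SeqIO.parse("reads_16S.fastq", "fastq"):  # hypothetical input file
    quals = rec.letter_annotations["phred_quality"]
    end = len(quals)
    # walk back from the 3' end while base quality is below 20
    while end > 0 and quals[end - 1] < 20:
        end -= 1
    trimmed = rec[:end]  # SeqRecord slicing keeps per-letter qualities
    if len(trimmed) >= 30:
        kept.append(trimmed)

SeqIO.write(kept, "reads_16S.filtered.fastq", "fastq")
print(f"kept {len(kept)} reads")
```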
3.8. Statistical Analysis

Statistical analysis was carried out with Statistica 13.1 (Statsoft, Tulsa, OK, USA). Descriptive statistics were calculated for all variables of interest. For the microclimate parameters and the number of microorganisms in the air, one-way analysis of variance (ANOVA) was performed on the data grouped by sampling day, hour, and location. The ANOVA assumptions were checked with the Shapiro–Wilk and Levene tests. When a statistical difference was detected, the means were compared using Tukey's or Dunn's post hoc tests. Full-factorial ANOVA was performed for the particulate matter concentration, followed by Tukey's post hoc test. In the case of surface microbial contamination, the Fisher–Snedecor test was carried out on the numbers of microorganisms averaged over the tested surfaces. As the variances in the numbers of bacteria and fungi on the examined surfaces were heterogeneous, a t-test for unequal variances was performed. All tests were performed at a significance level of 0.05. Linear regression was performed to check for correlations between the numbers of bacteria and fungi in the air and the other measured parameters. The strength of each correlation was described using the Evans (1996) guide for the absolute value of the correlation coefficient r: 0.00–0.19 "very weak"; 0.20–0.39 "weak"; 0.40–0.59 "moderate"; 0.60–0.79 "strong"; 0.80–1.0 "very strong".
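To show how the regression and the Evans scale fit together, the Python sketch below fits a linear regression between a microclimate parameter and microbial counts and labels the resulting |r| with the Evans (1996) categories quoted above. The data points are hypothetical.

```python
# Minimal sketch: linear regression plus Evans (1996) interpretation of |r|.
from scipy.stats import linregress

def evans_strength(r: float) -> str:
    r = abs(r)
    for limit, label in [(0.2, "very weak"), (0.4, "weak"),
                         (0.6, "moderate"), (0.8, "strong")]:
        if r < limit:
            return label
    return "very strong"

temperature = [18.2, 20.5, 22.1, 24.0, 25.3, 26.8]  # hypothetical, degrees C
bacteria = [620, 840, 710, 1320, 980, 1540]         # hypothetical, CFU m^-3

fit = linregress(temperature, bacteria)
print(f"r = {fit.rvalue:.2f} ({evans_strength(fit.rvalue)} correlation), "
      f"p = {fit.pvalue:.3f}")
```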
High particulate matter concentrations, especially of PM 2.5 , which exceeded the environmental threshold, were observed in the fitness center, supporting statements that the air quality inside sports facilities can be worse than that outdoors. Moreover, chemical markers such as the CO 2 concentration and VOCs (phenol, toluene, and 2-ethyl-1-hexanol) may be useful for air quality monitoring in sports facilities. Additionally, the concentration of airborne microorganisms was high compared to previous research and literature recommendations. It is also noteworthy that genera of bacteria ( Escherichia-Shigella , Corynebacterium , Bacillus , Staphylococcus ) and fungi ( Cladosporium , Aspergillus , Penicillium ) potentially belonging to the second and third groups of health hazards according to Directive 2019/1833/EC were detected, albeit at relatively low concentrations. In addition, other genera that may be allergenic ( Epicoccum ) or infectious ( Acinetobacter , Sphingomonas , Sporobolomyces ), as well as the SARS-CoV-2 virus, were detected. Due to the possibility of high contamination with chemicals, CO 2 , bacteria and fungi, and the spread of the SARS-CoV-2 virus in sports facilities, air purification systems with proven effectiveness (e.g., UV flow lamps, photocatalytic ionizers) operating continuously during opening hours are needed. Future research should aim at introducing systems for the constant monitoring of air quality in sports facilities (e.g., using multiple sensors, including microfluidic chips) and at developing warning systems triggered when the concentration of suspended dust or the recommended number of microorganisms in the air is exceeded.
The Promise of Artificial Intelligence in Digestive Healthcare and the Bioethics Challenges It Presents
Medicine is advancing swiftly into the era of Big Data, particularly through the more widespread use of Electronic Health Records (EHRs) and the digitalization of clinical data, intensifying the demands on informatics solutions in healthcare settings. Like all major advances throughout history, the benefits on offer come with new rules of engagement. Some 50 years have passed since what is considered to have been the birth of Artificial Intelligence (AI) at the Dartmouth Summer Research Project , an intensive 2-month project that set out to solve the problems faced when attempting to make a machine that can simulate human intelligence. However, it was not until some years later that the first efforts to design biomedical computing solutions based on AI were seen . These efforts are beginning to bear fruit, and since the turn of the century, we have witnessed truly significant advances in this field, particularly in terms of medical image analysis . Indeed, a search of the PubMed database using the terms "Artificial Intelligence" and "Gastrointestinal Endoscopy" returned 3 articles in 2017, as opposed to 64 in 2021 and 42 in 2022. While the true impact of these practices is yet to be seen in the clinic, their goals are clear: (i) to offer patients more personalized healthcare; (ii) to achieve greater diagnostic/prognostic accuracy; (iii) to reduce human error in clinical practice; and (iv) to reduce the time demands on clinicians as well as to enhance the efficiency of healthcare services. However, the introduction of these tools raises important bioethical issues. Consequently, and before attempting to reap the benefits that they have to offer, it is important to assess how these advances affect patient–clinician relationships , what impact they will have on medical decision making, and how these potential improvements in diagnostic accuracy and efficiency will affect the different healthcare systems around the world.

1.1. The State-of-the-Art in Gastroenterology

A number of medical specialties, such as Gastroenterology, rely heavily on medical images to establish disease diagnosis and patient prognosis, as well as to monitor disease progression. Moreover, in more recent times, some such imaging techniques have been adapted so that they can potentially deliver therapeutic interventions . The digitalization of medical imaging has paved the way for important advances in this field, including the design of AI solutions to aid image acquisition and analysis . Different endoscopy modalities can be used to visualize and monitor the Gastrointestinal (GI) tract, making this an area in which AI models and applications could play an important future role. Indeed, this is reflected in the attempts to design AI-based tools addressing distinct aspects of these examinations and adapting to the different endoscopy techniques employed in the clinic. Accordingly, the development of such AI tools has been the focus of considerable effort of late, mainly with a view to improving the diagnostic accuracy of GI imaging and streamlining these procedures . The term AI is overarching, yet in the context of medical imaging, it can perhaps be more precisely defined by the machine learning (ML) class of AI applications, algorithms that are specifically used to recognize patterns in complex datasets .
"Supervised" and "unsupervised" ML models exist, although the former are perhaps of more interest in this context, as they are better suited to predicting known outputs (e.g., a specific change in a tissue or organ, the presence of a lesion in the mucosa or debris in the tract, etc.). Multi-layered Convolutional Neural Networks (CNNs) are a specific type of deep learning (DL) model, a modality of ML. Significantly, CNNs excel in the analysis, differentiation and classification of medical images and videos, essentially due to their artificial resemblance to neurobiological processes . As might be expected, there have been significant technical advances in endoscopy over the years. Indeed, two decades have now passed since Capsule Endoscopy (CE; also known as Wireless or Video CE) was shown to be a valid, minimally invasive diagnostic tool to visualise the intestine in its entirety, including the small bowel (SB) and colon . CE systems involve three main elements. Firstly, there is the capsule that houses the camera, and now perhaps multiple cameras, as well as a light source, a transmitter and a battery. The second element is a sensor system that receives the information transmitted by the capsule and that is connected to a recording system. Finally, there is the software required to display the endoscopy images so they can be examined. All these CE elements have undergone significant improvements since they were initially developed. For example, there have been numerous improvements to the capsules (e.g., in their frame acquisition rates, their angle of vision, the number of cameras, and manoeuvrability), as well as to the software used to visualise and examine the images obtained. One of the benefits of CE is that it offers the possibility of examining less accessible regions of the intestine, such as the SB, structures that are difficult to reach using standard endoscopy protocols. Consequently, CE can be used to evaluate conditions that are complicated to diagnose clearly, such as chronic GI bleeding, tumours and especially SB tumours; mucosal damage; Crohn's disease (CD); chronic iron-deficiency anaemia; GI polyposis; or celiac disease . There are also fewer contraindications associated with the use of CE, although these may include disorders of GI motility, GI tract narrowing/obstruction, dysphagia, large GI diverticula or intestinal fistula. Despite the evolution of these systems over the past two decades, they still face a number of challenges, and these will be the target of future improvements. As indicated, software to aid in the reading and evaluation of the images acquired by CE has also been developed, on the whole through efforts to decrease the reading times associated with these tests and to improve the accuracy of the results obtained. The time that trained gastroenterologists must dedicate to the analysis of CE examinations is a particularly critical issue given the number of images generated (ca. 50,000), such that considerable effort is required to ensure adequate diagnostic yields, with correspondingly high costs. Accordingly, the main limitation of CE, and particularly Colon Capsule Endoscopy (CCE), as a first-line procedure for the panendoscopic analysis of the entire GI mucosa is that it is a relatively time-consuming and laborious diagnostic test that requires some expertise in image analysis.
In fact, the diagnostic yield of CE is in part hampered by the monotonous and laborious human CE video analysis, which translates into suboptimal diagnostic accuracy, particularly in terms of sensitivity and negative predictive value (NPV). It must also be considered that alterations may only be evident in a few of the frames extracted from CE examinations, which means there is a significant chance that important lesions might be overlooked . Indeed, the inter- and intra-operator error associated with the reading process is one of the main sources of error in these examinations. As a result, there has been much interest from an early stage in the development of these systems in designing software that can automatically detect certain features in the images obtained. For example, there have been attempts to include support vector machines (SVMs) within CE systems, in particular for the detection of blood/hematic traces . In this sense, one of the most interesting recent and future developments in CE is the possible incorporation of AI algorithms to automate the detection, differentiation and stratification of specific features of the GI images obtained .

1.2. Automated Analysis and AI Tools to Examine the GI Tract

Several studies have showcased the potential of using CNNs in different areas of digestive endoscopy. For example, when performing such examinations, the preparation and cleanliness of the GI tract are fundamental to ensure the validity of the results obtained. Nevertheless, clearly validated scales to assess this feature of endoscopy examinations are still lacking, which has inspired efforts to design AI tools based on CNN models that can automatically evaluate GI tract cleanliness in these tests . Obviously, and in line with the advances in other areas of medicine, many studies have centred on the design of AI tools capable of detecting lesions on or alterations to the GI mucosa likely to be associated with disease , as well as specific characteristics of these changes. Indeed, the potential to apply these systems in real time could offer important benefits to the clinician, particularly when contemplating conditions that require prompt diagnosis and treatment. Moreover, these systems could potentially be used in combination with other AI tools, such as those designed to assess the quality of preparation, or in attempts not only to identify lesions but also to establish their malignant potential . We must also consider that the implementation of AI tools for healthcare administration is likely to have a direct effect on gastroenterology, as it will on other clinical areas. Thus, in light of the increasing number of AI applications that may potentially be integrated into standard healthcare, it becomes more urgent to address the bioethical issues that surround their use before they are implemented in clinical practice. In this sense, it is important to note that while existing frameworks could be adjusted to regulate the use of clinical AI applications, their disruptive nature makes it more likely that new 'purpose-built' regulatory frameworks and guidelines will have to be drawn up, from which regulations can be defined. Moreover, in this process, it will be important to ensure that the AI innovations these frameworks are designed to control are enhanced and not limited by the regulations drawn up.
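To ground the discussion of CNN-based frame analysis, the following minimal PyTorch sketch shows the general shape of such a classifier: a small convolutional feature extractor followed by a linear head that labels each capsule-endoscopy frame as, say, "normal" or "lesion". This is an illustrative toy model, not any of the published systems cited above; the layer sizes, 224 × 224 input resolution and two-class setup are arbitrary choices made for the example.

```python
# Minimal sketch of a CNN frame classifier for capsule endoscopy.
import torch
import torch.nn as nn

class CEFrameClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = CEFrameClassifier()
frames = torch.randn(8, 3, 224, 224)  # a batch of 8 RGB frames
logits = model(frames)                # per-frame "normal" vs "lesion" scores
print(logits.shape)                   # torch.Size([8, 2])
```

In practice, published systems tend to start from a large pretrained backbone and fine-tune it on annotated CE frames, but the overall pattern of convolutional feature extraction followed by a classification head is the same.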
The potential benefits that are provided by any new technology must be weighed up against any risks associated with its introduction. Accordingly, if the AI tools that are developed to be used with CE are to fulfil their potential, they must offer guarantees against significant risks, perhaps the most important of which are related to issues of privacy and data protection, unintentional bias in the data and design of the tools, transferability, explainability and responsibility . In addition, it is clear that this is a disruptive technology that will require regulatory guidelines to be put in place to legislate the appropriate use of these tools, guidelines that are on the whole yet to be established. However, it is clear that the need for such regulation has not escaped the healthcare regulators, and, as in other fields, initiatives have been launched to explore the legal aspects surrounding the use of AI tools in healthcare that will clearly be relevant to digestive medicine as well . 2.1. Privacy and Data Management for AI-Based Tools Ensuring the privacy of medical information is increasingly challenging in the digital age. Not only are electronic data easily reproduced, but they are also vulnerable to remote access and manipulation, with economic incentives intensifying cyberattacks on health-related organisations . Breaches of medical confidentiality can have important consequences for patients. Indeed, they may not only be responsible for the shaming or alienation of patients with certain illnesses, but they could even perhaps limit their employment opportunities or affect their health insurance costs. As medical AI applications become more common, and as more data are collected and used/shared more widely, the threat to privacy increases. The hope is that measures such as de-identification will help maintain privacy, although this would require the process to be adopted more generally in many areas of life, and the inconvenience associated with such approaches makes this unlikely to occur. Moreover, re-identification of de-identified data is surprisingly easy , and thus, we must perhaps accept that introducing clinical AI applications will compromise our privacy a little. This would be more acceptable if all individuals had the same chance of benefitting from these tools, in the absence of any bias, but at present, this does not appear to be the case (see below). While some progress in personal data protection has been made (e.g., General Data Protection Regulation 2016/679 in the E.U. or the Health Insurance Portability and Accountability Act in the USA: ), further advances involving all stakeholders are required to specifically address the data privacy issues associated with the deployment of AI applications . The main aim of novel healthcare interventions and technologies is to reduce morbidity and mortality, or to achieve similar health outcomes more efficiently or economically. The evidence favouring the implementation of AI systems in healthcare generally focuses on their relative accuracy compared to gold standards , and as such, there have been fewer clinical trials carried out that measure their effects on outcomes . This emphasis on accuracy may potentially lead to overdiagnosis , although this is a phenomenon that may be compensated for by considering other pathological, genomic and clinical data. Hence, it may be necessary to use more extended personal data from EHRs in AI applications to ensure the benefits of the tools are fully reaped and that they do not mislead physicians.
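As an illustration of the de-identification measures discussed above, the following minimal sketch — the field names and site key are hypothetical — replaces a direct identifier with a keyed pseudonym before records are pooled for AI training. As noted, such measures reduce but do not eliminate the risk of re-identification.

```python
# Minimal sketch of keyed pseudonymisation before data pooling.
# The secret key and field names are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"site-specific secret held by the data controller"  # assumption

def pseudonymise(patient_id: str) -> str:
    """Deterministic pseudonym: the same patient always maps to the same
    token, but the token cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "ID-1234567", "finding": "angioectasia, SB segment 2"}
shared = {"pid": pseudonymise(record["patient_id"]), "finding": record["finding"]}
print(shared)  # the direct identifier is replaced by an unlinkable token
```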
One of the advantages of using such algorithms is that they might identify patterns and characteristics that are difficult for the human observer to perceive, and even those that may not currently be included in epidemiological studies, further enhancing diagnostic precision. However, this situation will create important demands on data management, on the safe and secure use of personal information and regarding consent for its use, accentuated by the large amount of quality data required to train and validate DL tools. Traditional opt-in/opt-out models of consent will be difficult to implement on the scale of these data and in such a dynamic environment . Thus, addressing data-related issues will be fundamental to ensure a problem-free incorporation of AI tools into healthcare , perhaps requiring novel approaches to data protection. One possible solution to the question of privacy and data management may come through the emergence of blockchain technologies in healthcare environments. In this sense, recent initiatives into the use of blockchain technology in healthcare may offer possible solutions to some of the problems regarding data handling and management, not least as this technology will facilitate the safer, traceable and efficient handling of an individual’s clinical information . Indeed, the uniqueness of blockchain technology resides in the fact that it permits a massive, secure and decentralized public store of ordered records or events to be established . Indeed, the local storage of medical information is a barrier to sharing this information, as well as potentially compromising its security. Blockchain technology enables data to be carefully protected and safely stored, assuring their immutability . Thus, blockchain technology could help overcome the current fragmentation of a patient’s medical records, potentially benefitting the patient and healthcare professionals alike. Indeed, it could promote communication between healthcare professionals both at the same and perhaps at a different centre, radically reducing the costs associated with sharing medical data . AI applications can benefit from different features of the use of a blockchain, offering trustworthiness, enhanced privacy and traceability. Indeed, when the data used in AI applications (both for training and in general) are acquired from a reliable, secure and trusted platform, AI algorithms will perform better. 2.2. The Issue of Bias in AI Applications Among the most important issues faced by AI applications are those of bias and transferability . Bias may be introduced through the training data employed or by decisions that are made during the design process . In essence, ML systems are shaped by the data on which they are trained and validated, identifying patterns in large datasets that reproduce desired outcomes. Indeed, AI systems are tailor-made, and as such, they are only as good as the data with which they are trained. As such, when these data are incomplete, unrepresentative or poorly interpreted, the end result can be catastrophic . One specific type of bias, spectrum bias, occurs when a diagnostic test is studied in individuals who differ from the population for which the test was intended. Indeed, spectrum bias has been recognized as a potential pitfall for AI applications in capsule endoscopy (CE) , as well as in the field of cardiovascular medicine . Hence, AI learning models might not always be fully valid and applicable to new datasets. 
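The core property invoked here — that a retrospective edit to a sealed record is detectable — can be illustrated with a toy hash chain. This is only a sketch of the chaining idea; real healthcare blockchains add distribution, consensus and access control on top of it.

```python
# Toy hash chain illustrating the immutability property of a blockchain ledger.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    body = {k: block[k] for k in ("record", "prev_hash", "ts")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(record: dict, prev_hash: str) -> dict:
    block = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    block["hash"] = block_hash(block)   # seal the entry
    return block

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                                  # contents edited after sealing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                                  # link to previous entry broken
    return True

ledger = [make_block({"event": "CE examination uploaded"}, prev_hash="0")]
ledger.append(make_block({"event": "AI-assisted report issued"}, ledger[-1]["hash"]))
print(chain_is_valid(ledger))              # True
ledger[0]["record"]["event"] = "tampered"  # retrospective edit...
print(chain_is_valid(ledger))              # ...is detected: False
```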
In this context, the integration of blockchain-enabled data from other healthcare platforms could serve to augment the number of what would otherwise be underrepresented cases in a dataset, thereby improving the training of the AI application and ultimately, its successful implementation. In real life, any inherent bias in clinical tools cannot be ignored and must be considered before validating AI applications. Likewise, overfitting of these models should not be ignored, a phenomenon that occurs when the model is too tightly tuned to the training data, and as a result, it does not function correctly when fed with other data . This can be avoided by using larger datasets for training and by not training the applications excessively, and possibly also by simplifying the models themselves. The way outcomes are identified is also entirely dependent on the data the models are fed. Indeed, there are examples of different pathologies where certain physical characteristics achieve better diagnostic performance, such as lighter rather than darker skin, perhaps because this population is overrepresented in the training data. Consequently, it is possible that only those with fair skin will fully benefit from such tools . Human decisions may also skew AI tools, such that they may act in discriminatory ways . Disadvantaged groups may not be well-represented in the formative stages of evidence-based medicine , and unless this is rectified and human interventions combat this bias, it will almost certainly be carried over into AI tools. Hence, programmes will need to be established to ensure ethical AI development, such as those contemplated to detect and eliminate bias in data and algorithms . While bias may emerge from poor data collection and evaluation, it can also emerge in systems trained on high-quality datasets. Aggregation bias can emerge from using a single population to design a model that is not optimal for another group . Thus, the potential that bias exists must be faced and not ignored, searching for solutions to overcome this problem rather than rejecting the implementation of AI tools on this basis. In association with bias, transferability to other settings is a related and significant issue for AI tools . An algorithm trained and tested in one environment will not necessarily perform as well in another environment, and it may need to be retrained on data from the new environment. Even so, transferability is not ensured, and hence, AI tools must be carefully designed, tested and evaluated in each new context prior to their use with patients . This issue also implies there must be significant transparency about the data sources used in the design and development of these systems, with the ensuing demands on data protection and safety. 2.3. The Explainability, Responsibility and the Role of the Clinician in the Era of AI-Based Medicine Another critical issue with regards to the application of DL algorithms is that of explainability and interpretability . When explainable, what an algorithm does and the value it encodes can be readily understood . However, it appears that less explainable algorithms may be more accurate , and thus, it remains unclear if it is possible to achieve both these features at the same time. How algorithms achieve a particular classification or recommendation may even be unclear to some extent to designers and users alike, not least due to the influence of training on the output of the algorithms and that of user interactions.
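One partial remedy for this opacity is post hoc explanation. As a minimal sketch — assuming the toy PyTorch classifier sketched earlier; real explainability methods such as saliency or class-activation mapping are more sophisticated — an occlusion map masks one patch of a frame at a time and records how far the model’s “lesion” probability falls, giving the reader a coarse picture of where the evidence for the call came from. Such maps mitigate, but do not resolve, the problem discussed next.

```python
# Occlusion-sensitivity sketch: which regions of a frame drove the decision?
import torch

def occlusion_map(model: torch.nn.Module, frame: torch.Tensor,
                  patch: int = 32, target: int = 1) -> torch.Tensor:
    """Mask one patch at a time and record how much the target-class
    probability drops: large drops mark the regions the model relied on."""
    model.eval()
    with torch.no_grad():
        ref = model(frame.unsqueeze(0)).softmax(1)[0, target].item()
        _, h, w = frame.shape
        heat = torch.zeros(h // patch, w // patch)
        for i in range(0, h - h % patch, patch):
            for j in range(0, w - w % patch, patch):
                masked = frame.clone()
                masked[:, i:i + patch, j:j + patch] = 0.0   # black out one patch
                prob = model(masked.unsqueeze(0)).softmax(1)[0, target].item()
                heat[i // patch, j // patch] = ref - prob
    return heat

# e.g., heat = occlusion_map(model, frames[0]) with the toy classifier above
```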
Indeed, in situations where algorithms are being used to address relatively complex medical situations and relationships, this can lead to what is referred to as “black-box medicine”: circumstances in which the basis for clinical decision making becomes less clear . While the explanations a clinician may give for their decisions may not be perfect, they are responsible for these decisions and can usually offer a coherent explanation if necessary. Thus, should AI tools be allowed to make diagnostic, prognostic and management decisions that cannot be explained by a physician ? Some lack of explainability has been widely accepted in modern medicine, with clinicians prescribing aspirin as an analgesic without understanding its mechanism of action for nearly a century . Moreover, it still remains unclear why lithium acts as a mood stabilizer . If drugs can be prescribed without understanding how they work, then can we not use AI without fully understanding how it reaches a decision? Yet as we move towards greater patient inclusion in their healthcare decisions, the inability of a clinician to fully explain decisions based on AI may become more problematic. Hence, perhaps we are right to seek systems that allow us to trace how conclusions are reached. Moreover, only through some degree of knowledge of AI can physicians be aware of what these tools can actually achieve and when they may be performing irregularly. AI is commonly considered to be of neutral value, neither intrinsically good nor bad, yet it is capable of producing good and bad outcomes. AI algorithms explicitly or implicitly encode values as part of their design , and these values inevitably influence patient outcomes. For example, algorithms will often be designed to prioritise the avoidance of false-negative rather than false-positive identifications, or to perform distinctly depending on the quality of the preparation. While the performance of AI systems will represent a limiting factor for diagnostic success, additional factors will also influence their accuracy and sensitivity, such as the data on which they are trained, how the data are used by the algorithm, and any conscious or unconscious biases that may be introduced. Indeed, the digitalisation of medicine has been said to have shifted the physician’s attention away from the body towards the patient’s data , and the introduction of AI tools runs the risk of further exacerbating this movement. Introducing AI tools into medicine also has implications for the allocation of responsibility regarding treatment decisions and any adverse outcomes based on the use of such tools, as discussed in greater depth elsewhere . At present, there appears to be a void regarding legal responsibility if the use of AI applications produces harm , and there are difficulties in clearly establishing the autonomy and agency of AI . Should any adverse event occur, it is necessary to establish if any party failed in their duty or if errors occurred, attributing responsibility accordingly. Responsibility for the use of the AI will usually be shared between the physician and institution where the treatment was provided, but what of the designers? Responsibility for acting on the basis of the output of the AI will rest with the physician, yet perhaps no party has acted improperly, or the AI tool may have behaved in an unanticipated manner. Indeed, if the machine performs its tasks reliably, there may be no wrongdoing even when it fails.
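How such a value judgement is typically encoded can be made concrete with a small sketch: the operating threshold of a classifier is chosen on validation scores so that sensitivity stays high, deliberately accepting more false alarms than missed lesions. The scores and labels below are synthetic and the numbers are illustrative only.

```python
# Encoding "missing a lesion is worse than a false alarm" as a threshold choice.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)                       # 1 = lesion present
scores = np.clip(labels * 0.35 + rng.normal(0.4, 0.2, 1000), 0, 1)

def sensitivity(th):  # true-positive rate at threshold th
    return ((scores >= th) & (labels == 1)).sum() / (labels == 1).sum()

def specificity(th):  # true-negative rate at threshold th
    return ((scores < th) & (labels == 0)).sum() / (labels == 0).sum()

# Pick the highest threshold that still keeps sensitivity >= 95%: the system
# is deliberately biased toward false positives rather than missed lesions.
chosen = max(t for t in np.linspace(0, 1, 101) if sensitivity(t) >= 0.95)
print(f"threshold={chosen:.2f}  sens={sensitivity(chosen):.2f}  "
      f"spec={specificity(chosen):.2f}")
```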
The points in an algorithm at which decisions are made may be complicated to define, and thus, clinicians may be asked to take responsibility for decisions they have not made when using a system that incorporates AI. Importantly, this uncertainty regarding responsibility may influence the trust of a patient in their clinician . Accordingly, the more that clinicians and patients rely upon clinical AI systems, the more that trust may shift away from clinicians toward the AI tools themselves . In relation to the above, the implementation of AI tools may also raise concerns about the role of clinicians. While there are fears that they will be ‘replaced’ by AI tools , the ideal situation would be to take advantage of the strengths of both humans and machines. AI applications could help to compensate for shortages in personnel , they could free up more of a clinician’s time, enabling them to dedicate this time to their patients or other tasks , or they might enhance the clinician’s capacity in terms of the number of patients they could treat. While decision making in conjunction with AI should involve clinicians, the issue of machine–human disagreement must be addressed . Alternatively, should we be looking for opportunities to introduce fully automated clinical AI solutions? For example, could negative results following AI-based assessment of GI examinations be communicated directly to the patient? While this might be more efficient, it brings into question the individual’s relationship with the clinician. Indeed, the dehumanisation of healthcare may have a detrimental rather than a beneficial effect given the therapeutic value of human contact, attention and empathy . While clinicians may have more time to dedicate to their patients as more automated systems are incorporated into their workflow, they may be less able to explain AI-based healthcare decision making . Moreover, continued use of AI tools could deteriorate a clinician’s skills, a phenomenon referred to as “de-skilling” , such as their capacity to interpret endoscopy images or to identify less obvious alterations. Conversely, automating workflows may expose clinicians to more images, honing their skills by greater exposure to clinically relevant images, yet maybe at the cost of seeing fewer normal images. In addition, more extended use of automated algorithms may lead to a propensity to accept automated decisions even when they are wrong , with a negative effect on the clinician’s diagnostic precision. Thus, efforts must be made to ensure that the clinician’s professional capacity remains fine-tuned, avoiding the development of a dependence on automated systems and any potential loss of skills (e.g., in performing and interpreting endoscopies) when physicians are no longer required to perform these tasks routinely (the phenomenon of de-skilling has also been dealt with in more detail elsewhere ). Other issues have been raised in association with the clinical introduction of AI applications, such as whether they will lead to greater surveillance of populations and how this should be controlled. Surveillance might compromise privacy but it could also be beneficial, enhancing the data with which the DL applications are trained, so perhaps this is an issue that will be necessary to contemplate in regulatory guidelines. Another issue that also needs to be addressed is the extent to which non-medical specialists such as computer scientists and IT specialists will gain power in clinical settings.
Finally, the fragility associated with reliance on AI systems and the potential that monopolies will be established in specific areas of healthcare will also have to be considered . In summary, it will be important to respect a series of criteria when designing and implementing AI-based clinical solutions to ensure that they are trustworthy .
We are clearly at an interesting moment in the history of medicine as we embrace the use of AI and big data as a further step in the era of medical digitalisation. Despite the many challenges that must be faced, this is clearly going to be a disruptive technology in many medical fields, affecting clinical decision making and the doctor–patient dynamic in what will almost certainly be a tremendously positive way. Different levels of automation can be achieved by introducing AI tools into clinical decision-making routines, selecting between fully automated procedures and aids to conventional protocols as specific situations demand. Some issues that must be addressed prior to the clinical implementation of AI tools have already been recognised in healthcare scenarios. For example, bias is an existing problem evident through inequalities in the care received by some populations. AI applications can be used to incorporate and examine large amounts of data, allowing inequalities to be identified and leveraging this technology to address these problems. Through training on different populations, it may be possible to identify specific features of these populations that have an influence on disease prevalence, and/or on its progression and prognosis. Indeed, the identification of population-specific features that are associated with disease will undoubtedly have an important impact on medical research. However, there are other challenges that are posed by these systems that have not been faced previously and that will have to be resolved prior to their widespread incorporation into clinical decision-making procedures . Automating procedures is commonly considered to be associated with greater efficiency, reduced costs and savings in time. The growing use of CE in digestive healthcare and the adaptation of these systems to an increasing number of circumstances generates a large amount of information, and each examination may require over an hour to analyse. This not only requires the dedication of a clinician or specialist, and their training, but it may increase the chance of errors due to tiredness or monotony (not least as lesions may only be present in a small number of the tens of thousands of images obtained ). DL tools have been developed based on CNNs to be used in conjunction with different CE techniques that aim to detect lesions or abnormalities in the intestinal mucosa . These algorithms are capable of reducing the time required to read these examinations to a matter of minutes (depending on the computational infrastructure available). Moreover, they have been shown to be capable of achieving accuracies and results not dissimilar to the current gold standard (expert clinician visual analysis), performances that will most likely improve with time and use. In addition, some of these tools will clearly be able to be used in real time, with the advantages that this will offer to clinicians and patients alike . As well as the savings in time and effort that can be achieved by implementing AI tools, these advances may to some extent also drive the democratization of medicine and help in the application of specialist tools in less well-developed areas. Consequently, the use of AI solutions might reduce the need for specialist training to be able to offer healthcare services in environments that may be more poorly equipped.
This may represent an important complement to systems such as CE that involve the use of more portable apparatus capable of being used in areas with more limited access and where patients may not necessarily have access to major medical facilities. Indeed, it may even be possible to use CE in the patient’s home environment. It should also be noted that enhancing the capacity to review and evaluate large numbers of images in a significantly shorter period of time may also offer important benefits in the field of clinical research. Drug discovery programmes and research into other clinical applications are notoriously slow and laborious. Thus, any tools that can help speed up the testing and screening capacities in research pipelines may have important consequences in the development of novel treatments. Moreover, when performing multicentre trials, the variation in the protocols implemented is often an additional and undesired variable. Hence, medical research and clinical trials in particular will benefit from the use of more standardized and less subjective tools. Accordingly, offering researchers the ability to access large amounts of data that have been collected in a uniform manner, even when obtained from different sites, and making it possible to perform medical examinations more swiftly, can only benefit clinical research studies and trials.
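To illustrate the workflow behind these time savings, the following minimal sketch — reusing the toy classifier sketched earlier, with a hypothetical threshold — reduces a long CE video to a short review list by scoring every frame and flagging only the suspicious ones.

```python
# Triage sketch: turn tens of thousands of frames into a short review list.
import torch

def triage(model: torch.nn.Module, frames: torch.Tensor,
           threshold: float = 0.5, batch: int = 64) -> list:
    """Return the indices of frames whose lesion probability exceeds the
    threshold, so that only these are queued for human review."""
    model.eval()
    flagged = []
    with torch.no_grad():
        for start in range(0, len(frames), batch):
            probs = model(frames[start:start + batch]).softmax(dim=1)[:, 1]
            hits = (probs > threshold).nonzero(as_tuple=True)[0]
            flagged.extend((start + hits).tolist())
    return flagged

# e.g., with the toy classifier sketched earlier:
# to_review = triage(CEFrameClassifier(), torch.rand(200, 3, 224, 224))
```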
In terms of the introduction of AI applications into clinical pipelines, we consider the future to be one of great promise. While it is clear that it will not be seamless and it will require the coordinated effort of many stakeholders, the pot of gold that awaits at the end of the rainbow seems to be getting ever bigger. These applications raise important bioethical issues, not least those related to privacy, data protection, data bias, explainability and responsibility. Consequently, the design and implementation of these tools will need to respect specific criteria to ensure that they are trustworthy . Since these are tools that are breaking new ground, the solutions to these issues may also need to be defined ad hoc, adopting novel procedures. This is an issue that cannot be overlooked as it may be critical to ensure that the opportunities offered by this technology do not slip through our hands.
|
Anatomical Augmentation Using Suture Tape for Acute Syndesmotic Injury in Maisonneuve Fracture: A Case Report
|
406cbd4b-4c54-443f-8546-8f4737b16d4e
|
10145241
|
Suturing[mh]
|
The tibiofibular syndesmosis, a fibrous joint that stabilizes the fibula and tibia, consists of four lateral ligaments: the anterior inferior tibiofibular ligament (AITFL), interosseous ligament (IOL), transverse ligament (TL), and posterior inferior tibiofibular ligament (PITFL). These ligaments stabilize the syndesmosis and prevent the excessive motion of the fibula, such that an appropriate fibular position is maintained; they also play an important role in syndesmotic function and the talar position . Within the syndesmotic ligament complex, the AITFL and PITFL play the most important roles in stabilizing the distal syndesmosis . Distal tibiofibular syndesmotic injury is involved in 10% of all ankle fractures and up to 20% of rotational ankle fractures [ , , ]. The distal tibiofibular syndesmosis is crucial for the congruity and integrity of the ankle joint, which, in turn, is critical for weight-bearing . An injury to these critical structures can lead to significant disability [ , , ]. According to a cadaveric study , in cases of syndesmosis injury, the tibiotalar contact area can be reduced by 42% with only a 1 mm lateral shift of the talus. The stabilization of the syndesmosis is essential to achieving good long-term, functional outcomes for the ankle joint, and to preventing posttraumatic arthritis . One traditional method for reducing the syndesmosis is a transosseous screw fixation. However, the position, diameter, number, and retrieval of the syndesmotic screws, as well as the method of cortical fixation, remain controversial [ , , , ]. Recently, several studies have reported the use of suture tape for a ligament augmentation in cases of syndesmosis injury [ , , , ]. In one study, this novel fixation method proved to be as effective as screw fixation , while, in a cadaver model, a minimally invasive anatomic augmentation of the anterior and posterior syndesmosis was achieved by using suture tape . In this study, we report a case of unstable syndesmotic injury, in which the anatomical reduction of the syndesmosis was achieved by an augmentation of the AITFL and PITFL using suture tape.
This case report was approved by the Institutional Review Board (IRB) of Soonchunhyang University Cheonan Hospital, Cheonan, South Korea (IRB No. 2023-01-007). The patient provided written informed consent for the publication of this report and the accompanying images. A 39-year-old male presented to the emergency department of our hospital with severe pain and swelling in the right ankle. The patient stated that he fell off a skateboard and rotated his ankle. He had no history of illness, or of genetic or familial diseases. A physical examination revealed ankle swelling, extreme tenderness, and ecchymosis in the medial aspect of the ankle and the proximal fibula. There were no neurological deficits, and the dorsalis pedis and tibialis posterior arteries were palpable. The anteroposterior, lateral, and mortise view right ankle radiographs revealed a widening of the medial clear space and a posterior malleolus fracture. Moreover, the “syndesmosis overlap” was reduced in comparison with the contralateral side. Additionally, a full-length radiograph of the lower leg revealed a proximal fibula fracture ( ). Computed tomography (CT) scans were taken for an accurate evaluation of the syndesmosis. On the axial CT, the fibula was not located in the fibular notch; it was found to be displaced laterally and posteriorly at a point 1 cm above the tibial plafond ( ). Magnetic resonance imaging (MRI) revealed ruptured deltoid ligaments, along with AITFL, PITFL, and interosseous membrane (IOM) injuries ( ). The final diagnosis was a Maisonneuve fracture with a proximal fibular fracture, a syndesmosis injury with an IOM rupture, and a medial deltoid ligament injury; these findings were confirmed during surgery. On day 2 after the injury, the patient underwent a syndesmosis reduction and fixation. The patient was placed on the operating table in the supine position, and arthroscopy was performed using standard anteromedial and anterolateral portals. We did not observe a cartilage injury, syndesmotic instability (lateral malleolus displacement >5 mm), or PITFL rupture at the point of the tibial insertion ( ). We planned to use suture tape for the syndesmosis joint reduction and fixation. InternalBrace (Arthrex, Naples, FL, USA), a nonabsorbable suture tape, was used for the fixation. First, the AITFL rupture was confirmed to be approximately 4 cm above the distal tibiofibular joint. We checked the distal tibial footprints, and a 3.4 mm bone tunnel was created. A 2.7 mm drilling was performed on the footprints of the syndesmosis ligament in the distal fibula, from front to back, to create a bone tunnel. The suture tape was passed through and fixed with 3.5 mm interference screws (SwiveLock; Arthrex). After internally rotating the patient’s leg, a longitudinal incision was made approximately 5 cm above the Volkmann tubercle. We palpated the Volkmann tubercle and passed the suture tape between the peroneus tendon and the bone. After reducing the syndesmosis joint, the free ends of the suture tape were fixed to the bone tunnel on the tibia side, which was prepared under C-arm guidance with 4.75 mm SwiveLock® anchors ( ). Then, the medial clear space was reduced to within the normal range. A deltoid ligament repair was not performed, and the proximal fibula fracture was treated conservatively. A plain X-ray and CT performed immediately after the surgery confirmed a successful syndesmotic reduction ( ). Postoperatively, a short leg splint was worn for approximately 2 weeks.
The patient was instructed to use an ankle brace for an additional 2 weeks. Active and passive ankle range of motion exercises were performed from 4 weeks postoperatively, and full weight-bearing walking was then allowed with braces. The braces were removed after 6 weeks. Then, a 3-month rehabilitation program consisting of ankle muscle strength, balance, and functional performance training was completed. An axial CT performed 6 months after the surgery revealed a similar alignment of the syndesmosis between the injured and uninjured sides ( ). There were no complications, and the patient did not complain of discomfort in daily life. At the 1-year postoperative follow-up exam, the Olerud–Molander Ankle Score and the American Orthopaedic Foot and Ankle Society Ankle-Hindfoot scale were at 95 and 90 points, respectively, and the visual analog scale pain score was at 1 point. The range of motion of the ankle joint was checked, reported as injured side (uninjured side): ankle dorsiflexion 15° (20°), plantar flexion 40° (40°), varus 20° (20°), and valgus 10° (10°), showing almost no limitations.
Traumatic distal tibiofibular syndesmosis injuries commonly occur during contact sports. Syndesmotic injuries that are associated with ankle rotation account for approximately 10% of all ankle fractures, >20% of which are treated surgically . A retrospective study found that the proportion of syndesmotic injuries that were sustained by athletes that could be classified as acute sprains was approximately 20% . Missed or improperly treated syndesmosis injuries can result in unnecessary pain or functional impairment, which may ultimately progress to arthritis . Achieving and maintaining an anatomical reduction is important for good long-term, complication-free outcomes in cases of syndesmotic injury . The treatment methods for distal syndesmosis injuries are highly controversial [ , , ]. The traditional fixation method for an unstable syndesmosis is transsyndesmotic screw fixation. Although the number of screws, the fixation period, and the removal time are debatable, this traditional fixation is still the most widely used technique. However, its disadvantages include screw breakage, malreduction, synostosis, the need for screw removal (and diastasis thereafter), delayed weight-bearing, and disuse osteoporosis [ , , ]. Good outcomes of suture-button fixation have been reported by studies that applied this technique to overcome the drawbacks of the traditional fixation [ , , , ]. However, the potential complications of suture-button fixation include soft tissue complications, infections, osteolysis, and heterotopic ossification [ , , ]. In a biomechanical study, suture-button fixation alone did not provide an adequate rotational stability . Forsythe et al. reported that FiberWire-button (Arthrex) fixation was less effective for maintaining syndesmotic reduction in the immediate postoperative period, relative to a metallic screw . Moreover, Teramoto et al. reported that neither single- nor double-suture-button fixation stabilized the syndesmosis in cases of inversion and external rotation, although the former was sufficient for physiologic stability . Several studies have reported good results from using suture tape in conjunction with suture-button fixation for an AITFL augmentation . Nonabsorbable suture tape that is designed for the treatment of ankle lateral instability has been widely applied, while the InternalBrace (Arthrex) was developed in 2012. This device uses SwiveLock screws for a knotless aperture fixation, and FiberTape (Arthrex) fixed to each ligament enhances the repair and augmentation. Nelson proposed an open anatomic repair for AITFL injuries, and reported that this technique can restore the ankle’s mortise stability and facilitate bone repair, in order to promote an early return to functional exercises and activities . Moreover, there is no requirement for a syndesmotic screw fixation. Lee et al. introduced a repair technique for the AITFL by using suture tape under arthroscopic guidance . Although their approach has a basic concept similar to that of Nelson, it also has distinct advantages in terms of weight-bearing and rehabilitation in the early stage after surgery, a lack of any requirements for screw removal, and no functional limitations . Kwon et al. reported that the use of the InternalBrace for AITFL injuries was an effective and safe adjunctive strategy for addressing syndesmotic instability . Lee et al. 
reported that open anterior syndesmotic repair using suture tape provided torsional strength similar to that of screw fixation in cases of ankle syndesmotic injury, and suggested that it could serve as an alternative treatment option . The suture tape techniques described above have a notable limitation: they can only be performed when the PITFL is intact. In a cadaver model, Regauer et al. introduced a minimally invasive anterior and posterior augmentation technique using the InternalBrace device . When using such techniques in actual patients, an initial examination should be performed to determine whether the patient is a suitable candidate. If a PITFL rupture is confirmed by an ankle axial CT, an MRI, and arthroscopy, and if a reduction is also deemed to be required, the AITFL and PITFL augmentation can be performed using the InternalBrace. To confirm a successful surgical outcome when using the InternalBrace fixation, the degree of syndesmosis reduction should be assessed immediately by an axial CT, comparing the injured with the uninjured side.
As a treatment for unstable syndesmosis injury, ligament augmentation using suture tape provides satisfactory clinical outcomes and can be considered a useful and reliable method for anatomical restoration and rapid rehabilitation. However, cadaveric biomechanical studies are needed for validation.
Introducing a Rapid DNA Analysis Procedure for Crime Scene Samples Outside of the Laboratory—A Field Experiment
The use of DNA analysis in the process of criminal investigation and prosecution has grown exponentially over the past decades and is still increasing [ , , , ]. DNA analysis of biological traces and subsequent database searches can generate investigative leads, identify or exclude suspects, contribute to the reconstruction of an incident, or provide evidence against suspects [ , , ]. During the crime scene investigation, items and trace evidence are secured at the scene. After selection, the most promising biological traces are sent for DNA analysis. The resulting DNA profiles can be compared with reference DNA profiles (e.g., of potential suspects or victims within a case) and with a forensic DNA database for criminal cases ( In the Netherlands, this database contains profiles of suspects, convicts, trace material secured at crime scenes, and deceased victims of unsolved crimes ). After this, the results are reported back to the investigation team. Worldwide, turnaround times for DNA results (the time from DNA sampling at the crime scene or the (police) laboratory to the DNA report) are longer than desired and can take weeks or months [ , , , ]. This calls for ‘rapid’ solutions, especially during the investigation phase, where forensic evidence increasingly influences the direction and, thereby, the effectiveness of the investigation . Research has shown that a faster criminal investigation can contribute to a more effective approach to crime (see among others ), and that swift action by the police can even double the number of solved cases . Technology-driven innovations, such as “lab on a chip” and the miniaturization of computers, have radically changed the possibilities for forensic implementation . Promising technologies became available, enabling rapid DNA analyses outside of the classical laboratory environment [ , , ]. These innovations are in line with the great need articulated by the entire criminal justice chain (from crime scene to court) for rapid DNA results, and the desirability of laboratory analyses at the crime scene . The currently available rapid (mobile ( The ANDE is sold as being a mobile solution. The RapidHit and RapidHit ID are retailed as equipment for fixed locations (e.g., booking stations) )) DNA technologies, the ANDE, Rapid ID, and RapidHIT, are less sensitive and robust than DNA analysis performed via the regular procedures in laboratories . Rapid techniques are therefore prone to produce incomplete results when low DNA concentrations are present, and they are less suitable for analyzing complex mixture profiles [ , , ], which is the trade-off for the advantages of speed and mobility. Rapid DNA devices are successfully being used in practice for reference buccal samples and disaster victim identification samples, which are cell-rich sources . The techniques have also been applied to less cell-rich samples, such as actual crime scene traces, with varying results [ , , , ]. For now, these technologies are presumed to be mainly suitable for crime scene traces with a high probability of yielding a full DNA profile, namely blood and saliva traces presumably originating from a single donor . The effects of implementing rapid DNA technologies in the crime scene investigation procedure have only been evaluated to a limited extent. The impact of implementing rapid technologies at the crime scene has been studied on mock crime scenes with fictive rapid analysis tools .
These studies show that the implementation of rapid identification techniques, such as tools for the rapid analysis and comparison of DNA and fingermarks, can be efficient and effective in investigative practice, both for the rapid identification of offenders and for the quality of the scenario reconstruction . In addition, the European Network of Forensic Science Institutes (ENFSI) and the Scientific Working Group on DNA Analysis Methods (SWGDAM) have set up additional requirements that should be taken into account before rapid (mobile) DNA technologies can be used for crime scene traces, emphasizing that the use of rapid technologies with crime scene traces should be handled with caution . Given the desire and the available means for rapid DNA procedures, several large projects (Snelle-ID lijn, snelle DNA-straat, LocalDNA [ , , ]) were initiated in the Netherlands to investigate the effect of different (mobile) rapid procedures and to gain more insight into the results, application possibilities, and impact of rapid information at the start of the investigative process, compared to the regular procedure. One of these projects is LocalDNA. In this project, we set up a field experiment in which real crime cases and crime scene traces followed either a rapid DNA procedure outside of the laboratory (the decentral procedure) or the regular DNA analysis procedure at the forensic laboratory. These cases were followed from the start of the crime scene investigation until the apprehension of the suspect. This is one of the first studies to investigate the impact of a rapid DNA procedure compared to a regular DNA procedure. More precisely, it is the first study to investigate the influence of the RapidHIT200 on blood and/or saliva traces secured at a crime scene and the impact of these rapid DNA results on the investigative process.
2.1. Rapid DNA Device In this study, we used the RapidHit R-DNA-DB08 direct PCR analysis device ( The RapidHit used in this study is now an obsolete device and is no longer manufactured (in this form) by Thermofisher Scientific. In this study, this device was used to test the principle of a decentralised procedure ) (Thermofisher Scientific, n.d., Waltham, MA, USA; Holland & Wendt, 2015) to perform the DNA analysis in the decentral rapid DNA procedure. This device is capable of analyzing 24 DNA markers and is suitable for analyzing 5 biological samples per run ( The cartridge of the equipment consists of 8 lanes with 3 lanes used for negative, positive and blank control samples and 5 lanes left for samples ). The device is fully automatic and able to obtain raw DNA data from samples within 2 to 3 h that can be processed, interpreted, and compared to the DNA database. The RapidHit was purchased by the National Criminal Investigation Service ( Dienst Landelijke Recherche ) of the Dutch National Police and is located in a vehicle, making it possible to use it in a mobile or decentral (outside of the laboratory) setting. The decentralized process of rapid DNA analysis is validated and accredited for blood and saliva samples . 2.2. Design This study aimed to monitor 50 cases following the decentral rapid DNA procedure and, in parallel, 50 similar cases following the regular DNA procedure. The selected cases were analyzed with the aid of an extensive analysis model consisting of over 800 variables covering general case information, the timeline, enrolled capacity, quality of the investigation and the traces, and detectives’ experience with the different DNA procedures. Variables on the duration and quality of the investigative process were analyzed to investigate the impact of the two procedures on the criminal investigation process. In this paper, we focus on the duration of the criminal investigation process and the quality of the DNA analysis results. To compare results obtained with a decentral rapid DNA analysis procedure and the traditional procedure (quality control), all DNA traces in the field experiment were sampled with a splitable swab ( Copan’s splitable 4N6 FLOQ Swabs Genetics was validated for DNA profiling using the RapidHit and by regular DNA profiling at the NFI ). This splitable swab ensures that the trace material is sampled once and then split: one half of the swab was analyzed with the rapid DNA technology and the second half of the swab followed the regular DNA procedure at the Netherlands Forensic Institute (NFI). Forensic investigators were trained to sample with a rotary motion, in an attempt to obtain a homogeneous distribution of the trace on the swab. The swab was split in a controlled environment by a trained lab technician. A schematic overview of the design of the study is shown in . 2.3. Inclusion Criteria Cases and Traces In the period of November 2020–July 2021, both serious crimes (e.g., homicides, robberies, violent crimes) and volume crimes (e.g., property crime, vandalism) committed in the two participating police regions (Police region Amsterdam and police region Midden-Nederland ) were eligible for this field experiment. 
Inclusion criteria encompassed the following: (1) the crime scene investigation was conducted by a forensic investigator, (2) assumed blood and/or saliva traces presumably from one donor were present at the scene, and (3) a public prosecutor had given permission ( In the Netherlands, the (forensic) prosecutor is formally in charge of conducting the investigation and formally orders (follow-up) investigations ). For practical reasons, the field experiment was (mainly) deployed on weekdays during office hours. In the period of December 2018–November 2019, a total of 50 serious and volume crime cases that followed the regular DNA analysis procedure but would have met the deployment criteria for the decentral rapid procedure were selected and analyzed retrospectively. To determine whether a case would have qualified for the decentral rapid DNA procedure, interviews were conducted with the forensic investigators and the investigation leader. The cases that met the criteria and would have been selected for the decentral rapid DNA procedure were included in the study as a comparison group.

2.3.1. Exception Trace Sampling

Blood and saliva traces were sampled with the previously described splitable swab method. For cigarette butts (saliva traces), we used a different procedure. In regular DNA testing at the NFI, (part of) the cigarette butt is examined. Cigarette butts contain substances derived from tobacco and its burning; therefore, purification steps are taken to remove these substances. In the direct PCR analysis with the RapidHit, no purification steps are performed; this technique is therefore not suitable for examining cigarette butts (and other samples with similar inhibitory substances). Consequently, cigarette butts were not examined directly but swabbed with a regular cotton swab, which was analyzed in the RapidHit, thereby attempting to reduce the amount of inhibitors in the sample. For analysis in the regular procedure, the cigarette butt was sent to the laboratory, where part of the paper wrapping of the filter was sampled and analyzed as a control, rather than half of a splitable swab as with the other samples.

2.3.2. Trace Result Categorization

To analyze and compare the trace results of the decentral rapid DNA procedure and the regular procedure, the trace results were divided into 4 categories: (1) ‘good DNA profile’: a DNA profile suitable for admission to the DNA database ; (2) ‘profile suitable for one-time comparison’: a DNA profile too complex for admission to the DNA database but suitable for comparison against database reference profiles through SmartRank ; (3) ‘profile suitable for comparison within a case’: a complex DNA profile that is nevertheless informative enough to compare to reference profiles within a case; and (4) ‘no profile/unsuitable for comparison’. Note that, based on the validation of the system, DNA profiles obtained through the RapidHit analyses needed to be single source to be considered for comparison; mixed DNA profiles were considered unsuitable for comparison. In the regular procedure, mixed DNA profiles were considered for comparison.
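To make this routing explicit, the following is a minimal sketch of the categorization logic (the function and field names are our own illustrative choices, not part of the study protocol), including the single-source restriction that applied to RapidHit profiles:

```python
from dataclasses import dataclass

@dataclass
class TraceProfile:
    # Hypothetical summary of a profile's properties (names are ours).
    has_profile: bool           # any interpretable profile was obtained
    database_suitable: bool     # meets criteria for DNA database admission
    smartrank_comparable: bool  # suitable for a one-time SmartRank comparison
    case_comparable: bool       # informative enough for within-case comparison
    single_source: bool         # not a mixed profile
    rapid_procedure: bool       # analyzed with the RapidHit

def categorize(p: TraceProfile) -> str:
    """Route a trace result into the four categories used in this study."""
    # RapidHit profiles had to be single source to be considered at all;
    # mixed rapid profiles were treated as unsuitable for comparison.
    if not p.has_profile or (p.rapid_procedure and not p.single_source):
        return "no profile/unsuitable for comparison"
    if p.database_suitable:
        return "good DNA profile"
    if p.smartrank_comparable:
        return "profile suitable for one-time comparison"
    if p.case_comparable:
        return "profile suitable for comparison within a case"
    return "no profile/unsuitable for comparison"
```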
2.4. Procedures

For this study a new decentral rapid DNA working process was set up and evaluated during the study. Cases following the decentral rapid DNA procedure followed the steps listed below. For the regular DNA procedure, a simplified version of the procedural steps is also outlined below.

2.4.1. Decentral Rapid DNA Procedure

1. Forensic investigators arrive at a crime scene at which potentially suitable traces (blood and/or saliva) for the decentral rapid DNA procedure are present. If such traces are found, the field experiment-coordinator is notified.
2. Collectively, the suitability of the traces is determined based on the criteria: blood and/or saliva traces with a high probability of obtaining a full DNA profile from one donor.
3. Permission for the decentral rapid DNA procedure is requested from the prosecutor, based on the case and proposed selection of traces.
4. Upon agreement, the field experiment-coordinator notifies the forensic lab technicians of the police region concerned, the deployment coordinator of the rapid DNA device of the national police unit, and a DNA expert of the NFI.
5. The rapid DNA device is moved to a suitable location (e.g., investigation site or police station).
6. The forensic investigator samples suitable traces with the splitable swab and photographs the samples and traces.
7. The samples are handed over to the lab technicians of the relevant unit, who split the swab and hand over one half of each sample to the lab technician of the national unit, who enters the sample into the rapid device and starts the run ( Due to the accreditation criteria, only lab technicians of the national police unit were allowed to enter samples in the RapidHit device ). The second half of the swab is analyzed at a later time, following the regular procedure (see Section 2.4.2).
8. Upon completion of the analysis, the data generated by the RapidHit is transmitted via a secure data-connection to the NFI.
9. At the NFI, analysts that are trained at analyzing the RapidHit system electropherogram data analyze the obtained DNA data. The resulting DNA profiles are interpreted by a DNA expert who also performs the comparison of the profiles (if applicable) with profiles within the case. The expert also initiates a DNA database search when the profiles meet the database search criteria.
10. The results are first reported by telephone to the field experiment-coordinator, who informs the forensic investigators and forensic prosecutor. Within 24 h an official DNA expert report is sent by e-mail to the forensic prosecutor, field experiment-coordinator, and other usual recipients of DNA reports.

2.4.2. Regular DNA Procedure (Simplified)

1. Forensic investigators arrive at a crime scene.
2. The forensic investigator samples and photographs the samples and traces.
3. If necessary, DNA sampling/pre-examination is carried out in the police laboratory by lab technicians.
4. The collected samples are prioritized based on potential success rates and crime relatedness.
5. Permission for DNA analysis is requested from the forensic prosecutor based on the case and proposed selection of traces.
6. The samples are sent to a forensic laboratory, where the DNA samples are isolated, quantified, and amplified. Material (DNA extract) is separated and stored for possible future contra-analysis.
7. The DNA experts analyze the obtained samples, interpret the DNA profiles, and compare the profiles (if applicable) with profiles within the case and with profiles stored in the DNA database for criminal cases.
8. The results are reported in an official DNA expert report by e-mail to the forensic prosecutor and other usual recipients of DNA reports.
Fifty cases that followed the regular procedure, encompassing 37 serious crime cases and 13 volume crime cases, and 47 cases ( The goal was to investigate 50 cases. During the study period, there was limited availability of the mobile DNA device. As a result, only 47 cases were investigated ) that followed the decentral rapid procedure, encompassing 16 serious crime cases and 31 volume crime cases, were monitored. The impact of the procedure used on the investigative process is presented in two sections: (1) the impact on the duration of the investigative process and (2) the impact on the quality of the trace results.

3.1. Duration of the Investigative Process

3.1.1. Turnaround Times Decentral Rapid DNA Procedure

The turnaround time from the notification of a crime until DNA results were reported to all parties of the case in the decentral rapid DNA procedure (n = 47) averaged 46 h. The average time between reporting the crime and investigating the crime scene was 5.5 h. The crime scene investigation took an average of 1 h. The time between the start of the crime scene investigation and requesting the rapid DNA procedure averaged 7.5 h. The data suggest that these 7.5 h of ‘time loss’ were mainly related to the timing of the crime report to the police. Many crimes take place in the evenings and on weekends, while for this field experiment the rapid DNA technology was (mostly) deployable on weekdays between 8 a.m. and 5 p.m. As a result, a relatively large amount of time was lost, not only between the crime scene investigation and requesting the RapidHit, but especially in the period between the deployment of the rapid DNA technology and the start of the DNA analysis on location, which averaged 28 h. Only in 26% (12 out of 47) of the cases could the rapid DNA procedure start on the same day as the crime scene investigation. In 36% of the cases (17 out of 47), the procedure was performed 1 day after the incident, and in 38% of the cases (18 out of 47), the rapid DNA analysis was performed 2 or more days after the crime scene investigation. Generating the DNA profiles with the RapidHIT took an average of 2 to 2.5 h, after which the NFI reported the results back to the investigation leaders within, on average, 1.5 h. Communicating the results between the forensic prosecutor, crime scene investigators, and apprehension team took an average of 2 h. Notably, communication with the forensic prosecutor and crime scene investigators was relatively quick (after an average of 20 and 35 min, respectively), whereas results were communicated to the teams responsible for the apprehension after an average of 3 h. This average is inflated because, in some cases, the results were not communicated to the apprehension team until several days after the report.
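As a rough consistency check, the reported stage averages can be summed (a sketch using the figures quoted above; note that averages over partly different case subsets need not add up exactly to the overall average):

```python
# Average stage durations (hours) for the decentral rapid procedure,
# as reported above. Stage averages need not sum exactly to the ~46 h
# overall average, because the underlying case counts differ per stage.
stage_hours = {
    "crime report -> crime scene investigation": 5.5,
    "start of CSI -> rapid procedure requested": 7.5,
    "deployment -> start of analysis on location": 28.0,
    "RapidHit run (profile generation)": 2.5,
    "NFI interpretation and report back": 1.5,
    "communication to all parties": 2.0,
}

for stage, hours in stage_hours.items():
    print(f"{stage}: {hours:g} h")
print(f"sum of stage averages: {sum(stage_hours.values()):g} h "
      f"(reported overall turnaround: ~46 h)")
```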
3.1.2. Duration Investigative Process Decentral Rapid DNA Procedure vs. Regular Procedure

In order to understand the potential impact of the decentralized rapid DNA procedure, in comparison to the regular DNA procedure, on the duration of the investigation process and on the identification of suspects, we analyzed all cases in which an identification occurred as a result of a comparison with the DNA database and forensic investigators had a leading role in the identification of a suspect (regular DNA procedure, 11 of the 37 cases; rapid DNA procedure, 19 of 36 cases). ( Throughout this paper, we refer to ‘identification of a suspect’ to summarize the results of DNA comparison (through the DNA database or with reference profiles in a case) and likelihood ratio calculations supporting the presence of DNA of an individual in the sample. Whether or not this resulted in the identification of the person of interest, and whether this person was considered a suspect, are legal matters that are outside the scope of this paper. In the rapid procedure, a donor was identified through a search in the DNA database in 22 cases; in 3 of these 22 cases, the apprehension team identified the suspect prior to the DNA database result, so these 3 cases were excluded from this analysis. ) In these cases, the rapid DNA technology can provide a potential acceleration of the investigative procedure. The date and time of the different stages in the investigative procedure were recorded and used for this analysis, namely: report of the crime, start of the crime scene investigation, RapidHIT deployment, prioritization of traces, sending traces/data to the NFI, DNA report, and identification and apprehension or signaling of the suspect. As mentioned previously, the time to identify a person via the decentral rapid DNA procedure (from the start of the crime scene investigation to the identification of the person as a result of a DNA database match) averaged 46 h, i.e., about 2 days (n = 19). The duration between the date of identification and the apprehension or signaling of a suspect averaged 20 days (median 4 days) in the decentral rapid procedure. In five of the nineteen cases (26%), the suspect was apprehended within two days after the identification. In the regular procedure (n = 11), the time (in days) to identify a person (from the start of the crime scene investigation to identification) averaged 66 days (median 49 days). After the crime scene investigation, it took on average 29 days (median 29 days) before traces were selected and prioritized by the police, after which it took on average another 16 days (median 4 days) before the traces were sent to the laboratory for DNA analysis. After arrival, traces were booked in (average 2 days; median 1 day), interpreted, and reported back within an average of 19 days (median 15 days). The average time between the date of identification and the apprehension or signaling of a suspect was 126 days (median 73 days) in cases following the regular procedure (n = 11). A more detailed timeline with the medians and quartiles can be found in . There was a significant acceleration of the investigative process from the report of a crime until the apprehension or signaling of a suspect in the decentral rapid DNA procedure compared to the regular procedure (t(28) = 3.750, p = 0.001). An in-depth analysis of the data shows that there was a significant acceleration between the two procedures in the following steps of the process: ‘sending traces/data to the NFI’ (t(28) = 3.181, p = 0.004); ‘DNA report/identification’ (t(28) = 2.275, p = 0.032); and ‘apprehension or signaling suspect’ (t(28) = 5.609, p < 0.001). No significant acceleration between the two groups was seen in the duration of the ‘crime scene investigation’ (t(28) = 1.514, p = 0.159) and the ‘registration of traces at NFI’ (t(28) = 1.092, p = 0.284).
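The independent-samples t-tests reported here can be reproduced in outline as follows (a minimal sketch; the duration values below are placeholders, not the study data, and the degrees of freedom follow from the group sizes, 19 + 11 - 2 = 28):

```python
from scipy import stats

# Placeholder per-case durations (days) from crime report to apprehension
# or signaling of a suspect; NOT the actual study data.
rapid_days = [2, 3, 5, 10, 4, 6, 21, 8, 3, 7, 12, 5, 4, 9, 30, 6, 2, 11, 8]  # n = 19
regular_days = [90, 120, 200, 75, 150, 60, 300, 110, 95, 180, 130]           # n = 11

# Independent-samples t-test, as used for the stage-by-stage comparisons.
t_stat, p_value = stats.ttest_ind(regular_days, rapid_days)
df = len(rapid_days) + len(regular_days) - 2
print(f"t({df}) = {t_stat:.3f}, p = {p_value:.3f}")
```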
3.2. Quality of the Trace Results

In the 47 cases where the RapidHit was deployed, a total of 97 blood and 38 saliva traces were sampled with the splitable swab. Blood and saliva traces analyzed with the decentral rapid DNA procedure provided (non-complex) single DNA profiles for 65% of the blood traces (63 out of 97) and 26% of the saliva traces (10 out of 38).

3.2.1. Identifications

The trace results yielded at least one usable DNA profile in 37 of the 47 cases (79%) analyzed with the decentral rapid DNA procedure, which led to the identification of a potential donor of the trace in 25 of the 47 cases (53%) ( Previously, in the section ‘Duration investigative process decentral rapid DNA procedure vs. regular procedure’, 19 identifications were discussed in which forensic investigators had the leading role in identifying suspects; here, all identifications obtained with the RapidHit are discussed ). In 22 of the 25 cases, a potential donor was identified through a search in the national DNA database for criminal cases; in 19 of these 22 cases this concerned a blood trace and in the remaining 3 cases a saliva trace. In 3 of the 25 cases, a match with a trace of known origin, a saliva reference sample, was found. Of the 25 identifications, 28% (7/25) occurred in serious crime cases (5 through the DNA database, 2 through a ‘reference’) and 72% (18/25) in volume crime cases (17 through the DNA database and 1 through a ‘reference’). The same traces analyzed with the regular procedure (the second half of the splitable swab) provided (non-complex) single DNA profiles for 92% (89/97) of the blood traces and 68% (26/38) of the saliva traces, giving at least one usable DNA profile in 45 of the 47 cases (96%) and leading to identifications in an additional 19% of cases (9 of 47; 4 blood traces, 5 saliva traces). In addition, there was one case in which the rapid procedure yielded a DNA profile suitable for a one-time manual comparison with individuals in the DNA database, but this comparison did not yield a match, whereas the (more sensitive) laboratory analysis of this trace did yield an identification through the DNA database. Subsequent analysis showed that the rapid DNA analysis had generated a profile with only a few markers; a DNA database match with such a profile would have had low probative value and therefore did not result in an identification.

3.2.2. Quality of the Generated DNA Profiles

With the rapid procedure, a good DNA profile (suitable for admission to the DNA database) was generated for 45% of the blood traces (44/97), while the regular procedure resulted in a good profile for 95% (92 of 97) of the blood traces. Saliva traces (n = 38) were divided into three subcategories: saliva, cigarette, and reference buccal swab samples. For saliva traces, a good DNA profile was produced for 8% (2/25) of the traces with the rapid procedure versus 56% (14/25) with the regular procedure. Swabs from seven cigarette butts were examined with the rapid procedure, none of which resulted in a good DNA profile ( During the field experiment, based on experience, stricter criteria were implemented for selecting saliva traces than for blood traces. Cases with a single cigarette butt (saliva trace) were no longer eligible for analysis with the rapid DNA equipment, and samples from face masks also proved unsuitable for deployment of the DNA analysis equipment ). The regular DNA analysis, which consisted of analyzing the cigarette butt’s filter paper in the extraction, resulted in a good profile for four butts. Four of the six reference buccal swabs resulted in good DNA profiles in the decentral rapid procedure vs. five in the regular procedure.
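For reference, the good-profile counts reported above can be summarized programmatically (a small recap sketch; the counts are those stated in the text):

```python
# Good-profile counts (good, total) per trace type, as reported above.
good_profiles = {
    "blood":            {"rapid": (44, 97), "regular": (92, 97)},
    "saliva":           {"rapid": (2, 25),  "regular": (14, 25)},
    "cigarette butts":  {"rapid": (0, 7),   "regular": (4, 7)},
    "buccal reference": {"rapid": (4, 6),   "regular": (5, 6)},
}

for trace_type, procedures in good_profiles.items():
    summary = ", ".join(
        f"{name}: {good}/{total} ({100 * good / total:.0f}%)"
        for name, (good, total) in procedures.items()
    )
    print(f"{trace_type}: {summary}")
```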
In , the results of the DNA profiles obtained with the decentral rapid DNA procedure are visualized and compared with the DNA profiles obtained by the regular procedure from the same samples. The percentage of ‘good profiles’ was significantly ( p < 0.01) higher with the regular DNA examination than with the rapid procedure for all types of traces, except the reference buccal swabs. Analysis of the DNA markers showed that the quality of the DNA profiles obtained with the rapid procedure is structurally lower than that of the DNA profiles obtained by the regular procedure: predominantly lower peak heights (low-template DNA profiles) were observed for 43% of the traces analyzed with the rapid procedure (58/135) versus 11% in the regular procedure (15/135). For 62% of the traces analyzed with the rapid procedure (83/135), imbalance between the peaks and stochastic effects, such as allele and locus drop-out, occurred, versus 17% in the regular procedure (23/135). Artifacts such as broadly spaced peaks, asymmetric peaks, signal pull-up, and distorted baselines were also often visible in the profiles obtained with the rapid procedure.

3.2.3. Sensitivity RapidHit

The sensitivity threshold of the RapidHit for deriving a full DNA profile is set by the manufacturer, ThermoFisher, at 0.25 μL of blood on a cotton swab . Blood contains 0.020–0.040 μg of DNA per μL , meaning that the stated threshold of 0.25 μL of blood corresponds to 5–10 ng of DNA. The RapidHit does not measure DNA quantity; therefore, the DNA quantity of the samples analyzed by the rapid procedure is unknown. However, the amount of DNA in the laboratory samples was quantified in the regular procedure. Evidently, the quantities on the two halves of the splitable swab cannot be assumed to be exactly the same; yet, due to the swabbing technique used, it can be assumed that the quantities are comparable to some extent. For this analysis, it is assumed that the amount of DNA in the swab half analyzed with the RapidHit is equal to the amount of DNA in the other half analyzed in the laboratory. From a DNA quantity of 75.3 ng upwards, mainly ‘good DNA profiles’ were observed for blood traces (87% of these blood samples). For saliva traces, 46% of the samples resulted in a ‘good DNA profile’ when the sample contained at least 96.6 ng of DNA. With lower quantities, usable DNA profiles were obtained only sporadically. The lowest amount of DNA from which the RapidHit could derive a DNA profile usable for comparison with the DNA database was 2.2 ng for blood traces and 33.9 ng for saliva traces. The distribution of the DNA quantities of the blood (n = 97) and saliva (n = 38) traces from the laboratory results, linked to the profiles generated by the RapidHit, is shown in and in the . Based on ThermoFisher’s stated threshold for deriving a full DNA profile with 5–10 ng of DNA (0.25 μL of blood applied to a swab), we would have expected the DNA samples in our study to have yielded a ‘good’ DNA profile more often.
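The threshold arithmetic and the gap to the quantities observed in this study can be reproduced with a short back-of-the-envelope calculation (a sketch of the numbers above, not code used in the study):

```python
# Blood contains roughly 0.020-0.040 ug DNA per uL, i.e., 20-40 ng/uL.
DNA_NG_PER_UL = (20.0, 40.0)
THRESHOLD_UL = 0.25  # manufacturer's stated blood-volume threshold

low_ng, high_ng = (c * THRESHOLD_UL for c in DNA_NG_PER_UL)
print(f"0.25 uL of blood corresponds to {low_ng:.0f}-{high_ng:.0f} ng of DNA")

# Quantities from which 'good' profiles were mostly obtained in this study
# (assuming equal DNA on both swab halves), versus the stated 10 ng maximum.
for trace_type, observed_ng in [("blood", 75.3), ("saliva", 96.6)]:
    print(f"{trace_type}: ~{observed_ng} ng needed, "
          f"~{observed_ng / high_ng:.1f}x the stated upper threshold")
```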
3.1.1. Turnaround Times Decentral Rapid DNA Procedure The turnaround time from the notification of a crime until DNA results were reported to all parties of the case in the decentral rapid DNA procedure (n = 47) averaged 46 h. The average time between reporting the crime and investigating the crime scene was 5.5 h. The crime scene investigation took an average of 1 h. The time between the start of the crime scene investigation and requesting the rapid DNA procedure averaged 7.5 h. The data suggests that these 7.5 h of ‘time loss’ were mainly related to the moment a crime is reported to the police. Many crimes take place in the evenings and on weekends. For this field experiment, the rapid DNA technology was (mostly) deployable on weekdays between 8 a.m. and 5 p.m. As a result, a relatively large amount of time was lost, not only between the crime scene investigation and requesting the RapidHit, but especially in the period between the implementation of the rapid DNA technology and the start of this DNA analysis on location, which averaged 28 h. Only in 26% (12 out of 47) of the cases could the rapid DNA procedure start on the same day as the crime scene investigation. In 36% of the cases (17 out of 47) the procedure was performed 1 day after the incident and in 38% of the cases (18 out of 47) the rapid DNA analysis was performed 2 or more days after the crime scene investigation. Generating the DNA profiles with the RapidHIT took an average of 2 to 2.5 h, after which the NFI communicated the results back within, on average, 1.5 h to the investigation leaders. Communicating the results between the forensic prosecutor, crime scene investigators, and apprehension team took an average of 2 h. Notable here is that communication with the forensic prosecutor and crime scene investigators was relatively quick (respectively, after an average of 20 and 35 min) however results were communicated to the teams responsible for the apprehension after an average 3 h. This average time is slightly higher as in some cases the results were not communicated to the apprehension team until several days after the reporting. 3.1.2. Duration Investigative Process Decentral Rapid DNA Procedure vs. Regular Procedure In order to understand the potential impact of the decentralized rapid DNA procedure on the duration of the investigation process and on the identification of suspects ( Throughout this paper we refer to ‘identification of suspect’ to summarize the results of DNA comparison (through the DNA database or with reference profiles in a case) and likelihood ratio calculations supporting the presence of DNA of an individual in the sample. Whether or not this resulted identification of the person of interest, and this person being considered a suspect are legal matters that are outside of the scope of this paper ) in comparison to the regular DNA procedure, all cases where an identification occurred as a result of a comparison with the DNA database and forensic investigators had a leading role in the identification of a suspects (regular DNA procedure, 11 of the 37 cases; rapid DNA procedure, 19 of 36 cases ( In 22 cases a donor was identified through a search in the DNA database. In 3/22 cases the apprehension team identified the suspect prior to the DNA database result. Therefore these 3 cases were excluded for this analysis.) ) were analyzed. In these cases, the rapid DNA technology can provide a potential acceleration in the investigative procedure. 
The date and time of different stages in the investigative procedure were recorded and used for this analysis, namely: report of the crime, start of the crime scene investigation, RapidHIT deployment, prioritization of traces, sending traces/data to the NFI, DNA report, and identification and apprehension or signaling of the suspect. As mentioned previously, the average time to identify a person via the decentral rapid DNA procedure (from the start of the crime scene investigation to the identification of the person as a result of a DNA database match) averaged 46 h ≈ 2 days (n = 19). Duration between the date of identification and the apprehension or signaling of a suspect averaged 20 days (median 4 days) in the decentral rapid procedure. In five out of the nineteen cases (26%), the suspect was apprehended within two days after the identification. In the regular procedure (n = 11), the time (in days) to identify a person (from the start of the crime scene investigation to identification) averaged 66 days (median 49 days). After crime scene investigation, it took on average 29 days (median 29 days) before traces were selected and prioritized by the police, after which it took on average another 16 days (median 4 days) before the traces were sent to the laboratory for DNA analysis. After arrival, traces were booked in (average 2 days; median 1 day), interpreted, and reported back within an average of 19 days (median 15 days). The average time between the date of identification and the apprehension or signaling of a suspect averaged 126 days (median 73 days) in cases following the regular procedure (n = 11). A more detailed timeline with the median and quartiles can be found in . There is a significant acceleration in the investigative process from the report of a crime until the apprehension or signaling of a suspect in the decentral rapid DNA procedure compared to the regular procedure (t(28) = 3.750, p = 0.001). An in-depth analysis of the data shows that there was a significant acceleration between the two procedures in the following steps of the process: ‘sending traces/data to the NFI‘ (t(28) = 3.181, p = 0.004); ‘DNA report/identification’ (t(28) = 2.275, p = 0.032) and ‘apprehension or signaling suspect‘ (t(28) = 5.609, p < 0.001). No significant acceleration between the two groups was seen in the duration of the ‘crime scene investigation’ (t(28) = 1.514, p = 0.159) and the ‘registration of traces at NFI’ (t(28) = 1.092, p = 0.284).
The turnaround time from the notification of a crime until DNA results were reported to all parties of the case in the decentral rapid DNA procedure (n = 47) averaged 46 h. The average time between reporting the crime and investigating the crime scene was 5.5 h. The crime scene investigation took an average of 1 h. The time between the start of the crime scene investigation and requesting the rapid DNA procedure averaged 7.5 h. The data suggests that these 7.5 h of ‘time loss’ were mainly related to the moment a crime is reported to the police. Many crimes take place in the evenings and on weekends. For this field experiment, the rapid DNA technology was (mostly) deployable on weekdays between 8 a.m. and 5 p.m. As a result, a relatively large amount of time was lost, not only between the crime scene investigation and requesting the RapidHit, but especially in the period between the implementation of the rapid DNA technology and the start of this DNA analysis on location, which averaged 28 h. Only in 26% (12 out of 47) of the cases could the rapid DNA procedure start on the same day as the crime scene investigation. In 36% of the cases (17 out of 47) the procedure was performed 1 day after the incident and in 38% of the cases (18 out of 47) the rapid DNA analysis was performed 2 or more days after the crime scene investigation. Generating the DNA profiles with the RapidHIT took an average of 2 to 2.5 h, after which the NFI communicated the results back within, on average, 1.5 h to the investigation leaders. Communicating the results between the forensic prosecutor, crime scene investigators, and apprehension team took an average of 2 h. Notable here is that communication with the forensic prosecutor and crime scene investigators was relatively quick (respectively, after an average of 20 and 35 min) however results were communicated to the teams responsible for the apprehension after an average 3 h. This average time is slightly higher as in some cases the results were not communicated to the apprehension team until several days after the reporting.
In order to understand the potential impact of the decentralized rapid DNA procedure on the duration of the investigation process and on the identification of suspects ( Throughout this paper we refer to ‘identification of suspect’ to summarize the results of DNA comparison (through the DNA database or with reference profiles in a case) and likelihood ratio calculations supporting the presence of DNA of an individual in the sample. Whether or not this resulted identification of the person of interest, and this person being considered a suspect are legal matters that are outside of the scope of this paper ) in comparison to the regular DNA procedure, all cases where an identification occurred as a result of a comparison with the DNA database and forensic investigators had a leading role in the identification of a suspects (regular DNA procedure, 11 of the 37 cases; rapid DNA procedure, 19 of 36 cases ( In 22 cases a donor was identified through a search in the DNA database. In 3/22 cases the apprehension team identified the suspect prior to the DNA database result. Therefore these 3 cases were excluded for this analysis.) ) were analyzed. In these cases, the rapid DNA technology can provide a potential acceleration in the investigative procedure. The date and time of different stages in the investigative procedure were recorded and used for this analysis, namely: report of the crime, start of the crime scene investigation, RapidHIT deployment, prioritization of traces, sending traces/data to the NFI, DNA report, and identification and apprehension or signaling of the suspect. As mentioned previously, the average time to identify a person via the decentral rapid DNA procedure (from the start of the crime scene investigation to the identification of the person as a result of a DNA database match) averaged 46 h ≈ 2 days (n = 19). Duration between the date of identification and the apprehension or signaling of a suspect averaged 20 days (median 4 days) in the decentral rapid procedure. In five out of the nineteen cases (26%), the suspect was apprehended within two days after the identification. In the regular procedure (n = 11), the time (in days) to identify a person (from the start of the crime scene investigation to identification) averaged 66 days (median 49 days). After crime scene investigation, it took on average 29 days (median 29 days) before traces were selected and prioritized by the police, after which it took on average another 16 days (median 4 days) before the traces were sent to the laboratory for DNA analysis. After arrival, traces were booked in (average 2 days; median 1 day), interpreted, and reported back within an average of 19 days (median 15 days). The average time between the date of identification and the apprehension or signaling of a suspect averaged 126 days (median 73 days) in cases following the regular procedure (n = 11). A more detailed timeline with the median and quartiles can be found in . There is a significant acceleration in the investigative process from the report of a crime until the apprehension or signaling of a suspect in the decentral rapid DNA procedure compared to the regular procedure (t(28) = 3.750, p = 0.001). An in-depth analysis of the data shows that there was a significant acceleration between the two procedures in the following steps of the process: ‘sending traces/data to the NFI‘ (t(28) = 3.181, p = 0.004); ‘DNA report/identification’ (t(28) = 2.275, p = 0.032) and ‘apprehension or signaling suspect‘ (t(28) = 5.609, p < 0.001). 
No significant difference between the two procedures was seen in the duration of the 'crime scene investigation' (t(28) = 1.514, p = 0.159) or the 'registration of traces at the NFI' (t(28) = 1.092, p = 0.284).
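The reported degrees of freedom are consistent with an independent-samples t test on the 19 rapid and 11 regular identification cases (19 + 11 - 2 = 28). A minimal sketch of that comparison follows; the duration values are placeholders, not the study data.

```python
from scipy.stats import ttest_ind

# Placeholder total process durations in days (n = 19 rapid, n = 11 regular).
rapid_days = [12, 3, 45, 7, 20, 2, 5, 60, 4, 9, 15, 1, 8, 30, 6, 10, 22, 3, 14]
regular_days = [150, 90, 200, 75, 310, 120, 60, 180, 240, 95, 130]

# Classic two-sample t test with pooled variance gives df = 19 + 11 - 2 = 28.
t_stat, p_value = ttest_ind(rapid_days, regular_days, equal_var=True)
print(f"t(28) = {t_stat:.3f}, p = {p_value:.3f}")
```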
In the 47 cases where the RapidHit was deployed, a total of 97 blood and 38 saliva traces were sampled with the splitable swab. Blood and saliva traces analyzed with the decentral rapid DNA procedure provided (non-complex) single DNA profiles in 65% of the blood traces (63 out of 97) and in 26% of the saliva traces (10 out of 38).

3.2.1. Identifications

The trace results yielded at least one usable DNA profile in 37 of the 47 cases (79%) analyzed with the decentral rapid DNA procedure, which led to the identification of a potential donor of a trace in 25 of the 47 cases (53%) (previously, in the section 'Duration investigative process decentral rapid DNA procedure vs. regular procedure,' 19 identifications were discussed in which forensic investigators were leading in identifying suspects; here, all identifications obtained with the RapidHit are discussed). In 22/25 cases, a potential donor was identified through a search in the national DNA database for criminal cases; in 19 of these 22 cases this was a blood trace and in the remaining 3 cases a saliva trace. In 3/25 cases, a match with a trace of known origin, a saliva reference sample, was found. Of the 25 identifications, 28% (7/25) occurred in serious crime cases (5 through the DNA database, 2 through a 'reference') and 72% (18/25) in volume crime cases (17 through the DNA database and 1 through a 'reference'). The same traces analyzed with the regular procedure (the second half of the splitable swab) provided (non-complex) single DNA profiles in 92% (89/97) of the blood traces and in 68% (26/38) of the saliva traces, giving at least one usable DNA profile in 45 of the 47 cases (96%) and leading to an additional 19% of identifications (9 of 47 cases; 4 blood traces, 5 saliva traces). In addition, there was one case in which the rapid procedure yielded a DNA profile suitable for comparison that could be compared manually once with individuals in the DNA database, but this comparison did not yield a match. The (more sensitive) laboratory analysis of this trace, on the other hand, did yield an identification with the DNA database. Subsequent analysis showed that the rapid DNA analysis had generated a profile with only a few markers, giving a DNA database match with such low probative value that it was not treated as an identification.

3.2.2. Quality of the Generated DNA Profiles

With the rapid procedure, a good DNA profile (suitable for admission to the DNA database) was generated for 45% of the blood traces (44/97), while the regular procedure resulted in a good profile in 95% (92 of 97) of the blood traces. Saliva traces (n = 38) were divided into three subcategories: saliva, cigarette, and reference buccal swab samples. For saliva traces, a good DNA profile was produced for 8% (2/25) of the traces with the rapid procedure versus 56% (14/25) with the regular procedure. Swabs from seven cigarette butts were examined with the rapid procedure, none of which resulted in a good DNA profile (during the field experiment, based on experience, stricter criteria for selecting saliva traces than for blood traces were implemented: cases with a single cigarette butt (saliva trace) were no longer eligible for analysis with the rapid DNA equipment, and samples from face masks also proved unsuitable for deployment of the DNA analysis equipment). The regular DNA analysis, which included the cigarette butt's filter paper in the extraction, resulted in a good profile for four butts.
Four of the six reference buccal swabs resulted in good DNA profiles in the decentral rapid procedure versus five in the regular procedure. In , the results of the DNA profiles obtained with the decentral rapid DNA procedure are visualized and compared with the DNA profiles obtained by the regular procedure from the same samples. The percentage of 'good profiles' was significantly (p < 0.01) higher using the regular DNA examination compared with the rapid procedure for all types of traces except the reference buccal swabs. Analysis of the DNA markers showed that the quality of DNA profiles obtained with the rapid procedure is structurally lower than that of DNA profiles obtained by the regular procedure; predominantly lower peak heights (low-template DNA profiles) were observed for 43% of the traces analyzed with the rapid procedure (58/135) versus 11% in the regular procedure (15/135). For 62% of the traces analyzed with the rapid procedure (83/135), imbalance between the peaks and stochastic effects, such as allele and locus drop-out, occurred, versus 17% in the regular procedure (23/135). Artifacts were also often visible in the profiles obtained with the rapid procedure, such as broadly spaced peaks, asymmetric peaks, signal pull-up, and distorted baselines.

3.2.3. Sensitivity of the RapidHit

The sensitivity of the RapidHit to derive a full DNA profile is set by the manufacturer, ThermoFisher, at a threshold of 0.25 μL of blood on a cotton swab. Blood contains 0.020–0.040 μg DNA/μL, meaning that the stated threshold of 0.25 μL of blood corresponds to 5–10 ng of DNA. The RapidHit does not measure DNA quantity, so the DNA quantity of the samples analyzed by the rapid procedure is unknown. However, the amount of DNA in the laboratory samples was quantified in the regular procedure. Evidently, the quantities on the two parts of the splitable swab cannot be assumed to be exactly the same; yet, given the swabbing technique used, it can be assumed that the quantities are comparable to some extent. For this analysis, it is assumed that the amount of DNA in the swab half analyzed with the RapidHit equals the amount of DNA in the other half analyzed in the laboratory. From a DNA quantity of 75.3 ng upwards, 'good DNA profiles' were observed in 87% of the blood samples. For saliva traces, 46% of samples resulted in a 'good DNA profile' when the sample contained at least 96.6 ng of DNA. With lower quantities, usable DNA profiles were obtained only sporadically. The lowest amount of DNA from which the RapidHit could derive a DNA profile usable for comparison with the DNA database was 2.2 ng for blood traces and 33.9 ng for saliva traces. The distribution of the DNA quantity of blood (n = 97) and saliva traces (n = 38) from the laboratory results, linked with the profiles generated by the RapidHit, is shown in and in the . Based on ThermoFisher's stated threshold of deriving a full DNA profile from 5–10 ng of DNA (0.25 μL of blood applied to a swab), we would have expected the DNA samples in our study to have yielded a 'good' DNA profile more often.
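Because each trace was split between the two procedures, the rapid and regular outcomes form paired observations per trace. The text reports the significance (p < 0.01) without naming the test used; an exact McNemar test on the discordant pairs is one appropriate paired choice, sketched below with a hypothetical split of the discordant blood-trace counts (only the marginal totals, 63/97 versus 89/97 usable profiles, are given in the text).

```python
from scipy.stats import binomtest

# Hypothetical discordant-pair counts for blood traces, consistent with the
# reported marginals (rapid 63/97, regular 89/97) if every rapid success was
# also a regular success: regular-only = 26, rapid-only = 0.
regular_only, rapid_only = 26, 0

# Exact McNemar test: under H0 the discordant pairs split 50/50.
p_value = binomtest(min(regular_only, rapid_only),
                    regular_only + rapid_only, 0.5).pvalue
print(f"exact McNemar p = {p_value:.2g}")
```

The threshold arithmetic in the sensitivity paragraph, together with the empirical cut-offs observed here, can also be written out directly (the constant and function names below are ours):

```python
# 0.020-0.040 ug DNA per uL of blood, i.e., 20-40 ng/uL.
NG_PER_UL_BLOOD = (20.0, 40.0)

def ng_in_blood(volume_ul):
    """Expected DNA mass range (ng) in a given blood volume (uL)."""
    return tuple(volume_ul * conc for conc in NG_PER_UL_BLOOD)

print(ng_in_blood(0.25))  # -> (5.0, 10.0) ng, ThermoFisher's stated threshold

# Quantities (ng) at/above which 'good profiles' predominated in this study,
# and the lowest quantities that still gave a profile usable for comparison.
GOOD_PROFILE_CUTOFF = {"blood": 75.3, "saliva": 96.6}
MINIMUM_USABLE = {"blood": 2.2, "saliva": 33.9}
```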
In this study, the effect of a decentralized (outside the laboratory) rapid DNA technique was investigated by comparing 47 real criminal cases, in which 135 real crime scene samples were analyzed with this procedure, with 50 cases following the regular DNA procedure.

4.1. Duration of the Investigative Process

The decentral rapid DNA procedure strongly accelerated the investigative process compared with the regular investigative procedure (average 22 days vs. 192 days) in cases where a person was identified as a result of a DNA database match. This result is mainly achieved by: (1) the acceleration of the procedural steps before traces can be sent to the laboratory in the regular procedure (average 45 days) versus deployment of the RapidHit (average 2 days); (2) the acceleration of the analysis (average 2–2.5 h) and interpretation (average 1.5 h) of the DNA results in the decentral rapid DNA procedure versus DNA analysis of traces in the regular procedure (average 19 days); and (3) apprehension or signaling of a suspect averaging 20 days in the decentral rapid DNA procedure versus 126 days in cases following the regular procedure. There may be a variety of reasons why suspects are not apprehended immediately upon identification. For example, in one case following the decentral rapid DNA procedure, it was decided to wait 6 days before apprehending a suspect because more evidence needed to be collected. In another case, a DNA profile in the DNA database needed to be 'upgraded' to ensure the reliability of the identification before the suspect could be apprehended, which took 28 days. In a third case, it took a relatively long time before a suspect could be signaled because the case concerned a Prüm identification (an international comparison of DNA profiles between (EU) member countries, done in writing through mutual legal assistance requests or automated on the basis of EU legislation derived from the Prüm treaty), and the paperwork took 131 days. There were also instances when a suspect could not be apprehended immediately due to a lack of capacity, i.e., available police officers. In general, the duration of the whole investigative process appears to have been significantly reduced compared with the regular procedure. However, the low number of cases and the variability between cases call for caution in statements about the added value of the procedure for the processing time from identification to apprehension. This is partly because, in the decentral rapid DNA procedure, the field experiment leaders, crime scene investigators, forensic prosecutor, and case officers proactively shared identifying information, which affects the speed of the apprehension or signaling of a suspect. Implicitly and explicitly, priority was given to the cases following the rapid procedure.

4.2. Trace Results

In total, in 53% of the cases (25/47) a match with a donor was found using the RapidHit. A fast identification was obtained through a blood trace in 40% of the cases (19/47) and through a saliva trace in 13% of the cases (6/47). Of these 25 identifications, 28% (7/25) were found in serious crime cases and 72% (18/25) in volume crime cases. This indicates that the decentralized rapid DNA procedure is more suitable for volume crime cases, which involve a relatively high percentage of repeat offenders whose DNA profiles are relatively often included in the DNA database.
Due to the lower sensitivity of the RapidHit compared with the DNA analysis in the regular procedure, identifying information was lost in 19% of the cases following the rapid procedure in which analysis at the laboratory did yield an identification (4 blood traces and 5 saliva traces). The RapidHit generated a good DNA profile (suitable for admission to the DNA database) in 42% of the blood traces (41/97), while the regular procedure resulted in a good profile in 94% (91 of 97) of these traces. Although these are apparently reasonable results, blood traces sacrifice a great deal in terms of DNA profile quality in the rapid DNA analysis procedure. For saliva traces, a good DNA profile was produced for 8% (2/25) of the traces with the rapid procedure versus 52% (13/25) with the regular procedure. Saliva inherently contains less DNA than blood, possibly explaining the lower results (saliva is also a more complex matrix, prone to degradation of biological material, which can also explain the lower DNA yields). In addition, the uncertainty of collecting invisible saliva traces based on contextual information, compared with directly sampling visible blood stains, could also explain the reduced number of useful DNA profiles. The results of regular DNA analysis of these saliva traces indicated that many of them were not promising traces, arguing that only large/visible saliva traces with a high success rate should be selected for analysis with the RapidHit. None of the saliva traces sampled from the cigarette butts examined with the rapid DNA analysis procedure resulted in a good DNA profile, demonstrating that this technique is unsuitable for examining sampled cigarette butts. These limited DNA results for blood and saliva traces were expected and confirm the results of the validation study performed by the NFI. The results also emphasize that the use of the RapidHit 200 for analyzing crime scene traces should be handled with caution, and that the additional requirements stated by the ENFSI, SWGDAM, and Rapid DNA task groups should be taken into account before rapid (mobile) DNA technologies can be used for crime scene traces.

4.2.1. Contamination

In one case, a noteworthy result was found. One swab derived from a cigarette butt was examined using the rapid procedure. This sample was examined in lane 1 of the cartridge; lanes 2 through 5 were left empty. When analyzing the data, it was found that lane 1 did not produce a DNA profile. However, in lane 2, a single, almost complete DNA profile of an unknown man was obtained. To find the cause of this possible contamination, the cigarette butt sample from lane 1 was subsequently submitted for regular DNA testing at the NFI. This resulted in a single DNA profile that did not match the DNA profile of the unknown man from lane 2. The DNA profile of the unknown man from lane 2 was compared with the national elimination DNA database and once with the Dutch DNA database for criminal cases. The profile was also compared with the profiles of employees of the cartridge manufacturer. None of the comparisons resulted in a match. This raises the question of how error- and contamination-prone, and how usable, this equipment is for crime scene traces, which may warrant additional research in this area.

4.2.2. Mistyping

For six DNA profiles obtained with the rapid procedure, one or two STR markers were mistyped based on the control analysis performed. All six profiles were low-template DNA profiles, for which this is a well-known phenomenon.
Mistyping can lead to a loss of probative value of possible matches or to differences in the list of possible matches after searching a DNA database. No incorrect identifications occurred due to mistyping in the field experiment cases. Nevertheless, additional research should establish the probability of incorrect individualization.

4.2.3. Multiple Donors

In four samples (two blood and two saliva), the rapid procedure resulted in a DNA profile with characteristics of one person, whereas the quality control resulted in a DNA mixture profile of two or more persons for these samples. This is caused by the difference in sensitivity between the two techniques. The main donor of the four samples was found by the rapid procedure, but the additional donors, who contributed relatively little DNA, were not detected. It can be very relevant in a case to know whether DNA from one individual or from several individuals is present in a sample. For example, in addition to the main profile of the victim, a possible offender profile may appear as a minor contributor. If rapid (mobile) DNA equipment is implemented in casework, careful consideration must be given to the potential impact of missing donors in the trace results. It is recommended that, alongside the RapidHit, traces also be analyzed with a more sensitive technique so that no information is lost.
The duration of the investigative process in cases where the decentral rapid DNA procedure was deployed was significantly reduced compared with cases where the regular procedure was used. Most of the delay in the regular process lies in the procedural steps during the police investigation, not in the DNA analysis. This highlights the importance of an effective work process and of having sufficient capacity available: rather than focusing on technological solutions, improved turnaround times can be achieved by dedicated innovations in operational procedures. This study shows, in line with the known literature, that rapid DNA techniques are less sensitive than regular DNA analysis equipment. The comparison between the RapidHit 200 and regular DNA analysis shows that saliva traces secured at a crime scene, in particular, should be selected critically with regard to their potentially limited DNA quantity (success rate) before analyzing them with a rapid DNA technology. The rapid equipment is therefore only suitable to a limited extent for the analysis of saliva traces secured at the crime scene and can mainly be used for the analysis of visible blood traces with an expected high DNA quantity from a single donor. Incorporating rapid DNA analysis equipment in real casework could be promising: this study has shown that rapid results led to multiple quick identifications of suspects, especially in volume crime cases. However, the quality of the DNA profiles generated using the RapidHit is still far from desirable compared with the results obtained by the regular procedure. Due to the lower sensitivity of the RapidHit and the inconsistent results, particularly for saliva traces, it is necessary that crime scene traces also be examined in the laboratory to prevent loss of information until more advanced equipment is available. The acceleration in the procedure is largely dependent on an efficient work process. The question remains whether the achieved results are promising enough to justify investing in further development of this procedure for the analysis of real crime scene traces with the currently available technology. Additional research is highly recommended to evaluate other equipment and sampling methods and to develop criteria for selecting crime scene traces that are suitable for the less sensitive rapid (mobile) DNA procedure. It should also be kept in mind that rapid technologies and the choice of mobile solutions are only part of the whole range of possibilities for finding the best set of methods and procedures to meet the needs of rapid and effective investigations.
Tailored versus conventional surgical debridement in complex facial lacerations in emergency department: A retrospective study
Facial lacerations (FL) of various shapes and severities are seen among patients in the emergency department (ED). The principal goal of FL management is to close the wound in order to reduce healing time and decrease the risk of infection and scarring. However, scarring may occur even when infection is prevented through wound closure. Since the face is well exposed and conspicuous, reducing scarring is vital for an optimal cosmetic appearance and patient satisfaction. Scarring and infection can be more problematic with complex facial lacerations (CFL) than with simple, superficial FL. Studies on treatment methods for scar reduction in initial CFL cases are limited, and previous studies have not considered CFL severity. Therefore, identification of an ideal closure method that accounts for CFL severity remains necessary.

In CFL treatment, debridement is more important than in simple, superficial FL during the process leading to wound closure. Surgical debridement of the wound edges is an essential step in managing most CFLs. In preparing a CFL for suture, wound edges that are appreciably damaged should be excised, converting a traumatic wound into a "clean" surgical wound. To achieve a more linear closure, removal of ragged wound edges, or of any sections of the wound that are devascularized, requires a scalpel or sharp tissue scissors. If this debridement results in a slightly gaping wound, closure tension can be relieved by undermining the edges with sharp superficial dissection to the deep fascia.

Conventional surgical debridement (CSD) proceeds as conservatively as possible, without customized designing for CFLs; thus, tension and asymmetry can occur. CSD may therefore not effectively remove all of the ragged tissue, and transforming the wound edge into a simplified, overall linear shape may be difficult. As the severity of CFL increases, effective CSD becomes very difficult. If tissue is preserved as much as possible, the possibility of retaining damaged tissue also increases. However, if debridement is excessive, it can leave gaping wounds, lead to tissue necrosis, or cause dehiscence because of excessive tension. These conditions worsen as CFL severity increases.

Given the different shapes and sizes of faces and the different shapes and severities of CFL across patients, customized pre-excisional designs (tailored surgical debridement [TSD]) should be drawn for each CFL case before surgical debridement is performed. In TSD, the area to be excised is delineated using a skin marker pen before excisional debridement is performed, to obtain the best results. TSD can effectively remove almost all damaged tissue and create clean, simplified wound edges. While applying TSD, however, excessive tension or facial asymmetry may occur; this can be resolved by applying a customized pre-excisional design such as a local flap design (LFD). Therefore, the overall prognosis may be more favorable, even as CFL severity increases.

Although the design method of debridement has been studied, few studies have compared TSD and CSD, especially according to CFL severity. Therefore, this study aimed to compare the cosmetic outcomes and complication incidence of CFLs between TSD and CSD, according to CFL severity.
2.1. Study design and patients

In this retrospective observational study, we used wound registry data collected from patients with FL who visited the ED of Chungnam National University Sejong Hospital, a university-affiliated 409-bed care referral center in Sejong, South Korea, and who underwent wound closure between August 2020 and December 2021. The Institutional Review Board of Chungnam National University Hospital approved this study (approval number: CNUSH IRB 2022-02-005). Written informed consent was obtained from all patients when registering for the wound registry, in accordance with national requirements and the principles of the Declaration of Helsinki, and was registered in a database. Patients who visited the ED with FL were included. The exclusion criteria were as follows: age < 18 years, refusal of wound registry registration, medication for chronic skin disease, open fractures at the laceration site, and degloving injuries. Patients with FL with superficial or sharp wound edges were also excluded. CFLs were classified into Grades I and II according to severity, the condition of the wound edges, and laceration shape (Table ).

2.2. Interventions

CSD aims at approximation by conservative sharp debridement. Therefore, CSD proceeded without drawing an excisional line, and debridement was performed only to the extent that approximation was possible. TSD, in contrast, aims to approximate minimally injured tissues by excising beyond the severely macerated, ragged wound edge or any partially avulsed segment of the wound edge. After bleeding was controlled, a skin marker pen (Dual Marking Pen, Ayida, Xiamen, Fujian, China) was used to draw the design according to this goal, and wound excision and incision were performed. Various types of LFD were applied in cases of excessive tension or when preserving facial anatomical symmetry or function was required. For all procedures performed in the ED, 6-0 Mersilk (Ethicon, Somerville, NJ) was used to close the cutaneous layer, 6-0 Monosyn (B. Braun, Rubi, Barcelona, Spain) was used for the subcutaneous layer, and 5-0 coated VICRYL (Ethicon, Somerville, NJ) was used for closure below the subcutaneous layer.

2.3. Outcomes evaluation

The primary outcome of this study was the comparison of long-term cosmetic outcomes between TSD and CSD, assessed using the scar cosmesis assessment and rating (SCAR) score (Table S1, Supplemental Digital Content, http://links.lww.com/MD/I825 ). In the plastic surgery outpatient clinic, SCAR scores were recorded between 6 months and 1 year after repair, and these scores were recorded on the outpatient chart and in the wound registry with photographs. The percentage of good prognoses in each group was also compared as a primary outcome; a good cosmetic outcome was defined as a SCAR score of ≤ 2. The secondary outcome was the comparison of the incidence of complications, such as asymmetry, infection, and dehiscence, between the two groups.

2.4. Analysis

Statistical analyses were performed using SPSS version 21.0 (IBM Corp., Armonk, NY) to compare the TSD and CSD groups. Nominal variables are expressed as frequencies (percentages), and the Fisher exact test was used for their analysis. Continuous variables were tested for normal distribution using the Shapiro–Wilk test.
Non-normally distributed variables are expressed as median values (interquartile ranges), whereas normally distributed variables are described as means (± standard deviations). Student t test was used for normally distributed data, whereas the nonparametric Mann–Whitney U test was used for non-normally distributed data. Statistical significance was set at P < .05.
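As a minimal sketch of the decision logic just described (normality gate, then parametric or nonparametric test, with the Fisher exact test for nominal 2 x 2 tables), the following reproduces the plan in Python; the paper used SPSS, and the significance level and helper names below are our own choices.

```python
from scipy.stats import shapiro, ttest_ind, mannwhitneyu, fisher_exact

ALPHA = 0.05

def compare_continuous(csd_values, tsd_values):
    """Student t test if both groups pass Shapiro-Wilk, else Mann-Whitney U."""
    both_normal = (shapiro(csd_values).pvalue > ALPHA
                   and shapiro(tsd_values).pvalue > ALPHA)
    if both_normal:
        _, p = ttest_ind(csd_values, tsd_values)
        return "Student t test", p
    _, p = mannwhitneyu(csd_values, tsd_values, alternative="two-sided")
    return "Mann-Whitney U", p

def compare_nominal(table_2x2):
    """Fisher exact test on a 2 x 2 frequency table [[a, b], [c, d]]."""
    _, p = fisher_exact(table_2x2)
    return p
```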
3.1. Characteristics of the enrolled patients

In total, 431 patients visited the ED for FL repair; after the initial selection, 284 patients were included in the study. Of these, 29 patients were further excluded based on the exclusion criteria, and 3 of the remaining 255 patients were lost to follow-up. Eventually, 252 patients were enrolled and analyzed, among whom 121 (48.0%) underwent CSD and 131 (52.0%) underwent TSD (Fig. ). No significant differences were noted in age, sex, incidence of hypertension and diabetes mellitus, smoking, or alcohol intake between the CSD and TSD groups (Table ). In addition, no significant differences were observed in injury-to-repair time, laceration length and depth, angle of the laceration to the relaxed skin tension line, laceration region, or laser scar therapy between the CSD and TSD groups (Table ). Although a significant difference was noted in procedure duration overall, no significant difference was noted between the CSD and TSD groups when divided by grade (Table ).

3.2. Main results

The median SCAR scores were 3 (1–5) in the CSD group and 1 (0–2) in the TSD group (P < .001; Fig. ). For Grade I patients, the median SCAR scores were 2 (0–4) in the CSD group and 1 (0–1) in the TSD group (P < .01; Fig. ). For Grade II patients, the median SCAR scores were 5 (4–6) in the CSD group and 1 (1–2) in the TSD group (P < .001; Fig. ). Regarding the parameters of the SCAR scale, scar spread, erythema, dyspigmentation, hypertrophy or atrophy, and overall impression were significantly lower in the TSD group than in the CSD group (Fig. ). Scar spread and overall impression were also significantly lower in the TSD group than in the CSD group for Grade I patients (Fig. ). Scar spread, erythema, dyspigmentation, hypertrophy or atrophy, overall impression, and itching were significantly lower in the TSD group than in the CSD group for Grade II patients (Fig. ). A good cosmetic outcome was achieved in 46.3% of patients in the CSD group and 84.0% of patients in the TSD group (P < .001; Fig. ). Among Grade I patients, a good cosmetic outcome was achieved in 59.6% of the CSD group and 85.0% of the TSD group (P < .01; Fig. ). Among Grade II patients, a good cosmetic outcome was achieved in 9.4% of the CSD group and 83.5% of the TSD group (P < .001; Fig. ). The incidence of complications was significantly lower in the TSD group than in the CSD group (P = .010; Table ). This difference concerned asymmetry only; no significant difference was noted in the incidence of infection or dehiscence (Table ). Asymmetry occurred only among Grade II patients in the CSD group. Infection occurred in 1 patient each among Grade I and Grade II patients in the CSD group. Dehiscence occurred in the same patient as the infection.
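The good-outcome comparison can be checked from the reported percentages: 46.3% of 121 CSD patients back-calculates to 56 patients, and 84.0% of 131 TSD patients to 110 (our reconstruction, not counts taken from the paper's tables). A Fisher exact test on that 2 x 2 table reproduces a p-value below .001:

```python
from scipy.stats import fisher_exact

#            good outcome, not good
table = [[56, 121 - 56],    # CSD: 56/121 = 46.3%
         [110, 131 - 110]]  # TSD: 110/131 = 84.0%

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")  # p < .001
```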
This study compared the cosmetic outcomes and complication incidence of CSD versus TSD according to CFL severity, revealing that as severity increased, TSD provided better cosmetic outcomes and fewer complications such as asymmetry (Figs. and , Table ). CFLs are nonlinear, consisting of multiple lines and sometimes satellite, macerated, or ragged wound edges. In addition, healing is impaired by devitalized and contaminated tissue (Figs. and ). CFL can therefore lead to unsightly scars and thus affect psychosocial functioning, causing increased anxiety and self-consciousness and impairing social functioning and emotional well-being (Figs. and ).

Laceration healing comprises three major phases: inflammation, proliferation, and remodeling. In the inflammatory phase, the more devitalized, nonviable, and contaminated tissue present in the wound, the worse the inflammatory response. Increased inflammation causes over-proliferation and over-differentiation of cells (such as fibroblasts and keratinocytes) at the wound site during the proliferation phase, and collagen production is increased by the excess fibroblasts. In the remodeling phase, dysregulated inflammatory mediators can cause excessive extracellular matrix synthesis with disorganized collagen bundles. Collectively, these processes result in excessive, obtrusive, and undesirable scarring. Minimizing the inflammatory response during wound healing is the simplest way to reduce scarring, and this can be achieved by effective debridement, through which devitalized and nonviable tissue, gross contaminants, and foreign bodies are removed, creating a wound edge as close as possible in cleanliness to healthy tissue.

Even if the inflammatory reaction is reduced by effective debridement, this alone is not enough. Even when the epidermis looks relatively clean, if the dermal layer of the lacerated edge has a beveled cross-section or dermal injuries, depressed or indented scars can occur, even with relatively clean wound edges and less severe raggedness (Fig. ). This becomes more prominent when the subcutaneous tissue is damaged. Therefore, these injuries must be corrected to reduce scarring: the beveled cross-section should be made perpendicular through sharp excisional debridement, and any damage to the subcutaneous tissue should also be repaired. In correcting damage through debridement, excessive tension, gaping, and facial asymmetry may occur.

In treating CFL, sufficient debridement should be performed to reduce scarring. Most surgeons and physicians mainly use CSD for CFL repair. However, CSD may not be sufficient to remove all of the ragged tissue effectively, and it may be difficult to convert the wound edge into a simplified, overall linear shape. If the shape of the CFL is complicated, debridement is limited, and wound closure is performed as conservatively as possible. If tissue is preserved as much as possible, even severely damaged tissue may be retained. For this reason, when CSD alone is performed for CFLs with nonlinear or harshly ragged edges, it is highly probable that devitalized, contaminated, and badly damaged tissue is retained.
This may increase the inflammatory response, leading to excessive wound healing, increased dermal fibrosis, disorganized collagen, disappearance of elastic fibers and appendages, and disruption of skin texture, thereby creating unsightly scars. Macerated or ragged wound edges are excised; usually, 1 to 2 mm is sufficient, although the margin can be widened depending on severity. If debridement is excessive, it can leave gaping wounds, cause tissue necrosis, or lead to dehiscence due to excessive tension, and these problems are thought to become more severe as CFL severity increases. Therefore, debridement for CFL needs to be planned before excision, using TSD based on the individual CFL case, with the goal of safer surgery and more favorable outcomes; this becomes more important as severity increases.

Given that each patient has a different face shape and a different CFL severity, it is important to tailor the pre-excisional design for debridement to each CFL case before performing surgical debridement. In TSD, the area to be excised is custom-designed before debridement is performed. Before drawing the design on the skin, surgeons should plan ahead and draw a design that can produce the best outcome, considering possible complications such as asymmetry, gaping, and excessive tension. LFD may be applied in some cases to enable laceration closure with significantly reduced tension and gaping. In this way, TSD can effectively remove almost all damaged tissue and create clean, simplified wound edges and a smooth shape (Fig. ). Therefore, the overall prognosis may be more favorable even as CFL severity increases.

In this study, we evaluated scars using SCAR scores, as this scale was created to evaluate postsurgical scars. Several scar scales, such as the Vancouver scar scale, the patient and observer scar assessment scale, the Manchester scar scale, and the Stony Brook scar evaluation scale, have been used to evaluate scar condition. Each scale has its advantages and disadvantages in assessing different scar characteristics, but no valid and reliable scale is currently available to effectively assess postsurgical scar quality. The Vancouver scar scale and the patient and observer scar assessment scale were originally developed to assess burn scars and are unsuitable for postsurgical scars. The Stony Brook scar evaluation scale lacks a subjective parameter, limiting its clinical utility. The Manchester scar scale has been criticized as better suited to linear scars and as not accounting for symptoms. Therefore, an evaluation tool that provides a reliable outcome measure for postsurgical scars is needed. The SCAR scale can be used to assess postsurgical scars in both clinical and research contexts. Its convergent validity, inter-rater reliability, and intra-rater reliability have been tested, with results showing that the scale is outstanding in terms of feasibility, validity, and reliability for postoperative scar-related outcome measurements. After a short training period, the SCAR scale can be used quickly and reliably during clinical follow-up.

In our analysis, when the cosmetic prognoses of CSD and TSD were compared using the SCAR scale, the TSD group showed a significantly better prognosis across the entire cohort than the CSD group.
Even within the grades classified according to CFL severity, the TSD group had a better prognosis than the CSD group, and the prognostic difference between the two groups was significantly larger in Grade II patients with higher severity. Moreover, a marked difference was noted in the proportion of patients with good cosmetic outcomes among Grade II patients: this proportion was higher in the TSD group than in the CSD group, and the difference was most pronounced in Grade II patients. These results indicate that TSD can produce cleaner and sharper edges with less skin tension than CSD. We found significant differences in the SCAR scale parameters between the two groups (Fig. ). Among the parameters for Grade I patients, scar spread and overall impression differed significantly between CSD and TSD. For Grade II patients, additional significant differences were noted in erythema, dyspigmentation, hypertrophy or atrophy, and itching (Fig. ). Extended scar spread results from rupture of the dermis and excessive tension. Erythema results from increased local blood flow and the vascular permeability of capillaries stimulated by inflammatory cytokines. Dyspigmentation may result from inflammatory conditions, and hypertrophic scars result from excessive proliferation of myofibroblasts and increased collagen deposition within the scar. Alongside collagen production, histamine synthesis increases and histamine-receptor responses are activated, resulting in pruritus; various other substances, such as acetylcholine, bradykinin, and proteinases, are also involved in pruritic sensations. This also means that, compared with CSD, TSD can lower the inflammatory response during wound healing and approximate the wound edge by rendering it relatively intact. This suggests that debridement is required for CFLs with damaged tissue and that TSD is more effective than CSD as CFL severity increases. Regarding complications, asymmetry showed a significant difference between the 2 groups, occurring only in Grade II patients in the CSD group. Asymmetry is caused by excessive tension during approximation and by scar contracture during wound healing. If asymmetry is likely to occur, a design that can correct it should be applied, such as the LFD used in some TSD cases; the CSD group lacked such planning. In terms of infection, no significant difference was noted between the CSD and TSD groups; infection and dehiscence occurred in the same patient. Both procedures appear to prevent infection effectively through debridement. This study has some limitations. First, it was retrospective in nature and conducted at a single center; a prospective, multicenter, multiethnic study with a larger sample size is needed to generalize our findings. Second, self-fulfilling prophecy bias was possible, as the treating physicians or surgeons were exposed to the results of TSD and CSD. In conclusion, for CFLs of higher severity, when TSD is properly applied with attention to the anatomical symmetry and function of the face, objectively good cosmetic outcomes and subjective patient satisfaction can be achieved. Although there was no difference in infection or dehiscence when CFL severity was high, CSD was more likely than TSD to lead to asymmetry.
Conceptualization: Byeong Kwon Park, Jin Hong Min. Data curation: Byeong Kwon Park, Jung Soo Park. Formal analysis: Yeon Ho You. Funding acquisition: Won Joon Jeong, Yong Chul Cho. Investigation: Se Kwang Oh, Yong Nam In. Methodology: Jin Hong Min. Project administration: Hong Joon Ahn, Chang Shin Kang. Resources: Byung Kook Lee. Supervision: Joo Hak Kim, Ho Jik Yang. Software: Heon Jong Yoo. Validation: Hyun Woo Kyung, Joo Hak Kim, Ho Jik Yang. Writing – original draft: Jin Hong Min. Writing – review & editing: Byeong Kwon Park, Jin Hong Min.
|
33rd Brazilian Society for Virology (SBV) 2022 Annual Meeting
|
deba95b8-5846-4dff-8186-3a6a8ae4d13a
|
10145839
|
Microbiology[mh]
|
For the last 33 years, the Brazilian Society of Virology (SBV) has organized annual national meetings, bringing together the best senior scientists in Brazil, renowned researchers in the field worldwide, and young virology researchers and students. Young researchers and undergraduate and graduate students are encouraged to participate actively in these meetings, making these events an important forum for discussion and inspiration, where young Brazilian virologists can learn about the most recent data and results in the field. As a national society, the SBV covers five main areas of interest (human, veterinary, plant and invertebrate, environmental, and basic virology), and the meetings are held in a different region of Brazil each year. In 2022, it was a great pleasure to hold an in-person event: after two years of online meetings, the contact between colleagues and collaborators was refreshing following a long period of tension and isolation. The 33rd SBV National Congress was held in Arraial da Ajuda, in the Porto Seguro district, Bahia state. Almost all the speakers attended in person, although some lectures were delivered remotely. The 33rd SBV National Meeting had 418 attendees, of whom 57% were students and 43% professionals, including 44 postdoctoral researchers. This year, the event ran from 17 to 21 October. A new format was applied, with the SBV supporting a parallel event, the 9th Annual International Experimental Biology and Medicine Conference (IEBMC 2022). This new format was a great success, amplified the visibility of both events, and provided an exciting forum for the attendees of both meetings. Compared with the last two SBV annual meetings, the 33rd meeting had fewer attendees in all categories. This reduction can be attributed to the high costs of an in-person meeting, including airfare, hotel reservations, and other expenses associated with reaching and staying at the event location. Brazil is a vast country, and the distances between the most important research centers and the event location can impose high travel costs. Nevertheless, even with this lower attendance, the 33rd SBV meeting can be considered a successful event in a difficult time for Brazilian science.
In 2022, the SBV meeting scientific program included eight plenary conferences, of which four were “state-of-the-art” talks, as well as two technical conferences and one special talk. In addition, the event had nine roundtables, nine oral presentation sessions shared among human, plant, veterinary, invertebrate, environmental, and basic virology, the Helio Gelli Pereira award session, and a precongress workshop, with a total of 86 speakers. Among the speakers, six were from the USA, one from the Czech Republic, two from Germany, one from Norway, one from the United Kingdom, and the remainder from Brazil. More than 21 researchers from all areas of virology, mainly from Brazil, collaborated with the event, chairing roundtables and conferences, discussing the scientific program, evaluating poster submissions and/or presentations, and selecting the best submissions to receive the Helio Gelli Pereira award. Once again, SBV acknowledges the important work of these enthusiastic virologists. A total of 332 poster abstracts were submitted for the meeting, of which 331 were presented during the event as complete reports. This year, the event had posters presented by students and professionals from Paraguay (10), Colombia (1), Chile (1), Canada (1), and the Czech Republic (1), in addition to those from Brazil. Detailed information about the 33rd SBV meeting can be found at https://sbv.org.br/event/ (accessed on 20 February 2023) and at https://sbv.org.br/files/anais_2022.pdf (accessed on 20 February 2023). 2.1. Meeting Attendants In 2022, the SBV annual meeting had a total of 418 participants, including professionals, undergraduates, and graduate students from all Brazilian regions ( ) and other countries. Of this total, professionals represented 32.8% and students of all categories 67.2% of the meeting attendees ( ). Of the students, 12% were undergraduates, 44.7% graduate students, and 10.5% postdocs. As highlighted before, high student attendance was achieved, showing that this annual meeting is succeeding in reaching young virologists from distinct parts of Brazil. Most attendees were women, representing 65.1% of all 33rd SBV meeting participants. SBV is glad to see the increasing contribution of women to the life sciences in Brazil. However, professional opportunities do not seem to follow this trend, since most of the lectures were given by men (59% men vs. 41% women). There are many competent and brilliant female researchers in virology, but this is not reflected in the meeting presentations of our virology society. This bias must be eliminated in the future. 2.2. Scientific Program During the five days of the 33rd SBV meeting, the activities started at 1 p.m. and finished at 9:30 p.m. ( ). During the morning, the conference rooms hosted IEBMC 2022, which started at 8 a.m. and finished around midday. This schedule allowed the attendees of the 33rd SBV to attend all the conferences and roundtables of the morning event as well. 2.3. Conference Speakers and Roundtable Presentations The opening conference, titled “Friend and foe: the complex interactions between dengue and Zika virus immune responses and epidemiology,” was presented by Dr. Eva Harris from the University of California, Berkeley, CA, USA. In her very enthusiastic talk, Dr. Harris gave a comprehensive overview of years of information collected from health centers in Nicaragua, showing how infections, reinfections, and cross-infections with both viruses can impact patient intake.
Before the opening ceremony, a precongress course on “Science communication and divulgation” was conducted by Dr. Laura M.A. de Oliveira and Prof. Tatiana de C.A. Pinto, both from the Universidade Federal do Rio de Janeiro, UFRJ, Brazil. On the second day of the meeting, Dr. Colleen B. Jonsson from the University of Tennessee Health Science Center, Memphis, TN, USA, presented a conference that brought together interests from both of the simultaneous congresses, titled “Disease, ecology, and evolution of hantavirus in South America.” On the same afternoon, a state-of-the-art conference on fundamental or basic virology was presented by Dr. Akira Ono from the University of Michigan, Ann Arbor, MI, USA. Dr. Ono’s research focuses on virus-cell interactions in HIV infection, and his conference was titled “The roles played by the plasma membrane components in HIV-1 assembly and beyond.” Finally, after three simultaneous oral presentation sessions, it was time for the first roundtable, called Young Inspiring Researchers. This roundtable has become a tradition of the SBV meeting since it started some years ago and has been a great success: young Brazilian researchers with emerging skills are invited to present their latest research data, which helps stimulate the careers of young virologists. This year, Dr. Flávio L. Matassoli, from NIAID/NIH, Maryland, USA, and Dr. Luciana P. Tavares, from Harvard Medical School, Boston, MA, USA, presented the talks “SARS-CoV-2 protein vaccination elicits long-lived plasma cells in Rhesus macaques” and “Pro-resolving therapies for Influenza A virus disease,” respectively. Both talks were fascinating and excited the audience. Closing the second day, a special talk, titled “Multidisciplinary research with arboviruses at the Brazilian synchrotron source,” was presented by Dr. Rafael Elias, from CNPEM, Campinas, SP, Brazil. The conference was exciting and brought the audience up to date with data from a multidisciplinary approach to understanding arboviruses. The third meeting day started with a state-of-the-art conference on environmental virology. Dr. Rodrigo F. de Bueno, from UFABC, Santo André, SP, Brazil, spoke about the contamination of wastewater with SARS-CoV-2 in Brazil in a conference titled “Wastewater-based epidemiology for SARS-CoV-2: Lessons learned from recent studies by the wastewater COVID-19 monitoring network—MCTI.” Next, there were two simultaneous roundtables. One focused on basic virology. In this roundtable, Dr. Luciana B. de Arruda, from UFRJ, RJ, Brazil, presented the latest results from her team on Zika virus IFN response activation in the talk “Activation of microvascular endothelial cells and type I IFN response in resistance and tolerance against Zika infection.” In the same roundtable, Dr. Eugênio Hottz, from UFJF, Juiz de Fora, MG, Brazil, presented the talk “Thromboinflammation in COVID-19: mechanisms and contributions to pathogenesis,” and Dr. Enrique M.B. Pierulivo, from ICB, USP, São Paulo, SP, Brazil, gave the talk “Cell transformation by human papillomaviruses: from the nucleus to the extracellular matrix and back.” The parallel roundtable focused on plant virology. In this roundtable, Dr. Juliana de Freitas Astúa, from EMBRAPA Mandioca e Fruticultura, Cruz das Almas, BA, Brazil, presented the talk “Updates on the citrus leprosis virus C-plant interaction.” Dr.
Alice Inoue-Nagata, from EMBRAPA Hortaliças, DF, Brazil, gave the talk “Critical points for a virus control strategy via application of dsRNA molecules,” and Dr. Elizabeth P.B. Fontes, from UFV, Viçosa, MG, Brazil, gave the talk “Begomoviruses NSP-host interactome: integrating developmental signals, antiviral immunity, and pro-viral functions.” Two technical conferences were presented during the event. The first, titled “Illumina genomic surveillance of infectious diseases,” was presented by Dr. Michelle G. Penna from Illumina Co. and took place after the conference given by Dr. Colleen Jonsson. The second technical conference, “xGen™ amplicon panels for metagenomics and viruses: investigative answers to your questions,” was presented by Síntese Biotecnologia Co. and took place after the two roundtables described above. The fourth day of the event began with a state-of-the-art conference on veterinary virology, in which Dr. Edviges M. Pituco, from the PAHO/PANAFTOSA OIE Reference Laboratories, Brazil, presented the talk “Updates and advances in the control of foot-and-mouth disease in Brazil.” The conference was followed by one roundtable on human virology (“Epidemiology and evolution of viruses in the context of One Health”) and another on environmental virology. Three exciting talks took place at the human virology roundtable: “SARS-CoV-2 and other respiratory viruses: from surveillance to pandemic action in the context of One Health,” presented by Dr. Edison L. Durigon, ICB USP, São Paulo, SP, Brazil; “Characterization of Ilheus virus: implications for emergence,” presented by Dr. Nikolaos Vasilakis, UTMB, Texas, USA; and “Emergence, spread, and evolution of SARS-CoV-2 lineages circulating in Brazil during the first 18 months of the pandemic,” presented by Dr. Gonzalo B. Bentacor, Fiocruz, Rio de Janeiro, RJ, Brazil. The environmental virology roundtable started with a talk by Dr. Caroline Rogotto, Univ. Feevale, Novo Hamburgo, RS, Brazil, titled “Environmental surveillance as a complementary tool for monitoring COVID-19,” followed by the talk “The influence of Escherichia coli phage vB_EcoM-UFV13 on a consortium of sulfate-reducing bacteria opens a new window to bacteriophage use,” presented by Dr. Roberto S. Dias, UFV, Viçosa, MG, Brazil. The last talk, delivered as a video conference, was presented by Dr. Gabriel M.F. Almeida, UiT the Arctic University of Norway, with the title “The forgotten tale of Brazilian phage therapy.” After the roundtables, another state-of-the-art conference, “Plant manipulation by geminiviruses,” was presented to the whole meeting audience by Dr. Rosa Lozano-Durán from the Department of Plant Biochemistry, Centre for Plant Molecular Biology (ZMBP), Eberhard Karls University, Tübingen, Germany. The last day of the meeting began with three simultaneous roundtables. One, dedicated to veterinary virology, involved Dr.
Jan Felix Drexler, Universitätsmedizin Berlin, Berlin, Germany, presenting “Challenges toward serologic diagnostics of emerging arboviruses.” The following talks were “Monkey see, monkey do: potential zoonotic viruses in nonhuman primates from Southern Brazil,” presented by Dr. Fernando R. Spilki, Feevale, Novo Hamburgo, RS, Brazil, and “Point-of-care diagnostic platforms for arboviruses,” presented by Dr. Lindomar J. Penna, Fiocruz-PE, Recife, PE, Brazil. In parallel, an invertebrate virology roundtable occurred. In this roundtable, the talks “ Spodoptera frugiperda fall armyworm virus and its biological control applications,” by Dr. Leonardo A. da Silva, AgbiTech, Goiânia, GO, Brazil, “Regulation of dengue transmission by the natural mosquito virome,” by Dr. João Trindade Marques, UFMG, Belo Horizonte, MG, Brazil, and “One bacterium in the fight against arboviruses,” by Dr. Luciano Moreira, Fiocruz, Belo Horizonte, MG, Brazil, were presented. The third roundtable of the day focused on basic virology and included the following talks: “Antagonism of nuclear antiviral responses by herpesviruses,” presented by Dr. Colin Crump, University of Cambridge, United Kingdom; “Immune responses to the efferocytosis of SARS-CoV-2-infected dying cells,” presented by Dr. Larissa D. Cunha, USP, SP, Brazil; and “Unique structural features of flaviviruses’ capsid proteins and their role in viral capsid assembly,” presented by Dr. Andrea da Poian, UFRJ, Rio de Janeiro, RJ. Dr. Felipe Naveca (Fiocruz, Manaus, AM) (human virology), Dr. Sergio de Paula (Universidade Federal de Viçosa—UFV, Viçosa, MG) (environmental virology), Dr. Juliane Deise Fleck (Universidade Feevale, Novo Hamburgo, RS) (environmental virology), Dr. Iranaia Assunção Miranda (Universidade Federal do Rio de Janeiro, Rio de Janeiro, RJ) (basic virology), Dr. Luis Lamberti Pinto da Silva (FMRP, Ribeirão Preto, SP) (basic virology), Dr. Paula Rahal (IBILCE-Universidade Estadual Paulista—UNESP, São José do Rio Preto, SP) (human virology), Dr. Eurico Arruda (Faculdade de Medicina de Ribeirão Preto, USP, SP) (human virology), Dr. Tatiana Domitrovic (UFRJ, RJ) and Dr. Daniel Ardisson-Araujo (Universidade de Brasília, Brasília, DF) (plant and invertebrate virology), Dr. Abelardo Silva Jr (UFV, Viçosa, MG) (veterinary virology), Dr. Marcelo de Lima (UFPel, Pelotas, RS) (veterinary virology), and Dr. Matheus Weber (Feevale, Novo Hamburgo, RS) (veterinary virology) chaired the conferences and roundtables. 2.4. Abstracts, Oral Presentations, and the Helio Gelli Pereira Award A total of 331 abstracts were approved for presentation at the meeting. As usual, human virology represented the majority, with 149 abstracts (46.0% of the total number of posters), followed by basic virology (90 abstracts, 27.8%), veterinary virology (50 abstracts, 15.4%), environmental virology (19 abstracts, 5.9%), and plant and invertebrate virology (16 abstracts, 4.9%) ( ). Among the 331 abstracts, 45 studies were selected by 12 senior researchers to be presented as short oral presentations. Nine oral presentation sessions were held at the 33rd SBV meeting: three for human virology (fifteen students in total), two for basic virology (five students), one for veterinary virology (five students), two for plant and invertebrate virology (ten students), and one for environmental virology (five students) ( ). The studies selected for oral presentation were performed at 18 independent institutions: 17 from three distinct Brazilian regions and one from Paraguay (Universidad Nacional de Asunción). Among Brazilian institutions, the highest number of selected studies came from the Universidade Federal do Rio de Janeiro (UFRJ), followed by the Universidade Federal de Minas Gerais (UFMG). Each oral presentation session was judged by at least two independent researchers, who evaluated each talk and selected the best presentation of the session. The winners of the best presentation in each session are highlighted with an asterisk ( ).
All of the remaining abstracts were presented as posters, which were evaluated individually by a special scientific commission. Respecting the tradition of SBV meetings, in 2022 the Helio Gelli Pereira (HGP) award was given to the best complete scientific articles produced by virology students. To participate, candidates had to apply and submit their work to a scientific committee composed of distinguished virologists from different areas. This year the commission was composed of Nikolaos Vasilakis, from the University of Texas Medical Branch at Galveston, USA; Jan Felix Drexler, from Universitätsmedizin Berlin, Germany; Juliane Deise Fleck, from Univ. Feevale, Novo Hamburgo, RS; Luis Lamberti da Silva, FMRP/USP, Ribeirão Preto, SP; Daniel M.P. Ardisson-Araujo, UnB, Brasília, DF; and José Luís Proença Módena, Unicamp, Campinas, SP. Seven articles were selected to be presented orally during the SBV meeting. During this session, named the HGP award, the committee chose the best article or presentation in each category (undergraduate and graduate students) ( ). The HGP award in the undergraduate student category was given to Luan Rocha Lima, from UFRJ, RJ, Brazil, for the work “Differential modulation of type I IFN response by distinct Zika virus isolates impacts virus replication and disease tolerance in vitro and in vivo.” The graduate student award was conferred upon Otávio Augusto Chaves, from Fiocruz, RJ, who presented the work “Commercially available flavonols are better SARS-CoV-2 inhibitors than isoflavone and flavones.” The HGP award is supported by Viruses (published by MDPI), the American Society for Microbiology (ASM), and SBV. In 2022, ASM granted one eBook and a one-year membership to the winners, and Viruses offered a full publication fee waiver to both awarded research groups. The SBV partnership with Viruses represents a significant incentive for students and their research groups, especially in Brazil, where very few institutions can support APC fees.
The 33rd SBV meeting marks the return of SBV in-person meetings. It was a great pleasure for all the attendees to finally return to the in-person format, with influential scientists in the virology area, undergraduate and graduate students, and a large group of Brazilian virology researchers together at the event. The meeting took place in a beautiful and pleasant region of Brazil, in the south of Bahia state, more precisely at Arraial da Ajuda, Porto Seguro, which has gorgeous beaches with warm waters. In this unique setting, the meeting talks and sessions saw massive participation from attendees every day, who enjoyed the high-quality science presented across all areas of virology. The cost of attending an in-person meeting was prohibitive for some potential attendees and reduced the number of participants compared with the 2020 and 2021 online events. Despite this slight reduction in attendance, the meeting was a great success. One of the positive points of the 2022 meeting was its pairing with the 9th IEBMC: sharing the venue allowed some attendees of the SBV meeting to also attend IEBMC sessions during the mornings. To encourage students to take part in both events, registration for one event permitted participation in both.
|
Integrated hepatology and addiction care for inpatients with alcohol use disorder improves outcomes: a prospective study
|
e1a7a80c-9c2c-4905-92ff-5682205bae0d
|
10145975
|
Internal Medicine[mh]
|
A major obstacle in the treatment of patients with alcohol-associated liver disease (ALD) is early diagnosis, as many patients with ALD are diagnosed late in their disease course. In addition, many patients at high risk of developing ALD are not on alcohol use disorder (AUD) pharmacotherapy. To address these gaps, our group developed a dedicated inpatient alcohol liver evaluation (ALivE) team to evaluate patients with AUD for subclinical ALD while admitted to the hospital for non–liver-related complaints. This consultation service was paired with the robust inpatient addiction consultation team, which has previously been reported to improve outcomes in AUD. A previous descriptive analysis of the ALivE service demonstrated its efficacy in identifying ALD in admitted patients and promoting engagement in outpatient care. However, prospective data evaluating the efficacy of a collaborative hepatology and addiction medicine service are lacking. This single-center, prospective study evaluated the performance of a novel multidisciplinary consultation group (ALivE plus addiction medicine) against a historical cohort of patients with AUD who received standard of care (SOC) in our hospital system. Starting in January 2020, primary inpatient medical services were given the option to consult the ALivE service (which consisted of 1 hepatologist and a nurse practitioner) for patients with AUD admitted to the hospital with no known liver disease or evidence of current liver disease. These patients were evaluated by the ALivE service and underwent fibrosis staging with elastography, discussion of the importance of viral hepatitis vaccination, and education on the deleterious effects of alcohol on liver health. All patients seen by the ALivE team were also evaluated by addiction medicine. The SOC control group consisted of patients with AUD and no known previous or active liver disease (based on chart documentation, laboratory, and imaging analysis) admitted to the hospital from January 2019 to January 2020, the year before the ALivE consultation started. These patients received the standard of care at that time, which included management by the addiction medicine consultation service alone. We approached all identified patients with AUD admitted from January 2019 to January 2020 for inclusion in the SOC group; however, only those patients who consented to enter our alcohol biorepository cohort were included, so that we could follow their clinical course without being actively involved in their care. Demographic and clinical data were extracted from patients’ electronic medical records. AUD pharmacotherapy included any 1 of 3 medications approved by the US Food and Drug Administration for AUD (naltrexone oral or intramuscular, disulfiram, and acamprosate), as well as medications used off-label for AUD (gabapentin, topiramate, or baclofen). Patients who had one of these medications prescribed during their hospitalization and/or on their hospital discharge summary were counted as treated. Early remission was defined as alcohol abstinence at the 6-month follow-up from study enrollment (Figure ), per patient report documented in the electronic medical records, not objective laboratory data. Hepatitis A and B vaccination administration was assessed through chart review. Subjects without any health care encounters after the time of enrollment, with missing data, or lost to follow-up were excluded from the remission analysis.
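As an illustration of how the "treated" definition above could be operationalized, the following minimal sketch flags patients from a medication-record table. The DataFrame layout, column names, and example rows are hypothetical and are not taken from the study's actual data pipeline; only the drug list follows the text.

```python
import pandas as pd

# Drug names per the text above; patients, rows, and columns are
# invented examples, not study data.
AUD_DRUGS = {"naltrexone", "disulfiram", "acamprosate",   # FDA-approved
             "gabapentin", "topiramate", "baclofen"}      # off-label

records = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "drug":       ["naltrexone", "lisinopril", "gabapentin", "metformin"],
    "context":    ["discharge", "inpatient", "inpatient", "discharge"],
})

# A patient counts as treated if any AUD medication appears during the
# hospitalization and/or on the discharge summary.
treated_ids = set(records.loc[records["drug"].isin(AUD_DRUGS), "patient_id"])
records["aud_treated"] = records["patient_id"].isin(treated_ids)
print(records)
```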
Continuous variables were summarized with means (SDs) and compared using an unpaired, 2-tailed t test with Welch correction, whereas categorical variables were compared using the Fisher exact test. In total, 256 patients were included in the ALivE group and 63 patients in the SOC group (Table ). Patients evaluated by the ALivE service were more likely to be Hispanic and less likely to speak English, be married, or have a concurrent substance use diagnosis compared with the SOC group. The most common reason for hospital presentation in both cohorts was alcohol intoxication or withdrawal. Patients evaluated by ALivE had higher rates of liver fibrosis screening compared with SOC (84.8% vs. 1.6%; p < 0.001), leading to the identification of F2 or greater fibrosis by noninvasive testing in 25.8% (N = 66) of the ALivE cohort. We found that patients in the ALivE cohort had higher rates of AUD pharmacotherapy prescriptions at discharge compared with the SOC group (73.4% vs. 57.1%; p = 0.012). The rate of new AUD therapy (newly prescribed during the hospitalization or at discharge) was also higher in the ALivE group than in the SOC group (67.3% vs. 37.8%, p < 0.001). With regard to remission, 189 of the 256 patients seen by the ALivE service had data available at the 6-month follow-up time point, compared with 45 of the 63 patients in the SOC group. We did not see a difference in either early remission (13.9% vs. 14.3%, p = 0.819) or partial remission (less than a 6-month period of abstinence within the 6-month follow-up period; 51.3% vs. 44.4%, p = 0.407) between the ALivE and SOC groups, respectively. As expected, given the involvement of a hepatologist in the patient’s care, the ALivE cohort had higher rates of hepatitis A (49.6% vs. 11.1%; p = 0.04) and hepatitis B (51.0% vs. 17.6%; p < 0.01) vaccination among nonimmune patients. In this prospective cohort study, we found that evaluation of patients with AUD without known liver disease by a hepatologist improved diagnostic and therapeutic care compared with the SOC. Hepatology evaluation increased liver fibrosis screening and the identification of advanced fibrosis, improved rates of AUD pharmacotherapy prescription, and improved rates of preventive hepatology care. The SOC at our hospital for patients with any substance use disorder is evaluation by the addiction service, which provides education, behavior coaching, and substance use treatment. Our findings suggest that an integrated model of addiction medicine and hepatology could further improve care for patients with AUD in a multimodal capacity. However, certain limitations must be highlighted. First, the ALivE group differed in multiple ways from the SOC group, which may have affected the observed results. Second, we had a considerable dropout rate during the 6-month follow-up period. Finally, during the period when ALivE consultation was available (January 2020–2022), many patients who qualified for an ALivE consult did not receive one. This could be related to patient or provider preference; either way, it could bias our results toward patients more willing to seek care. Despite the evaluation by addiction medicine, an integrated consult approach with hepatology resulted in higher rates of AUD pharmacotherapy at the time of discharge.
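To illustrate the statistical tests named above, the sketch below applies the Fisher exact test to counts back-calculated from the reported discharge-pharmacotherapy percentages (188/256 vs. 36/63) and Welch's t test to invented continuous values. This is an illustration only, not the authors' analysis code.

```python
from scipy import stats

# Contingency counts reconstructed from the reported percentages
# (ALivE 188/256 = 73.4%, SOC 36/63 = 57.1%); an approximation,
# not the authors' raw data.
table = [[188, 256 - 188],
         [36, 63 - 36]]
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")

# Welch's unpaired, 2-tailed t test for a continuous variable
# (invented values, e.g., age in the two groups).
group_a = [54, 47, 61, 38, 59, 50, 44, 66]
group_b = [49, 58, 41, 63, 55, 46]
t_stat, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch t test: t = {t_stat:.2f}, p = {p_welch:.3f}")
```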
This finding is particularly important, given that data show an association between AUD pharmacotherapy and reduced odds both of developing ALD in those without advanced fibrosis and of a decompensating hepatic event in those with underlying cirrhosis. The underutilization of AUD pharmacotherapy is well recognized, and our results highlight the impact that hepatology evaluation and counseling may have on the acceptance of AUD treatment.
|
Olfactory Neuroblastoma—A Challenging Fine Line between Metastasis and Hematology
|
e6cd3c2a-55ff-4a3b-b47a-a23599ba1d92
|
10146428
|
Internal Medicine[mh]
|
Tumors in the nose and paranasal sinuses are rare lesions, affecting <1 in 100,000 people per year . These tumors may have monomorphic, nonspecific symptomatology, but they require a prompt and accurate diagnosis because of their poor prognosis and evolution. While new discoveries are being made worldwide, especially regarding new methods of prevention, diagnosis, and specialized treatment, the rarity of these cases remains an obstacle to choosing the optimal management option. Olfactory neuroblastoma, also referred to as esthesioneuroblastoma, is an oncological entity of neuroectodermal origin arising in the upper part of the nasal cavity. It accounts for 2–3% of all nasal neoplasms . Its etiology and risk factors are still unknown. As with most sinonasal tumors, the symptoms are nonspecific, most commonly epistaxis, nasal obstruction, and hyposmia. A biopsy is essential for diagnosis, while CT scans and MRI images are used for staging. These tumors are locally aggressive, with a propensity to spread into the anterior skull base as well as to metastasize to the cervical lymph nodes, thorax, and bones . Cervical metastases are described in 5–8% of cases at diagnosis and in 15–25% of patients at recurrence . The management of the neck in olfactory neuroblastoma is still controversial. Today, three separate staging systems exist. Kadish et al. proposed in 1976 a staging of olfactory neuroblastoma into three groups based on the extension of the disease, which is still widely used (group A, tumor limited to the nasal fossa; group B, extension to the paranasal sinuses; group C, extension beyond the paranasal sinuses). The staging has since evolved, and a modified Kadish system was proposed with an additional group D for tumors with locoregional or distant metastases. Some institutions apply the TNM staging system of the American Joint Committee on Cancer (AJCC), based on the Dulguerov modified version of staging . There is no agreed-upon standard treatment for olfactory neuroblastoma; surgical treatment and radiotherapy are the most frequently used approaches, and chemotherapy can be used in selected cases. Open surgical approaches, such as extracranial and anterior craniofacial resection, have traditionally been preferred in the treatment of ONB. The development of endoscopic approaches over the past few decades has gained popularity and offers several advantages (better cosmetic outcomes and better visualization of deep areas within the sinonasal region), representing a valid treatment for olfactory neuroblastoma . Multimodal treatment combining surgery with radiation therapy has been shown to achieve the best survival rates; however, the infrequency of olfactory neuroblastoma and its heterogeneous clinical biology limit the possibility of creating specific treatment protocols . Recurrence may be encountered years after treatment; therefore, long-term follow-up is recommended . Multiple myeloma currently affects 250,000 people globally . According to 2022 Canadian data, the incidence is 10.1 per 100,000 men and 6.4 per 100,000 women, with an average age at diagnosis in the 6th decade for both . The diagnosis is made based on evidence of one or more of the CRAB criteria (C, hypercalcemia; R, renal insufficiency; A, anemia; B, bone lesions), with biopsy confirmation of bone marrow infiltration by 10% or more clonal plasma cells or detection of a plasmacytoma.
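Before turning to the laboratory work-up of multiple myeloma, the modified Kadish grouping summarized above can be made concrete as a simple decision rule. The sketch below is illustrative only; the function name and boolean inputs are not part of the staging systems' formal definitions.

```python
def modified_kadish_stage(paranasal_extension: bool,
                          beyond_sinuses: bool,
                          metastases: bool) -> str:
    """Modified Kadish group for olfactory neuroblastoma.

    A: tumor limited to the nasal fossa
    B: extension to the paranasal sinuses
    C: extension beyond the nasal cavity and paranasal sinuses
    D: locoregional or distant metastases (modified system)
    """
    if metastases:
        return "D"
    if beyond_sinuses:
        return "C"
    if paranasal_extension:
        return "B"
    return "A"

# e.g., a tumor limited to the nasal fossa with no metastases
# maps to group A.
print(modified_kadish_stage(False, False, False))  # "A"
```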
The laboratory diagnosis includes a variety of biochemical examinations (serum protein electrophoresis, urine protein electrophoresis, urine immunofixation, serum free light chains, and total protein), together with monitoring for end-organ damage .
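Similarly, the CRAB screen described above reduces to a rule over a few laboratory values. In the sketch below, the numeric thresholds are commonly cited IMWG-style cut-offs supplied for illustration; they are not stated in this report.

```python
def meets_crab(calcium_mmol_l: float,
               creatinine_umol_l: float,
               hemoglobin_g_dl: float,
               lytic_bone_lesions: bool) -> bool:
    """Screen for CRAB end-organ damage in suspected multiple myeloma.

    Thresholds are illustrative, commonly cited cut-offs, not values
    taken from this report.
    """
    hypercalcemia = calcium_mmol_l > 2.75           # C
    renal_insufficiency = creatinine_umol_l > 177   # R (~2 mg/dL)
    anemia = hemoglobin_g_dl < 10.0                 # A
    return (hypercalcemia or renal_insufficiency
            or anemia or lytic_bone_lesions)        # B

# Any single positive criterion, together with >=10% clonal plasma
# cells on biopsy or a proven plasmacytoma, supports the diagnosis.
print(meets_crab(2.4, 95.0, 11.2, True))  # True: lytic lesions suffice
```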
A 45-year-old patient was admitted to our department with epistaxis from the right nasal fossa. He received conservative treatment with an anterior nasal pack (Merocel) for 2 days and returned a few days later with recurrent symptoms. Endoscopic examination revealed a solitary, smooth mass in the right nasal fossa. The patient underwent CT and MRI of the head and neck prior to surgical resection; the CT scan identified a heterogeneous, natively hyperdense tissue mass that filled most of the right nasal cavity. The mass produced remodeling and bone erosion, with only partial visualization of the right inferior nasal turbinate. The native CT appearance was nonspecific, with a differential diagnosis including inverted papilloma, sinonasal polyp, and adenocarcinoma. The right maxillary sinus was filled with mixed, parafluid, and tissue densities . Chest and abdominal CT and MRI showed no evidence of distant metastasis. The patient was staged according to the Kadish system: the tumor was located in the nasal fossa, without intracranial extension or erosion of the cribriform plate. An endoscopic approach was performed. The tumor was successfully removed with negative margins, and a right maxillary antrostomy was performed. A Merocel pack was placed in the nasal fossa and removed after 48 h. To reduce the risk of infection, for all endoscopic surgeries we use antibiotic prophylaxis for 48–72 h, initiated on the day of surgery. The patient had no complications (no CSF leak, no neurologic symptoms), and no epistaxis occurred after removal of the postoperative pack. The hospital stay was less than a week. Tissue fragments were fixed in formalin and studied immunohistologically, with positive and negative controls. The tumor consisted of respiratory mucosa with polypoid thickening caused by a pseudolobular tumor proliferation developing exclusively in the lamina propria and respecting the structure of the covering epithelium. Tumor cells were relatively monomorphic, small, and weakly basophilic or weakly acidophilic, with nuclei showing irregularly or finely dispersed chromatin and inconstant nucleoli; in some cells the nucleus was central, in others eccentric. Mitoses were rare (1–2 per ×40 field), and the growth architecture was solid and trabecular in a fibrous stroma, with formation of pseudorosettes (Homer Wright rosettes), frequently in a perivascular position, encasing small and medium vessels. Immunohistochemical reactions were performed on the paraffin block. Negative immunoreactions in tumor cells were identified for CD45, AE1/AE3, CD34, SMA, desmin, and CD99, whereas BCL-2 and PGP9.5 were positive. Correlating the histological aspects with the immunohistochemical staining, the diagnosis of a neuroectodermal tumor was established, subtyped as olfactory neuroblastoma (BCL2-positive and PGP9.5-positive) of G2 differentiation . After the endoscopic removal of the tumor and confirmation of the diagnosis by the anatomopathological examination, the patient was referred for radiotherapy. He underwent 33 sessions of radiotherapy (DT = 50 Gy/fr/38 days) to the right nasal fossa. Radiotherapy was well tolerated.
Follow-up was conducted by the oncology and radiotherapy departments, where postoperative imaging of the head and neck was performed; no recurrence was identified. ENT follow-up with nasal endoscopy was scheduled every 3 months during the first year, every 6 months during the second and third years, and annually thereafter, without recurrence or symptoms. In the fifth year after the primary diagnosis, the patient complained of lumbosacral and humeral bone pain. He underwent a CT scan and an MRI, which indicated osteolytic lesions of the vertebral bodies C6, C7, C8, T7 (16 mm), and T11, as well as of the costal arches and bony pelvis (an 11 cm lesion at the level of the right iliac wing extending to the right iliac muscle, and a 10 mm lesion at the left iliac wing). The CT scan also identified marked circumferential thickening of the mucosa of the right maxillary sinus, which almost completely occupied it, with otherwise normally aerated paranasal sinuses. The lesions raised suspicion of multiple myeloma or distant metastasis. A biopsy of the sinus mucosa was performed under local anesthesia; the patient was referred to a hematologist and scheduled for whole-body positron emission tomography–computed tomography (PET-CT) and a bone biopsy. The sinus biopsy revealed hypertrophic mucosa and the absence of tumor cells. The whole-body PET-CT showed a metabolically active mass in the right iliac wing measuring 10 by 7.5 cm, which invaded the neighboring endo- and exopelvic structures; the mass showed inhomogeneous FDG uptake. Other osteolytic lesions with similarly moderate FDG uptake could be distinguished in the right posterior fourth costal arch (invasive and dimensional progression compared with the CT scan performed 1 week earlier), the right fourth anterior costal arch, the left sixth lateral costal arch, the sternal manubrium, the medial angle of the left scapula, the C7 (with a major risk of collapse), T7, and T9 vertebral bodies, the right lateral clavicular extremity, the right humeral head, the apex of the right temporal bone, the right parietal bone, and the left sciatic tuberosity [ and ]. The bone marrow biopsy concluded medullary iron blockage with moderate plasma cell hyperplasia affecting 4.5% of nucleated cells. Based on the complete blood cell counts, protein electrophoresis, serum electrophoresis, immunoquantification, and the bone marrow biopsy, a diagnosis of monoclonal gammopathy of unspecified etiology, IgG type with lambda chains, was established. A bone biopsy was taken from the iliac lesion, confirming the diagnosis of multiple myeloma, IgG type with kappa chains (CD138-positive; weakly positive for kappa chains; CD79a-positive, CD56-positive, and CD20-negative; negative for lambda chains). The Ki-67 cell proliferation index was 40%. The disease was classified as stage III according to the International Staging System (R2-ISS) for overall survival in multiple myeloma. The patient underwent hematological treatment with daratumumab, bortezomib, thalidomide, dexamethasone, and a bisphosphonate. The symptoms of bone pain were relieved. The patient remains in follow-up.
Olfactory neuroblastoma, a rare malignant tumor of the nasal cavity with an incidence of only about 0.4 cases per million per year, was first described in 1924 . The tumor can affect both children and adults; in adults, the disease generally occurs between the fifth and sixth decades of life. Its origin is still unknown, and no lifestyle, environmental, or geographic risk factors have been linked to its appearance. Symptoms vary from unilateral nasal obstruction (70%) to epistaxis (46%; see ). The gold standard of diagnosis for olfactory neuroblastoma is a biopsy with anatomopathological examination. Having neuroectodermal and epithelial origins, olfactory neuroblastoma presents as a unilateral, polypoid tumor of low consistency with a nonspecific clinical presentation. Olfactory epithelium can be identified in the mucosa of the superior and middle turbinates and also in the mucosa of the nasal septum . The immunohistochemical examination is based on positive markers for S100, BCL2 , and PGP9.5 and negative markers for keratin, muscle, melanoma, and lymphoma. Olfactory neuroblastoma poses a high risk of local invasion, recurrence, and distant metastasis . Dulguerov et al. found in their meta-analysis that cervical lymph node metastasis is the most important prognostic factor in olfactory neuroblastoma, negatively affecting survival . Castelnuovo et al. reported, in a series of 10 patients treated with endoscopic surgery, the presence of cervical metastasis 21 months after surgery; the patients underwent bilateral modified neck dissection plus radiotherapy . Some studies recommend cervical neck dissection for metastases occurring 6 months or more after treatment of the primary site. Naples et al. found in a meta-analysis that elective supra-omohyoid neck dissection is a reasonable option for patients with Kadish stage B and TNM stage N0 disease . Based on our experience, we think that the endonasal approach achieves complete resection for small, localized lesions when no reconstruction is needed and all the lesions can be resected with negative margins. Our patient was N0M0 at the time of diagnosis, so we did not perform a neck dissection. Endonasal excision allows rapid recovery and return to daily activities, improving patients' quality of life. The infrequency of these tumors has limited the possibility of categorizing prognostic factors and defining specific treatment protocols. Several staging systems have been proposed; the most commonly used, proposed by Kadish et al. in 1976, was modified in 1993. Nowadays, some institutions apply the TNM staging system of the AJCC based on the Dulguerov modified version of staging . A meta-analysis compared the outcomes of the Kadish and Dulguerov staging systems, finding that both correlated with prognosis in terms of disease-free and overall survival, with the Dulguerov system performing better . CT and MRI images are essential for correct staging, and PET-CT can identify local recurrences and metastases. The anatomopathological examination is the gold standard for diagnosis. The typical recommended treatment for olfactory neuroblastoma consists of endoscopic surgical resection associated with radiochemotherapy. Endonasal endoscopic surgery is preferred because of its efficient local control and lower morbidity . In our center, we consider and use the open approach when there is an extensive tumor with intracranial involvement or when a pericranial flap is needed for reconstruction .
However, in this particular case, the imaging suggested no involvement of the cribriform plate, base of the skull, orbit, or intracranial cavity. As described in the literature, endoscopic resection allows the total excision of small lesions. Due to its location in the right nasal fossa and the fact that it had not spread into the adjacent structures, we performed an endoscopic approach. In a retrospective study by Gallia et al., eight patients with olfactory neuroblastoma treated by endonasal endoscopic surgery were identified. They had a complete resection and negative intraoperative margins, with no evidence of disease over a mean follow-up of over 27 months . Newer radiotherapy techniques have been added, reducing cerebral and ocular toxicity over time . Neoadjuvant chemotherapy can improve surgical management by reducing the size of the tumor and its complications . Due to the delayed regional recurrences associated with olfactory neuroblastoma, prolonged surveillance is recommended.

We highlight in our report the aggressiveness of this tumor and the importance of including PET-CT in the monitoring follow-up protocol for olfactory neuroblastoma, while also considering that distant metastases (approximately 10%) can occur irrespective of the grade of the tumor . In our report, the osteolytic lesions raised suspicion of either distant metastases of the olfactory neuroblastoma or multiple myeloma. Distant recurrences of olfactory neuroblastoma are described in the literature. In an article by Loy et al., 34% of patients developed recurrent disease, and most distant metastases were osseous (humerus, lumbar spine, and diffuse bone metastases) . The symptomatology of the patient correlated with their history of olfactory neuroblastoma, raising the suspicion of a distant recurrence of the disease. However, the unremarkable clinical ENT exam and the negative biopsy of the sinus ruled out local recurrence, and the bone biopsy confirmed a hematological malignancy.

In multiple myeloma, malignant plasma cells proliferate in the bone marrow, displacing normal blood cells and leading to manifestations such as generalized weakness, weight loss, bone pain, hypercalcemia, and anemia . Although our patient was diagnosed with a hematological disease and there is no known correlation between multiple myeloma and olfactory neuroblastoma, it should be noted that long-term follow-up of an olfactory neuroblastoma patient is mandatory. A multidisciplinary team around the patient can identify and treat relapses, long-distance metastases, or even a hematological disease.
Rare sinonasal tumors present with similar symptomatology, as they originate in a relatively small anatomical space. The patients often have a long history before their initial presentation. Treatment modalities have changed over time with the evolution of endoscopic surgery, and a multidisciplinary approach may improve the survival rate and the patient’s quality of life. Lifelong follow-up combined with imaging surveillance is crucial, given the possibility of distant metastases occurring many years after treatment of the primary tumor or, as in our case, of an early diagnosis of hematological disease.
|
Neurophysiological Evaluation of Neural Transmission in Brachial Plexus Motor Fibers with the Use of Magnetic versus Electrical Stimuli
|
9201673b-aee4-4e81-b800-4315d8b3a127
|
10146775
|
Physiology[mh]
|
The anatomical complexity of the brachial plexus and its often multilevel damage require specialized in-depth diagnostics. The purpose is to select the appropriate treatment, assess its effectiveness, and provide prognostic information about its course . Imaging of the brachial plexus, such as ultrasound or magnetic resonance imaging, provides important information about the nerve structures and surrounding tissues. Contemporary studies emphasize the importance of these tests, but they do not address the assessment of brachial plexus function . Besides the clinical examination , the diagnostic standard for brachial plexus function should include clinical neurophysiology tests. Electroneurography (ENG) studies are used to assess the function of motor fibers and peripheral sensory nerves. Somatosensory evoked potentials are used to evaluate afferent sensory pathways. Needle electromyography analyses the bioelectrical activity of the muscles innervated by peripheral nerves originating from the brachial plexus. The results of the tests above determine the extent, type, and severity of the damage.

ENG of motor fibers uses a specific low-voltage electrical stimulus. It stimulates the nerve motor fibers, causing their depolarization, and the excitation spreads to the muscle, resulting in the generation of compound muscle action potential (CMAP). The strength of the electrical stimulus should be supramaximal, of sufficient intensity to generate CMAP with the highest amplitude and shortest latency. The CMAP amplitude reflects the number of conducting motor axons, and latency reflects the function of the myelin sheath and the rate of depolarization, mainly in fast-conducting axons . Despite the advantages of this type of stimulation, it has limitations due to the physical properties of the electrical stimulus. The main limitation is the inability to penetrate through the bone structures surrounding the brachial plexus in its proximal part, at the level of the spinal roots, at the spinal nerves in the neck, and often at Erb’s point. Stimulation at Erb’s point may be complicated by the individual anatomy of the examined person, such as obesity, extensive musculature, or past injuries at this level. This can significantly affect the CMAP parameters and give false positive results indicating pathology of the assessed motor fibers.

In contrast to ENG, magnetic stimulus is used to induce motor evoked potential (MEP) . Its use in brachial plexus diagnostics overcomes these limitations, which is of great clinical importance . The propagation of excitation along the axon and elicitation of motor potential using a magnetic stimulus is similar to electrical stimulation. However, as some authors indicate, the applied magnetic stimulus may be submaximal due to magnetic stream dispersion or insufficient power generated by the stimulation coil. Therefore, the assessment of MEP parameters may not reflect the actual number of excitable axons, and the interpretation of the results may incorrectly determine the functional status of the brachial plexus. An MEP study can provide important information regarding the location of the injury, especially in cases of traumatic damage to the brachial plexus where there may be multiple levels of impairment. The ability of the magnetic stimulus released from the generator device to penetrate bone structures should allow an assessment of the proximal part of the brachial plexus, especially at the level of the spinal roots.
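To make the two recurring waveform parameters concrete, the short sketch below shows how a baseline-to-negative-peak amplitude and an onset latency could be extracted from a sampled trace. This is an illustrative example only, not the algorithm of any clinical system; the sampling rate, onset threshold, and synthetic waveform are assumptions.

```python
import numpy as np

# Synthetic CMAP-like trace: by clinical convention the negative peak is
# plotted upward; the values here are hypothetical, in mV.
fs = 20_000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)                # 50 ms sweep
trace = 8.0 * np.exp(-((t - 0.008) / 0.002) ** 2)

baseline = trace[t < 0.002].mean()            # pre-response baseline estimate
amplitude_mv = trace.max() - baseline         # baseline-to-negative-peak amplitude
onset = np.argmax(trace - baseline > 0.05 * amplitude_mv)  # first 5%-of-peak crossing
latency_ms = t[onset] * 1000                  # onset latency in ms

print(f"amplitude = {amplitude_mv:.1f} mV, latency = {latency_ms:.2f} ms")
```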
Scientific studies are mainly concerned with MEP efferent conduction studies in patients with disc–root conflict and other neurological disorders . Little attention has been paid to assessing the peripheral part of the lower motoneurone, including injuries of the brachial plexus, using MEP; such studies constitute a novel element among the aims of the presented study. To date, attention has mainly been paid to high-voltage electrical stimulation applied over the vertebrae . To the best of our knowledge, apart from studies by Schmid et al. and Cros et al. from 1990, this paper is one of the few sources of reference values. Therefore, it makes a practical contribution to the routine neurophysiological diagnosis of brachial plexus injuries. The aim of this study was to reinvestigate the hypothesis concerning the usefulness of the MEP test applied both over the vertebrae and at Erb’s point to assess the neural transmission of the brachial plexus motor fibers, with special attention to the functional evaluation of the short brachial plexus branches. The latter element has not been examined in detail ; most of the studies have been devoted to the evaluation of the long nerves, such as the median or ulnar. In addition, we formulated the following secondary goals: to compare the parameters of electrically evoked potentials (CMAP) with the parameters of potentials generated by magnetic stimulus (MEP), and to analyze whether these stimulation methods have compatible effectiveness and whether they could be used interchangeably during an examination. This would make it possible to select a method by taking into account the individual patient’s needs and the examination targets. Moreover, an additional aim of our work was to confirm that magnetic stimulation induces supramaximal potentials with the same parameters as during electrical stimulation, which was previously considered a methodological limitation . A further study aim was to confirm the assumption that magnetic stimulation is less painful than electrical stimulation and better tolerated by patients during neurophysiological examinations, which has never before been examined.

2.1. Study Design, Participants, and Clinical Evaluation

Seventy-five volunteer subjects were randomly chosen to participate in the research. The ethical considerations of the study were compliant with the Declaration of Helsinki. Approval was granted by the Bioethical Committee of the University of Medical Sciences in Poznań, Poland (resolution no. 554/17). All the subjects signed a written consent form to voluntarily participate in the study without financial benefit. The consent included all the information necessary to understand the purpose of the study, the scope of the diagnostic procedures, and their characteristics. Before the study began, fifteen subjects declined to participate. The subjects in the study group (N = 60) were enrolled based on the results of clinical studies performed independently by a clinical neurophysiologist and a neurologist. The exclusion criteria included craniocerebral, cervical spine, shoulder girdle, brachial plexus, or upper extremity injuries and other systemic disorders under treatment. The contraindications to undergoing neurophysiological tests were pregnancy, stroke, oncological disorder, epilepsy, metal implants in the head or spine, and an implanted cardiac pacemaker or cochlear implant, because of the use of magnetic stimulation. The results were analyzed blindly, satisfying intra-rater reliability.
The medical history and clinical studies consisted of evaluating the sensory perception of the upper extremities according to the C5–C8 dermatomes and peripheral nerve sensory distribution, based on von Frey’s monofilament method . The maximal strength of the upper extremity muscles was assessed using Lovett’s scale . A bilateral clinical examination of each volunteer was performed once. Based on the clinical examination and medical history, the neurologist classified the subjects in the research group as healthy volunteers. After the exclusion of 14 participants who did not meet the inclusion criteria and the withdrawal of 4 others during the neurophysiological exams, the final group included 42 subjects. The characteristics of the study group (N = 42) and a flowchart of the diagnostic algorithm proposed in this study are presented in and . There were 40 right-handed participants and only 2 left-handed.

2.2. Neurophysiological Examination

All the participants were examined bilaterally once according to the same neurophysiological schedule. Each time, we used both magnetic and electrical stimuli to assess the function of the peripheral nerve and a magnetic stimulus to evaluate neural transmission from the cervical spinal root. We applied stimulation three times at Erb’s point and at the selected level of the cervical segment, checking the repeatability of the evoked potential. The compound muscle action potentials (CMAP) recorded during electroneurography (ENG) and the motor evoked potentials (MEP) induced by magnetic stimulation were analyzed. During the neurophysiological examination, the subjects were in a seated position, with relaxed muscles of the upper extremities and shoulder girdle, in a quiet environment. The KeyPoint Diagnostic System (Medtronic A/S, Skøvlunde, Denmark) was used for the MEP and CMAP recordings. External magnetic stimulus for the MEP studies was applied by a MagPro X100 magnetic stimulator (Medtronic A/S, Skøvlunde, Denmark) via a circular coil (C-100, 12 cm in diameter) ( A,B). The strength of the magnetic field stream was 100% of the maximal stimulus output, i.e., 1.7 T for each pulse. The recordings were performed at an amplification of 20 mV/D and a time base of 5–8 ms/D. For the CMAP recording, a bipolar stimulation electrode and a single rectangular electric stimulus with a duration of 0.2 ms at 1 Hz frequency were used. The intensity of the electrical stimulus was 100 mA to evoke the supramaximal CMAP amplitude at Erb’s point. Such strength is obligatory and is determined by anatomical conditions and the fact that the nerve structures of the brachial plexus lie deep in the supraclavicular fossa. In the ENG studies, the time base was set to 5 ms/D, the sensitivity of recording to 2 mV/D, and 10 Hz lower and 10 kHz upper cutoff filters were used in the recorder amplifier. A bipolar stimulation electrode was used, the poles of which were moistened with a saline solution (0.9% NaCl). The skin where the ground electrode and recording electrodes were placed was disinfected with a 70% alcohol solution; along with the conductive gel, this reduced the resistance between the skin and the recording sensors. The impedance did not exceed 5 kΩ. In the ENG examination, the bipolar stimulation electrode was applied at Erb’s point over the supraclavicular region, along the anatomical passage of the brachial plexus motor fibers.
If repetitive CMAP with the shortest latency and the highest amplitude was evoked at this point, the spot became the starting point for the application of magnetic stimulation at this level (hot spot). To assess the MEP from the spinal roots of the cervical segment, the magnetic coil was applied 0.5 cm laterally and slightly below the spinous process in accordance with the anatomical location of the spinal roots (C5–C8). In this way, the cervical roots were selectively stimulated. For the recording of CMAP and MEP, standard disposable Ag/AgCl surface sensors with an active surface of 5 mm² were used in the same location for both electrical and magnetic stimuli. The active electrode was placed over the belly of the muscle innervated by the peripheral nerve taking its origin from the superior, middle, or inferior trunk of the brachial plexus. The same selected muscles also represented a specific root domain in accordance with the innervation of the upper extremity through the cervical segment of the spine. The reference electrode was placed distal to the active one, depending on the muscle, i.e., on the olecranon or the tendon . A list of the tested muscles and their innervation (peripheral pathway and root domain), as well as the location of electrodes, are given in .

The same parameters were analyzed for both the CMAP and MEP recordings. The amplitude of the negative deflection (from baseline to negative peak, measured in mV), distal latency (DL) (from the visible stimulation artefact to the negative deflection of the potential, measured in ms), and standardized latency (SL) were calculated, the latter by the equation SL = DL/LNS, where LNS is the length of the nerve segment between the stimulation point (Erb’s point) and the recording area on the muscle (measured in cm). A reliable value of standardized latency depends on an accurate distance measurement. Therefore, a pelvimeter, which reduces the risk of error in measuring the distance between the stimulation point and the recording electrode, was used in the research. This makes it possible to take into account the anatomical curvature of the brachial plexus nerves. The standardized latency indicates a direct correlation between latency and distance. This is important in assessing the conduction of the brachial plexus short branches with regard to the various anthropometric features of the examined subjects, such as the length of the upper extremities relative to height. In standard neurophysiological tests of short nerve branches, the F wave is not assessed; hence, the calculation of the root conduction time for nerves such as the axillary or musculocutaneous is not possible. In order to assess conduction in the proximal part of these nerves, a standardized latency value was also calculated (proximal standardized latency, PSL) using the following equation: PSL = (MRL − MEL)/D, where MRL is the latency of MEP from root-level stimulation (measured in ms), MEL is the latency of MEP elicited by Erb’s point stimulation (measured in ms), and D is the distance between these two stimulation points (measured in cm). Therefore, the PSL value reflects the conduction between the cervical root and Erb’s point for each examined nerve. Distal latency and standardized latency correspond to the conduction speed of the fastest axons. The amplitude of the recorded potentials and their morphology reflect the number of conducting motor fibers .
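The two latency normalizations defined above translate directly into code. The following is a minimal sketch of these formulas; the numerical values are illustrative, not study data.

```python
def standardized_latency(dl_ms: float, lns_cm: float) -> float:
    """SL = DL / LNS: distal latency over the Erb's point-to-muscle segment length."""
    return dl_ms / lns_cm

def proximal_standardized_latency(mrl_ms: float, mel_ms: float, d_cm: float) -> float:
    """PSL = (MRL - MEL) / D: conduction between the cervical root and Erb's point."""
    return (mrl_ms - mel_ms) / d_cm

# Hypothetical example values (ms and cm):
sl = standardized_latency(dl_ms=4.2, lns_cm=21.0)
psl = proximal_standardized_latency(mrl_ms=5.6, mel_ms=4.2, d_cm=9.0)
print(f"SL = {sl:.3f} ms/cm, PSL = {psl:.3f} ms/cm")
```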
After undergoing neurophysiological tests, the subjects reported which of the applied stimuli (electrical or magnetic) evoked a painful sensation, as scored on a 10-point visual analogue scale (VAS) .

2.3. Statistical Analysis

The statistical data were analyzed using Statistica 13.3 software (StatSoft, Kraków, Poland) and are presented with descriptive statistics: minimal and maximal values (range), and mean and standard deviation (SD) for measurable values. The Shapiro–Wilk test was performed to assess the normality of distribution, and Levene’s test was used to define the homogeneity of variance in some cases. The results from the neurophysiological studies were compared to determine the differences between the sides (left and right), genders (female and male), stimulation techniques (electrical and magnetic), and stimulation areas (Erb’s point and cervical root). The differences in evoked potential parameters between the groups of men and women were calculated with an independent Student’s t-test. In cases where the distribution was not normal, a Mann–Whitney U test was used. The dependent Student’s t-test (paired difference t-test) or Wilcoxon’s test (in the absence of distribution normality) was used to compare the differences between the stimulation methods, stimulation areas, and sides of the body. p-values less than 0.05 were considered statistically significant. The percentage of difference was expressed for each variable. An analysis of the influence of lateralization was not performed because of the small number of left-handed volunteers. With regard to the results of the clinical tests, including pain measured by a 0–10 point visual analogue scale (VAS) and muscle strength measured by the 0–5 point Lovett’s scale, the minimum and maximum values (range) and mean and standard deviation (SD) are presented. At the beginning of the pilot study, statistical software was used to determine the required sample size using the amplitudes from the MEP and ENG recordings, with a power of 80% and a significance level of 0.05 (two-tailed), as the primary outcome variable. The mean and standard deviation (SD) were calculated using the data from the first 10 patients of each gender, and the software estimated that at least 20 patients were needed as a sample size for the purposes of this study.
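The test-selection logic described above can be summarized in a short sketch using SciPy rather than Statistica; the group data below are synthetic stand-ins, and the 0.05 thresholds mirror the significance level used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
women = rng.normal(10.2, 1.1, size=21)   # hypothetical amplitudes (mV)
men = rng.normal(11.0, 1.3, size=21)

# Shapiro-Wilk normality check in both groups
normal = stats.shapiro(women).pvalue > 0.05 and stats.shapiro(men).pvalue > 0.05

if normal:
    # Levene's test decides whether equal variances can be assumed
    equal_var = stats.levene(women, men).pvalue > 0.05
    _, p = stats.ttest_ind(women, men, equal_var=equal_var)
else:
    _, p = stats.mannwhitneyu(women, men, alternative="two-sided")

print(f"p = {p:.4f} ({'parametric' if normal else 'non-parametric'} comparison)")
```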
The research group was homogeneous in terms of age. We found statistically significant differences between the women and men concerning height, weight, and BMI . In the clinical study, the Lovett muscle strength score was found to be 5 on average for both men and women. This cumulative result applies to all assessed muscles bilaterally, i.e., the deltoid, biceps brachii, triceps brachii, and abductor digiti minimi, and reflects proper maximal muscle contraction against the applied resistance. The results of the sensory perception studies of the upper extremities, according to dermatomes C5–C8, were within normal limits in the study group. There were no significant differences in the CMAP and MEP between the right and left sides among the women (N = 21) and men (N = 21). Hence, the further comparative analysis of CMAP and MEP between the two groups refers to the cumulative number of tests performed (N = 42). The results are presented in . The significantly prolonged latency of evoked potentials in the men compared to the women is related to the greater distance between the stimulation point and the recording level, due to anthropometric features such as the length of the extremities, which are longer in men.
However, this does not determine the value of standardized latency, which reflects conduction in a particular segment. These values are comparable in the two groups for both types of stimulation (electrical and magnetic) and levels of stimulation (Erb’s point and cervical root), with generally no statistical differences. The exception is the C5 spinal root and Erb’s point stimulation (both electrical and magnetic) for the radial nerve. In the cases above, the standardized latency was significantly longer in the group of men. However, the percentage difference is only 8–11% and the numerical difference is only about 0.02 ms/cm, and these differences are not clinically significant. Similarly, there were significant differences in the amplitude of evoked potentials between the women and men. In the assessment of the musculocutaneous nerve, CMAP and MEP generated from Erb’s point showed higher values in the men, while those generated from the ulnar nerve had higher values in the women. The difference is also between 10 and 16%, without clinical significance, and may have resulted from a measurement error, such as the cursor setting during the analysis of potentials.

Because the conduction parameters in the groups of women and men were comparable, further statistical analysis was conducted on 84 tests (both groups were combined). The parameters of potentials generated by electrical stimulus (CMAP) were compared with those of potentials generated by magnetic impulse (MEP). Stimulation in both cases was applied at Erb’s point. The data are presented in and . The amplitude of CMAP was significantly higher after electrical stimulation than that of MEP after magnetic stimulation for all the examined nerves, in the range of 3–7%. This may have been due to the wider dispersion of electrical stimulation according to the rule of electrical field spread. The latency of the evoked potentials was significantly shorter after magnetic stimulation, which is related to the shorter standardized latency. Note that the difference in potential latency values between the two types of stimulation did not exceed 5%. This may be a result of the deeper and more selective penetration of magnetic impulses into tissues (based on the rule of magnetic field spread) and through the bone structures and, thus, faster depolarization of the brachial plexus fibers. presents examples of CMAP and MEP recordings following electrical and magnetic stimulation at Erb’s point. The repeatability of the morphology of potentials with the use of both types of excitation is noteworthy.

The brachial plexus trunks are stimulated at Erb’s point in the supraclavicular area. In the area over the vertebrae, the spinous processes of the vertebrae are points of reference for the corresponding spinal root locations. In the cervical spine, according to the anatomical structure, the spinal roots emerge from the spinal cord above the correspondingly numbered vertebrae. A,B presents the magnetic coil placements during the MEP study, while presents the data results. The results show significantly higher amplitudes of the potentials after stimulation of the cervical roots compared to the potentials evoked at Erb’s point for C5 and C6. In the case of C8, the amplitude was lower than that of the potentials evoked at Erb’s point. It should be noted, however, that these values varied in the range of 9–16%, which, as explained above, is not clinically relevant.
We also note the comparable values of proximal standardized latency (PSL) in the cervical root–Erb’s point segment for all the stimulated nerves. presents the MEP recordings after magnetic stimulation of the C5 to C8 cervical spinal roots. The MEPs recorded from the cervical roots have a repetitive and symmetrical morphology. The MEPs have a lower amplitude at the C8 level than in the other studied segments (see and ). After undergoing the neurophysiological tests, the subjects indicated the degree of pain sensation during stimulation according to a 10-point visual analogue scale (VAS) (see ). The results indicate that they felt more pain or discomfort during electrical stimulation. The subjects described it as a burning sensation. They also indicated that magnetic stimulation was perceptible as the feeling of being hit, causing a more strongly expressed motor action (contraction of the muscle as the effector of the stimulated nerve).

Neuroimaging and basic clinical examinations of sensory perception and muscle strength are still the primary approaches for evaluating brachial plexus injury symptoms . Neurophysiological diagnostics is considered supplementary, with the aim of confirming the results of the clinical evaluation. The main novelty of the present study is that it proves the similar value of magnetic stimulation over the vertebrae and peripheral electrical stimulation in evaluating the functional status of brachial plexus motor fiber transmission. A strength of our research is the neurophysiological assessment of the function of the brachial plexus short branches, which are part of its trunks. Our studies prove the similarity of the results obtained with the two mentioned methods following the excitation of nerve structures at Erb’s point. The latency and amplitude values of the potentials (CMAP, MEP) evoked at this level by the two types of stimuli differed in the range of 2–7%. In routine diagnostic tests, this range of difference would not significantly affect the interpretation of the results of neurophysiological tests. Hence, we conclude that magnetic and electrical stimuli could be used interchangeably during an examination. We also proved that the excitation of motor fibers by a magnetic impulse may be supramaximal, given the stable and comparable MEP and CMAP amplitudes. The properties of supramaximal motor potential with the shortest latency were, in previous studies, attributed to the effects of electrical stimulation, which is commonly used in neurophysiological research. Many authors have pointed to the limited diagnostic possibilities of the magnetic stimulus , whose advantages were examined in detail in this paper. This is crucial because of the different anthropometric features of patients and the possible extent of damage to the structures surrounding the brachial plexus. Past fractures, swelling, or post-surgical conditions at this level may limit the excitation of axons by an electrical stimulus. A benefit of magnetically induced MEP is that it is less invasive than electrical stimulation, as concluded from the VAS pain scores (see ). The movement artifact associated with magnetic stimulation may influence the quality of the MEP recordings, which should be considered during the interpretation of the diagnostic test results . MEP studies allow evaluation of the proximal part of the peripheral motor pathway, between the cervical roots and Erb’s point, contrary to low-voltage electrical stimulation.
The comparable amplitudes of MEPs induced by magnetic stimulus recorded over the vertebrae and those recorded at Erb’s point, as shown in our study, could be the basis for the diagnosis of a conduction block in the area between the spinal root and Erb’s point. By definition, in a neurophysiological examination, a conduction block is considered to have occurred when the amplitude of the proximal potential is reduced by 50% relative to the distal potential. In the opinion of Öge et al. , comparing the amplitude of evoked potentials induced by stimulation of the cervical roots with potentials recorded distally using electrical stimulation may help to reveal a possible conduction block at this level. According to Matsumoto et al. , the constant latency of MEP induced by magnetic stimulation of the cervical roots was comparable with that of potentials induced by high-voltage electrical stimulation. In our opinion, similar to the method mentioned above, combining two research techniques using magnetic stimulation of the cervical roots or Erb’s point and conventional peripheral electrical stimulation is valid for the neurophysiological assessment of the brachial plexus.

Previous studies on a similar topic by Cros et al. involving healthy subjects revealed the parameters of MEPs recorded from proximal and distal muscles of the upper extremities, with the best “hot spots” from C4–C6 during stimulation over the vertebrae. They found that the root potentials were characterized by similar latencies, while the amplitudes recorded from the abductor digiti minimi muscle were the lowest following excitation at the C6 neuromere; in contrast, in our study they were evoked most effectively but with the smallest amplitudes following stimulation at C8 (see ). We similarly recorded the largest amplitudes for MEPs evoked from the proximal muscles of the upper extremity. However, our study only involved magnetic stimulation over the vertebrae and not electrical stimulation, which was considered painful. In another study, by Schmid et al. , magnetic excitation over the vertebrae at C7–T1 evoked MEPs with smaller amplitudes from distal muscles than from proximal muscles compared to high-voltage electrical stimulation applied to the same area. Similar to our study, for MEPs following magnetic versus low-voltage electrical stimulation at Erb’s point, latencies were shorter and amplitudes were smaller, while the morphology was the same (see and ). The standardized latencies were comparable for both types of stimulation, which was not reported by Schmid et al. .

In our opinion, when interpreting the results of neurophysiological tests of the brachial plexus, the reference values indicate a trend in terms of whether the parameters of the recorded potentials are within the normal range or indicate pathology . When interpreting the results, special consideration should be given to comparing them with the asymptomatic side, which is the reference for the recorded outcome on the damaged side . The results of the present study can be directly transferred to clinical neurophysiology practice, due to the possibility of using two different stimuli in diagnostics to evoke potentials with the same parameters, recorded by non-invasive surface sensors. Magnetic stimulation appears to be less painful due to the non-excitation of the afferent component, contrary to electrical stimulation, where antidromically excited nociceptive fibers may be involved .
One of the study limitations that may have influenced the results, especially the latency parameters of the potentials, was the anthropometric differences between the women and men included in the study group. However, the gender proportions were equal, making the whole population of participants typical for European countries. Regarding the number of participants examined in this study, it should be mentioned that, due to the comparable conduction parameters in the groups of women and men, the final statistical analysis covered 84 tests to compare the parameters of potentials evoked with electrical or magnetic impulses. Moreover, as mentioned in , at the beginning of the pilot study, statistical software was used to determine the required sample size, and it was estimated that at least 20 patients were needed for the purposes of this study.

This study reveals that the parameters of evoked potentials in CMAP and MEP recordings from the same muscles after the application of magnetic and electrical stimuli to the nerves of the brachial plexus are comparable. Magnetic field stimulation is an adequate technique that enables the recording of supramaximal potentials (instead of the submaximal ones reported in other studies ), which result from the stimulation of the entire axonal pool of the tested motor pathway, similar to testing with an electric stimulus. We found that the two types of stimulation can be used interchangeably during an examination, depending on the diagnostic protocol for the individual patient, and that the parameters of the evoked potentials can be compared. Moreover, in the case of patients sensitive to stimulation with an electric field, which is considered painful in neurophysiological diagnostics, it is crucial to have the possibility of changing the type of stimulus; a magnetic stimulus is far less painful than an electrical one. We can conclude that the use of magnetic stimulation makes it possible to eliminate diagnostic limitations resulting from individual anatomical conditions or anthropometric features (such as large muscle mass or obesity). MEP studies allow us to evaluate the proximal part of the peripheral motor pathway (between the cervical root level and Erb’s point, and via the trunks of the brachial plexus to the target muscles) following the application of a stimulus over the vertebrae, which is the main clinical advantage of this study. This may be of particular importance in cases of damage to the proximal part of the brachial plexus. As a study of brachial plexus function, MEP should be compared with imaging studies in order to obtain full data on the patient’s functional and structural status.
|
Building a genome-based understanding of bacterial pH preferences
|
a8992143-d8d4-41ec-a216-c4b920045fb1
|
10146879
|
Microbiology[mh]
|
Predicting the environmental preferences of organisms is an important goal in ecology. If we know the conditions under which a given taxon can thrive, then we can better predict biogeographical distributions ( ), guide ecological restoration efforts ( ), design effective probiotics ( ), and understand taxon-specific responses to global change factors ( ). Unfortunately, the environmental preferences of most microbial taxa and the genomic attributes associated with those preferences often remain undetermined ( , ). One reason for this is that most microorganisms, particularly those found in nonhost-associated environments, can be difficult to cultivate in vitro ( ), making it difficult to measure environmental preferences directly. Even for those taxa that can be cultivated, quantifying how microbial growth rates vary across broad environmental gradients can be time consuming and the environmental gradients created in vitro may not necessarily mimic those found in situ. However, when direct information on environmental preferences can be collected, such information can be very useful for predicting microbial distributions and functions across space and time [e.g., ( – )]. Even without the direct measurement of environmental preferences, it is feasible to infer some environmental preferences from genomic information ( ). We can leverage the information contained in curated genomic databases, which can include both cultivated and uncultivated microbial taxa ( ), to infer the specific environmental preferences of uncharacterized microorganisms ( ). For example, genomic information from isolates whose environmental preferences have been measured in vitro has been used to infer the preferences of bacteria across gradients in oxygen ( ) and temperature ( ). Using genomic information to determine the environmental preferences of microbial taxa can have important ramifications. For example, we could improve our ability to predict community assembly across different environmental gradients, identify the conditions under which specific taxa can thrive, and better optimize medium conditions to improve the cultivability of fastidious taxa. However, genome-based inferences of environmental preferences can be difficult to validate (especially for uncultivated taxa), and we do not always know which genes or other genomic attributes are associated with adaptations to specific environmental conditions of interest. Consider microbial preferences for specific pH conditions. To our knowledge, it is not now possible to predict bacterial pH preferences from genomic information alone, although we know that pH is often a key factor determining the niche space occupied by microorganisms. The distributions of specific bacterial taxa and the overall composition of bacterial communities are often strongly associated with gradients in pH, as has been observed in a wide range of environments including soils ( – ), freshwater ( – ), and geothermal systems ( ). Despite the importance of this environmental factor, the pH preferences of most bacterial taxa remain undetermined, although we know that bacterial pH preferences can vary widely ( ). This knowledge gap is particularly evident in nonhost-associated systems that are often dominated by taxa that are difficult to cultivate and study in vitro. Even across cultivated taxa, pH tolerances and pH optima for growth are rarely determined experimentally, and most cultivation media likely select for taxa that grow at near-neutral pH conditions ( ). 
We note that “pH preference” across a given gradient is akin to the “realized niche” of a population (as opposed to the “fundamental niche”), where pH preference is the pH at which an organism achieves maximal relative abundances in nature ( ). This relative abundance is determined by the metabolically optimal pH for growth as well as other biotic or abiotic constraints. For example, a given taxon may grow optimally at around pH 7 under controlled conditions, but its pH preference could be lower if its abundance in a given environment is maximized at pH 6 due to its interplay with other factors.

In the absence of whole genome sequence data, one might still be able to predict bacterial pH preferences from taxonomic and phylogenetic information alone. Shifts in bacterial community composition across pH gradients can be evident at broad levels of taxonomic resolution ( , ), suggesting some degree of conservatism in the traits related to pH adaptation. However, closely related taxa can have very distinct pH preferences ( ), such as bacteria within the phylum Acidobacteria, which tend to be more abundant in lower pH soils ( ), a pattern that does not necessarily hold for subgroups within the phylum ( , ). Thus, it is not clear whether taxonomic and phylogenetic information is sufficient to predict bacterial pH preferences. Previous studies have focused on identifying common adaptations to changes in pH conditions across different taxonomic groups and the genes or transcripts associated with those adaptations ( , – ). This generates the expectation that the presence or absence of specific functional genes can be used to predict bacterial pH preferences. An approach that integrates biogeographic distributions of multiple individual taxa with genomic information across environmental gradients can address whether taxonomic, phylogenetic, or genomic information can be used to predict microbial environmental preferences.

We set out to determine whether bacterial pH preferences are predictable. In other words, we asked whether we could use taxonomic, phylogenetic, or genomic information to predict where along a pH gradient a taxon is most likely to achieve its highest relative abundance (which we define here as its “pH preference”), searching for patterns that are generalizable across distinct environment types. To do so, we used information on bacterial distributions across five independent datasets that span large pH gradients in soil and freshwater systems to infer the putative pH preferences of the bacterial taxa found in these environments. We then used this information to assess the degree to which bacterial pH preferences are phylogenetically and taxonomically conserved. To determine the genomic features associated with adaptations to pH, we analyzed representative genomes from taxa with varying pH preferences, as inferred from our cultivation-independent analyses, and identified genes that are consistently associated with differences in bacterial pH preference across environments. Last, we developed and validated a machine learning model that enables the accurate identification of bacterial pH preferences from the presence or absence of 56 functional genes, making it feasible to infer pH preferences for both uncultivated and cultivated taxa.
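As a concrete illustration of the definition above, the sketch below assigns each taxon the pH at which its mean relative abundance peaks across samples. This is a simplified stand-in for the more conservative inference procedure used in the study; the inputs (`abundance`, an ASV-by-sample table, and `sample_ph`, indexed by the same samples) and the binning and occurrence thresholds are assumptions.

```python
import numpy as np
import pandas as pd

def infer_ph_preference(abundance: pd.DataFrame, sample_ph: pd.Series,
                        bin_width: float = 0.5, min_occurrences: int = 10) -> pd.Series:
    """Return, per ASV, the midpoint of the pH bin with peak mean relative abundance."""
    edges = np.arange(np.floor(sample_ph.min()),
                      np.ceil(sample_ph.max()) + bin_width, bin_width)
    midpoints = edges[:-1] + bin_width / 2
    ph_bin = pd.cut(sample_ph, bins=edges, labels=midpoints)

    prefs = {}
    for asv, row in abundance.iterrows():
        if (row > 0).sum() < min_occurrences:
            continue                                # too sparse to infer reliably
        mean_by_bin = row.groupby(ph_bin).mean()    # mean abundance per pH bin
        prefs[asv] = float(mean_by_bin.idxmax())    # peak-abundance bin midpoint
    return pd.Series(prefs, name="pH_preference")
```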
More generally, we demonstrate how our workflow ( ) can be used to investigate other bacterial environmental preferences and the genes associated with environmental adaptations while overcoming the limitations of cultivation-based experimental approaches to expand our trait-based understanding of microorganisms.
Overview of the approach

We first inferred bacterial pH preferences using biogeographical information from natural pH gradients found in distinct systems (soil and freshwater environments) ( ). We acknowledge that pH is unlikely to be the only factor influencing the distributions of bacteria in these systems and that the pH preferences of many taxa could not be inferred using this approach, as there are other biotic or abiotic factors that are of similar, or greater, importance in shaping their distributions. However, we note that this approach is similar to the approach routinely used in plant and animal systems to quantify the relationships between environmental factors and growth optima or tolerances ( ). After inferring pH preferences from the biogeographical information, we then matched the 16S ribosomal RNA (rRNA) gene sequences from those taxa for which pH preferences could be inferred to representative genomes, identifying sets of genes that are consistently associated with pH preference based on their presence or absence in a given genome ( ). These genes were then incorporated into a machine learning modeling framework ( ) to predict pH preferences, with the approach validated using independent test sets.

The biogeographical analyses included 16S rRNA gene sequence data from a total of 795 soil samples and 675 freshwater samples spanning a pH range from 3 to 10, with a total of 250,275 amplicon sequence variants (ASVs) included in downstream analyses (fig. S1 and table S1). These data came from five independent datasets, and, in all cases, pH was an important driver of overall bacterial community composition, with Mantel ρ values ranging from 0.37 [La Romaine watershed (ROMAINE), freshwater] to 0.78 [Panama (PAN), tropical forest soils] (fig. S1). These patterns are expected considering that pH is often observed to have a strong influence on bacterial community structure in many systems ( , ) and considering that each of these sample sets was specifically selected to span broad gradients in pH. We also note that a broad diversity of bacterial taxa was found within and across each of the five sample sets (fig. S2A), which is important as we were trying to identify patterns in pH preferences across a broad array of taxa. Using our conservative approach, we were able to estimate the pH preferences of bacterial taxa in these environmental samples for 0.5 to 4.9% of all ASVs per dataset (table S1). The analyses of the taxonomic and phylogenetic signals in bacterial pH preferences and the search for representative genomes from these taxa were ultimately based on a total of 4568 ASVs (468 to 1614 ASVs per dataset) spanning 38 bacterial phyla.

Is taxonomic and phylogenetic information a good predictor of the pH preference of bacterial taxa?

Taxonomy was a poor predictor of bacterial pH preferences ( ). In almost every phylum, there were ASVs with very distinct pH preferences, and there were few cases where a high proportion of ASVs from a particular phylum were found to have a similar pH preference. For example, in the ROMAINE freshwater dataset, many ASVs assigned to the phylum Acidobacteria did exhibit a general preference for acidic pH conditions, but this observation was not consistent across datasets ( ).
We observed similar patterns at finer levels of taxonomic resolution. For example, ASVs within some of the most ubiquitous bacterial families observed across these five datasets (Xanthobacteraceae in soil and Chitinophagaceae in freshwater systems) included ASVs with inferred pH preferences ranging from 4.01 to 8.20 and from 4.63 to 8.35, respectively. Consistent with the taxonomy-based results, we also found that bacterial pH preferences were not readily predictable from phylogenetic information. In other words, there was minimal phylogenetic conservation in pH preferences. Although we detected a significant phylogenetic signal in pH preferences for all but one dataset ( and fig. S3), the signal was relatively weak (Pagel’s λ, 0.22 to 0.78) (fig. S3). This is qualitatively evident from the phylogenetic trees associated with each dataset, which show that even closely related taxa often had very distinct pH preferences. For example, within the Proteobacteria phylum, some clusters of ASVs with similar preferences for higher or lower pH conditions were indeed observed in the phylogenies from both the Australia (AUS) (soil) and ROMAINE (freshwater) datasets, but these clusters were not evident in the other datasets ( ). These observations were further confirmed by the phylogenetic correlogram analyses, which show that the Moran’s I autocorrelation values were low in all cases [<0.25; ( )], even at shallow phylogenetic depth (figs. S2B and S3). A significant Pagel’s λ indicates that there is a significant phylogenetic signal of the trait by comparison to a random model of evolution, while Moran’s I indicates whether this signal is clustered around a particular phylogenetic depth based on correlation. Because these indices are differently affected by sample size and trait variation, there is no consensus on thresholds that help interpret these parameters together ( ). A statistically significant Pagel’s λ verifies that there is phylogenetic signal, and the magnitude of Moran’s I indicates how strong this signal is at a given phylogenetic depth. Our results are in line with previous work that found no phylogenetic conservation of pH preferences across both cultured bacteria ( ) and uncultured bacteria ( ). Neither taxonomic nor phylogenetic information is generally useful for determining pH preferences without additional information, highlighting the potential value of incorporating functional gene information to better predict bacterial pH preferences.

Associations between bacterial pH preferences and functional genes

We next analyzed representative genomes of those ASVs with inferred pH preferences. Expectedly, given the well-recognized biases in reference genome databases ( ), only a relatively small fraction (6 to 24%) of those ASVs identified in our sample sets had representative genomes in the Genome Taxonomy Database (GTDB) ( ), and a substantial portion (~35%) of those genomes were metagenome-assembled genomes (MAGs) and single-cell–assembled genomes (SAGs) from uncultured taxa (fig. S4). The number of genome matches obtained ranged from 57 genomes in the AUS soil dataset to 293 genomes in the Carbon Biogeochemistry in Boreal Aquatic Systems (CARBBAS) freshwater dataset (with a total across all five datasets of 669 ASV-genome matches representing 580 unique genomes with inferred pH preferences; table S1 and fig. S5A). From the taxa with inferred pH preferences, the proportion of ASVs with available genome representatives was 10.6% in soils and 22.4% in freshwater on average (fig. S5A).
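With genomes matched to ASVs of inferred pH preference, gene–pH associations can be screened per dataset. The sketch below is one plausible implementation (a per-gene Mann–Whitney test with Benjamini–Hochberg correction), not the authors' exact statistical procedure; `gene_presence` (genomes × genes, 0/1) and `ph_pref` (per-genome inferred preference) are hypothetical inputs, and consistency across habitats would then be assessed by intersecting the per-dataset hits.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def gene_ph_associations(gene_presence: pd.DataFrame, ph_pref: pd.Series,
                         min_group: int = 5, alpha: float = 0.05) -> pd.DataFrame:
    rows = []
    for gene in gene_presence.columns:
        has = ph_pref[gene_presence[gene] == 1]
        lacks = ph_pref[gene_presence[gene] == 0]
        if len(has) < min_group or len(lacks) < min_group:
            continue                                  # too few genomes on one side
        _, p = stats.mannwhitneyu(has, lacks, alternative="two-sided")
        rows.append({"gene": gene, "p": p,
                     "direction": "low_pH" if has.median() < lacks.median() else "high_pH"})
    out = pd.DataFrame(rows)
    if out.empty:
        return out
    out["q"] = multipletests(out["p"], method="fdr_bh")[1]   # Benjamini-Hochberg FDR
    return out[out["q"] < alpha].sort_values("q")
```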
Associations between bacterial pH preferences and functional genes

We next analyzed representative genomes of those ASVs with inferred pH preferences. Expectedly, given the well-recognized biases in reference genome databases ( ), only a relatively small fraction (6 to 24%) of those ASVs identified in our sample sets had representative genomes in the Genome Taxonomy Database (GTDB) ( ), and a substantial portion (~35%) of those genomes were metagenome-assembled genomes (MAGs) and single-cell–assembled genomes (SAGs) from uncultured taxa (fig. S4). The number of genome matches obtained ranged from 57 genomes in the AUS soil dataset to 293 genomes in the Carbon Biogeochemistry in Boreal Aquatic Systems (CARBBAS) freshwater dataset (with a total across all five datasets of 669 ASV-genome matches representing 580 unique genomes with inferred pH preferences; table S1 and fig. S5A). From the taxa with inferred pH preferences, the proportion of ASVs with available genome representatives was 10.6% in soils and 22.4% in freshwater on average (fig. S5A). Still, the taxonomic composition of the available genomes spanned 20 different phyla, with a general predominance of Actinobacteria and Proteobacteria among the matching genomes (fig. S5B), a result that is expected given that these two phyla are relatively well represented in genome databases ( ).

We identified 332 gene types that had the same significant association with inferred pH preference in at least two datasets, while 56 of those genes had the same association with pH in at least three datasets across soil and freshwater habitats ( and table S2). We note that the taxonomic distinctiveness of the datasets and the generally even distribution of pH preferences across phyla support the notion that the identified associations between gene types and inferred pH preferences were general and unlikely to be a product of the weak phylogenetic signal detected (figs. S3 and S5B; see also Materials and Methods for additional information). No gene types were identified as having a significant association with pH in all datasets, which is likely a result of pH adaptations not being conserved across taxa, these genes having other functionalities besides just pH adaptation, and the habitat-specific nature of some of the observed associations.
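The cross-dataset consensus filter described in the preceding paragraph amounts to a small amount of table wrangling. The sketch below expresses it with pandas; the input file and column names are hypothetical.

import pandas as pd

# Hypothetical long-format table of per-dataset model results:
# one row per gene x dataset with the slope estimate and Wald p value.
results = pd.read_csv("gene_ph_associations.csv")  # columns: gene, dataset, estimate, p_value

sig = results[results["p_value"] < 0.05].copy()
sig["direction"] = sig["estimate"].gt(0).map({True: "positive", False: "negative"})

# For each gene, count the datasets showing the same-signed significant association.
counts = (sig.groupby(["gene", "direction"])["dataset"]
             .nunique()
             .reset_index(name="n_datasets"))

consistent_in_2 = counts[counts["n_datasets"] >= 2]  # 332 gene types in our data
consistent_in_3 = counts[counts["n_datasets"] >= 3]  # 56 gene types in our data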
The 56 gene types identified as having shared pH associations across both soil and freshwater habitats encoded proteins known for their involvement in pH tolerance, such as adenosine triphosphatases (ATPases), anion and cation transporters and antiporters, and alkaline and acidic phosphatases ( ). Generally, bacteria need to maintain pH homeostasis to preserve enzymatic function and cytoplasmic membrane stability, and they generally use four main mechanisms to cope with acid stress ( , ). Across those genes that we identified as being associated with pH preferences ( ), we see evidence for all four mechanisms. First, proton-consuming reactions, notably decarboxylation and deamination of amino acids, buffer proton concentrations in the cytoplasm by incorporation of H+ into metabolic by-products ( , ). We identified genes for decarboxylases (AAL_decarboxy, soils), amino acid transporters (AA_permease, freshwater), carboxylate transporters (TctA, soils and freshwater), and amino acid deaminases (Queuosine_synth, soils and freshwater) associated with inferred pH preferences. Second, cells will produce basic compounds, such as ammonia released from urea, to counter acidity. We found genes assigned to urea membrane transporters (ureide_permeases) overrepresented in taxa with low pH preference in soil and freshwater, as well as a gene for urease (UreE_C, soils), which hydrolyzes urea into ammonia ( ). Third, bacteria can actively efflux protons to maintain intracellular pH levels. We identified genes for a wide range of cation and anion efflux pumps, such as the Kdp K+ membrane transporters (KdpACD), that were overrepresented in taxa with low pH preference in all habitats. In contrast, Na+/H+ antiporters [PhaGF, MnhG, MrpF, and YufB; ( )] and anion transporters such as citrate (CitMHS, soils and freshwater) and lactate permeases (freshwater) were overrepresented in taxa with preferences for higher pH. Last, bacteria also modify the permeability of the cytoplasmic membrane and control the maturation and folding of proteins to limit acid stress. We identified genes for multiple hydrogenase quality control proteins (HypCD, HycI, and HupF) as overrepresented in taxa with low pH preference across soils and freshwater, with these genes known to be involved in the acid stress response ( , ). We also identified genes for process-specific proteins that act in a pH-dependent manner, such as acidic phosphatases [Phosphoesterase and CpsB_CapC in soils and freshwater; ( )].

Malik et al. ( ) analyzed genomic information from soil bacterial communities across a pH gradient of 4 to 8.5 and identified very similar genes and gene functions as found here (summarized in table S2). Overall, 30 of the 56 gene types that we found to be consistently associated with pH across habitats have previously been linked to bacterial pH adaptations in other studies ( and table S2). While we cannot confirm that the genes we identified as associated with pH preferences across multiple datasets actually represent specific bacterial adaptations to pH, our results make it possible to generate hypotheses about genes not previously implicated in bacterial responses to pH. Our results expand on previous work by identifying multiple gene functions associated with pH preferences beyond transporter genes ( ) while also confirming established associations between specific genes and bacterial pH adaptations ( ). In addition, because most of our current knowledge regarding the specific genes involved in pH tolerance is derived from in vitro studies focused on bacterial pathogens ( ), our findings provide a basis to extend the investigation of bacterial adaptations to pH more broadly beyond those selected taxa.

Prediction of bacterial pH preference from genomic information alone

We incorporated the information on the presence/absence of the 56 gene types consistently associated with pH preference across habitats into a machine learning modeling framework to predict bacterial pH preferences. Preselecting gene types with consistent associations across habitats increased the likelihood that the model would maintain its predictive power on independent datasets (i.e., those containing genomes not included in this study). Across datasets, we obtained an average coefficient of determination (R2) value of 0.80 for the linear regression between predicted and observed pH preferences using the training data ( ). These genes encoded very diverse functions, from well-established ones such as transmembrane anion and cation transport, ATPase, and phosphatase activity, to functions less known to be involved in bacterial pH responses such as nucleases for DNA repair, endolysins, or the type V secretion system (table S2). With only presence/absence information for these 56 genes, we were able to predict the pH preferences of bacterial taxa that had not been used for model training. The validations conducted on a randomly selected subset of the genomes in each dataset (10% of genomes) had a mean absolute error (MAE) of 0.63 (MAE = 0.43 using the training data; ), indicating that, with available presence/absence information for these 56 genes, this machine learning model can predict the pH preference of a given bacterial taxon with an accuracy of 0.63 pH units. Considering bacterial taxa generally have pH optima within 1 pH unit ( , ), this error is relatively small. Likewise, the average R2 of the linear regression between the observed and predicted pH preferences in the independent validation set was 0.55.
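Assuming a trained regressor saved to disk and a presence/absence vector for the 56 predictive genes (the file names here are hypothetical), applying such a model to a new genome reduces to a few lines. This sketch uses the xgboost scikit-learn wrapper named in Materials and Methods.

import numpy as np
import xgboost as xgb

# Load a previously trained regressor (file name is hypothetical).
model = xgb.XGBRegressor()
model.load_model("ph_preference_model.json")

# Presence/absence (1/0) of the 56 predictive gene types for one genome,
# in the same column order used during training.
gene_vector = np.loadtxt("genome_gene_presence.txt").reshape(1, -1)

print(f"Predicted pH preference: {model.predict(gene_vector)[0]:.2f}")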
Note that, with this model, the absence of a gene is as informative for predictive purposes as gene presence (fig. S6) and that, due to the lack of data for taxa with estimated pH preferences below pH 4 and above pH 9, the model can only predict accurately within that range. We also note that our model predicted bacterial pH preferences across all five datasets, not necessarily pH optima for growth. Observed pH preferences correspond to the pH at which taxa achieve maximal abundance in nature, reflecting both the pH optimum for growth and the biotic and abiotic factors that may constrain bacterial abundances (the realized niche). A detailed description of this model and how it can be applied to any genome of interest is available at https://doi.org/10.6084/m9.figshare.22588963.

We further validated our model using information on bacterial pH preferences estimated in a completely independent study of bacterial distributions in soils across the United Kingdom ( ). The linear regression between the statistically estimated pH preferences of taxa in that dataset and our model predictions had R2 = 0.21 and MAE = 0.93 (fig. S7). Note that the estimated pH preferences in that study were obtained using a different approach from ours and that the pH preferences for many taxa were inferred using models with relatively weak fits, a point that the authors were careful to acknowledge ( ). Despite the limitations of this independent dataset and the limitations associated with our model, the correspondence between predicted and observed pH preferences from this independent study (and an independent set of genomes from our study; ) supports the value of our approach and the model we developed.

We emphasize the importance of including data from future studies that directly measure the pH preferences of bacterial isolates in vitro. While curated bacterial phenotypic information from culture collections can provide information on bacterial pH preferences, preexisting data on the pH preferences of cultured isolates are limited to a very narrow distribution of putative pH preferences [85.4% of pH preferences falling between 6 and 8 ( )], a pattern that most likely reflects the limited breadth of culturing conditions most commonly used in isolation efforts ( ). Further quantification of microbial growth responses across large pH gradients under laboratory conditions, coupled with whole genome sequencing, is key to improving our knowledge of the genetic underpinnings of microbial adaptations to pH.

We show that biogeographical information can be combined with genomic information to infer and predict the pH preferences of bacterial taxa. This is important given the considerable effort often required to directly measure the pH preferences of cultivated taxa in vitro and given that such in vitro assays are impossible for the majority of bacterial taxa that resist cultivation. We show that pH preferences can be inferred from genomic information alone, making it feasible to leverage ever-expanding genomic databases, including those that contain MAGs and SAGs, to determine a key ecological attribute that cannot be readily determined from taxonomic or phylogenetic information alone ( ). We not only identified genes that had been previously linked to pH tolerance via detailed studies of select bacteria but also identified genes that warrant further investigation as they have not previously been associated with pH adaptations.
Our approach demonstrates the feasibility of using genomic information to make predictions for other important traits that can be difficult to infer directly. In this sense, our work is similar to previous studies that have used genomic data to predict maximum potential growth rates ( , ), oxygen tolerances ( ), and temperature optima ( ), among other traits. However, in those cases, data from cultivated isolates were used to develop the models linking genomic attributes to the trait values of interest. This represents an important bottleneck given that well-characterized cultivated isolates represent only a fraction of the phylogenetic and ecological diversity found in many environments. Instead, we demonstrate how biogeographical information, specifically distributions of taxa across environmental gradients of interest, combined with data from representative genomes, can be used to predict environmental preferences and identify the specific adaptations that may be associated with these ecological attributes. We expect that a similar approach could be used to quantify other relevant traits that have traditionally been difficult to infer directly for uncultivated taxa, including tolerances to changes in moisture, salinity, or heavy metals and other potentially toxic compounds.

The machine learning model presented here has the potential to aid the rational design of microbiomes where information on the pH preferences of bacterial taxa is needed. For example, several studies with N2-fixing rhizobia have successfully improved the symbiotic benefits to legume crops via the isolation and inoculation of acid-resistant Rhizobium and Bradyrhizobium strains ( , ). Forecasting invasive species spread can also benefit from genome-based predictions of pH ( – ), given the likely importance of pH as a factor limiting bacterial colonization of habitats. In addition, by predicting bacterial pH preferences, our model can aid the optimization of culturing conditions for any bacterial taxon with available genomic information. The coupling of biogeographical and genomic information can be successfully used to predict the environmental preferences of bacterial taxa, presenting opportunities for improving our trait-based understanding of microbial life.
Materials and Methods

Datasets and sequence processing

We compiled five 16S rRNA gene sequencing datasets that spanned broad gradients in pH and represented distinct ecosystem types. These datasets included two previously published soil datasets [soils collected from across Panama and across Australia; ( )], two previously published freshwater datasets [samples collected from streams, rivers, and lakes in Canada; ( , )], and one dataset of soils collected from the Savannah River Site (SRS) in South Carolina, United States. For the SRS dataset, we collected soil samples in May 2021 from patches of savanna and surrounding forests of longleaf and loblolly pine that have been maintained since 2000 as part of the Corridor Project experiment [e.g., ( )]. The SRS is a National Environmental Research Park. It is a U.S. Department of Energy site that is managed by the U.S. Department of Agriculture Forest Service. Each soil sample consisted of a homogenate of eight 5-cm-deep soil core subsamples. We extracted DNA using the DNeasy PowerSoil HTP 96 Kit (Qiagen) from well-mixed soil slurries consisting of 1 g of soil and 2 ml of deionized and autoclaved water. We amplified the V4 region of the 16S rRNA gene using universal primers 515 forward and 806 reverse in duplicate polymerase chain reactions (PCRs). We cleaned and normalized amplicon samples using the SequalPrep Normalization Plate Kit (Applied Biosystems, Waltham, MA, USA) and sequenced a total of 240 soil samples, 22 extraction blanks (400 μl of deionized and autoclaved water), and three PCR negative controls using paired-end Illumina MiSeq sequencing (300-cycle flow cell). For pH measurements, we created soil slurries consisting of 1 g of soil and 10 ml of deionized water, vortexed the slurries at maximum speed for 20 s, and then let them rest for about 1 hour. The five datasets were selected because each encompassed a broad range in measured sample pH values, each included a sufficiently large number of samples across the five pH gradients (>200 samples per dataset), and pH was strongly correlated with differences in overall bacterial community composition across each dataset (fig. S1). For all but the SRS dataset, we compiled the metadata from open-source databases and from the authors upon request and downloaded the DNA sequences from the Sequence Read Archive (SRA) of the National Center for Biotechnology Information. The raw sequences of the SRS project were deposited in the SRA under BioProject ID PRJNA898410. Additional details on these datasets are provided in fig. S1.

All datasets were analyzed using the same bioinformatic pipeline. Briefly, primers, adapters, and ambiguous bases were initially removed from the 16S rRNA gene reads using cutadapt [v1.18; ( )]. Sequences were then quality-filtered, and ASVs were inferred using the DADA2 pipeline [v1.14.1; ( )]. Chimeric sequences were also removed, and taxonomic affiliations were determined against the SILVA SSU database [release 138; ( )]. The outputs were loaded into R ( ) using the phyloseq package [v1.38.0; ( )] for downstream analyses.

Statistical inference of pH preferences

Singletons (ASVs represented by only a single read within a given dataset) were removed, and samples were rarefied to the minimum read number that ensured sufficient sequencing depth based on rarefaction curves (see table S1). As archaeal reads were relatively rare in most of these samples, we focused our analyses only on bacteria.
Those bacterial ASVs that occurred in fewer than 20 samples per dataset were excluded from the analysis, as ASVs needed to occur in a sufficiently large number of samples to effectively infer pH preferences (2 to 10% of ASVs passed this minimum threshold of occurrence per dataset; table S1).

To infer pH preferences for individual ASVs in each dataset, we first generated 1000 randomized distributions of the relative abundance of each ASV across samples with replacement (i.e., each relative abundance value could be sampled more than once) and calculated the maximum value for each of these distributions. We then calculated 95% confidence intervals of these relative abundance maxima using the boot package (v1.3.28) in R. The extremes of these intervals in relative abundance maxima were then matched to the pH of the samples where these ASVs achieved these relative abundance values, thus obtaining an estimated range of preferential pH for each ASV (i.e., the range of pH in which a given ASV consistently achieved maximal abundance across randomizations). All ASVs with inferred pH preferences that had ranges greater than 0.5 pH units were removed from downstream analyses, as we were not confident that we could accurately infer a specific pH preference for those taxa. This stems from the assumption that we could only obtain confident statistical inferences for taxa with narrower pH preferences, excluding taxa with broader ranges in pH preferences to yield more accurate inferences. We visually inspected whether the abundance of each ASV in the nonrandomized dataset was within the identified preferred pH range to verify that our pH range filter was sufficiently stringent to exclude unreliable associations between ASVs and pH preferences. The midpoint of the pH range was taken as the estimated pH preference for each ASV. Our preliminary tests showed this bootstrapping approach to be more robust to zero inflation compared to alternative randomization approaches, generalized additive models, or logistic regression model fits that have previously been used for estimating environmental optima ( ). We note that many factors, besides just pH, can influence the distributions of bacteria, and we restricted our analyses just to those taxa (ASVs) for which pH preferences could effectively be determined (4568 ASVs in total, 468 to 1614 ASVs per dataset; table S1).
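The bootstrapping procedure above can be paraphrased in Python as follows. This is a simplified sketch, not the exact implementation (the actual analysis used R's boot package), and the percentile-based confidence interval, the matching step, and all variable names are our assumptions.

import numpy as np

def infer_ph_preference(abundances, sample_ph, n_boot=1000, max_range=0.5, seed=0):
    """Simplified bootstrap inference of an ASV's pH preference.

    abundances: 1D numpy array of the ASV's relative abundance across samples.
    sample_ph: 1D numpy array of the pH of each sample (same order).
    Returns the midpoint of the preferred pH range, or None if the
    implied range spans more than max_range pH units.
    """
    rng = np.random.default_rng(seed)
    n = abundances.size
    maxima = np.empty(n_boot)
    for b in range(n_boot):
        resample = abundances[rng.integers(0, n, size=n)]  # sample with replacement
        maxima[b] = resample.max()
    lo, hi = np.percentile(maxima, [2.5, 97.5])  # 95% CI of the abundance maxima

    # Match the CI bounds back to the pH of the samples where the ASV
    # achieves abundances within that interval.
    in_range = (abundances >= lo) & (abundances <= hi)
    if not in_range.any():
        return None
    ph_vals = sample_ph[in_range]
    if ph_vals.max() - ph_vals.min() > max_range:
        return None  # preference too broad to infer confidently; exclude
    return (ph_vals.max() + ph_vals.min()) / 2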
Genome search and annotation

The ASVs with estimated pH preferences were matched to the annotated GTDB [release 207; ( )], using vsearch [v2.21.1; ( )] to identify representative genomes. We acknowledge that the representative genome identified for any given ASV will not necessarily be an identical match to the genome of that taxon in situ, as even bacterial taxa with identical 16S rRNA genes can have distinct genomes ( ). However, this limitation poses a challenge to nearly all genomic analyses of bacteria from environmental samples, as even collections of MAGs would not necessarily capture the genomic heterogeneity that could exist at finer levels of taxonomic resolution. We also note that, when selecting representative genomes, we only allowed a single–base pair mismatch between the genomes and the 16S rRNA gene amplicons, corresponding to a conservative 99.6% sequence similarity for the 250–base pair amplicons. In situations where a single ASV matched multiple genomes with the same similarity, we selected the most complete genome.

Predicted coding sequences for the ~62,000 unique bacterial genomes ("species clusters") available in the GTDB representative set were identified using Prodigal [v2.6.3; ( )]. The predicted coding sequences for each genome were aligned to the Pfam database [v35.0; ( )] using HMMER [v3; ( )] to obtain annotations of potential domains, genes, and gene families in each coding sequence. We discarded matches with a bit score lower than 10. This pipeline yielded a list of putative genes and domains found in each of the GTDB genomes that matched the ASVs with estimated pH preferences identified from the samples included in this study (580 genomes in total, 57 to 293 genomes per dataset; table S1). The copy numbers of genes and domains were binarized to presence/absence for further analyses.

Identification of genes associated with pH preference

We next determined associations between the estimated pH preferences of ASVs and the annotated genes in the corresponding representative genomes. We identified genes associated with pH preference by fitting generalized linear models with a binomial distribution and the logit link function using core R functions. For each binarized gene function, we fitted a single model with the presence/absence of that gene type as the response variable and the estimated pH preference of each genome as the independent variable. We evaluated the statistical significance of the model coefficients using the Wald test as implemented with R core functions. We obtained the slope (positive or negative) of the relationship between pH preference and the presence/absence of each gene from the model estimates. We verified that the model estimates reliably reflected whether the relationship was positive or negative by plotting the proportion of genomes containing a particular gene type against the estimated pH preferences of those genomes. Last, we filtered out genes that had nonsignificant associations with pH preference in all datasets as well as those gene types that had significant associations in only one of the five datasets (as our goal was to identify consistent associations between genes and pH preferences across multiple datasets). We thus considered those gene types with the same significant association (P < 0.05 and the same positive or negative direction of the model estimate) in two or more datasets to be associated with bacterial pH preferences. We verified that the phylogenetic signal of pH preference among the representative genomes was very weak (in two datasets, Pearson's r ~ 0.06 to 0.07) or nonexistent (in three datasets) to justify not penalizing the generalized linear model estimates with phylogenetic information (fig. S8).
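For illustration, the per-gene logistic models described above can be reproduced with statsmodels in Python (the study itself used core R functions). The input file and column names below are hypothetical.

import pandas as pd
import statsmodels.api as sm

# Hypothetical inputs: a genomes x genes presence/absence (0/1) matrix and
# the inferred pH preference of each genome, with matching row order.
genes = pd.read_csv("gene_presence.csv", index_col=0)
ph = pd.read_csv("ph_preferences.csv", index_col=0)["ph_preference"]

X = sm.add_constant(ph)  # intercept plus pH preference as the predictor
records = []
for gene in genes.columns:
    # Binomial family with the default logit link; one model per gene type.
    fit = sm.GLM(genes[gene], X, family=sm.families.Binomial()).fit()
    records.append({"gene": gene,
                    "estimate": fit.params["ph_preference"],   # sign gives direction
                    "p_value": fit.pvalues["ph_preference"]})  # Wald test

assoc = pd.DataFrame(records)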
Phylogenetic visualization and analysis

We generated phylogenetic trees for each dataset that included only those ASVs for which pH preferences could be inferred. For each of those ASVs, we selected the corresponding highest-quality full-length 16S rRNA gene sequence from the SILVA SSU database [release 138; ( )], aligned those sequences with MUSCLE [v5; ( )], and created maximum likelihood (ML) trees using the RAxML approach with standard parameters [v8; ( )]. We visualized and edited the trees using iTOL [v6; ( )]. We additionally tested whether pH preference had a phylogenetic signal by calculating Blomberg's K ( ) and Pagel's λ ( ), with the depth at which pH preference had a phylogenetic signal estimated using phylogenetic correlograms ( ). These three types of phylogenetic analyses were conducted using the R package phylosignal [v1.3; ( )] and the ML trees constructed as described above with all ASVs from a given dataset for which pH preference could be inferred (table S1).

Prediction and independent validation of pH preferences from genomic information

We used the full set of genes associated with pH preference across habitats to train a gradient-boosted decision tree model to predict bacterial pH preference from genomic information. The gene type presence/absence table was imported into Python, and pH preferences were predicted for all ASVs from a one-hot encoded gene matrix using gradient-boosted decision trees created with the XGBRegressor function from Python's xgboost package [v1.6.2; ( )]. Hyperparameter optimization was implemented with the hyperopt package [v0.2.7; ( )] to select the best hyperparameters for the XGBRegressor model. We validated the accuracy of the model by measuring its MAE on an independent 10% of the genomes in each dataset (i.e., the test set). We estimated the impact of each gene type on model predictions using Shapley additive explanations as integrated in Python's xgboost package. We also tested the model using a dataset of bacterial taxa with estimated pH preferences from soils across the United Kingdom ( ), from which we selected taxa with unimodal relationships between abundance and pH and therefore more confident estimation of pH preference [models IV and V in ( )]. For each of these taxa, we obtained reference genomes using the same approach described for the ASVs in the other datasets and recorded the presence/absence of the predictive genes in those genomes to run the model.
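A condensed sketch of this training and validation loop is shown below. The input file names are hypothetical, and the specific hyperparameter values are placeholders (the actual values were selected with hyperopt); only the held-out 10% evaluation mirrors the text.

import pandas as pd
import xgboost as xgb
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one-hot presence/absence of the predictive genes per
# genome, plus each genome's inferred pH preference.
X = pd.read_csv("gene_presence_56.csv", index_col=0)
y = pd.read_csv("ph_preferences.csv", index_col=0)["ph_preference"]

# Hold out an independent 10% of genomes for validation, as in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

# Placeholder hyperparameters; the study tuned these with hyperopt.
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

print("Validation MAE:", mean_absolute_error(y_te, model.predict(X_te)))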
Functional characterization and molecular fingerprinting of potential phosphate solubilizing bacterial candidates from Shisham rhizosphere
Shisham (Dalbergia sissoo Roxb.) is a nitrogen-fixing tree species belonging to the family Fabaceae. Like nitrogen-fixing agricultural crops, it forms root nodules through symbiosis with rhizobia and is extensively used in commercial practice. Shisham is an important timber tree, growing throughout the sub-Himalayan tract up to an altitude of 1200 m. It is a valued tree species, and its global popularity has increased greatly over the past few decades owing to its fast growth, multipurpose uses, and nitrogen-fixing ability. Shisham is used for its high-quality timber and fuelwood with various byproducts, as well as in intercropping systems to maximize yield in forage-based farming systems. A decline in Shisham productivity affects the income of rural families and the wider economy. Wilting is one of the most devastating plant diseases worldwide and a major cause of Shisham mortality. Soilborne pathogens are important production constraints that reduce growth, cause yield loss, and threaten both young and adult tree populations. The declining Shisham population can be effectively protected by the application of functional bacteria.

The rhizosphere represents a zone of intense plant–microbe interaction, and among the microbes, bacteria are the most abundant taxonomic group. Rhizospheric bacteria that exhibit plant growth promotion characteristics are known as plant growth promoting rhizobacteria (PGPR). Rhizobacteria promote plant growth directly by synthesizing plant growth hormones and enhancing nutrient uptake, or indirectly by inhibiting phytopathogen attack, among many other mechanisms. Rhizobacteria that promote plant growth and provide protection from a wide range of plant pathogens via several direct and indirect modes of action are called microbial biological control agents (MBCA). PGPR are potential agents for suppressing several phytopathogens and inducing systemic resistance against nematodes and insects via the synthesis of antimicrobial metabolites. In addition, other mechanisms of beneficial bacteria, such as competition, interference with host immunity to establish a mutualistic association with the host, and antagonism, can protect plants against pathogen attack. Berg and Koskella reported that beneficial members of the plant microbiome can contribute to boosting host immune functions. Moreover, plant immunity may play a major role in determining the growth and accommodation of beneficial microbes, which further contributes to the assembly of a stable microbial community inside the plant as well as in the root zone, thus playing a crucial role in regulating variations in microbiota composition. However, the microbial composition of the rhizospheric region is determined mainly by plant secondary metabolites and root exudates.

Several PGPR (Rhizobium, Burkholderia, Klebsiella, Pseudomonas, Azotobacter, and Bacillus) are reported for N2 fixation, P solubilization, siderophore production, zinc solubilization, and phytohormone production. Bacteria solubilize insoluble phosphate in the medium by oxidizing glucose to gluconic acid or its derivative, 2-ketogluconic acid. The acid produced lowers the soil pH, which aids the mineralization of phosphate and makes it available to plant roots. The beneficial effects of P-solubilizing bacteria on crops have been evaluated by Raymond.
The numerous applications of PSB make it essential to explore their diversity, which may help in designing alternative strategies and in using potent strains as bioinoculants. Moreover, community structure is affected by several factors such as host interaction, fertilizer application, irrigation, and climate . To identify endogenous PSB with a greater ability to survive under stress conditions and to develop them as biofertilizers for diverse crops, the bacterial diversity among them must be studied to assess the extent of changes in the bacterial community. Knowledge of the molecular diversity of PSB can aid the selection of the dominant P-solubilizing strains used as biofertilizers. Various organic acids, viz. gluconic acid, citric acid, malic acid, oxalic acid, fumaric acid, malonic acid, tartaric acid, propionic acid, glyoxylic acid, butyric acid, glutaric acid and adipic acid, have been reported to solubilize phosphate, but among these gluconic acid is the one most commonly produced by phosphate solubilizing bacteria , . In bacteria, gluconic acid is produced mainly by the enzyme glucose dehydrogenase, encoded by the gcd (glucose dehydrogenase) gene, via the direct oxidation pathway . A cofactor, pyrroloquinoline quinone (PQQ), encoded by the pqq operon consisting of six core genes ( pqqA-F ), is required for the effective functioning of the GDH enzyme . Cloning and expression of the genes involved in PQQ biosynthesis demonstrated the importance of gluconic acid and its derivative 2-ketogluconic acid in phosphate solubilization . Sonnenburg and Sonnenburg suggested that the signature genes primarily involved in the pqq biosynthesis pathway are pqqA , pqqC , pqqD , and pqqE , as recognized by gene knockout experiments. The majority of identified pqq genes in bacterial isolates belong to the α, β and γ classes of Proteobacteria and are primarily present in gram-negative bacteria . Bacterial genera commonly found to carry the PQQ genes include Acinetobacter, Azotobacter, Beijerinckia, Bradyrhizobium, Burkholderia, Erwinia, Gluconoacetobacter, Klebsiella, Gluconobacter, Methylobacillus, Methylobacterium, Mycobacterium, Pseudomonas, Rhizobium, Streptomyces, and Xanthomonas . Growth conditions such as a high glucose concentration as the carbon source and a high insoluble phosphate level significantly affect glucose dehydrogenase biosynthesis and PQQ levels . The PSB colonizing the rhizosphere of Shisham trees, and their effects on plant growth under stress conditions, remain underexplored. It is therefore necessary to investigate the effect of P solubilizing bacterial diversity on soil health and the mechanisms involved in the rhizospheric region. In the current study, we aimed to explore the rhizosphere of Shisham trees in various unexplored soils and screen for the P-solubilizing bacteria most effective in mitigating environmental stress. The primary aims of this study were to: find the P-solubilizing bacteria most effective under various environmental and growth conditions by screening Shisham rhizospheres from different unexplored soils; functionally and molecularly characterize the isolated PSB strains to explore the biodiversity among different rhizospheric regions of Shisham; and validate the corresponding mechanisms and genes involved in P-solubilization.
Soil sampling

Soil samples were collected from three different rhizospheric regions of Shisham forests located at three sites: Pantnagar (29.0222° N latitude, 79.4908° E longitude), Lachhiwala (30.2230° N latitude, 78.0766° E longitude) and Tanakpur (29.0722° N latitude, 80.1066° E longitude) in India. The three sites represent different agroecological zones and niches, each diversified with distinct vegetation cover, soil, and other natural resources. The Shisham trees in the Lachhiwala and Tanakpur forests were healthy, but those in the Pantnagar forest were diseased. From each forest region, three trees within a range of 1–10 m were identified for rhizospheric soil sample collection. Samples were collected in triplicate from the rhizospheric soil (15 cm depth) of Shisham trees during the winter season, pooled to generate a representative composite sample, transferred to the laboratory in sterilized zip-lock soil sampling bags, and kept at − 20 °C until further analysis.

Soil physico-chemical characteristics

Soil samples were air dried for physico-chemical analysis, which included determination of soil pH, electrical conductivity, total organic carbon (TOC), total nitrogen (TN), available potassium (AK) and trace elements such as Fe and Zn , . To verify the results statistically, one-way analysis of variance (ANOVA) was applied at a level of p < 0.05 using SPSS software.

Soil enzymatic assays

Each soil sample was analyzed spectrophotometrically for the contribution of its microbial community (soil microbial enzymes) in the rhizospheric region. The exact concentration of each analyzed soil enzyme was determined by plotting a standard curve. All soil microbial enzymatic assays were performed in triplicate. Dehydrogenase activity was determined as reported by Thalmann . Fluorescein diacetate (FDA) activity was determined according to Inbar et al. . Alkaline and acid phosphomonoesterase activities were assayed according to the method of Tabatabai and Bremner . Urease activity in soil was determined as given by Kandeler and Gerber .

Soil microbial enumeration

Total aerobic bacteria in rhizospheric soil were enumerated through serial dilution and pour plating on Angle's medium, whereas Pikovskaya medium was used for phosphate solubilizing bacteria . The bacterial population per gram of soil was determined by counting colonies and expressing the result as colony forming units (CFU) after 2–3 days of incubation at 30 ± 1 °C. Both media were supplemented with 100 mg L−1 cycloheximide to inhibit fungal growth .

Selection of rhizobacterial isolates based on biochemical and plant growth promotion traits

Biochemical characterization of the bacterial isolates included amylase, urease, nitrate reductase, lipase, xylanase, protease, pectinase, and catalase activity. In vitro PGP traits of the rhizobacterial isolates were assessed as production of siderophore, indoleacetic acid (IAA), ammonia and hydrogen cyanide (HCN), and solubilization of zinc. All biochemical and functional trait analyses followed the protocols described by Joshi et al. .

Phosphate solubilizing efficiency of phosphate solubilizers

Phosphate solubilizing bacteria were isolated by the serial dilution and pour plate technique on Pikovskaya's medium (PK medium). To provide optimum growth conditions, the inoculated plates were incubated at 28 ± 2 °C for 3–4 days. Bacterial colonies surrounded by halo zones were picked and restreaked to obtain pure cultures. All pure cultures were spot inoculated on Pikovskaya medium and incubated at 30 °C for 48 h, and the halo zones surrounding the colonies were measured. The solubilizing efficiency (SE) and solubilization index (SI) of the PSB isolates were calculated as , :

$$\text{Solubilization Efficiency (SE)} = \frac{\text{Diameter of bacterial growth}}{\text{Diameter of clear zone}} \times 100$$

$$\text{Solubilization Index (SI)} = \frac{\text{Diameter of bacterial growth} + \text{Diameter of clear zone}}{\text{Diameter of colony}}$$

Quantitative estimation of phosphorus

Selected bacterial cultures were transferred to 25 mL of National Botanical Research Institute's phosphate growth medium (NBRIP: glucose (10 g L−1), calcium phosphate (5 g L−1), magnesium chloride hexahydrate (5 g L−1), magnesium sulfate heptahydrate (0.25 g L−1), potassium chloride (0.2 g L−1), ammonium sulfate (0.1 g L−1)) and grown for 72 h at 28 ± 1 °C and 120 rpm. After the growth period, the cultures were centrifuged for 15 min at 5000 rpm. Supernatant (1 mL) was placed in a test tube, to which were added sequentially: 60% perchloric acid (0.4 mL); molybdate solution, i.e. 2.5% ammonium molybdate in 5 N H2SO4 (0.4 mL); colouring reagent, i.e. 10 mL of 5% sodium bisulphate, 20% sodium sulphite and 25 g 1-amino-2-naphthol-4-sulphonic acid (0.2 mL); and triple distilled water (TDW; 4 mL). The test tubes were then incubated for 30 min at room temperature. The intensity of the resulting blue color reflects the total concentration of phosphorus and was measured as absorbance at 640 nm .

Molecular characterization, identification, and phylogenetic analysis

Genomic DNA of all 18 isolates was extracted using the alkaline lysis method, and its purity was checked on agarose gel. The 16S rDNA of all 18 bacterial isolates recovered from different provenances of Shisham was amplified from template DNA using forward primer GM3f (5ʹ TACCTTGTTGTTACGACTT 3ʹ) and reverse primer GM4r (5ʹ TACCTTGTTACGACTT 3ʹ). PCR products were electrophoresed in 1.0% agarose gel at 80 mA for 1 h alongside a λ DNA/EcoRI/HindIII double digest ladder . The purified 16S rDNA amplicons were sent to the Biotech Centre UDSC, New Delhi, for sequence analysis. The nucleotide sequences obtained were processed for homology using BLASTn through EzBioCloud's database ( https://www.ezbiocloud.net/identify ) . All sequences were aligned with MEGA7 (Molecular Evolutionary Genetic Analysis version 7.0) software for construction of a phylogenetic tree .

Fingerprinting of selected bacterial isolates

The purified 16S rDNA amplicon of each of the 18 isolates was digested with three tetra-cutter restriction endonucleases, namely Msp I, Alu I, and BsuR I. Each digestion was set up in a 25 µL reaction mixture comprising 20 µL of amplicon, 1X assay buffer for the enzyme, and 1 U/reaction of the restriction endonuclease ( Msp I, Alu I, or fast digest BsuR I). Reaction mixtures with Msp I and Alu I were kept at 37 °C for 2 h, and with fast digest BsuR I for 5 min. The enzymatic reaction was then inactivated by adding loading dye, and the mixtures were kept at − 20 °C. The restriction digestion products were analyzed on 2.5% agarose gel electrophoresed at 60 V, and the band patterns were visualized under a UV gel documentation system .

Amplification of pqqA and pqqC genes

The genomic DNA of selected isolates was amplified using a GeneAmp PCR System 9700 (Applied Biosystems) in a 20 μL volume. The primers used for the pqqA gene were forward primer pqqA-F: 5ʹATGTGGACCAAACCTGCATAC3ʹ and reverse primer pqqA-R: 5ʹGCGGTTAGCGAAGTACATGGT3ʹ, while the primer set for the pqqC gene was forward primer pqqC-F: 5ʹATTACCCTGCAGCACTACAC3ʹ and reverse primer pqqC-R: 5ʹCCAGAGGATATCCAGCTTGAAC3ʹ. The reaction contained 10X assay buffer (1×), MgCl2 (0.5 mM), dNTPs (200 µM), Taq polymerase (1 U), forward and reverse primers (0.3 μM each), and template DNA (50 ng). The cycling conditions for the two PQQ genes were: initial denaturation at 94 °C for 5 min (1 cycle); denaturation at 94 °C for 30 s, annealing at 50 °C for 30 s and extension at 72 °C for 1 min (30 cycles); and final extension at 72 °C for 10 min (1 cycle). The amplified fragments were checked on 2.0% ( w / v ) agarose gel with a 50 bp DNA ladder .

Statistical analysis

The experimental data (qualitative and quantitative) were statistically processed using t-tests (Cochran and approximate t-test). All results are expressed as mean ± SEM. F values for which p < 0.05 were considered significant .
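As a computational aside not present in the original protocol, the SE and SI formulas given above can be applied directly in a few lines of code. The following Python sketch mirrors the formulas exactly as written in this section; the function names and the example measurements are our own and purely illustrative.

```python
# Illustrative only: the two plate metrics defined above, implemented directly.
# Function and variable names are hypothetical; measurements are invented.

def solubilization_efficiency(growth_diameter: float, clear_zone_diameter: float) -> float:
    # SE = (diameter of bacterial growth / diameter of clear zone) * 100,
    # mirroring the formula as given in this section.
    return (growth_diameter / clear_zone_diameter) * 100

def solubilization_index(growth_diameter: float, clear_zone_diameter: float,
                         colony_diameter: float) -> float:
    # SI = (diameter of bacterial growth + diameter of clear zone) / diameter of colony
    return (growth_diameter + clear_zone_diameter) / colony_diameter

# Hypothetical isolate: 0.9 cm colony growth with a 2.1 cm clear zone
print(solubilization_efficiency(0.9, 2.1))   # ~42.9
print(solubilization_index(0.9, 2.1, 0.9))   # ~3.33
```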
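The colorimetric phosphorus assay above yields an absorbance at 640 nm that must be converted to a concentration via a standard curve. A minimal sketch of that conversion is shown below, assuming a linear standard curve; all standard concentrations and absorbance readings are invented for illustration and are not the study's data.

```python
# Minimal sketch (not from the study) of converting A640 readings to soluble P
# via a linear standard curve. Standards and readings below are hypothetical.
import numpy as np

std_conc = np.array([0.0, 100.0, 200.0, 400.0, 800.0])   # µg/mL phosphorus standards
std_abs  = np.array([0.00, 0.11, 0.22, 0.45, 0.91])      # matching A640 readings

slope, intercept = np.polyfit(std_abs, std_conc, 1)       # concentration as f(A640)

def soluble_p(a640: float) -> float:
    # Interpolate an unknown sample's P concentration from its absorbance.
    return slope * a640 + intercept

print(round(soluble_p(0.89), 1))  # e.g. a culture supernatant reading
```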
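The study ran its one-way ANOVA (p < 0.05) in SPSS; an equivalent open-source computation is sketched below with scipy for readers without SPSS access. The triplicate values are placeholders, not measured data.

```python
# Equivalent of the one-way ANOVA (p < 0.05) run in SPSS, sketched with scipy.
# The triplicate values below are placeholders, not the study's data.
from scipy import stats

lachhiwala = [56.1, 56.5, 56.8]   # e.g. available P replicates (hypothetical)
tanakpur   = [46.5, 46.9, 47.2]
pantnagar  = [37.5, 37.9, 38.2]

f_stat, p_value = stats.f_oneway(lachhiwala, tanakpur, pantnagar)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")  # p < 0.05 -> sites differ significantly
```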
Soil physico-chemical analysis

Soil physico-chemical analysis was performed to assess soil nutrient status and health. The macro- and micro-nutrient contents, along with other important parameters (soil type, pH and electrical conductivity), of Shisham rhizospheric soil from the three provenances are presented in Table . The soil texture was silty loam in the Lachhiwala and Tanakpur regions, whereas it was silty clay loam in Pantnagar. Soil pH in Pantnagar (6.85) was comparatively higher than in Lachhiwala (6.00) and Tanakpur (6.12). Electrical conductivity was 0.11 dS m−1 for Lachhiwala, 0.14 dS m−1 for Tanakpur and 0.13 dS m−1 for Pantnagar. Total organic carbon in Pantnagar, Lachhiwala and Tanakpur was 42,750 kg ha−1, 19,500 kg ha−1 and 25,000 kg ha−1 respectively. Available phosphorus in soil was highest in Lachhiwala (56.48 kg ha−1) compared with Pantnagar (37.86 kg ha−1) and Tanakpur (46.87 kg ha−1). Total nitrogen (TN) in Pantnagar, Lachhiwala and Tanakpur was 137.98 kg ha−1, 163.07 kg ha−1 and 100.35 kg ha−1 respectively, while soil potassium was 505.34 kg ha−1, 434.11 kg ha−1 and 520.12 kg ha−1 respectively. Iron (22.6 kg ha−1) and zinc (11 kg ha−1) contents were highest in Tanakpur soil compared with Lachhiwala (Fe: 12.5 kg ha−1; Zn: 9.3 kg ha−1) and Pantnagar (Fe: 11 kg ha−1; Zn: 0.2 kg ha−1) (Table ). Soil nutrient properties were analysed statistically; ANOVA (p < 0.05) revealed highly significant differences between the soil nutrient values at Lachhiwala, Tanakpur and Pantnagar.

Soil enzymatic activities

Alkaline phosphatase, acid phosphatase, fluorescein diacetate (FDA), dehydrogenase and urease activities were assayed in Shisham rhizospheric soils from the forests at the three locations. Alkaline phosphatase activity ranged from 442.8 µg PNP g−1 h−1 at Tanakpur to 1196.2 µg PNP g−1 h−1 at Lachhiwala. Acid phosphatase activity was highest in Lachhiwala (1109.6 µg PNP g−1 h−1), followed by Tanakpur (654.5 µg PNP g−1 h−1) and Pantnagar (574.8 µg PNP g−1 h−1). FDA activity in Lachhiwala, Tanakpur and Pantnagar was 291.2, 372.6 and 325 µg fluorescein g−1 h−1 respectively. Dehydrogenase levels were more than two-fold higher in the Tanakpur forest (4300 µg TPF g−1 h−1) than in the Lachhiwala forest (1880 µg TPF g−1 h−1), while the lowest activity was recorded in the Pantnagar forest (1770 µg TPF g−1 h−1). The maximum urease activity was observed in Shisham rhizosphere soil from the Lachhiwala forest (241 µg NH4+ g−1 h−1), followed by the Pantnagar forest (192.25 µg NH4+ g−1 h−1); the minimum was in rhizosphere soil from the Tanakpur forest (65.78 µg NH4+ g−1 h−1). There was a significant difference (p < 0.05) between the enzyme activities of Shisham rhizosphere soils from the three provenances (Table ).

Soil microbial enumeration

The total population enumerated on Angle's medium in Shisham rhizospheric soil of Tanakpur, Lachhiwala and Pantnagar was 2.76 × 10⁴, 1.87 × 10⁴ and 1.96 × 10⁴ cfu g−1 of soil, while the count of phosphorus solubilizing bacteria was 1.20 × 10⁴, 1.55 × 10⁴ and 1.06 × 10⁴ cfu g−1 of soil respectively. The selected PSB isolates were coded according to their native rhizospheric region (Table ), and their morphological characteristics were recorded (Table ).

Solubilizing efficiency of P solubilizers

Overall, 18 PSBs (eight from Lachhiwala, four from Pantnagar and six from Tanakpur) were recovered on Pikovskaya agar plates from Shisham rhizospheric soil of the different provenances (Fig. ). All eighteen bacterial isolates exhibited zones of solubilization ranging from 1.16 to 4.75 cm on Pikovskaya agar plates (Fig. ). Isolates from the Lachhiwala provenance showed a higher phosphate solubilizing index than those from Tanakpur and Pantnagar; the highest P solubilizing index (PSI) was detected in L4 and the lowest in T4 (Table ). Bacteria-mediated phosphorus solubilization was quantified following the method of Fiske and Subbarow (1925). Of the eighteen isolates, L4 solubilized the highest amount of phosphorus (891.38 µg mL−1) and T4 the lowest (285.78 µg mL−1) (Fig. ). The solubilizing index of the PSBs on Pikovskaya agar plates correlated positively with the amount of P solubilized in NBRIP liquid medium.

Functional characterization of PSB recovered from Shisham rhizosphere

Selected PSB strains were screened for various enzyme activities and plant growth promoting properties. All PSBs exhibited one or more of the enzyme activities amylase, urease, nitrate reductase, lipase, xylanase, protease, pectinase and catalase (Fig. ). Among the eighteen isolates, four (L7, L8, T3 and T5) were positive for amylase production. The urease test was positive for L4, P2, T2 and T6. All isolates except L4, T1, T3, T4, T5 and T6 exhibited nitrate reduction. Five of the eighteen PSBs (L7, L8, P2, T3 and T5) were positive for lipase activity, and only eight (L7, L8, P1, P4, T1, T3, T4 and T5) were positive for xylanase production. Five isolates from Lachhiwala (L1, L2, L5, L7 and L8) and one each from Pantnagar (P2) and Tanakpur (T5) were positive for protease production, as a halo zone was observed around bacterial growth on skim milk agar plates. Six of the eighteen isolates (L1, L5, P1, P3, P4 and T1) were positive for pectinase production. All isolates except L6, T1 and T3 produced catalase, as gas bubbles and effervescence were observed after the addition of drops of H2O2. Among the 18 PSBs, seven were able to solubilize zinc; zinc solubilization efficiency was highest in L3, L5, P2 and T2 and lowest in L4, P3 and P4. Five isolates were positive for siderophore production; orange halos were largest in L7, L8, T1 and T3 and smallest in L1. IAA production was highest in L4, P3, T1, T2 and T4 and lowest in L1, L5, L6, L7, L8, P1, P4, T3 and T5. All isolates except P2 were negative for HCN production. Ammonia production in peptone water, marked by a color change from yellow to orange, was positive for all isolates except L6, L7, T1 and T3. Hence, all bacterial isolates exhibited multiple PGP traits along with inorganic P solubilization (Fig. ; Table ).

Molecular characterization, identification and phylogenetic analysis

PCR amplification of the 16S rDNA gene region of all eighteen PSB isolates recovered from the Shisham rhizosphere of different provenances resulted in a distinct band of 1492 bp on the agarose gel (Fig. ). Bacterial isolates were identified by comparing their 16S rDNA sequences with reference strains using the BLASTn programme. Of the eighteen isolates, seven were identified within the genus Pseudomonas ; three of these were from Lachhiwala (L1, L3 and L5) and four from Pantnagar (P1, P2, P3 and P4). Four isolates were identified as Streptomyces sp. (L6, L7, T3 and T5), two each as Klebsiella sp. (L4 and T2) and Staphylococcus sp. (L2 and T6), and one each as Pantoea sp. (L8), Kitasatospora sp. (T1) and Micrococcus sp. (T4). All eighteen strains thus belonged to seven genera distributed across three phyla, Proteobacteria, Actinobacteria and Firmicutes: Pseudomonas , Klebsiella , Streptomyces , Pantoea , Kitasatospora , Micrococcus and Staphylococcus (Fig. ; Table ). The seven Pseudomonas strains were identified as L1 (98.14% similarity to Pseudomonas simiae strain NR 042392.1), L3 and L5 (99.16% similarity to Pseudomonas paralactis strain KP756923), P1 (98.89% similarity to Pseudomonas hunanensis strain JX545210), P2 (97% similarity to Pseudomonas aeruginosa strain NR 117678.1), P3 (98.14% similarity to Pseudomonas putida strain Z76667.1) and P4 (98.42% similarity to Pseudomonas plecoglossicida strain NR 114226.1). Strain L8 was identified as Pantoea sp. (96.83% similarity to Pantoea conspicua strain NR 116247.1). Strains L4 and T2 were identified as Klebsiella sp. (99.51% similarity to Klebsiella variicola strain CP010523 and 96.37% similarity to Klebsiella singaporensis strain AF250285, respectively). Strain L2 was assigned to Staphylococcus petrasii (97.98% similarity; NR 118450.1) and T6 to Staphylococcus pasteuri (NR 114435.1). Isolates belonging to the phylum Actinobacteria clustered together and included T4 (98.0% similarity to Micrococcus yunnanensis strain NR 116578.1), T1 (93.86% similarity to Kitasatospora kifunensis strain NR 112085.2), L6 (87% similarity to Streptomyces curacoi strain KY585954.1), L7 (95% similarity to Streptomyces cellostaticus strain NR 112304.1), T3 (94.22% similarity to Streptomyces antibioticus strain NR 043348.1) and T5 (97.92% similarity to Streptomyces griseoruber strain NR 041086.1). The 16S rDNA sequences of all eighteen isolates were deposited in NCBI GenBank under accession numbers MG966339–MG966355 (Table ).

DNA fingerprinting of selected bacterial isolates

Isolates were selected and taxonomically identified based on amplified ribosomal DNA restriction analysis (ARDRA) profiles and morphological characters. Restriction of the amplified 16S rDNA with the endonucleases generated DNA fragments of 100–1000 bp. Restriction enzyme Alu I generated 2–4 well resolved bands of 100–700 bp in all eighteen isolates and resolved the 18 strains into eight different genotypes (Fig. a). Restriction with Bsu I produced 2–4 well resolved bands in the range 100–1000 bp and resolved the 18 strains into six different genotypes (Fig. b). The restriction profiles obtained with Msp I comprised one to three well resolved bands in the region of 200–600 bp (Fig. c) and distinguished the eighteen isolates into eight genotypes.

Combined UPGMA dendrogram based on DNA fingerprint profiles

An unweighted pair group method with arithmetic mean (UPGMA) dendrogram based on Jaccard's coefficient was constructed from the ARDRA profiles of the 16S rDNA region with Alu I, Bsu I and Msp I using NTSYSpc version 2.0 software . Restriction profiles were interpreted on the basis of the bands produced, and similar banding patterns obtained by combining the three independent digestions were grouped. The isolates showed higher polymorphism with Alu I and Msp I than with Bsu I: eight different restriction patterns were obtained with Alu I and Msp I, but six with Bsu I. Phylogenetic relationships within the gram-negative and gram-positive isolates were revealed by clustering them separately. In the UPGMA cluster based on RFLP with Alu I, Bsu I and Msp I, all gram-negative strains grouped into two major clusters, A and B (Fig. a). Cluster A included five isolates (L1, L3, L5, P1 and P3) and was divided into two subclusters: subcluster I included L1, L3 and L5, and subcluster II grouped P1 and P3. L3 and L5 in subcluster I exhibited 100% similarity and were related to L1 at a distance of 0.80 on Jaccard's scale. Cluster B included the remaining strains, P4 and P2, related at a distance of 0.60 on Jaccard's scale. A separate dendrogram was constructed for the gram-positive bacteria (Fig. b). Most gram-positive isolates were placed in a single cluster, which was further divided into two subclusters at 0.80 on Jaccard's scale: subcluster I included L6 and L7, whereas subcluster II included T3 and T5. Isolate T4 was placed singly on an outlying branch at a distance of 0.60 on Jaccard's scale, and isolate T1 was distantly related (0.35 on Jaccard's scale) to all the other strains.

Amplification of pqqA and pqqC genes

To confirm the conserved genomic regions ( pqqA and pqqC ) involved in gluconic acid formation, PQQ gene amplification was performed with the designed primers. Of the eighteen bacterial isolates, sixteen showed positive amplification for the pqqC gene (82 bp band), whereas six (L1, L3, L5, P1, P3 and P4) showed positive amplification of the pqqA gene (72 bp band) (Figs. , ). The positive amplification of both pqqC and pqqA in these six isolates suggests that they possess two crucial genes of the PQQ biosynthesis pathway.
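The positive relationship between the plate solubilization index and the amount of P solubilized in NBRIP medium can be checked with a simple paired correlation (the discussion reports t = 15.30069 for this relationship). The sketch below is illustrative only and substitutes synthetic values for the study's 18 paired measurements.

```python
# Sketch of the plate-index vs. liquid-medium correlation; the 18 paired values
# are synthetic stand-ins generated to be roughly linear, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
psi = rng.uniform(1.5, 5.0, size=18)                  # plate solubilization indices
soluble_p = 150 * psi + rng.normal(0, 30, size=18)    # µg/mL in NBRIP (synthetic)

r, p = stats.pearsonr(psi, soluble_p)
print(f"Pearson r = {r:.2f}, p = {p:.2g}")
```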
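The ARDRA clustering step described above (Jaccard's coefficient, UPGMA linkage, computed in NTSYSpc) can be reproduced with scipy; the sketch below uses an invented presence/absence band matrix for four isolates, so labels and values are illustrative rather than the study's profiles.

```python
# Sketch of the ARDRA clustering: Jaccard distances over band presence/absence,
# then UPGMA (average linkage). NTSYSpc was used in the study; scipy is an
# equivalent. The band matrix and isolate labels below are illustrative only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# rows = isolates, columns = scored band positions (True = band present)
bands = np.array([
    [1, 0, 1, 1, 0, 1],   # "L1"
    [1, 0, 1, 1, 0, 1],   # "L3" (identical profile, Jaccard distance 0)
    [0, 1, 1, 0, 1, 0],   # "P2"
    [0, 1, 0, 0, 1, 1],   # "T4"
], dtype=bool)

dist = pdist(bands, metric="jaccard")   # Jaccard distance = 1 - Jaccard similarity
tree = linkage(dist, method="average")  # average linkage = UPGMA
print(tree)                             # or scipy.cluster.hierarchy.dendrogram(tree)
```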
Among microorganisms, bacteria play an important role in biogeochemical cycling. Bacteria solubilize insoluble organic and inorganic phosphates in the soil, making P available to plant roots, and this is considered the most eco-friendly and economical method . PSB are well known for disease suppression, both by synthesizing pathogen-inhibitory compounds and by enhancing the plant immune response. Hence, one aim of this study was to identify PSB that suppress plant disease and enhance plant growth, bringing dual benefits. The Pantnagar soil was silty clay loam with higher pH and carbon but lower phosphorus and micronutrient (Fe and Zn) contents than the other two samples. At the time of sampling, the rate of mortality in Shisham trees was highest in the Pantnagar soil. The reason for this mortality may be the deficiency of micro- and macronutrients in the soil. Micronutrients are essential for the proper functioning of plants as well as for promoting the growth of beneficial microbes in the rhizospheric region . An inadequate supply of micronutrients in soil directly affects the metabolic capacity of plants, which in turn affects their tolerance of biotic and abiotic stress . Macronutrient and micronutrient deficiencies in soil reduce the yield of crops and plants and invite disease and its propagation , . Hence, the low nutrient status (low P, Fe and Zn) of the Pantnagar soil might be associated with disease incidence and spread. Correlation analysis showed that the P solubilizing index and the amount of soluble phosphorus in liquid NBRIP medium shared a highly significant relationship (t value = 15.30069), indicating that the strains with the highest potential to solubilize Ca3(PO4)2 in liquid media were the same ones that exhibited the greatest halos. Moreover, the slightly higher soil pH could also contribute to the mortality in the Pantnagar Shisham forest: higher soil pH hinders the availability of phosphorus to plants and alters the biological, geological and chemical environment of the soil, which leads to plant disease . Soil enzyme activities and nutrient status are closely related; soil organic carbon (SOC), phosphorus, nitrogen, potassium and other essential micronutrients significantly affect soil enzyme activities . In the present study, fluorescein diacetate (FDA) and dehydrogenase activities correlated with the culturable microbial population, or respiratory metabolism : both were higher in the Shisham rhizosphere from Tanakpur, where the aerobic bacterial population was also highest. Soil phosphatase activity is pH sensitive and depends on the abundance and diversity of the resident soil microflora . Acid phosphatase, alkaline phosphatase and urease activities were higher in the Shisham rhizosphere from Lachhiwala, reflecting its larger total microbial population. Increasing carbon levels encourage pathogens to colonize the rhizosphere, whereas greater nutrient availability may favor beneficial bacteria; this indicates that it is not the individual C and nutrient contents but their ratio that shapes the rhizosphere microbiome and ultimately alters soil enzyme status. Organic phosphate is solubilized by a group of phosphatase enzymes including acid and alkaline phosphatases, phytases and nucleotidases , among which the extracellular acid and alkaline phosphatases play a key role in solubilization. In terrestrial ecosystems, acid phosphatase is synthesized primarily by plant roots and microbial action, whereas alkaline phosphatase is synthesized by microbes . Li et al. studied the roles of acid and alkaline phosphatase in a subalpine forest region and found that alkaline phosphatase, rather than acid phosphatase, actively participates in the mineralization and solubilization of phosphorus, making it available to plant roots. The present study indicates that differences in abiotic environmental factors and in the total organic carbon content of the soil among provenances can significantly affect the PSB population in the rhizospheric region. Microbial population density in the rhizosphere depends on several factors, such as the physico-chemical properties of the soil, soil water potential, changes in soil pH, the partial pressure of oxygen and the chemical composition of plant exudates . Microbial enzymes such as amylase, xylanase, lipase, pectinase and protease are actively involved in organic matter decomposition and plant growth promotion and are important in disease suppression , . Bacterial genera such as Pseudomonas, Micrococcus, Paenibacillus, Streptococcus, Curtobacterium and Chryseobacterium are reported to produce hydrolytic enzymes that degrade the cell walls of pathogenic organisms . Of the eighteen isolates recovered in the present study, seven were positive for Zn solubilization; the production of organic acids is the predominant mechanism of Zn solubilization by rhizobacteria . Five isolates exhibited yellow-to-orange halo zones on CAS-amended nutrient agar plates, indicating siderophore production. Siderophores may enhance plant growth by mobilizing metal cations including Fe and Cu, and may indirectly stimulate P solubilization and disease suppression , . Siderophore-positive PGPRs scavenge Fe3+ from complex compounds under iron starvation and thus indirectly release P into the soil ; moreover, they deprive phytopathogens of iron and hence contribute to disease suppression . In the present study, fourteen isolates were potent IAA producers. IAA production by bacteria enhances root growth, which leads to increased nutrient uptake by plants . The capacity for IAA production varies among microbial species and is also affected by substrate availability, culture conditions and growth stage . HCN is also reported to play a crucial role in disease suppression , and ammonia promotes plant growth by providing N to plants and suppressing plant pathogens . The isolated bacterial strains were related to the genera Streptomyces , Pseudomonas , Klebsiella , Staphylococcus , Kitasatospora , Pantoea and Micrococcus . Several members of these genera have been identified as exhibiting plant growth promoting, P solubilizing and biocontrol properties, for example Pantoea , Pseudomonas and Streptomyces , Klebsiella and Micrococcus , , and Kitasatospora , which has been reported to confer resistance to pest attack and to promote growth in teak ( Tectona grandis ), a valuable tree species . Sixteen bacterial isolates showed positive amplification for the 82 bp pqqC gene, whereas six showed the 72 bp pqqA gene. The isolates that yielded a pqqA amplicon were also positive for the pqqC gene, suggesting that they possess two crucial genes of the PQQ biosynthesis pathway. The PQQ operon ( pqqA-pqqF ) is organized differently in different PSB isolates: in the PQQ operon of Acinetobacter calcoaceticus the pqqF gene is absent , , while in P. fluorescens B16 the PQQ operon comprises 11 genes, namely pqqA , B , C , D , E , F , H , I , J , K and pqqM . Hence, the presence of the pqqA and pqqC genes marks these bacterial isolates as prominent candidates for the solubilization of insoluble phosphate; the presence of the pqqA , pqqC , pqqD and pqqE genes is a prerequisite for P solubilization in PSB isolates . The pqqA gene encodes a 22-amino-acid peptide whose glutamic acid and tyrosine residues provide carbon and nitrogen for PQQ biosynthesis , . The pqqC gene encodes pyrroloquinoline quinone synthase C (PqqC), which catalyzes the conversion of 3a-(2-amino-2-carboxy-ethyl)-4,5-dioxo-4,5,6,7,8,9-hexahydroquinoline-7,9-dicarboxylic acid to pyrroloquinoline quinone , . We can therefore conclude that the selected bacterial isolates probably follow a gluconic acid-mediated mechanism for solubilizing insoluble P in soil. pqqC is ubiquitous in Pseudomonas species . High PQQ-producing strains have been identified in diverse genera, including Mycobacterium, Acinetobacter, Hyphomicrobium, Gluconobacter, Klebsiella, Polyporus, Ancylobacter, Pseudomonas, Xanthobacter, Methylobacillus, Paracoccus, Methylophilus, Methylobacterium, Thiobacillus and Methylovorus . In the present study, several strains showed no amplification of the pqqA and pqqC genes yet still solubilized phosphorus on Pikovskaya medium. A possible reason is that these strains solubilize phosphate via the secretion of organic acids other than gluconic acid, such as isovaleric acid, lactic acid, isobutyric acid, glycolic acid, acetic acid, oxalic acid, succinic acid and malonic acid. Bacteria such as E. coli JM109 (genetically modified), Synechococcus PCC7942 (phosphoenolpyruvate carboxylase, ppc ), Serratia marcescens and Pseudomonas cepacia ( gabY ) solubilize P via pathways or genes other than PQQ – . Our findings therefore suggest that nutrient deficiency, excess carbon availability and high pH invite pathogenic microorganisms, the main cause of wilt in the Pantnagar soil. Most of the selected bacterial strains have previously been reported for P solubilization; the mechanism of P solubilization through signature genes such as pqqA and pqqC is reported here for the first time for the Shisham forest region.
In this study we found that the nature of the soil and its native microbial community play a crucial role in plant growth and protection. To address mortality in forest soils, it is necessary to analyze the physicochemical and biological properties of the soil. Deficiencies of macronutrients and micronutrients, and alterations in soil pH and soil enzymes, may invite various plant diseases and pathogens. Identifying and enriching the best PGPR strains could minimize the mortality of Shisham trees and help enhance biodiversity. Amplification of the phosphate solubilizing genes ( pqqA and pqqC ) in bacterial strains provides strong evidence for the mechanism of phosphate solubilization and their potent solubilizing efficiency. Our findings therefore suggest that bioformulations of these bacterial isolates could mitigate phosphate deficiency and promote plant yield both directly and indirectly.
|
Discordance between PAM50 intrinsic subtyping and immunohistochemistry in South African women with breast cancer
|
722bc8f5-470e-478d-87f2-2d41871f4827
|
10147771
|
Anatomy[mh]
|
Breast cancer is the most commonly diagnosed cancer among South African women, accounting for 27.1% of all cancers diagnosed in these women . Breast cancer diagnoses on the African continent have been increasing steadily over the past decades, attributed to longer lifespans and lifestyle changes associated with westernization. In Africa, mortality rates are higher than in Europe and the United States, largely due to late stage at diagnosis and fewer treatment options . Breast cancer is a heterogeneous disease, differing in gene expression patterns, growth rates, responses to treatment and clinical outcomes. Breast tumors can be subtyped by immunohistochemistry (IHC), which investigates the expression of four biomarkers: estrogen receptor (ER), progesterone receptor (PR), human epidermal growth factor receptor 2 (HER2/neu) and a marker of proliferation, Ki67. These markers distinguish ER-positive A-like, ER-positive B-like, HER2-enriched and triple negative (TNC) tumors (Table ) . Analysis of hormone receptor (ER and PR) expression is a semi-quantitative method based on the Allred score . The proliferation marker Ki67 is used to distinguish between the luminal subtypes and was adopted as a marker by the St Gallen International Consensus on Breast Cancer . Ki67 was introduced as a diagnostic marker in South Africa in 2013, with expression ≥ 14% considered indicative of proliferation . However, the optimal Ki67 cutoff to distinguish luminal A-like from luminal B-like tumors remains controversial because of uncertainty about how to classify tumors with intermediate (10–30%) Ki67 levels . The 2015 St Gallen consensus suggested that a cutoff of 20–29% be used to distinguish A-like and B-like subtypes, along with clinical validation . In addition, IHC-based Ki67 analysis lacks reproducibility across laboratories . Immunohistochemical results can be affected by the duration of fixation, the type of fixative used, the speed of the assay and the completeness of dehydration . Moreover, the assessment is subject to interpretation by the histopathologist. In South Africa, the Department of Health 2018 recommendations are to use a Ki67 cutoff of 14% , although there is ongoing debate about the best Ki67 cutoff to distinguish between the luminal subtypes . Ki67 cutoffs of both 14% and 20% are currently used at different centers. The last decade has seen the development of many commercial multigene tests to guide treatment and provide prognostic information for patients with breast cancer. The PAM50/Prosigna assay uses a 50-gene signature that groups tumors into the intrinsic molecular subtypes luminal-A, luminal-B, HER2-enriched and basal-like . The PAM50 assay is less subjective than IHC-based techniques, but much more expensive and labor intensive. In South African public hospitals, IHC continues to be used for clinical subtyping because of its lower cost. A recent South African study found that 64.9% of patients were diagnosed by IHC4 as B-like, 15.3% as TNC, 13.8% as A-like, and 6.0% as HER2-enriched . An earlier country-wide study found that black South African women had higher levels of ER-negative and PR-negative tumors than women of European, South Asian or admixed heritage, but did not have significantly different HER2 levels .
More recently, a study showed that white South African women had IHC profiles similar to those of European women and white American women, with more aggressive subtypes predominant in young women and less aggressive subtypes in older women, whereas black South African women did not have substantial profile changes according to age. This study examines the concordance between assigned PAM50 molecular subtypes and the IHC results currently used for the management of breast cancer diagnosed within the South African Public Health System, focusing on varying Ki67 cutoffs. The data generated should help to inform cutoff values for IHC and may lead to better management of breast cancer in South Africa and other settings where genomic subtyping is unaffordable.
Study participants
The South African Breast Cancer and HIV Outcomes (SABCHO) cohort studied patients recruited at the breast clinic of Chris Hani Baragwanath Academic Hospital (CHBAH), Soweto, South Africa. Participants were consenting women with biopsy-confirmed breast cancer who self-identified as Black African. Exclusion criteria were age < 18 years or current pregnancy. Clinical staging was according to the American Joint Committee on Cancer (AJCC) system. The study was approved by the Human Research Ethics Committee (Medical) at the University of the Witwatersrand (M161116).
IHC classification of tumors
Histopathological characteristics for 384 patients, obtained from the National Health Laboratory Service (NHLS), included histological type, tumor grade, ER, PR, HER2 scoring and Ki67. All tissues for this study were processed at the CHBAH NHLS Laboratory, following College of American Pathologists guidelines. Immunostaining was performed on the Benchmark XT automatic platform. The tumors were classified according to the St Gallen Guidelines. The Allred score was used to determine ER/PR status, with a value of 0–2 considered negative, and 3–8 considered positive. Tumors were HER2 positive if they scored 3+ by IHC, or 2+ by IHC with fluorescent in situ hybridization (FISH) confirmation. The Ki67 antibody used was 30-9 (Roche Diagnostics, Ventana, USA), and multiple scorers at the same laboratory assessed the Ki67 stains. Percentage of proliferation was determined by visual estimation. The cutoff for the proliferation marker Ki67 is unresolved. The multidisciplinary team at CHBAH uses a Ki67 score of 20%, in conjunction with the Allred score, grade and age of the patient, as a cutoff for chemotherapeutic treatment in HR-positive breast cancers. We additionally explored cutoffs of 10%, 15%, 20%, 25% and 30%, because of the uncertainty surrounding those values for clinical decision making. We assigned IHC used for clinical decision making as follows: Clin-A (HR+/HER2-/Ki67 ≤ 14%); Clin-B (HR+/HER2-/Ki67 > 14%, or HR+/HER2+/Ki67 any); Clin-HER2 (HR-/HER2+/Ki67 any); and Clin-TNC (HR-/HER2-/Ki67 any). The IHC subtyping surrogates were assigned as: A-like (HR+/HER2-/Ki67 ≤ 10%); A- or B-like (HR+/HER2-/10% < Ki67 ≤ 30%); B-like (HR+/HER2-/Ki67 > 30%); B/HER2-like (HR+/HER2+/Ki67 any); HER2-like (HR-/HER2+/Ki67 any); TNC (HR-/HER2-/Ki67 any). Both the clinical IHC subtypes and the IHC subtyping surrogates were compared with the PAM50 intrinsic subtypes: luminal-A, luminal-B, HER2-enriched and basal-like (Table ).
PAM50 intrinsic subtyping
FFPE blocks were cut into 5 µm serial sections; the area of tumor was identified and marked on an H&E section. If available, primary surgery blocks were preferentially chosen. If the surgery section was unavailable, or if the patient received neoadjuvant chemotherapy or radiation therapy prior to surgery, a biopsy section was used. RNA was purified from the FFPE sections using the All Prep® DNA/RNA FFPE kit (Qiagen, Hilden, Germany). The RNA concentration was calculated using the optical density at 260 nm on the Nanodrop 2000™ spectrophotometer (Thermo Fisher Scientific, Waltham, MA). The extract was deemed suitable for further analysis if the concentration of RNA was greater than 12.5 ng/µl and the A260/280 ratio was 1.7–2.3. Following RNA extraction, 384 samples were of sufficient quantity and quality for molecular typing.
The PAM50 gene expression was measured on the nCounter SPRINT™ (Nanostring Technologies, Seattle, WA), as per the Prosigna® Breast Cancer Prognostic Gene Signature Assay package insert. (The 50 genes and 8 housekeeping genes are shown in supplementary Table S1, and an example of the resultant heat map is shown in supplementary Fig. S1.) nSolver 4.0 was used to retrieve the RCC files and perform QC analysis, background subtraction and normalization. Of the 384 samples, 378 passed QC and underwent further analysis; classification of intrinsic subtype was done at Nanostring (Seattle, WA). Quality control (QC) of the data was performed by NanoString Technologies, Inc. using their proprietary software, nSolver. For mRNA samples, as used in this study, QC is performed at a number of stages. Imaging QC flags samples if less than 75% of the imaging surface can be read. Binding density QC calculates the barcodes/micron²; samples with binding densities between 0.05 and 1.8 are usable, with optimal binding densities being around 1.4 barcodes/micron². The PAM50 panel includes both positive and negative controls, which are assessed by geometric mean. Positive controls are synthetic RNA targets, spiked in at known concentrations, that are used to ensure proper hybridization and lack of RNase contamination in the samples and to establish limits of detection (the 0.5 fM positive control must be more than 2 standard deviations above the mean of the negative controls to pass QC). Positive controls are also used in normalization QC by generating scaling factors that must be between 0.3 and 3 to pass QC. Negative controls are probes for which no known target exists in biological samples and are used to establish background levels of detection.
Statistical analysis
Continuous variables were assessed for normality using the Shapiro–Wilk test. The data were described by mean ± standard deviation for normally distributed variables and median (interquartile range) for non-normally distributed variables. Categorical variables were described as frequencies and percentages. Statistical analyses were done using STATA v14.2 (College Station, Texas). Significance between the groups was determined using Pearson’s χ² test or the Kruskal–Wallis rank test, with post hoc analysis using Dunn’s pairwise comparison test. A p-value < 0.05 was considered significant. Agreement in subtype call between the IHC and PAM50 subtyping methods was assessed using the kappa statistic. To allow for comparable groups with this method, the IHC results were classified as follows: Clin-A (HR+/HER2-/Ki67 ≤ 14%), Clin-B (HR+/HER2-/Ki67 > 14%), Clin-HER2 (HR any/HER2+/Ki67 any) and TNC (HR-/HER2-/Ki67 any).
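To make the comparison concrete, the sketch below is a minimal R illustration, not the study's code; the column names er_allred, pr_allred, her2_pos, ki67 and pam50 are invented. It derives the clinical IHC classes used for the kappa comparison and computes an unweighted Cohen's kappa from the confusion table.

# Minimal R sketch (not the study's code); column names are hypothetical.
assign_clin <- function(er, pr, her2_pos, ki67, cutoff = 14) {
  hr_pos <- er >= 3 | pr >= 3                    # Allred 3-8 counted as positive
  ifelse(her2_pos, "Clin-HER2",                  # HR any / HER2+ (kappa grouping)
         ifelse(!hr_pos, "Clin-TNC",             # HR- / HER2-
                ifelse(ki67 > cutoff, "Clin-B", "Clin-A")))
}
cohen_kappa <- function(x, y) {
  lv  <- union(unique(x), unique(y))
  tab <- table(factor(x, lv), factor(y, lv))     # square confusion table
  n   <- sum(tab)
  po  <- sum(diag(tab)) / n                      # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
  (po - pe) / (1 - pe)
}
# With a data frame d whose pam50 column is recoded to the same four labels:
# d$clin <- assign_clin(d$er_allred, d$pr_allred, d$her2_pos, d$ki67)
# cohen_kappa(d$clin, d$pam50)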
Characteristics of the study cohort
The clinicopathological characteristics are shown in Table . The mean age of study participants was 49.7 years. Most patients had stage II or III cancers, and were more likely to have grade-2 or -3 tumors between 20 and 50 mm (AJCC T2), with some nodal involvement. The intrinsic subtype distribution by the PAM50 assay was 19.3% luminal-A (n = 73), 32.5% luminal-B (n = 123), 23.5% HER2-enriched (n = 89) and 24.6% basal-like (n = 93) (Fig. a, Table ). When classified by IHC, most patients (79.6%) were HR positive (with, or without, HER2) (Fig. b). Although the intrinsic subtypes (Fig. a) show roughly equal numbers of luminal-A (19.3%), luminal-B (32.5%), HER2-enriched (23.5%) and basal-like (24.6%) subtypes, the clinical IHC results show a massive predominance of the Clin-B subtype (72.7%), and only 6.9% Clin-A, 5.3% Clin-HER2 and 15.1% Clin-TNC. The high-grade (3) Clin-A subtype, treated as Clin-B by the multidisciplinary team, accounted for only 0.53% (Table ) of the total cohort, and did not meaningfully affect the concordance with the molecular subtypes.
Comparison of immunohistochemistry and intrinsic subtypes
The luminal-B intrinsic subtype and the IHC B-like (Fig. a) were highly concordant. The intrinsic HER2-enriched subtype showed the best concordance with the IHC B/HER2-like and the HR-/HER2-like (62.9% and 19.1%, respectively), while the intrinsic basal-like was most concordant with the IHC TNC (53.8%). Immunohistochemistry currently classifies the B/HER2-like as B-like tumors because they are HR positive, but it may be more appropriate to classify these B/HER2-like tumors as HER2-positive tumors and to treat them accordingly. The intrinsic luminal-A subtype was not strongly associated with any one IHC subtype, raising questions about appropriate Ki67 cutoff values. By comparison, the IHC-like groups were well reflected by the intrinsic subtypes (Fig. b). The A-like group was mainly composed of luminal-A intrinsic subtypes; A- or B-like was primarily distributed between luminal-A (38.2%) and luminal-B (56.4%) intrinsic subtypes, and the IHC B-like was mainly composed of luminal-B. The HR-positive/HER2-positive (B/HER2-like) group consisted mainly of the intrinsic HER2-enriched subtype, followed by the luminal-B subtype. The HR-negative/HER2-positive (HER2-like) group was predominantly HER2-enriched, and the TNC group mainly basal-like, as expected.
Characteristics by intrinsic subtype
Expression of the proliferation marker, Ki67, was lowest in luminal-A tumors [20% (10–32.5%)], highest in the basal-like subtype [70% (50–80%)], and intermediate in the luminal-B [40% (30–55%)] and HER2-enriched [50% (40–62.5%)] subtypes, as expected (Table ). Categorical analysis of Ki67 expression showed that the luminal-A tumors had the greatest spread, while close to 80% of the luminal-B tumors had Ki67 levels > 30%. The HER2-enriched and basal-like tumors expressed Ki67 at high values (over 30%), as expected. The Allred scores in luminal-A and luminal-B subtypes were predominantly high (scores of 7 or 8), while HER2-enriched subtypes had a greater spread of HR expression scores and basal-like subtypes were mainly negative or low scoring (Table ). Luminal-A (69.9%) and luminal-B (61.8%) subtypes were more likely to have lower T stages (T1 or T2), compared to HER2-enriched (36.0%) and basal-like (47.3%) subtypes. All intrinsic subtypes had T4 tumors, indicative of the late stage at presentation in this setting (Table ).
Tumors with a luminal subtype were more likely to be of lower grade (grade 1 or 2) than basal-like subtypes (75.0% grade 3). Histologically, only the luminal-A subtypes had a significant proportion of invasive lobular carcinomas (11.1%) and invasive mucinous carcinomas (6.9%). Age and nodal involvement were not associated with intrinsic subtype in this cohort (Table ).
Comparisons of Ki67 cutoff levels
The kappa test was used to compare the classification of the luminal subtype using IHC and PAM50 based on Ki67 levels (Supplementary Table S2). The IHC groups were split into luminal-A and luminal-B subtypes using Ki67 cutoffs of 10%, 15%, 20%, 25% and 30%, and the kappa statistic was used to compare these classifications to the subtypes assigned by the PAM50 analysis. The agreement between the methods ranged from 43 to 49%. The best concordance between the IHC and intrinsic subtypes was obtained when the cutoff was 25% Ki67 (κ = 0.128, p = 0.003), and the worst at a cutoff of 10% (κ = 0.079, p = 0.033) (Supplementary Table S2). Thus, a Ki67 cutoff of 25% appears best for separating the luminal-A and -B subtypes in our setting. Using the 25% cutoff results in 15.5% IHC A-like and 37.3% B-like (Fig. b), closer in value to the intrinsic subtype proportions of luminal-A (19%) and luminal-B (32%) (Fig. a) than the current clinical cutoff of 14% (Fig. b). Moreover, when IHC HR-positive/HER2-positive samples are separated from the Clin-B (Fig. b) into the B/HER2-like group (Fig. ), the B-like group becomes smaller, but the B/HER2-like group (26.9%) and HER2-like group (5.3%) together are more reflective of the HER2-enriched intrinsic subtype (Fig. b).
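The cutoff sweep just described can be expressed as a short loop. The following sketch assumes a hypothetical data frame lum restricted to HR-positive/HER2-negative tumors, with PAM50 calls recoded to the two luminal labels, and reuses the cohen_kappa helper from the earlier sketch:

cutoffs <- c(10, 15, 20, 25, 30)
kappas  <- sapply(cutoffs, function(k) {
  ihc <- ifelse(lum$ki67 <= k, "luminal-A", "luminal-B")
  cohen_kappa(ihc, lum$pam50)          # helper defined in the sketch above
})
data.frame(cutoff = cutoffs, kappa = kappas)  # in this cohort, agreement peaked at 25%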
The ability to diagnose breast cancer subtypes accurately and appropriately fundamentally affects cancer treatment decisions. PAM50 is widely used for molecular diagnosis of breast cancer subtypes in high-income countries (HICs) because its results are reproducible and unaffected by inter- and intra-laboratory variability. Within resource-constrained settings, IHC is used as a proxy for intrinsic subtypes because it is less expensive, the infrastructure to run IHC assays is widespread, and it requires less “hands-on” technical expertise than the PAM50 assay. We thus need accurate and population-specific information to assign proxies that optimize concordance with the PAM50 intrinsic subtyping findings. We found that the luminal-A intrinsic subtype had the greatest spread of IHC-analysis subgroups; the A-like IHC group was mainly composed of the luminal-A subtype. This observation suggests that the currently used 14–20% Ki67 cutoff in South Africa may be too low. If the Ki67 cutoff were increased to 20–25%, the IHC A-like and B-like distribution would more accurately reflect the intrinsic subtypes. Subtyping strongly affects treatment options. Patients with luminal-A subtypes are likely to benefit from primary endocrine therapies in place of chemotherapy as first-choice systemic treatment, whereas the benefits of chemotherapy to patients with luminal-B subtypes may offset chemotherapy side effects. The ambiguity in the Ki67 cutoff is not unique to the South African public health care system. German guidelines state that primary invasive tumors that are HR-positive, HER2-negative are considered low risk if Ki67 ≤ 10%, high risk if ≥ 25%, and intermediate risk if 10–25%, as Ki67 does not differentiate risk groups accurately in this range. By contrast, the 14% cutoff was the best to distinguish between luminal-A and luminal-B in Spanish and Italian patients using Prosigna™ assays. These results reinforced the original PCR findings of Cheang et al. that the 14% cutoff was optimal. However, like Noske et al., we observed better concordance at higher Ki67 levels. In HICs, where most breast cancers are diagnosed in early stages, the ASCO recommendations suggested that PAM50 could be used to inform chemotherapy decisions, performing much better than IHC in node-negative luminal subtypes. Pu et al. found that survival rates were consistently worse in the luminal-B subtype, irrespective of menopausal status. The 2019 St Gallen report recommended that patients with ER ≥ 1% receive endocrine therapy, although it might have limited benefits. This recommendation is in line with the South African policy, which regards ER or PR ≥ 1% as hormone receptor positive. The Allred score shows that ER and/or PR expression is high in luminal-A and luminal-B subtypes, as expected. The PAM50 basal-like subtype was predominantly negative for the Allred score, but also had a portion of low (3, 4) Allred scores. This second finding is interesting, as it may suggest that the Allred cutoff to distinguish between A-like and B-like IHC subtypes and TNC subtypes could be increased to an Allred score ≤ 4. A larger study is needed to confirm this. Most tumors of the HER2-enriched intrinsic subtype are assigned to the B/HER2-like IHC-analysis group. This finding is obvious when looking at the Allred score, where most of the HER2-enriched subtypes had high HR positivity. While the multidisciplinary teams follow the St Gallen recommendations and treat HR-positive/HER2-positive as Clin-B, the PAM50 intrinsic subtypes do not make this subtle distinction.
In South African public health care, patients in this group received adjuvant endocrine therapy until 2019, when anti-HER2 therapies were introduced. A mere 19% of the HER2-enriched subtype would be HR negative and would not benefit from endocrine therapy. Patient subtyping should be interpreted cautiously. Mistaking luminal-A patients for luminal-B may result in overtreatment with chemotherapy. Confusing HER2 with luminal-B subtypes may result in undertreatment with HER2-targeted therapy (e.g., trastuzumab) and/or overtreatment with endocrine therapy. Trastuzumab is expensive and inconsistently available in the South African public sector, so the option of using endocrine therapies if trastuzumab is unavailable would be an advantage for HER2-positive patients. A Swedish cohort found 81–85% concordance between molecular luminal-A and IHC-A subtypes. However, 35–52% of their luminal-B intrinsic subtypes were classified as IHC-A. Ki67 distinguished between good and bad prognostic groups in node-negative cancer, but its use is very controversial. Lundgren et al. found that concordance with luminal subtypes improved when histological grade was included. Well-differentiated tumors (grade 1) tended to have low Ki67 levels. Intermediate (grade 2) and poorly differentiated tumors (grade 3) had higher Ki67 levels and a wider range of Ki67 values. In our study, histological grades were generally high, so including grade with clinical IHC subtype had a negligible effect on concordance. Previously, women of African ancestry were thought to have fewer hormone receptor-positive breast cancers than women of European ancestry. West African women and African-American women appear more likely to have TNC cancers. However, research has shown that most sub-Saharan Africans (South African, Kenyan, Sudanese) have HR-positive cancers. In our cohort, 79.5% were HR positive, and more likely to be B-like (i.e., HR positive, high Ki67), even when the Ki67 cutoff is 30%. Such cancers are more aggressive and have a poorer prognosis than those classified as luminal-A or IHC A-like. Because our study was part of an HIV outcome study, HIV-positive and HIV-negative cases were age matched within a 5-year band. Our study participants were therefore younger (49.9 ± 11 years) than South African women with breast cancer on average. Younger patients are thought to have more clinically aggressive disease and poorer outcomes. Korean breast cancer patients are much more likely to be premenopausal than others, and this younger population shows poorer outcomes. Sub-Saharan Africa shows huge disparities in IHC subtyping. In Uganda, breast cancer patients had a mean age of 45, with IHC of 38% A or B; 5% B/HER2; 22% HER2 and 34% TNC. Two separate Nigerian groups found very different IHC expression: a study in Ibadan found 77.6% A or B; 2.6% B/HER2; 4% HER2 and 15.8% TNC; while a different study in Lagos found 38% HR positive; 18.3% HER2 positive and 47.4% TNC. Patients in Mozambique had IHC of 51% A or B; 24% HER2 positive and 25% TNC; Angola reported 25.7% A-like; 19.3% B-like; 7.9% B/HER2; 15.7% HER2-like and 31.4% TNC; while in Zimbabwe, the IHC was 68% HR positive and 17% TNC. Work on 985 participants in South Africa showed 13.8% A-like; 43.9% B-like; 19.0% B/HER2; 6.0% HER2-like and 15.3% TNC, although this work included individuals of different ethnicities.
Recent work in South Africa found that black South Africans had expression of about 49–53% HR-positive/HER2-negative (A- or B-like), 13–18% HR-positive/HER2-positive (B/HER2-like), 7–12% HR-negative/HER2-positive and 23–27% TNC, regardless of age. By comparison, white South Africans had 30–65% HR-positive/HER2-negative (A- or B-like), 9–29% HR-positive/HER2-positive, 4–13% HR-negative/HER2-positive and 14–29% TNC. White women under 40 had higher expression of the more aggressive TNC and HER2 tumors, while women over 60 had more A-like and B-like tumors. Our results, with exclusively black participants, did not show differences in the distribution of subtypes by age, which is consistent with the results found by Achilonu et al. Limitations of this study include the small sample size and the lower age of participants, which may have artificially increased the proportion of HER2-positive tumors. However, these limitations may have had a reduced impact on the main focus of this study, which was the discordance between PAM50 intrinsic subtyping and IHC surrogates. Our study is, as far as we know, the first to compare IHC with PAM50 in black southern African women. Most of our study participants had hormone receptor-positive breast cancer, and even tumors with the HER2-enriched subtype were more likely to be HR positive than HR negative. PAM50 is widely used for breast cancer subtyping, with IHC often used in resource-constrained settings. The cost and labor of the PAM50 method make it prohibitive for the South African public health care sector, and its inability to distinguish HR-positive/HER2-positive tumors from HR-negative/HER2-positive tumors must also give pause. We found the lowest concordance between molecular and IHC subtyping for the luminal-A group and recommend raising the Ki67 cutoff to 20–25% to distinguish between A-like and B-like tumors, to better reflect the luminal subtypes.
|
Comparative Effectiveness of 2 Interventions to Increase Breast, Cervical, and Colorectal Cancer Screening Among Women in the Rural US
|
38c6f438-93bc-4e3e-b5cd-c5e052d666fd
|
10148202
|
Patient-Centered Care[mh]
|
Adherence to guideline-based screening for breast, cervical, and colorectal cancer decreases mortality; unfortunately, rural screening rates fall short of Healthy People 2030 goals. For instance, compared with residents of large metropolitan areas, people living in rural sectors with fewer than 10 000 residents experience a 12-point higher crude cancer mortality rate. Previous studies have identified sociodemographic factors that limit up-to-date screening for these cancers in rural areas, including lower educational attainment, less knowledge about screening, lower income, poor access to health care, and greater social deprivation, as measured by the area deprivation index (ADI). Given the substantial contribution guideline-based cancer screening provides for lowering cancer mortality, interventions to increase breast, cervical, and colorectal cancer screening could increase the proportion of women who are up to date with screening guidelines and decrease the disparate cancer mortality experienced by rural women, resulting in cost savings by preventing cancers or finding and treating them at earlier stages. Over the past 2 decades, interventions to improve screening have demonstrated efficacy for tailored messaging delivered through print, telephone, and technology. Furthermore, patient navigation is effective in increasing cancer screening. Technological advances in dissemination have allowed both tailored interventions and patient navigation to be delivered remotely via technology or telephone, opening the possibility of reaching rural US residents. Although studies have intervened simultaneously to increase the uptake of 2 needed cancer screening tests, most interventions have focused on screening for a single cancer: breast, cervical, or colorectal. Supporting a multiscreening approach, the literature provides evidence that individuals who complete 1 cancer screening behavior are more likely to complete a second, and experts are now voicing the possibility of providing a “one-stop shop” approach to cancer screening to increase multiple screening rates. To our knowledge, no interventions have been tested to simultaneously increase the guideline-recommended breast, cervical, and colorectal cancer screenings for women. Each of these screenings can detect early-stage disease, protecting women from breast, cervical, or colorectal cancer mortality, and addressing all 3 cancers simultaneously increases the probability that women will have knowledge of and consider all screenings for which they are not up to date. This study evaluated the effect of 2 interventions: (1) a mailed, interactive digital video disc (DVD) with messages tailored to each woman’s responses and (2) the DVD followed by telephonic patient navigation (DVD/PN). Both interventions were tailored to the unique barriers, needs, and experiences of rural women by using platforms that could be delivered remotely, thereby reducing access barriers. Intervention groups were compared with usual care for increasing the percentage of women who were up to date with all recommended screening tests (breast, cervical, and colorectal). Secondary research questions tested the comparative effectiveness of the 2 interventions vs usual care for increasing the percentage of women up to date with any needed screening (breast, cervical, or colorectal cancer). In addition, we assessed the costs and cost-effectiveness of these interventions.
Sample
In this randomized clinical trial, participants were recruited between October 20, 2016 (first baseline interview), and March 15, 2019 (last baseline interview), from 98 rural Indiana and Ohio counties with Rural-Urban Continuum Codes ranging from 4 (least rural) to 9 (most rural). Eligibility included (1) biological female sex, (2) age 50 to 74 years, (3) not up to date with screening for 1 or more guideline-based cancer screening for women (breast, cervical, or colorectal), (4) ability to speak English, (5) no previous cancer diagnosis (other than nonmelanoma skin cancer), and (6) provision of informed consent. Definitions for being up to date with screening for these cancers were obtained from the US Preventive Services Task Force (USPSTF) and included (1) biennial screening mammography for women aged 50 to 74 years; (2) cervical cytology completed every 3 years, or Papanicolaou and human papillomavirus cotesting completed every 5 years, for women aged 21 to 65 years; and (3) colorectal cancer screening (fecal occult blood test/fecal immunochemical test [annual], colonoscopy [10 years]). Screening verification via medical record review (MRR) was used to both assess baseline eligibility and determine outcomes. The study was approved by the institutional review boards of Indiana University and The Ohio State University, and all participants provided written informed consent. The full trial protocol and statistical analysis plan are available in . This study follows the Consolidated Standards of Reporting Trials (CONSORT) reporting guideline. Recruitment methods included 3 strategies: (1) commercial listing of women meeting age criteria residing in rural counties in Indiana and Ohio, (2) personal contact at community events, and (3) social media and advertisement websites. During an initial phone call, potentially eligible participants (N = 1852) verbally consented to participate in the study. Participants completed a baseline survey and consented to MRR to verify screening status for each of the 3 cancers. Of the 1852 women, 209 refused to consent to MRR, and 658 were ineligible after MRR, leaving 983 eligible women randomly assigned to study groups ( ).
Interventions
An interactive DVD, developed by study investigators (V.L.C. and S.M.R.), allowed users the ability to respond to prompts and receive personalized feedback to encourage uptake of needed screenings. Within each cancer screening unit, tailored messages, built on 2 decades of research, provided information specific to the woman’s age, family history of cancer, perceived risk of developing the specific cancers, and barriers, benefits, and self-efficacy with regard to the respective screening behavior. An explanation of the screening options within each unit included information about the process for scheduling and completing needed screenings. Participants were randomly assigned to the DVD, DVD/PN, and usual care groups between November 26, 2018, and July 1, 2019. Women randomly assigned to the DVD/PN intervention group were mailed a DVD followed by a patient navigator telephone attempt within 4 weeks. Two licensed social workers who were residents of rural Ohio were trained as patient navigators by study investigators (E.D.P. and M.L.K.). Social workers were selected as navigators because they had the requisite knowledge and skills needed to counsel women regarding cancer screening.
Navigators contacted participants, confirmed receipt of the DVD, promoted and reinforced information provided by the DVD, and counseled women to overcome identified barriers to any screening tests that were needed. Additional follow-up calls were made, as necessary, with a mean of 3 content calls (range, 1-14 calls) successfully completed per woman. A trained research assistant assessed 10% of the calls for fidelity.
Outcome Assessment
Outcome data (completion of screening tests) were obtained 12 months after mailing the DVD through MRR verification and self-report. Prior to MRR, women were queried about the medical record home(s) of all screening test results. Women were considered up to date at 12 months for all screening tests if breast, cervical, and colorectal cancer screening had been completed consistent with USPSTF guidelines during the period between baseline and 12 months for the tests for which they were not up to date at baseline. To become up to date for all needed cancer screening tests, women needed to screen for 1, 2, or 3 cancers depending on baseline status. To become up to date for any needed cancer screening tests, women needed to be screened for at least 1 of the cancer screening tests that were not up to date at baseline. Self-report measures (baseline and 12 months) included sociodemographic and health care variables, smoking status, knowledge about the cancers and their guideline-recommended screening tests, health beliefs, and intention to obtain any screenings that were not up to date. Sociodemographic variables collected included age, education, income, marital status, insurance status, race and ethnicity, employment status, and height and weight to calculate body mass index (weight in kilograms divided by height in meters squared). Race and ethnicity were self-reported and can be an important factor in screening uptake. Physician-related variables contained questions about recommendations for cancer screening tests received or reminders sent from health care facilities. Knowledge and health beliefs for each cancer screening behavior were assessed using Likert response options.
Statistical Analysis
Logistic regression was the primary analysis method; all tests were conducted at the P < .05 significance level, except P < .25 was used to select the initial variable pool for backward removal models. The statistical analysis was performed between August and December 2021 and again between March and September 2022 using R, version 4.2.3 software (R Foundation for Statistical Computing). Sample size was determined by projected 12-month effect sizes for usual care, DVD, and DVD/PN, estimated respectively at 10%, 20%, and 30% for being up to date for screening for all 3 cancers (primary outcome) and 25%, 35%, and 45% for being up to date for any cancer screening (secondary outcome). To achieve 80% power for logistic regression analyses, including 2-sided tests when comparing 2 intervention groups and 1-sided tests for comparisons with usual care, we planned for an analyzable (ie, at 12 months) sample of 200 in the usual care group and 376 in each of the 2 intervention groups. A power of 80% was realized for the observed 12-month analyzable sample (193 usual care, 379 DVD, and 387 DVD/PN) for all pairwise tests between arms as well as the omnibus test for both outcomes. Baseline characteristics were descriptively reported for the overall sample and separately for women in each of the 3 groups. The intention-to-treat approach was used.
Binary logistic regression was used to compare the randomized groups on being up to date for all or any cancer screening(s) at 12 months. Baseline variables with P < .25 for associations with outcomes (eTable 1 in ) were entered into the initial step of a multivariable backward removal logistic regression procedure to compare study groups on primary and secondary outcomes while adjusting for potentially confounding covariates, where the final model was selected based on the lowest (ie, best) Akaike information criterion. Study group, age, and baseline screening status for each cancer were forced into all models. A sensitivity analysis considered women 66 years or older as up to date at baseline (and therefore no screening was needed at 12 months) with cervical cancer screening as supported by guidelines (eTables 2 and 3 in ). We conducted a cost analysis to calculate the cost per additional woman up to date for all needed screenings by accounting for development and intervention costs for the DVD-only and DVD/PN arms separately, excluding any costs purely attributable to research, converted to 2022 US dollars.
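As an illustration of the selection procedure just described — a sketch under assumed, simplified variable names rather than the study's actual code or covariate set — R's step() performs backward removal by AIC, while a lower scope keeps the forced terms in every candidate model:

# Hypothetical variable names; the forced terms mirror the description above.
full   <- glm(up_to_date_all ~ group + age_cat + baseline_status +
                intent_6mo + self_efficacy + knowledge + adi,
              family = binomial, data = dat)
forced <- up_to_date_all ~ group + age_cat + baseline_status
final  <- step(full, direction = "backward",
               scope = list(lower = forced, upper = formula(full)),
               trace = FALSE)                 # retains the lowest-AIC model
exp(cbind(OR = coef(final), confint(final))) # odds ratios with 95% CIs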
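Before turning to the results, note that the up-to-date rules quoted under Outcome Assessment reduce to simple date arithmetic. A minimal sketch follows (hypothetical field names; for brevity the cervical rule covers only the 3-year cytology pathway, not the 5-year cotesting option):

up_to_date <- function(ref_date, last_mammo, last_cytology,
                       last_colonoscopy, last_fobt) {
  yrs <- function(d) as.numeric(ref_date - d) / 365.25   # years since test
  list(
    breast     = !is.na(last_mammo)        && yrs(last_mammo)       <= 2,
    cervical   = !is.na(last_cytology)     && yrs(last_cytology)    <= 3,
    colorectal = (!is.na(last_colonoscopy) && yrs(last_colonoscopy) <= 10) ||
                 (!is.na(last_fobt)        && yrs(last_fobt)        <= 1)
  )
}
# up_to_date(as.Date("2019-07-01"), as.Date("2018-03-10"), NA,
#            as.Date("2012-05-20"), NA)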
Based on 12-month MRR, 19 women were excluded because updated MRRs indicated they were up to date with all cancer screenings at baseline, 5 participants were missing MRRs but had self-reported screening outcomes that were used in lieu of medical records as done in previous studies, and 1 participant did not have MRR or self-report and was considered missing, yielding a sample of 963 participants for analyses ( ). Participants reported a mean (SD) age of 58.6 (6.3) years; 150 (16%) had a high school education or less, 367 (38%) had some college, and 446 (46%) had a college education or higher. Most participants self-reported as White (936 [97%] vs 27 [3%] other race and ethnicity [ie, African American, Asian, Native American, multiple race and ethnicity]), and 743 (77%) were married. Only 179 participants (19%) reported an annual household income less than $40 000, while 351 (36%) had incomes of $40 000 to $79 999, and 396 (41%) disclosed incomes of $80 000 or more. Only 49 participants (5%) reported not having health insurance ( ). Baseline data revealed minimal missing data except for weight, which was unknown for 318 participants (33%). Participants were classified into 7 categories according to their up-to-date status for breast, cervical, and colorectal cancer screenings at baseline, with 186 (19%) reporting not being up to date for all 3 tests ( ).
Descriptive and Bivariate Analyses
The unadjusted 12-month rate of being up to date with screening for all cancers was 10%, 15%, and 30%, respectively, for usual care, DVD alone, and DVD/PN (omnibus P < .001) ( ). The unadjusted 12-month rate of being up to date with screening for any of the 3 cancers needed was 25%, 29%, and 49%, respectively (omnibus P < .001) ( ). The DVD/PN group demonstrated a significantly greater percentage (vs DVD alone or usual care) of women being up to date for all and any needed screenings by 12 months (P < .001 for 4 pairwise comparisons).
Comparative Effectiveness Analyses for Up-to-Date Screening for All Cancers
After adjusting for a parsimonious set of covariates through backward model selection, women assigned to the DVD group had nearly twice the odds of those in the usual care group of being up to date for all screenings (odds ratio [OR], 1.84; 95% CI, 1.02-3.43; P = .048) ( ). Women in the combined DVD/PN group were nearly 6 times more likely to be up to date for all cancer screenings compared with usual care (OR, 5.69; 95% CI, 3.24-10.50; P < .001). Women in the DVD/PN group were 3 times more likely to obtain all needed screenings compared with those in the DVD group (OR, 3.09; 95% CI, 2.05-4.68; P < .001). Baseline screening status was significantly associated with 12-month screening up-to-date status. Compared with women not up to date with all screenings at baseline, those who were not up to date for 1 cancer screening or not up to date for 2 cancer screenings, 1 of which included breast cancer screening, were more likely to be up to date for all needed cancer screenings at 12 months (OR, 19.10; 95% CI, 8.18-47.30; P < .001) ( ). Participants aged 65 years or older were less likely to be up to date for all cancer screenings (OR, 0.53; 95% CI, 0.30-0.93; P = .03).
Participants who were planning at baseline to obtain cancer screening in the next 6 months (OR, 1.86; 95% CI, 1.24-2.81; P = .003), those with higher baseline self-efficacy scores (OR, 1.10; 95% CI, 1.01-1.19; P = .03), and those with lower ADI scores (OR, 0.99; 95% CI, 0.97-0.998; P = .02) were more likely to be up to date for screening for all cancers at 12 months.
Comparative Effectiveness Analysis for Being Up to Date for Screening for Any Cancer
In the covariate-adjusted model, the DVD/PN intervention, but not the DVD intervention alone, was significantly more effective than usual care (OR, 4.01; 95% CI, 2.60-6.28; P < .001) for promoting an up-to-date screening status for any of the cancers at 12 months ( ). The DVD/PN intervention compared with the DVD alone was significantly more effective for promoting up-to-date screening at 12 months (OR, 2.98; 95% CI, 2.09-4.18; P < .001). Participants who perceived their finances as inadequate to pay their bills were half as likely (OR, 0.45; 95% CI, 0.24-0.81; P = .01) to be up to date for any needed cancer screenings compared with those who reported having enough money to pay their bills. Participants who were working full time compared with those not working were more likely (OR, 1.58; 95% CI, 1.07-2.36; P = .02) to be up to date at 12 months for any cancer screening ( ). Participants who intended at baseline to obtain needed screenings in the next 6 months (OR, 1.85; 95% CI, 1.33-2.59; P < .001), those who had higher knowledge (OR, 1.20; 95% CI, 1.01-1.42; P = .04) and self-efficacy (OR, 1.07; 95% CI, 1.002-1.14; P = .047) scores, and those who had lower ADI scores (OR, 0.99; 95% CI, 0.98-0.998; P = .03) had greater odds of being up to date for screening for any cancer. Higher perceived barrier scores to screening were associated with higher odds of completing screening (OR, 1.23; 95% CI, 1.03-1.47; P = .02), although there was no interaction with the intervention. Age of 65 years or older was not associated with being up to date for any screening outcome (OR, 0.91; 95% CI, 0.55-1.49; P = .70).
Sensitivity and Cost Analyses
The intervention effectiveness (eg, ORs and P values) did not change meaningfully in our sensitivity analysis when all participants aged 66 years or older were considered as being up to date at baseline with cervical cancer screening for analyses of 12-month all or any cancer screening test up-to-date outcomes (eTables 2 and 3 in ). We conducted a cost analysis to determine the additional costs associated with each additional unit of being up to date for all screening tests gained from the (1) DVD intervention and (2) DVD/PN intervention compared with the usual care approach, which had no incremental costs (eAppendix in ). Excluding research costs, we found a total cost of $326 012 for the DVD intervention and an additional $344 829 to add patient navigation to the DVD intervention. Normalizing on the main outcome of being up to date with all needed screening tests, the cost-effectiveness amounted to $14 462 per up-to-date participant in the DVD group and $10 638 per up-to-date participant in the DVD/PN group.
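The normalization behind these figures is a simple division of arm cost by the number of women who became up to date for all needed tests. The sketch below shows the arithmetic only, with placeholder denominators; the study's actual counts come from its eAppendix and are not reproduced here.

dvd_cost <- 326012                  # DVD development and delivery (from text)
pn_extra <- 344829                  # additional cost of adding navigation
cost_per_up_to_date <- function(cost, n_up_to_date) cost / n_up_to_date
# cost_per_up_to_date(dvd_cost, n_dvd)                # n_dvd is a placeholder
# cost_per_up_to_date(dvd_cost + pn_extra, n_dvdpn)   # n_dvdpn is a placeholder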
Discussion

The goal of this study was to compare the effectiveness of 2 interventions with usual care for increasing the proportion of rural women up to date with screening for 3 cancers (breast, cervical, and colorectal). We considered the following 12-month outcomes: being up to date with all screening tests and being up to date with any needed screening tests (ie, for 1, 2, or 3 cancers depending on baseline status). Although other studies have been successful at simultaneously increasing both colorectal and breast cancer screening or cervical and breast cancer screening, , , to our knowledge, interventions to increase the uptake of screening for 3 cancers simultaneously have not been tested. Our findings demonstrate that interventions delivered remotely to rural women can simultaneously improve screening rates for breast, cervical, and colorectal cancer.

Following the mailed DVD, participants in the DVD/PN group received patient navigation to reduce their individual barriers to needed cancer screenings. While participants receiving only the DVD intervention were almost twice as likely as those in usual care to be up to date with all cancer screenings, the addition of a patient navigator made the intervention almost 6 times more effective than usual care, supporting the importance of patient navigation.

Participants had greater odds of becoming up to date with all screenings if only 1 screening was needed or if breast cancer screening was 1 of the 2 screenings needed. Within our study, when only 1 screening was needed, it was easier for women to become up to date with all screenings. If a participant needed screening for more than 1 site, the intervention was more intense because it focused on obtaining every cancer screening test that was not up to date. Although the need for multiple screening tests could have increased the time needed for intervention activities, consolidating efforts was still more time efficient than addressing 1 needed cancer screening test at a time, probably at different times. Additionally, our baseline breast cancer screening up-to-date rates were higher than baseline rates for cervical and colorectal cancer screening, suggesting that this population of women may find it easier to become up to date with mammography than with cervical or colorectal cancer screening. Consistent with this finding, previous studies have found higher rates of breast cancer screening compared with colorectal or cervical cancer screening. It is difficult to determine why the study found that women who were due for mammography, compared with cervical or colorectal cancer screening, were more likely to become up to date for all tests; the fact that mammography screening is discussed more in the media might make it more socially acceptable than the other cancer screening tests.

Sociodemographic characteristics associated with becoming up to date with screening tests were similar to reports from previous studies. , , , Full-time employment is often linked to health insurance, and both have been consistently shown to be associated with being up to date with all 3 cancer screenings. , , Compared with younger participants, those 65 years or older were only half as likely to be up to date with screenings for all cancers at 12 months; however, age was not associated with being up to date for any screening outcome. This significant inverse relationship between age and the all-screenings outcome contrasts with literature indicating that adherence to needed screening tests increases with age.
Being up to date with all needed tests could be more problematic for older women, as there might be more barriers (practical and personal) in this age group to completing multiple screening tests within a 12-month period.

Among our theoretical measures, intention to screen, knowledge, and barriers reported at baseline were related to becoming up to date at 12 months after randomization. A participant who self-reported intention (contemplation) to screen in the next 6 months had almost 2 times the odds of becoming up to date with all needed cancer screenings, consistent with prior research based on the transtheoretical model of behavior change. , Consistent with other studies, greater knowledge was related to becoming up to date with screening tests. , Unlike in other studies, , higher perceived barrier scores at baseline were associated with being up to date for any cancer screening at 12 months. Patient navigator calls focused on reducing barriers that might keep women from engaging in needed screenings; thus, participants with more barriers may have experienced increased interaction with the patient navigator.

The USPSTF guidelines support discontinuing cervical cancer screening after 65 years of age if results in the past 3 years were negative. , , Sensitivity analyses (eTable 2 in ) revealed that when participants aged 66 years or older were all considered up to date at baseline based on current cervical cancer screening guidelines, the results showed intervention effects similar to those observed in the primary analyses.

The DVD/PN intervention was more cost-effective in bringing participants up to date with all needed tests because of its greater effect size. Compared with treating cancer, the costs of each intervention to bring women up to date with screening were relatively modest: on average, cancer treatment costs $150 000 per patient in the US, and the per-person cost of the intervention would be lower at a larger scale. Thus, the additional costs required to add PN to improve screening may result in cost savings by avoiding cancer deaths or treatment at more advanced stages.

Strengths and Limitations

This study had some strengths and limitations. Our sample was highly educated and predominately White, making translation to a less educated and more diverse population difficult. However, our study counties have few racial and ethnic minority residents. Although 84% of our population had some college or higher, the DVD content was completely narrated, rendering it accessible regardless of educational attainment. We found that all participants had the requisite technology necessary to use the interactive DVD, although this technology is rapidly becoming obsolete, creating the necessity to translate the intervention to an online tool that can be accessed via a computer, tablet, or smartphone. This intervention was delivered to rural women who, at the time of the study, had limited internet access; therefore, remote delivery was best suited to DVD technology. This study supports the one-stop-shop approach advocated by other researchers who also found that a screening intervention could simultaneously improve the uptake of more than 1 cancer screening test. The potential for increasing multiple screening behaviors at one time is especially relevant for rural communities where health care may be hampered by remote living conditions that limit access to preventive services. , , , , ,
Conclusions

In this randomized clinical trial, a single intervention was used to support being up to date for any or all USPSTF guideline–supported screenings (breast, cervical, and colorectal cancer) for women aged 50 to 74 years. These interventions, which targeted all or any needed cancer screenings simultaneously, offer an approach that can be delivered remotely to rural women. They pave the way for approaching preventive health care holistically, fostering cancer prevention and early detection when a cure is realistic, and ultimately decreasing cancer health disparities.
A coordinated approach for managing polypharmacy among children with medical complexity: rationale and design of the Pediatric Medication Therapy Management (pMTM) randomized controlled trial
Pediatric polypharmacy and children with medical complexity

Pediatric polypharmacy (defined as concurrent use of ≥ 5 medications) is a major public health problem with high prevalence among the priority population of children with medical complexity (CMC) . Characterized by the presence of complex chronic conditions (e.g., intractable epilepsy, degenerative neurologic disease) that are expected to last at least 12 months and require subspecialty care or tertiary care hospitalizations, CMC often require treatment with complex polypharmacy to sustain quality of life and control substantial symptom burden [ - ]. Pediatric polypharmacy has been shown to increase the risk of medication-related problems (MRPs) [ - ]. An MRP is an event involving medication therapy that interferes with an optimum patient outcome, for example, an inappropriate therapy, undertreated symptom, major drug-drug interaction, or adverse drug event (ADE) [ , , - ]. These types of MRPs are defined, measurable, and potentially treatable if recognized [ , , - ]. Although MRPs are associated with patient morbidity and healthcare utilization, polypharmacy is infrequently assessed during routine clinical care for CMC, and MRPs are managed ad hoc [ , - ].

While polypharmacy is often necessary for symptom and disease management in CMC, opportunities for improved outpatient medication management are ubiquitous [ , - ]. Current pediatric polypharmacy management strategies are fragmented and reactive, rather than proactive . CMC are often prescribed medications by multiple subspecialists and lack a coordinating medication supervisor . Isolated medication regimen reviews may occur when CMC experience acute healthcare changes or ADEs . In contrast, the Centers for Medicare & Medicaid Services requires Medicare sponsors to provide preventive medication therapy management (MTM) programs to targeted adult patients . Standardized pharmacist-led MTM activities (e.g., medication optimization, deprescribing, education) are patient-centric, comprehensive, and improve health outcomes and safety [ - ].

Numerous potential benefits of a systematic approach to MTM-like services in an analogous pediatric population have been described. In a study of 100 CMC with polypharmacy in the ambulatory setting, an average of 3.4 MRPs were identified per patient, with 97% of patients having opportunities for potential intervention . The most frequently proposed interventions included drug discontinuation trials, caregiver education, dose modification, and modification of dosage form or frequency to reduce medication regimen complexity. In a separate, health system-wide initiative focused on medication list reviews within a broad pediatric population, a group of ambulatory clinical pharmacists performed 409 interventions over a 6-month pilot period, most frequently involving the management of asthma, infections, or pain . The majority of interventions resulted in full resolution of identified MRPs, but the authors described a need for further investigation to determine the value-based sustainability of the program. In the priority population of CMC, the additional administrative complexity of polypharmacy regimens may introduce further risks and opportunities for benefit of MTM services, particularly those focused on medication simplification where appropriate.
In a study of 123 pediatric patients with neurological impairment and polypharmacy, patients' medication regimens included a median of 31 total doses of medication, 6 unique dosage forms, 7 different dosing frequencies, and 5 medications with additional administration specifications (e.g., split/crush tablet, open capsule for administration via g-tube) per patient . The safety and effectiveness of these regimens is therefore highly dependent on caregiver understanding and ability. In a study of 156 caregivers of CMC, most parents were highly involved in home medication administration, but some reported concerns about medication administration and safety . Of all caregivers, only 73% were able to correctly match a medication to its targeted symptoms, 60% were able to report complete dosing instructions, and 55% were able to correctly measure liquid medication doses. Significant differences existed between caregivers' perceived understanding of such abilities and their demonstrated task performance. Related concerns have been described by parents and investigators elsewhere [ , , ].

Major knowledge gaps and research needs

In 2021, the Joint Commission Sentinel Event Alert highlighted the dire need for "additional research on interventions to reduce pediatric medication errors, especially in emergency departments, ambulatory clinics and home environments" . Despite a robust body of prior research demonstrating the risks of pediatric polypharmacy, rigorously tested pediatric-specific interventions to manage polypharmacy-related issues are scarce and greatly needed [ , - ]. Complex care programs that provide comprehensive care to CMC have identified pharmacy support as a preeminent need . While medication safety is a priority for pediatric complex care programs, a systematic intervention will not be widely adopted without demonstrated effectiveness and value for CMC . Pharmacists may provide targeted reactive pharmaceutical care in the existing model, but proactive comprehensive care is needed [ , - ]. Pediatric pharmacy specialists currently provide support in multiple hospital settings, but pediatric pharmacists are infrequently incorporated into outpatient models of care for CMC [ , , , , ]. However, a more central role has been proposed for outpatient pediatric pharmacists in the medical home to coordinate and manage medication regimens and to support primary care providers (PCPs) [ , , ]. Furthermore, parental acceptance of this model is high; in the previous study of 156 parents of CMC with polypharmacy, 87% were willing to change ≥ 1 medication(s) if recommended by their provider . As care models evolve, thoughtful incorporation of proactive and preventative evidence-based strategies into the management of pediatric polypharmacy is necessary to improve medication-related patient outcomes, safety, and value. Pharmacist-led MTM is a proven and effective tool for managing adult and geriatric polypharmacy [ - ]. The overarching aim of this trial is to determine if a structured pharmacist-led Pediatric Medication Therapy Management (pMTM) intervention will improve the proactive management of polypharmacy in CMC by directly addressing major gaps in current practice.
An approach for improving the management of pediatric polypharmacy

We propose a rigorous and efficient hybrid type 2 trial with evaluation of pMTM guided by the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) framework, with the following specific aims:

Aim 1: Assess Reach and Effectiveness by determining the effect of a pMTM intervention on the primary outcome of total MRPs among CMC with polypharmacy, as well as the secondary outcomes of parent-reported symptoms and acute healthcare utilization, compared to usual care. We hypothesize that pMTM will result in lower MRP counts, stable or improved symptom burdens, and fewer cumulative acute healthcare encounters compared to usual care.

Aim 2: Determine how key patient and parent factors modify pMTM Effectiveness through quantitative measurement of the effect modification of patient/parent factors on the primary MRP outcome, as well as through qualitative parental report. We hypothesize that higher medical complexity and higher parental health literacy will be associated with a larger treatment effect.

Aim 3: Evaluate provider pMTM Adoption, Implementation, and potential for Maintenance through assessment of actual provider adoption, fidelity/time requirements, qualitative provider perceptions (including feasibility, acceptability, and barriers or facilitators), and assessment of program replication costs.

Through a systematic approach, the results of this pMTM trial will inform the medical community on the value and effectiveness of pMTM towards optimization of polypharmacy among the priority population of CMC.
Protocol reporting

This protocol has been prepared according to the RE-AIM framework (Table ) and the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Statement (Table ) [ - ]. Trial results will be reported according to the Consolidated Standards of Reporting Trials (CONSORT) and the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines [ - ]. This trial was registered at clinicaltrials.gov (NCT05761847) on 02/25/2023. The SPIRIT Checklist is provided as Additional File .

Trial design

This trial is a 5-year hybrid type 2 randomized controlled trial funded by the Agency for Healthcare Research and Quality (AHRQ) and designed to evaluate the management of pediatric polypharmacy in the primary care setting by comparing the pMTM intervention to usual care for lowering the primary outcome of MRP counts and the secondary outcomes of symptom burdens and acute healthcare utilization. Because pediatric pharmacist support is currently a limited resource, a hybrid type 2 design is the most efficient, rigorous design to simultaneously evaluate the effectiveness and implementation of pMTM and enable rapid dissemination . The intervention is not blinded to enrolled patients; however, study team members involved in assessment of outcomes, data analysis, and safety monitoring will be blinded. All study procedures were reviewed and approved by the affiliated Institutional Review Board (IRB). Protocol amendments will be approved by the local IRB, and other pertinent parties will be notified through updates to the clinicaltrials.gov website. All publications related to the study will include a summary of protocol amendments.

Study setting, participants, and eligibility criteria

Study enrollment is scheduled to begin in August 2023 and will occur through September 2027. The study will take place at the Special Care Clinic (SCC) at Children's Hospital Colorado, a large multidisciplinary primary care medical home for CMC within a large, tertiary, freestanding children's hospital. Patients aged 2–18 years with ≥ 1 complex chronic condition and ≥ 5 concurrent medications (including prescription, as-needed, and over-the-counter medications), and their primary parental caregiver, will be screened for inclusion . Patients with a non-English-speaking primary caregiver will be excluded, as the pMTM intervention and certain study instruments are currently available only in English. Females and males and members of all racial and ethnic categories will be included if eligible, without bias.

Randomization, allocation, and study phases

Eligibility screening will be conducted by trained research personnel using automated daily electronic health record (EHR) reporting tools that identify eligible children with a scheduled routine clinical visit in the SCC within the next 14 days. Following review of eligibility criteria, research personnel will contact the caregiver to introduce the study, invite the caregiver (and assenting adolescents) to participate, and obtain written consent. Following consent, research personnel will work with study participants to complete baseline and 90-day assessments using EHR functionality. Baseline assessment will include patient and parent demographics, assessment of health literacy, assessment of parent attitudes towards deprescribing, and parental assessment of symptom burden (Table ). Using the current EHR medication list, all participants will undergo medication history review with a study team member trained using WHO's Standard Operation Protocol ; these data will be collected for research purposes only, but if significant medication safety concerns are noted, the study team will alert the primary care provider (PCP) before the clinical visit. All additional data will be collected during subsequent study and clinical visits. Participants will then be randomized 1:1 in permuted blocks of 4 patients to pMTM intervention or usual care (2 patients to each arm), with the pMTM intervention occurring ≤ 3 days before a scheduled well child visit or routine follow-up medical encounter (Fig. ). Those randomized to intervention will meet with a study pharmacist (PharmD) in person or via telehealth for completion of the pMTM encounter (described comprehensively below). Both groups will then be seen for their scheduled PCP visit as occurring within usual care. After the clinical visit, all participants will receive the post-clinical-visit medication list and, for those in the intervention arm, the medication action plan (MAP). Participants will be followed for 90 days after the clinical visit to track the primary, secondary, and exploratory outcomes (Tables and ).
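To make the allocation scheme concrete, the sketch below shows 1:1 permuted-block randomization with blocks of 4 (2 patients per arm within every block), as described above. This is a minimal illustration with a hypothetical helper name (permuted_block_sequence), not the trial's actual randomization software; a real implementation would use a pre-generated, concealed sequence.

```python
# Minimal sketch of 1:1 permuted-block randomization with block size 4.
# Illustrative only; the trial's actual allocation sequence would be
# pre-generated and concealed from study staff.
import random

def permuted_block_sequence(n_patients: int, seed: int | None = None) -> list[str]:
    rng = random.Random(seed)
    block = ["pMTM", "pMTM", "usual care", "usual care"]  # 2 per arm per block
    sequence: list[str] = []
    while len(sequence) < n_patients:
        rng.shuffle(block)        # random order within each block of 4
        sequence.extend(block)
    return sequence[:n_patients]  # truncate a partially used final block

print(permuted_block_sequence(8, seed=42))
# Exact 2:2 balance is guaranteed after every completed block of 4.
```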
Treatments

Intervention: pMTM conceptual framework

The trial design is conceptualized based on the Shed-MEDS model of deprescribing, which posits that adult patients with potentially inappropriate polypharmacy will benefit from a patient-centered deprescribing intervention to reduce polypharmacy and improve health . With permission, the model is modified to include the broader core activity of pMTM, during which parents and providers review medication changes, continuation, proper use, monitoring, and follow-up (Fig. ) . For CMC with polypharmacy, our model specifies that the pMTM intervention (which accounts for and prioritizes safety, quality of life, and parental considerations) will lead to patient-centered optimization of medications [ , , ].

Intervention: pediatric medication therapy management steps

Following baseline patient and caregiver assessments, patients randomized to the intervention will take part in a PharmD visit for application of the pMTM intervention, occurring in person or via telehealth within 3 days prior to the planned PCP clinical visit. The pMTM intervention will be applied during a 30- to 45-min visit in which the PharmD will work collaboratively with the caregiver to complete 3 major activities (Table ). First, the PharmD will perform, with the patient and caregiver, a comprehensive medication review (CMR) of the patient's current personal medication list (PML). CMR is a systematic process of collecting patient-specific data and assessing for potential medication-related problems. Subsequent clinical decisions rely on the accuracy of available data; therefore, the first step of CMR is to conduct a thorough medication history using all available resources, updating the patient's PML where necessary, and documenting that such activities have occurred. The PharmD will gather a list of active medications (including prescriptions, over-the-counter medications, dietary supplements, and complementary medicines), determine and confirm active disease states, and identify providers involved in the prescribing and management of current medication therapy. Next, the PharmD will review each disease therapy with the caregiver and patient, if appropriate, to determine current goals of therapy.
Caregiver understanding of goals of therapy is important to their ability to provide confident care to CMC, and education related to mismatches between therapy goals and known medication effects may be addressed at this time. Additionally, the PharmD will determine if any barriers may be affecting adherence. Barriers to adherence in CMC often include taste aversion, difficult or confusing administration techniques, burdensome dosing schedules, or medication cost, among others. Such barriers may be addressed in subsequent steps of the pMTM intervention. Following review of therapy goals and adherence, the PharmD will evaluate any available laboratory data and discuss ongoing clinical symptoms to determine current medication effectiveness or lack thereof. The PML will be appraised for potential therapeutic duplication and for therapies for which resolution of symptoms may render ongoing treatment unnecessary. Common examples of duplication of therapy within this population include use of multiple NSAID agents or acetaminophen-containing products, or use of multiple medications within the same therapeutic class (e.g., clonidine and guanfacine). Such duplication may potentiate adverse effects or contribute to excessive medication costs without conferring additional therapeutic benefit. As the final component of CMR, the PharmD will review current patient symptoms to identify those which may be attributable to medication use or toxicity, which may result in recommendations for alternative therapy or deprescribing. Gaps in therapy for guideline-directed care of various disease states (e.g., asthma) may also be identified at this stage.

The second essential element of the pMTM intervention involves optimization of the medication regimen to address the concerns or opportunities identified within the preceding CMR. With the caregiver, a list of all ongoing concerns will be prioritized according to goals of therapy. Considerations related to safety, patient quality of life, and caregiver or family quality of life will be weighed during this process and communicated within subsequent recommendations for medication optimization. Next, the PharmD will formulate a plan for recommended medication-related changes, including potential dose or frequency adjustment, discontinuation of therapy, or initiation of alternate medications to manage untreated or ongoing disease symptoms. Recommendations will be classified according to type (Table ), and a rationale will be provided. All recommendations will be included in a structured pMTM provider note within the EHR which is intended to be informative and suggestive, as well as concise and respectful. Recommendations may take the form of changes recommended for urgent action by the PCP (at the impending clinic visit) or communication with subspecialists for those disease states primarily managed by alternate providers. Expectations and recommendations for both subjective monitoring and objective labs or tests (e.g., ECG) will be communicated to the caregiver and documented within the structured pMTM provider note. Any medication changes recommended during the visit will ultimately be made at the discretion of the PCP during clinical care. As the final component of the pMTM intervention, after the clinical visit, the PharmD will create a written, patient-centered, and caregiver-friendly MAP using a template populated from the EHR.
This MAP will describe a prioritized list of specific action items resulting from the interactive pMTM consultation, which empowers the caregiver to be personally involved in the administration of the proposed optimization(s). The document was developed from the CMS standardized format to allow for tracking of patient progress, clarification of the intended patient response, and documentation of the perceived clinical effects of all changes . The MAP is designed to assist the caregiver with resolving current drug therapy concerns and to help achieve the goals of medication treatment, but it is not intended to provide the level of detailed communication provided to the PCP or other healthcare providers. Items reinforcing compliance, maintaining caregiver actions, and acknowledging success in the child's medication therapy may be included. The caregiver will be encouraged to bring the MAP to future healthcare visits and to request updates of the document as necessary. A plan for follow-up of all changes with the PharmD or other appropriate providers will be outlined within the MAP and communicated to the caregiver. Additionally, the reconciled PML (created within the CMR stage of the pMTM visit) will be provided to the caregiver to assist in understanding current medication treatment and tracking potential medication changes, such as the addition of over-the-counter medications or the removal of discontinued products. Information about appropriate disposal of unneeded medications will be provided by the PharmD if applicable.

Control group: usual care

Patients randomized to usual care will undergo medication history review performed by study personnel prior to the PCP clinical visit, as previously described. The goal of this medication review process is to ensure accuracy of the baseline medication-related data without recommendations related to medication management. We selected a usual care comparator because there are no currently established standards for centralized medication management strategies within the population of CMC. All medication decisions for the control group are at the discretion of the PCP.

Study measures and data collection

Table includes all study measures and data collection time points. To promote study retention, participants will receive compensation in the form of $50 gift cards at two study time points (i.e., completion of the clinical visit and at 90 days). Each study measure listed in Table is briefly described below.

Demographics, health literacy, and attitudes towards medication management

Research personnel will use a standardized approach to extract information from the EHR related to basic patient demographics (age, gender, complex chronic conditions, level of polypharmacy). Complex chronic condition data are generated using the published CCC V2 classification system based on ICD-10 diagnosis codes . Prior work has demonstrated that CMC with some complex chronic conditions, such as technology dependence (e.g., tracheostomy dependence, gastrostomy tube), may be exposed to higher levels of polypharmacy, and subsequent analysis will seek to determine if the medical complexity of CMC is associated with varying trial outcomes . Caregiver health literacy will be assessed through parental completion of the Short Assessment of Health Literacy (SAHL-E) . This test consists of 18 items, for which participants are instructed to read a medical term aloud before associating each term with another word of similar meaning to demonstrate comprehension.
In this study, scores > 14 will indicate adequate health literacy, while scores ≤ 14 will indicate inadequate health literacy. Studies of medication management in adults have demonstrated a clear link between health literacy and medication self-management skills . Parents with different levels of health literacy may have different levels of engagement with the pMTM intervention, especially because medication optimization comprises multiple activities and not solely medication discontinuation. Ultimately, interventions to manage polypharmacy must support parents of all levels of health literacy [ - ]. While the pMTM intervention uses patient-centric communication modalities, understanding differences in the outcome by level of parental health literacy will guide post-trial refinements. Finally, attitudes toward medication management will be assessed through parental completion of the Patients' Attitudes Toward Deprescribing (PATD) tool . This scoring system consists of 15 items used to classify the participant's feelings towards polypharmacy, their own medication history, and comfort with discontinuation of medications.

Primary outcome measure: medication-related problems

The primary outcome is the MRP count at 90 days after the clinical visit during which the PCP finalizes any clinical recommendations for either the pMTM intervention or usual care group. Because medication changes require time to effect change in clinical outcomes, we will collect outcome measurements at 90 days after the clinical visit, consistent with adult literature using the MRP outcome [ , , - ]. Robust evidence exists to support the utility of using MRPs as an outcome to evaluate MTM. As related to this outcome, we will follow established guidelines for analyzing and reporting composite measures [ - ]. To facilitate blinded assessment of MRPs, we will generate an EHR-based clinical summary at 90 days, including the current weight, active medication list, symptom report, lab values, serum levels, and any diagnostic codes related to ADEs (Table ). Trained study personnel will contact parents to verify the medication history, symptom data, and any adverse events or acute healthcare utilization. Blinded outcome assessments will be made by ≥ 2 pediatric pharmacists not involved in the pMTM intervention using our published standardized approach .

Secondary outcome measures: Parent-Reported Outcomes of Symptoms (PRO-Sx) and healthcare utilization

We will also measure changes in total parent-reported outcomes of symptoms (PRO-Sx) scores and 90-day acute healthcare utilization for both the intervention and usual care groups [ , , ]. To ensure that symptoms are stable or improved after medication changes, we will track PRO-Sx scores, which we have experience measuring among CMC in the ambulatory setting [ , , ]. Based on our prior work, it is feasible for parents to easily track symptoms from home via EHR functionality [ , , ]. This will occur at scheduled time points, including the date of the pMTM visit for patients allocated to the intervention group, the date of the PCP clinical visit for all patients, and 7 days, 30 days, and 90 days following the clinical visit. As part of the 90-day clinical summary, we will track counts of unplanned acute care utilization, including ambulatory sick visits, emergency room visits, and inpatient hospitalizations.
Exploratory outcomes and safety measures

During the 90-day follow-up period, we will assess additional exploratory outcomes including Medication Regimen Complexity Index (MRCI) scores and medication counts for all medications, including scheduled, as-needed, and over-the-counter medications . The MRCI, a tool developed to measure medication complexity in adult and geriatric populations with polypharmacy, has demonstrated potential for application in pediatric populations and has been associated with increased acute healthcare utilization [ , , - ]. In addition to the previously described patient-level PRO-Sx symptom burden scores, parents will complete the Burden Scale for Family Caregivers (BSFC) and the Patient Health Questionnaire (PHQ-9) at scheduled time points to measure caregiver burden and mental health . In addition to the assessment of ADEs described within the MRP primary outcome, several other measures of medication safety and adherence will be collected. First, the DrugBank database will be interrogated against baseline and 90-day patient medication lists for potential drug-drug interaction counts . High-alert medication counts will be assessed using published guidance from the Institute for Safe Medication Practices . Finally, medication adherence will be measured using the Adherence to Refills and Medications Scale (ARMS) at similar study time points .
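The drug-drug interaction count above reduces to screening every unordered pair of medications on a patient's list against a reference set of interacting pairs. The sketch below shows that counting logic; the interaction set and the ddi_count helper are hypothetical stand-ins for pairs extracted from DrugBank, whose data require a separate download and license.

```python
# Hedged sketch of pairwise drug-drug interaction counting. The
# interaction pairs below are illustrative placeholders; in the trial
# they would be derived from the DrugBank database.
from itertools import combinations

KNOWN_INTERACTIONS = {                      # hypothetical reference set
    frozenset({"clonidine", "guanfacine"}),
    frozenset({"ibuprofen", "ketorolac"}),
}

def ddi_count(medication_list: list[str]) -> int:
    """Count medication pairs on one list that appear in the reference set."""
    meds = {m.lower() for m in medication_list}
    return sum(
        frozenset(pair) in KNOWN_INTERACTIONS
        for pair in combinations(sorted(meds), 2)
    )

baseline = ["Clonidine", "Guanfacine", "Omeprazole", "Ibuprofen"]
print(ddi_count(baseline))  # 1: clonidine + guanfacine
```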
Additional outcomes within the RE-AIM framework

As defined within the RE-AIM framework of the study design, the aims of this study will address several goals which are not formally captured by the primary and secondary outcomes defined above, which primarily measure pMTM effectiveness. The outcomes that will be used to assess the impact of the pMTM intervention towards other aims are described briefly below.

Reach

Reach of the pMTM intervention will be quantified by measuring the percentage and representativeness of CMC with polypharmacy who accept and decline participation in the pMTM intervention. Study personnel will track the patients and parents declining participation, including previously defined demographics and reasons for non-participation.

Effect modification

Previously described variables of medical complexity, health literacy, and attitudes towards medication management will be assessed at the patient and parent level, where appropriate, to quantitatively evaluate intervention effect modification. To qualitatively evaluate effect modification, we will conduct semi-structured interviews through the study period with a total of 40 caregivers. Qualitative interviews will include 10 caregivers from each subgroup (technology dependent/independent and high/low health literacy); only caregivers who participated in the pMTM intervention arm will be included. To recruit for this portion of the study, participating parents will receive a $25 gift card. A trained professional qualitative study team member will conduct recorded 1-h parent interviews via phone or video software. Qualitative interview guides will be pilot tested prior to use with study subjects. The guide will elicit parents' perceptions of the feasibility, acceptability, and barriers/facilitators of the pMTM intervention, specifically focusing on how outcomes may have been impacted by their health literacy and whether their child was dependent on specific forms of technology. Recruitment for qualitative caregiver interviews will discontinue if ongoing analysis (described below) reveals thematic saturation .

Adoption, implementation, and maintenance

To measure PCP adoption of the pMTM intervention, short annual confidential surveys will be administered to quantify adoption of the pMTM intervention and its recommendations by clinical providers, as well as satisfaction and time spent related to pMTM. The study team will pilot test and monitor surveys to identify potential problems that could result in missing responses. Providers will be encouraged to complete all items on the survey, informed of the negative impact of missing data on the research, and assured that their answers are completely confidential. Those who participate will receive a $10 gift card after completing each annual survey. We will calculate the percentage and representativeness of eligible providers involved in the pMTM intervention and attempt to collect reasons for declination if observed.

Implementation fidelity will be evaluated through audio recording of a sample of visits from the intervention arm (pMTM visit and corresponding clinical visit) and the usual care arm (clinical visit). We will screen and recruit participants for recording of visits using permuted block randomization for a total of 100 audio-recorded encounters (50 encounters per arm). For in-person visits, study personnel will start the recorder and leave the room; the parent, child, or provider can stop recordings at any point. For telehealth-based pMTM study visits, audio recording will occur within the software. The audio-recorded clinical encounters will be used to compare whether the provider addresses pMTM-related components (medication review, optimizations, and action plan) during the clinical visits (binary outcome) and to estimate the time needed to implement the pMTM intervention or discuss medication-related issues (continuous outcome), focusing on differences between pMTM and usual care.

To measure aims related to pMTM maintenance, we will conduct 15 qualitative interviews with consented providers at each of 3 time points (the beginning, middle, and end of the trial), for a total of 45 interviews. To reduce bias, we will attempt to interview all providers at least once during the study period. We will also attempt to interview some providers (specifically the pharmacists) at > 1 time point to evaluate how their experience with pMTM changed over time. Qualitative interview guides will be pilot tested prior to use with study subjects. We will elicit providers' perceptions of the feasibility, acceptability, and barriers/facilitators of the pMTM intervention. At the final time point, we will specifically focus on providers' perceptions and intentions of sustaining the intervention following the completion of the trial. Providers who participate in an interview will receive a $50 gift card. Recruitment will discontinue if ongoing analysis (described below) reveals thematic saturation .

Finally, to measure maintenance outcomes related to program replication costs, we will use a time-driven activity-based costing approach to measure the cost related to implementation and maintenance of pMTM relative to usual care costs. Using best practices, we will develop process maps of patient/parent flow for both pMTM and usual care delivery and specify care activities and who (pharmacist, provider, other clinic staff) performs each activity .
The largest component of cost will be the time clinic staff devote to delivering pMTM and usual care, which we will measure using the audio recordings of clinical visits, the annual surveys (questions about average time spent on pre-clinical-visit preparation and post-visit documentation), and the provider interviews (to explore reasons for variation) described above. Measures of time will be converted to cost using internal salaries and fringe benefits for each category of clinic staff. We will also value time using Bureau of Labor Statistics data to estimate more representative replication costs . We will obtain cost information for other clinic and informatics resources directly and indirectly supporting the adoption, implementation, and maintenance of pMTM .

Blinding

Due to the nature of the pMTM intervention, patient, pharmacist, and PCP participants are not blinded to the intervention. Investigators and statisticians performing data analysis will be blinded to subject allocation. Additionally, participants involved in assessment of safety measures and the pediatric pharmacists involved in assignment of the MRP primary outcome will be blinded to patient group assignment.

Statistical methods

Patient and parent characteristics in both study arms will be evaluated using appropriate measures of central tendency and spread for continuous variables and proportions for categorical variables. For the primary MRP outcome analysis, we will assess for differences in MRP counts between the intervention and control groups at 90 days using generalized linear models with a Poisson response distribution and log link function. The overall effectiveness of the intervention will be assessed by testing the model coefficient for randomization group, with a null hypothesis of no mean difference in MRP counts at 90 days between treatment and control groups. Model checking and diagnostics will be performed to assess the validity of model assumptions, with appropriate remedial measures taken as necessary. For the secondary outcome analyses, we will assess for differences in outcome changes over time between the intervention and control groups using generalized linear mixed models, which account for correlation between repeated outcome measurements over time. Within-subject correlation will be accounted for using a random intercept.

Towards assessment of patient and parent factors modifying pMTM effectiveness, we will employ similar generalized linear mixed models. Each of the pre-specified effect modifiers will be modeled as an interaction term between the intervention variable (binary) and the effect modifier variable (binary). The test of the null hypothesis that the interaction term's coefficient is equal to 0 will indicate whether there is evidence that the effectiveness of the intervention varies according to the proposed modifier.

Towards implementation fidelity, comparisons focusing on differences between the intervention and usual care arms will be made using generalized linear mixed models with a logistic link and a random intercept for provider to account for correlation within providers. Comparisons in time, focusing on whether there is a difference between arms in the time a provider spends addressing medication-related issues, will be made using linear mixed models, with a random intercept for provider to account for correlation within providers.
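As a concrete illustration of the model families just described, the sketch below fits the primary Poisson GLM for 90-day MRP counts, an interaction model for effect modification, and a linear mixed model with a random patient intercept for repeated PRO-Sx scores, using statsmodels. All file and column names (mrp_90d, arm, age, ccc_count, prior_util, tech_dep, prosx, visit, patient_id) are hypothetical placeholders, and per the protocol the actual analyses will be run in Stata rather than Python.

```python
# Hedged sketch of the planned model families using statsmodels.
# Column names are hypothetical placeholders; per the protocol, the
# actual analyses will be performed in Stata 17.0.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("pmtm_analytic.csv")       # hypothetical: one row per child
long = pd.read_csv("pmtm_prosx_long.csv")   # hypothetical: one row per child per time point

# Primary outcome: 90-day MRP count, Poisson response with log link,
# adjusted for the pre-specified confounders; arm is a 0/1 indicator.
primary = smf.glm(
    "mrp_90d ~ arm + age + ccc_count + prior_util",
    data=df, family=sm.families.Poisson(),
).fit()
print(primary.summary())

# Effect modification: test the arm-by-modifier interaction coefficient
# (here, a binary technology-dependence flag) against zero.
modification = smf.glm(
    "mrp_90d ~ arm * tech_dep + age + ccc_count + prior_util",
    data=df, family=sm.families.Poisson(),
).fit()
print(modification.pvalues["arm:tech_dep"])

# Secondary outcome: repeated PRO-Sx scores over time, linear mixed
# model with a random intercept per patient for within-subject correlation.
prosx_model = smf.mixedlm("prosx ~ arm * visit", data=long,
                          groups=long["patient_id"]).fit()
print(prosx_model.summary())
```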
For analysis of program implementation and replication costs, our primary measures of cost will be the average amount of time of clinic staff devoted to the pMTM intervention and usual care and the average cost per patient for pMTM and usual care. The average time will be calculated for each category of clinic staff, including the pharmacist, as the mean time measured in the audio recordings plus the mean time reported in the annual surveys. The average cost per patient will convert the average time measures to dollars using Bureau of Labor Statistics data and add in the cost of other clinic resources divided by the number of patients. We will also conduct a sensitivity analysis using different time measures based on the distribution of the time measures across the audio recordings and survey responses. For all qualitative data, we will employ qualitative content analysis throughout the periods of data collection and analysis . This is appropriate as our goal is to explore the participants’ experiences, focusing on their perceptions of the pMTM intervention and the feasibility and acceptability of the intervention. To achieve this, we will use an inductive coding process in which two or more research team members independently develop codes and their definitions through reading the transcripts. The team will discuss their respective codes to develop a consolidated codebook. The study team will then independently apply the codebook to the next set of transcripts, and then meet and reconcile their codebooks and coded data. This process will continue until a final codebook is agreed upon. The final codebook will be applied to the remaining transcripts. Coded transcripts will be entered into Atlas.ti version 9.1 for analysis, and we will develop themes that capture the major concepts about feasibility, acceptability, and barriers/facilitators of the pMTM intervention.

Missing data and intent-to-treat

In the event of missing data, we will examine the data to determine if omission varies by study arm. However, our approach using mixed effects regression modeling will provide accurate estimates and inference in the presence of missing data under certain assumptions. We will check these assumptions and, if necessary, perform sensitivity analyses to quantify the effect of missing outcome data on our results. All outcomes will be analyzed on an intention-to-treat basis.

Preservation of type-1 error rate

The overall effectiveness of the intervention will be assessed using a multiple degree-of-freedom test with a null hypothesis of no difference between study arms at 90 days post-randomization. Based on our prior studies of pediatric medication regimen complexity, we will adjust for potential confounders including patient age, number of complex chronic conditions, and recent acute healthcare utilization. All quantitative analyses will be performed in Stata 17.0 (College Station, TX). We will use a 2-sided significance level of 0.05 for all hypothesis testing; thus, the type-I error rate for the assessment of overall effectiveness is fixed at 5%. Standard errors and 95% confidence intervals will also be reported.
Power and sample size

The overall effectiveness of the intervention will be evaluated based on the primary outcome measure, total MRP count. Based on our previous medication safety studies in SCC, we anticipate enrolling 80% of eligible participants and collecting data from ≥ 80% of enrolled participants at the 90-day follow-up. We will approach 463 potential participants and enroll 371 to achieve a final analytic sample size of 296 children and their parents. This will provide > 90% power at the 2-sided 0.05 significance level to detect a 1.0 difference between study arms in MRP count, which is sufficient to detect clinically meaningful changes demonstrated by our pilot data. If there is some degree of contamination between the intervention arms due to clinicians seeing patients in both arms, the study will maintain 80% power to detect a significant mean difference in the primary outcome; this assumes a dilution of the treatment effect of 15% (i.e., that the difference in mean outcomes between treatment arms is attenuated to 0.85). These calculations assume a standard deviation in the MRP outcome of 2.6 as determined through prior work in this area. The proposed sample size will also provide adequate power to detect clinically important changes in quantitative secondary outcomes at the 2-sided 0.05 significance level. Assuming a correlation of 0.4 for each outcome within patients, the study will have 80% power to detect a difference between study arms in mean change of a) PRO-Sx symptom scores by 3.1 points and b) counts of acute healthcare utilization by 0.8–1.1 visits. To achieve implementation fidelity aims using audiotaped visits, additional sample size and power calculations were performed. With a total of 50 audiotaped visits per study arm, the study will have 80% power to detect a 0.23 difference in proportions if fidelity in the pMTM group is 90%. The study will also have 80% power to detect a mean difference of 2.8 min if the standard deviation of the length of the conversation is 5 min. Correlation within providers will be accounted for by a mixed effects model’s random intercept.
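The headline figures above can be sanity-checked with a standard two-sample power routine. The following is a simplified normal-approximation sketch in Python with statsmodels, not the study's formal calculation, using the stated 1.0-unit difference, SD of 2.6, and 148 children per arm.

```python
# Simplified check of the stated power figures (not the formal calculation):
# detect a 1.0-unit MRP difference, SD 2.6, 148 children per arm.
from statsmodels.stats.power import TTestIndPower

d = 1.0 / 2.6  # standardized effect size (Cohen's d)
power = TTestIndPower().power(effect_size=d, nobs1=148, ratio=1.0,
                              alpha=0.05, alternative="two-sided")
print(f"Approximate power: {power:.2f}")  # ~0.91, consistent with > 90%

# Under 15% dilution of the treatment effect from contamination:
diluted = TTestIndPower().power(effect_size=0.85 * d, nobs1=148, ratio=1.0,
                                alpha=0.05, alternative="two-sided")
print(f"Power with 15% dilution: {diluted:.2f}")  # ~0.80
```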
Data integrity and privacy

This project will produce a variety of data types across the five years of the project. All study data will be collected by trained research personnel during each study phase. Study data will be collected and analyzed from 4 primary sources, including (1) EHR data, (2) prospectively collected patient- and parent-reported data, (3) study visit data, and (4) transcripts from parent and provider interviews. Clinical data will be extracted from the EHR and/or patient charts. Throughout the trial, EHR data will be queried for utilization, pharmacy, and clinical outcome data. We do not anticipate the collection of any paper documents. Raw data will be transformed using REDCap data management tools and the subsequent processed dataset used for statistical analysis. REDCap is a secure, web-based application designed to support data entry, validation, and management. Designated research staff will review REDCap data monthly to ensure data completeness and quality. To protect research participant identities and based on ethical and legal considerations, only de-identified individual data will be made available for sharing. All study data will be retained for a minimum preservation time of 3 years. The preservation time will be extended such that resulting publications have been publicly available for at least 12 months before retiring any data. Data will be made available upon request to the larger research community as soon as possible or at the time of associated publication.

Access to data and dissemination policy

All investigators will have access to the trial’s final dataset. There are no contractual agreements that limit such access. The investigators intend to publish results for all pre-specified primary and secondary outcomes in the peer-reviewed literature, including publication of the study protocol and access to statistical code upon request for review. Dissemination is key to ensuring that any evidence-based practices elucidated from our study can result in substantial improvements in management of pediatric polypharmacy beyond the study’s immediate scope. Study materials, tools, and resources will be developed so that they may be easily adapted to other settings, with particular focus on creation of an implementation and adaptation guide and online training module. Should the pMTM intervention prove effective, we intend to leverage ongoing research partnerships and collaborate with additional sites to test the pMTM intervention on a broader scale.

Data and safety monitoring board

The study’s principal investigator (JAF) will have overall responsibility for the Data Safety and Monitoring Plan and for participant safety monitoring. As we are studying only the pediatric implementation of MTM, an evidence-based practice recommended and widely provided for adult and geriatric enrollees with polypharmacy, the risks to human subjects are minimal. Furthermore, any medication-related optimizations made as part of the pMTM intervention are implemented based on joint decision making between the patient, parent, and the PCP during the routine clinical visit. Although minimal risks to human subjects are anticipated and a formal data safety monitoring board is not required, we will take robust precautions to monitor study participants for signals of adverse events or unanticipated problems during the study according to AHRQ requirements.
This protocol has been prepared according to the RE-AIM framework (Table ) and the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Statement (Table ) [ - ]. Trial results will be reported according to the Consolidated Standards of Reporting Trials (CONSORT) and the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines [ - ]. This trial was registered at clinicaltrials.gov (NCT05761847) on 02/25/2023. The SPIRIT Checklist is provided as Additional File .
This trial is a 5-year hybrid type 2 randomized controlled trial funded by the Agency for Healthcare Research and Quality (AHRQ) and designed to evaluate the management of pediatric polypharmacy in the primary care setting by comparing the pMTM intervention to usual care for lowering the primary outcome of MRP counts and secondary outcomes of symptom burdens and acute healthcare utilization. Because pediatric pharmacist support is currently a limited resource, a hybrid type 2 design is the most efficient, rigorous design to simultaneously evaluate the effectiveness and implementation of pMTM to enable rapid dissemination . The intervention is not blinded to enrolled patients; however, study team members involved in assessment of outcomes, data analysis, and safety monitoring will be blinded. All study procedures were reviewed and approved by the affiliated Institutional Review Board (IRB). Protocol amendments will be approved by the local IRB. Other pertinent parties will be notified through updates to the clinicaltrials.gov website. All publications related to the study will include a summary of protocol amendments.
Study enrollment is scheduled to begin in August 2023 and will occur through September 2027. The study will take place at the Special Care Clinic (SCC) at Children’s Hospital Colorado, a large multidisciplinary primary care medical home for CMC within a large, tertiary, freestanding children’s hospital. Patients ages 2–18 years old with ≥ 1 complex chronic condition and ≥ 5 concurrent medications (including prescription, as needed, and over-the-counter medications), and their primary parental caregiver will be screened for inclusion . Patients with a non-English speaking primary caregiver will be excluded, as the pMTM intervention and certain study instruments are currently available only in English. Females and males and members of all racial and ethnic categories will be included if eligible, without bias.
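As a minimal sketch, the inclusion and exclusion logic above could be expressed as a simple screening predicate. The record fields below are hypothetical placeholders, not the actual schema of the EHR report used by the study team.

```python
# Hypothetical sketch of the inclusion/exclusion logic described above;
# field names are placeholders, not the EHR report's actual schema.
from dataclasses import dataclass

@dataclass
class ScreenRecord:
    age_years: int
    ccc_count: int            # complex chronic conditions
    med_count: int            # prescription + as-needed + OTC medications
    caregiver_english: bool

def is_eligible(r: ScreenRecord) -> bool:
    return (
        2 <= r.age_years <= 18
        and r.ccc_count >= 1
        and r.med_count >= 5        # polypharmacy threshold
        and r.caregiver_english     # study instruments English-only
    )

candidates = [
    ScreenRecord(10, 3, 8, True),   # eligible
    ScreenRecord(15, 0, 9, True),   # no complex chronic condition
    ScreenRecord(6, 2, 4, True),    # below polypharmacy threshold
]
print([is_eligible(r) for r in candidates])  # [True, False, False]
```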
Eligibility screening will be conducted by trained research personnel using automated daily electronic health record (EHR) reporting tools that identify eligible children with a scheduled routine clinical visit in the SCC within the next 14 days. Following review of eligibility criteria, research personnel will contact the caregiver to introduce the study, invite the caregiver (and assenting adolescents) to participate, and obtain written consent. Following consent, research personnel will work with study participants to complete baseline and 90-day assessments using EHR functionality. Baseline assessment will include patient and parent demographics, assessment of health literacy, assessment of parent attitudes towards deprescribing, and parental assessment of symptom burden (Table ). Using the current EHR medication list, all participants will undergo medication history review with a study team member trained using WHO’s Standard Operating Protocol ; data will be collected for research purposes only, but if significant medication safety concerns are noted, the study team will alert the primary care provider (PCP) before the clinical visit. All additional data collection will occur during subsequent study and clinical visits. Participants will then be randomized 1:1 in permuted blocks of 4 patients to the pMTM intervention or usual care (2 patients to each arm), with the pMTM intervention occurring ≤ 3 days before a scheduled well child visit or routine follow-up medical encounter (Fig. ). Those randomized to intervention will meet with a study pharmacist (PharmD) in-person or via telehealth for completion of the pMTM encounter (described comprehensively below). Both groups will then be seen for their scheduled PCP visit as occurring within usual care. After the clinical visit, all participants will receive the post-clinical visit medication list and, for those in the intervention arm, the medication action plan (MAP). Participants will be followed for 90 days after the clinical visit to track the primary, secondary, and exploratory outcomes (Tables and ).
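Below is a small sketch of the 1:1 permuted-block allocation described above (blocks of 4, 2 patients per arm). An actual trial would draw assignments from a concealed, pre-generated sequence rather than computing them on the fly.

```python
# Illustrative permuted-block randomization (blocks of 4, 1:1 allocation).
# Actual trial allocation would use a concealed, pre-generated sequence.
import random

def permuted_block_sequence(n_participants: int, seed: int = 2023) -> list[str]:
    rng = random.Random(seed)
    sequence: list[str] = []
    while len(sequence) < n_participants:
        block = ["pMTM", "pMTM", "usual care", "usual care"]
        rng.shuffle(block)  # 2 assignments per arm within every block of 4
        sequence.extend(block)
    return sequence[:n_participants]

allocation = permuted_block_sequence(8)
print(allocation)
# Every consecutive block of 4 contains exactly 2 assignments per arm.
```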
Intervention: pMTM conceptual framework

The trial design is conceptualized based on the Shed-MEDS model of deprescribing, which posits that adult patients with potentially inappropriate polypharmacy will benefit from a patient-centered deprescribing intervention to reduce polypharmacy and improve health . With permission, the model is modified to include the broader core activity of pMTM, during which parents and providers review medication changes, continuation, proper use, monitoring, and follow-up (Fig. ) . For CMC with polypharmacy, our model specifies that the pMTM intervention (which accounts for and prioritizes safety, quality of life, and parental considerations) will lead to patient-centered optimization of medications [ , , ].

Intervention: pediatric medication therapy management steps

Following baseline patient and caregiver assessments, patients randomized to the intervention will take part in a PharmD visit for application of the pMTM intervention, occurring in person or via telehealth within 3 days prior to the planned PCP clinical visit. The pMTM intervention will be applied during a 30- to 45-min visit in which the PharmD will work collaboratively with the caregiver to complete 3 major activities (Table ). First, the PharmD will perform with the patient and caregiver a comprehensive medication review (CMR) of the patient’s current personal medication list (PML). CMR is a systematic process of collecting patient-specific data and assessing for potential medication-related problems. Subsequent clinical decisions are reliant on accuracy of available data; therefore, the first step of CMR is to conduct a thorough medication history using all available resources, updating the patient’s PML where necessary, and documenting that such activities have occurred. The PharmD will gather a list of active medications (including prescriptions, over-the-counter medications, dietary supplements, and complementary medicines), determine and confirm active disease states, and identify providers involved in the prescribing and management of current medication therapy. Next, the PharmD will review each disease therapy with the caregiver and patient, if appropriate, to determine current goals of therapy. Caregiver understanding of goals of therapy is important to their ability to provide confident care to CMC. Education related to mismatch of therapy goals and known medication effects may be addressed at this time. Additionally, the PharmD will determine if any barriers may be affecting adherence. Barriers to adherence in CMC often include taste aversion, difficult or confusing administration techniques, burdensome dosing schedules, or medication cost, among others. Such barriers may be addressed in subsequent steps of the pMTM intervention. Following review of therapy goals and adherence, the PharmD will evaluate any available laboratory data and discuss ongoing clinical symptoms to determine current medication effectiveness or lack thereof. The PML will be appraised for potential therapeutic duplication and therapies for which resolution of symptoms may render ongoing treatment unnecessary. Common examples of duplication of therapy within this population include use of multiple NSAID agents or acetaminophen-containing products or use of multiple medications within the same therapeutic class (e.g., clonidine and guanfacine). Such duplication may cause potentiation of adverse effects or contribute to excessive medication costs without conferring additional therapeutic benefit.
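To illustrate the duplication screen just described, here is a toy check that flags multiple agents within the same therapeutic class. The class map is an invented fragment for demonstration only, not a clinical reference.

```python
# Toy therapeutic-duplication screen; the class map is an illustrative
# fragment only, not a clinical reference.
from collections import defaultdict

THERAPEUTIC_CLASS = {
    "ibuprofen": "NSAID",
    "naproxen": "NSAID",
    "clonidine": "alpha-2 agonist",
    "guanfacine": "alpha-2 agonist",
    "omeprazole": "PPI",
}

def duplication_flags(med_list: list[str]) -> dict[str, list[str]]:
    by_class: dict[str, list[str]] = defaultdict(list)
    for med in med_list:
        cls = THERAPEUTIC_CLASS.get(med.lower())
        if cls:
            by_class[cls].append(med)
    # Keep only classes with more than one agent on the list
    return {cls: meds for cls, meds in by_class.items() if len(meds) > 1}

pml = ["Ibuprofen", "Naproxen", "Guanfacine", "Omeprazole"]
print(duplication_flags(pml))
# {'NSAID': ['Ibuprofen', 'Naproxen']}
```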
As the final component of CMR, the PharmD will review current patient symptoms to identify those which may be attributable to medication use or toxicity, which may result in recommendations for alternative therapy or deprescribing. Gaps in therapy for guideline-directed care of various disease states (e.g., asthma) may also be identified at this stage. The second essential element of the pMTM intervention involves optimization of the medication regimen to address those concerns or opportunities identified within the preceding CMR. With the caregiver, a list of all ongoing concerns will be prioritized according to goals of therapy. Considerations related to safety, patient quality of life, and caregiver or family quality of life will be weighed during this process and communicated within subsequent recommendations for medication optimization. Next, the PharmD will formulate a plan for recommended medication-related changes, including potential dose or frequency adjustment, discontinuation of therapy, or initiation of alternate medications to manage untreated or ongoing disease symptoms. Recommendations will be classified according to type (Table ), and rationale will be provided. All recommendations will be included in a structured pMTM provider note within the EHR which is intended to be informative and suggestive, as well as concise and respectful. Recommendations may be provided in the form of changes recommended for urgent action by the PCP (at the impending clinic visit) or communication with subspecialists for those disease states primarily managed by alternate providers. Expectations and recommendations for both subjective monitoring and objective labs or tests (e.g., ECG) will be communicated to the caregiver and documented within the structured pMTM provider note. Any medication changes recommended during the visit will ultimately be made at the discretion of the PCP during clinical care. As the final component of the pMTM intervention, after the clinical visit, the PharmD will create a written, patient-centered, and caregiver-friendly MAP using a template populated from the EHR. This MAP will describe a prioritized list of specific action items resulting from the interactive pMTM consultation, which empowers the caregiver to be personally involved in the administration of the proposed optimization(s). The document was developed from the CMS standardized format to allow for tracking of patient progress, clarification of intended patient response, and documentation of the perceived clinical effects of all changes . The MAP is designed to assist the caregiver with resolving current drug therapy concerns and to help achieve the goals of medication treatment but is not intended to provide the level of detailed communication provided to the PCP or other healthcare providers. Items reinforcing compliance, maintaining caregiver actions, and acknowledging success in the child’s medication therapy may be included. The caregiver will be encouraged to bring the MAP with them to future healthcare visits and to request updates of the document as necessary. A plan for follow-up of all changes with the PharmD or other appropriate providers will be outlined within the MAP and communicated to the caregiver.
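One way to picture the MAP described above is as a prioritized list of caregiver-facing action items populated from the EHR. The fields in the sketch below are hypothetical, loosely inspired by (not copied from) the CMS standardized format the protocol cites.

```python
# Hypothetical sketch of a medication action plan (MAP) entry; the fields
# are illustrative, loosely following the CMS standardized format.
from dataclasses import dataclass, field

@dataclass
class MapItem:
    priority: int          # 1 = address first
    medication: str
    action: str            # what the caregiver should do
    rationale: str
    follow_up: str         # who monitors the change, and when

@dataclass
class MedicationActionPlan:
    patient_name: str
    visit_date: str
    items: list[MapItem] = field(default_factory=list)

    def sorted_items(self) -> list[MapItem]:
        return sorted(self.items, key=lambda i: i.priority)

map_doc = MedicationActionPlan("Example Patient", "2023-09-01")
map_doc.items.append(MapItem(
    priority=1,
    medication="naproxen",
    action="Stop giving; duplicate NSAID with ibuprofen",
    rationale="Therapeutic duplication raises adverse-effect risk",
    follow_up="Review pain control with the PCP at the next visit",
))
for item in map_doc.sorted_items():
    print(f"{item.priority}. {item.medication}: {item.action}")
```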
Additionally, the reconciled PML (created within the CMR stage of the pMTM visit) will be provided to the caregiver to assist in the understanding of current medication treatment and the tracking of potential medication changes, such as addition of over-the-counter medications or removal of discontinued products. Information about appropriate disposal of unneeded medications will be provided by the PharmD if applicable.

Control group: usual care

Patients randomized to usual care will undergo medication history review performed by study personnel prior to the PCP clinical visit as previously described. The goal of this medication review process is to ensure accuracy of the baseline medication-related data without recommendations related to medication management. We selected a usual care comparator because there are no current established standards for centralized medication management strategies within the population of CMC. All medication decisions for the control group are at the discretion of the PCP.
Table includes all study measures and data collection time points. To promote study retention, participants will receive compensation in the form of $50 gift cards at two study time-points (i.e., completion of the clinical visit and at 90 days). Each study measure listed in Table is briefly described below.

Demographics, health literacy, and attitudes towards medication management

Research personnel will use a standardized approach to extract information from the EHR related to basic patient demographics (age, gender, complex chronic conditions, level of polypharmacy). Complex chronic condition data are generated using the CCC V2 published classification system based on ICD-10 diagnosis codes . Prior work has demonstrated that CMC with some complex chronic conditions, such as technology dependence (e.g., tracheostomy dependence, gastrostomy tube), may be exposed to higher levels of polypharmacy, and subsequent analysis will seek to determine if the medical complexity of CMC is associated with varying trial outcomes . Caregiver health literacy will be assessed through parental completion of the Short Assessment of Health Literacy (SAHL-E) . This test consists of 18 items, for which participants are instructed to read a medical term aloud before associating each term with another word of similar meaning to demonstrate comprehension. In this study, scores > 14 will indicate adequate health literacy, while scores ≤ 14 will indicate inadequate health literacy. Studies of medication management in adults have demonstrated a clear link between health literacy and medication self-management skills . Parents with different levels of health literacy may have different levels of engagement with the pMTM intervention, especially because medication optimization comprises multiple activities and not solely medication discontinuation. Ultimately, interventions to manage polypharmacy must support parents of all levels of health literacy [ - ]. While the pMTM intervention uses patient-centric communication modalities, understanding differences in the outcome by level of parental health literacy will guide post-trial refinements. Finally, attitudes toward medication management will be assessed through parental completion of the Patients’ Attitudes Toward Deprescribing (PATD) tool . This scoring system consists of 15 items used to classify the participant’s feelings towards polypharmacy, their own medication history, and comfort with discontinuation of medications.

Primary outcome measure: medication-related problems

The primary outcome is the MRP count at 90 days after the clinical visit during which the PCP finalizes any clinical recommendations for either the pMTM intervention or usual care groups. Because medication changes require time to effect change in clinical outcomes, we will collect outcome measurements at 90 days after the clinical visit, consistent with adult literature using the MRP outcome [ , , - ]. Robust evidence exists to support the utility of using MRPs as an outcome to evaluate MTM. As related to this outcome, we will follow established guidelines for analyzing and reporting composite measures [ - ]. To facilitate blinded assessment of MRPs, we will generate an EHR-based clinical summary at 90 days, including the current weight, active medication list, symptom report, lab values, serum levels, and any diagnostic codes related to ADEs (Table ).
Trained study personnel will contact parents to verify the medication history, symptom data, and any adverse events or acute healthcare utilization. Blinded outcome assessments will be made by ≥ 2 pediatric pharmacists not involved in the pMTM intervention using our published standardized approach .

Secondary outcome measures: Parent-Reported Outcomes of Symptoms (PRO-Sx) and healthcare utilization

We will also measure changes in total parent-reported outcomes of symptoms (PRO-Sx) scores and 90-day acute healthcare utilization for both the intervention and usual care groups [ , , ]. To ensure that symptoms are stable or improved after medication changes, we will track PRO-Sx scores, which we have experience measuring among CMC in the ambulatory setting [ , , ]. Based on our prior work, it is feasible for parents to easily track symptoms from home via EHR functionality [ , , ]. This will occur at scheduled time points, including the date of the pMTM visit for patients allocated to the intervention group, the date of the PCP clinical visit for all patients, and at 7 days, 30 days, and 90 days following the clinical visit. As part of the 90-day clinical summary, we will track counts of unplanned acute care utilization, including ambulatory sick visits, emergency room visits, and inpatient hospitalizations.

Exploratory outcomes and safety measures

During the 90-day follow-up period, we will assess additional exploratory outcomes including Medication Regimen Complexity Index (MRCI) scores and medication counts for all medications, including scheduled, as needed, and over-the-counter medications . MRCI, a tool developed to measure medication complexity in adult and geriatric populations with polypharmacy, has demonstrated potential for application in pediatric populations and has been associated with increased acute healthcare utilization [ , , - ]. In addition to the previously described patient-level PRO-Sx symptom burden scores, parental completion of the Burden Scale for Family Caregivers (BSFC) and the Patient Health Questionnaire (PHQ-9) will be performed at scheduled time points to measure caregiver burden and mental health . In addition to assessment of ADEs as described within the MRP primary outcome, several other measures of medication safety and adherence will be collected. First, the DrugBank database will be interrogated against baseline and 90-day patient medication lists for potential drug-drug interaction counts . High-alert medication counts will be assessed using published guidance from the Institute for Safe Medication Practices . Finally, medication adherence will be measured using the Adherence to Refills and Medications Scale (ARMS) at similar study time points .

Additional outcomes within RE-AIM framework

As defined within the RE-AIM framework of the study design, the aims of this study will address several goals which are not formally captured by the primary and secondary outcomes defined above, which primarily measure pMTM effectiveness. The outcomes that will be used to assess the impact of the pMTM intervention towards other aims are described briefly below:

Reach

Reach of the pMTM intervention will be quantified by measuring the percentage and representativeness of CMC with polypharmacy who accept and decline participation in the pMTM intervention. Study personnel will track the patients and parents declining participation, including previously defined demographics and reasons for non-participation.
Effect modification

Previously described variables of medical complexity, health literacy, and attitudes towards medication management will be assessed at the patient and parent level, where appropriate, to quantitatively evaluate intervention effect modification. To qualitatively evaluate effect modification, we will conduct semi-structured interviews throughout the study period with a total of 40 caregivers. Qualitative interviews will include 10 caregivers from each subgroup (technology dependent/independent and high/low health literacy); only caregivers who participated in the pMTM intervention arm will be included. To recruit for this portion of the study, participating parents will receive a $25 gift card. A trained professional qualitative study team member will conduct recorded 1-h parent interviews via phone or video software. Qualitative interview guides will be pilot tested prior to use with study subjects. The guide will elicit parents’ perceptions of the feasibility, acceptability, and barriers/facilitators of the pMTM intervention, specifically focusing on how outcomes may have been impacted by their health literacy and whether their child was dependent on specific forms of technology. Recruitment for qualitative caregiver interviews will discontinue if ongoing analysis (described above) reveals thematic saturation .
We will also attempt to interview some providers (specifically the pharmacists) at > 1 time point to evaluate how their experience with pMTM changed over time. Qualitative interview guides will be pilot tested prior to use with study subjects. We will elicit providers’ perceptions of the feasibility, acceptability, and barriers/facilitators of the pMTM intervention. At the final time point, we will specifically focus on providers’ perceptions and intentions of sustaining the interventions following the completion of the trial. Providers who participate in an interview will receive a $50 gift card. Recruitment will discontinue if ongoing analysis (described below) reveals thematic saturation . Finally, to measure maintenance outcomes related to program replication costs, we will use time-driven activity-based costing approach to measure the cost related to implementation and maintenance of pMTM relative to usual care costs. Using best practices, we will develop process maps for patient/parent flow for both pMTM and usual care delivery and specify care activities and who (pharmacist, provider, other clinic staff) performs each activity . The largest component of cost will be the time clinic staff devote to delivering pMTM and usual care, which we will measure using the audio recordings of clinical visits, annual surveys (questions about average time spent for pre-clinical visit preparation and post-visit documentation), and provider interviews (to explore reasons for variation) described above. Measures of time will be converted to cost using internal salaries and fringe benefits for each category of clinic staff. We will also value time using Bureau of Labor Statistics data to estimate more representative replication costs . We will obtain cost information for other clinic and informatics resources directly and indirectly supporting the adoption, implementation, and maintenance of pMTM .
Research personnel will use a standardized approach to extract information from the EHR related to basic patient demographics (age, gender, complex chronic conditions, level of polypharmacy). Complex chronic condition data is generated using the CCC V2 published classification system based on ICD-10 diagnosis codes . Prior work has demonstrated that CMC with some complex chronic conditions, such as technology dependence (e.g., tracheostomy dependence, gastrostomy tube), may be exposed to higher levels of polypharmacy, and subsequent analysis will seek to determine if the medical complexity of CMC is associated with varying trial outcomes . Caregiver health literacy will be assessed through parental completion of the Short Assessment of Health Literacy (SAHL-E) . This test consists of 18 items, for which participants are instructed to read a medical term aloud before associating each term to another word with similar meaning to demonstrate comprehension. In this study, scores > 14 will indicate adequate health literacy, while scores ≤ 14 will indicate inadequate health literacy. Studies of medication management in adults have demonstrated a clear link between health literacy and medication self-management skills . Parents with different levels of health literacy may have different levels of engagement with the pMTM intervention, especially because medication optimization is comprised of multiple activities and not solely medication discontinuation. Ultimately, interventions to manage polypharmacy must support parents of all levels of health literacy [ - ]. While the pMTM intervention uses patient-centric communication modalities, understanding differences in the outcome by level of parental health literacy will guide post-trial refinements. Finally, attitudes toward medication management will be assessed through parental completion of the Patients’ Attitudes Toward Deprescribing (PATD) tool . This scoring system consists of 15 items used to classify the participant’s feelings towards polypharmacy, their own medication history, and comfort with discontinuation of medications.
The primary outcome is the MRP count at 90 days after the clinical visit during which the PCP finalizes any clinical recommendations for either the pMTM intervention or usual care groups. Because medication changes require time to effect change in clinical outcomes, we will collect outcome measurements at 90 days after the clinical visit, consistent with adult literature using the MRP outcome [ , , - ]. Robust evidence exists to support the utility of using MRPs as an outcome to evaluate MTM. As related to this outcome, we will follow established guidelines for analyzing and reporting composite measures [ - ]. To facilitate blinded assessment of MRPs, we will generate an EHR-based clinical summary at 90 days, including the current weight, active medication list, symptom report, lab values, serum levels, and any diagnostic codes related to ADEs (Table ). Trained study personnel will contact parents to verify the medication history, symptom data, and any adverse events or acute healthcare utilization. Blinded outcome assessments will be made by ≥ 2 pediatric pharmacists not involved in the pMTM intervention using our published standardized approach .
We will also measure changes in total parent-reported outcomes of symptoms (PRO-Sx) scores and 90-day acute healthcare utilization for both the intervention and usual care groups [ , , ]. To ensure that symptoms are stable or improved after medication changes, we will track PRO-Sx scores, which we have experience measuring among CMC in the ambulatory setting [ , , ]. Based on our prior work, it is feasible for parents to easily track symptoms from home via EHR functionality [ , , ]. This will occur at scheduled time points, including the date of the pMTM visit for patients allocated to the intervention group, the date of the PCP clinical visit for all patients, and at 7 days, 30 days, and 90 days following the clinical visit. As part of the 90-day clinical summary, we will track counts of unplanned acute care utilization, including ambulatory sick visits, emergency room visits, and inpatient hospitalizations.
During the 90-day follow up period, we will assess additional exploratory outcomes including Medication Regimen Complexity Index (MRCI) scores and medication counts for all medications, including scheduled, as needed, and over-the-counter medications . MRCI, a tool developed to measure medication complexity in adult and geriatric populations with polypharmacy, has demonstrated potential for application in pediatric populations and has been associated with increased acute healthcare utilization [ , , - ]. In addition to the previously described patient-level PRO-Sx symptom burden scores, parental completion of the Patient-Health Burden Scale for Family Caregivers (BSFC) and Patient Health Questionnaire (PHQ-9) will be performed at scheduled time points to measure caregiver burden and mental health . In addition to assessment of ADEs as described within the MRP primary outcome, several other measures of medication safety and adherence will be collected. First, the DrugBank database will be interrogated against baseline and 90-day patient medication lists for potential drug-drug interaction count . High-alert medication counts will be assessed using published guidance from the Institute for Safe Medication Practices . Finally, medication adherence will be measured using the Adherence to Refills and Medications Scale (ARMS) at similar study time points .
As defined within the RE-AIM framework of the study design, the aims of this study will address several goals which are not formally captured by the primary and secondary outcomes defined above, which primarily measure pMTM effectiveness. The outcomes that will be used to assess the impact of the pMTM intervention towards other aims are described briefly below: Reach Reach of the pMTM intervention will be quantified by measuring the percentage and representativeness of CMC with polypharmacy who accept and decline participation in the pMTM intervention. Study personnel will track the patients and parents declining participation, including previously defined demographics and reasons for non-participation. Effect modification Previously described variables of medical complexity, health literacy, and attitudes towards medication management will be assessed at the patient- and parent-level, where appropriate, to quantitatively evaluate intervention effect modification. To qualitatively evaluate effect modification, we will conduct a semi-structured interviews through the study period with a total of 40 caregivers. Qualitative interviews will include 10 caregivers from each subgroup (technology dependent/independent and high/low health literacy); only caregivers who participated in the pMTM intervention arm will be included. To recruit for this portion of the study, participating parents will receive a $25 gift card. A trained professional qualitative study team member will conduct recorded 1-h parent interviews via phone or video software. Qualitative interview guides will be pilot tested prior to use with study subjects. The guide will elicit parents’ perceptions of the feasibility, acceptability, and barriers/facilitators of the pMTM intervention, specifically focusing on how outcomes may have been impacted by their health literacy and whether their child was dependent on specific forms of technology. Recruitment for qualitative caregiver interviews will discontinue if ongoing analysis (described below) reveals thematic saturation . Adoption, implementation, and maintenance To measure PCP adoption of the pMTM intervention, short annual confidential surveys will be administered to quantify adoption of the pMTM intervention and recommendations by clinical providers, as well satisfaction and time spent related to pMTM. The study team will pilot test and monitor surveys to identify potential problems that could result in missing responses. Providers will be encouraged to complete all items on the survey, informed of the negative impact of missing data on the research, and assured that their answers are completely confidential. Those who participate will receive a $10 gift card after completing each annual survey. We will calculate the percentage and representativeness of eligible providers involved in the pMTM intervention and attempt to collect reasons for declination if observed. Implementation fidelity will be evaluated through audio recording of a sample of visits from the intervention arm (pMTM visit and corresponding clinical visit) and the usual care arm (clinical visit). We will screen and recruit participants for recording of visits using permuted block randomization for a total of 100 audio-recorded encounters (50 encounters per arm). For in-person visits, study personnel will start the recorder and leave the room. The parent, child, or provider can stop recordings at any point. For telehealth-based pMTM study visits, audio recording will occur within the software. 
The audio-recorded clinical encounters will be used to compare whether the provider addresses pMTM-related components (medication review, optimizations, and action plan) during the clinical visits (binary outcome), and to estimate the time needed to implement the pMTM intervention or discuss medication-related issues (continuous outcome), focusing on differences between pMTM and usual care. To measure aims related to pMTM maintenance, we will conduct 15 qualitative interviews with consented providers at time points including the beginning, middle, and end of the trial, for a total of 45 interviews. To reduce bias, we will attempt to interview all providers at least once during the study period. We will also attempt to interview some providers (specifically the pharmacists) at > 1 time point to evaluate how their experience with pMTM changed over time. Qualitative interview guides will be pilot tested prior to use with study subjects. We will elicit providers’ perceptions of the feasibility, acceptability, and barriers/facilitators of the pMTM intervention. At the final time point, we will specifically focus on providers’ perceptions and intentions of sustaining the interventions following the completion of the trial. Providers who participate in an interview will receive a $50 gift card. Recruitment will discontinue if ongoing analysis (described below) reveals thematic saturation . Finally, to measure maintenance outcomes related to program replication costs, we will use time-driven activity-based costing approach to measure the cost related to implementation and maintenance of pMTM relative to usual care costs. Using best practices, we will develop process maps for patient/parent flow for both pMTM and usual care delivery and specify care activities and who (pharmacist, provider, other clinic staff) performs each activity . The largest component of cost will be the time clinic staff devote to delivering pMTM and usual care, which we will measure using the audio recordings of clinical visits, annual surveys (questions about average time spent for pre-clinical visit preparation and post-visit documentation), and provider interviews (to explore reasons for variation) described above. Measures of time will be converted to cost using internal salaries and fringe benefits for each category of clinic staff. We will also value time using Bureau of Labor Statistics data to estimate more representative replication costs . We will obtain cost information for other clinic and informatics resources directly and indirectly supporting the adoption, implementation, and maintenance of pMTM .
Reach of the pMTM intervention will be quantified by measuring the percentage and representativeness of CMC with polypharmacy who accept and decline participation in the pMTM intervention. Study personnel will track the patients and parents declining participation, including previously defined demographics and reasons for non-participation.
Previously described variables of medical complexity, health literacy, and attitudes towards medication management will be assessed at the patient- and parent-level, where appropriate, to quantitatively evaluate intervention effect modification. To qualitatively evaluate effect modification, we will conduct a semi-structured interviews through the study period with a total of 40 caregivers. Qualitative interviews will include 10 caregivers from each subgroup (technology dependent/independent and high/low health literacy); only caregivers who participated in the pMTM intervention arm will be included. To recruit for this portion of the study, participating parents will receive a $25 gift card. A trained professional qualitative study team member will conduct recorded 1-h parent interviews via phone or video software. Qualitative interview guides will be pilot tested prior to use with study subjects. The guide will elicit parents’ perceptions of the feasibility, acceptability, and barriers/facilitators of the pMTM intervention, specifically focusing on how outcomes may have been impacted by their health literacy and whether their child was dependent on specific forms of technology. Recruitment for qualitative caregiver interviews will discontinue if ongoing analysis (described below) reveals thematic saturation .
To measure PCP adoption of the pMTM intervention, short annual confidential surveys will be administered to quantify adoption of the pMTM intervention and recommendations by clinical providers, as well satisfaction and time spent related to pMTM. The study team will pilot test and monitor surveys to identify potential problems that could result in missing responses. Providers will be encouraged to complete all items on the survey, informed of the negative impact of missing data on the research, and assured that their answers are completely confidential. Those who participate will receive a $10 gift card after completing each annual survey. We will calculate the percentage and representativeness of eligible providers involved in the pMTM intervention and attempt to collect reasons for declination if observed. Implementation fidelity will be evaluated through audio recording of a sample of visits from the intervention arm (pMTM visit and corresponding clinical visit) and the usual care arm (clinical visit). We will screen and recruit participants for recording of visits using permuted block randomization for a total of 100 audio-recorded encounters (50 encounters per arm). For in-person visits, study personnel will start the recorder and leave the room. The parent, child, or provider can stop recordings at any point. For telehealth-based pMTM study visits, audio recording will occur within the software. The audio-recorded clinical encounters will be used to compare whether the provider addresses pMTM-related components (medication review, optimizations, and action plan) during the clinical visits (binary outcome), and to estimate the time needed to implement the pMTM intervention or discuss medication-related issues (continuous outcome), focusing on differences between pMTM and usual care. To measure aims related to pMTM maintenance, we will conduct 15 qualitative interviews with consented providers at time points including the beginning, middle, and end of the trial, for a total of 45 interviews. To reduce bias, we will attempt to interview all providers at least once during the study period. We will also attempt to interview some providers (specifically the pharmacists) at > 1 time point to evaluate how their experience with pMTM changed over time. Qualitative interview guides will be pilot tested prior to use with study subjects. We will elicit providers’ perceptions of the feasibility, acceptability, and barriers/facilitators of the pMTM intervention. At the final time point, we will specifically focus on providers’ perceptions and intentions of sustaining the interventions following the completion of the trial. Providers who participate in an interview will receive a $50 gift card. Recruitment will discontinue if ongoing analysis (described below) reveals thematic saturation . Finally, to measure maintenance outcomes related to program replication costs, we will use time-driven activity-based costing approach to measure the cost related to implementation and maintenance of pMTM relative to usual care costs. Using best practices, we will develop process maps for patient/parent flow for both pMTM and usual care delivery and specify care activities and who (pharmacist, provider, other clinic staff) performs each activity . 
The largest component of cost will be the time clinic staff devote to delivering pMTM and usual care, which we will measure using the audio recordings of clinical visits, annual surveys (questions about average time spent for pre-clinical visit preparation and post-visit documentation), and provider interviews (to explore reasons for variation) described above. Measures of time will be converted to cost using internal salaries and fringe benefits for each category of clinic staff. We will also value time using Bureau of Labor Statistics data to estimate more representative replication costs. We will obtain cost information for other clinic and informatics resources directly and indirectly supporting the adoption, implementation, and maintenance of pMTM.
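As a toy illustration of this time-to-cost conversion, the sketch below turns per-patient staff minutes into a per-patient dollar cost. Every wage rate, time estimate, and overhead figure here is a placeholder, not study data; the real inputs would come from the audio recordings, annual surveys, and Bureau of Labor Statistics tables.

```python
# Placeholder inputs: mean minutes per patient by staff category (hypothetical).
staff_minutes = {"pharmacist": 45.0, "pcp": 20.0, "clinic_staff": 10.0}

# Illustrative hourly wages; internal salaries or BLS estimates would be used.
hourly_wage = {"pharmacist": 65.0, "pcp": 110.0, "clinic_staff": 25.0}

FRINGE_MULTIPLIER = 1.3        # assumed loading for fringe benefits
other_cost_per_patient = 12.0  # assumed clinic/informatics overhead share

labor_cost = sum(
    (minutes / 60.0) * hourly_wage[role] * FRINGE_MULTIPLIER
    for role, minutes in staff_minutes.items()
)
cost_per_patient = labor_cost + other_cost_per_patient
print(f"Illustrative pMTM cost per patient: ${cost_per_patient:.2f}")
```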
Due to the nature of the pMTM intervention, patient, pharmacist, and PCP participants are not blinded to the intervention. Investigators and statisticians performing data analysis will be blinded to subject allocation. Additionally, participants involved in assessment of safety measures and pediatric pharmacists involved in assignment of the MRP primary outcome will be blinded to patient group assignment.
Patient and parent characteristics in both study arms will be evaluated using appropriate measures of central tendency and spread for continuous variables and proportions for categorical variables. For the primary MRP outcome analysis, we will assess for differences in MRP counts between the intervention and control groups at 90 days using generalized linear models with a Poisson response distribution and log link function. The overall effectiveness of the intervention will be assessed by testing the model coefficient for randomization group, with a null hypothesis of no mean difference in MRP counts at 90 days between treatment and control groups. Model checking and diagnostics will be performed to assess the validity of model assumptions, with appropriate remedial measures taken as necessary.

For the secondary outcome analysis, we will assess for differences in outcome changes over time between the intervention and control groups using generalized linear mixed models, which account for correlation between repeated outcome measurements over time. Within-subjects correlation will be accounted for using a random intercept. Towards assessment of patient and parent factors modifying pMTM effectiveness, we will employ similar generalized linear mixed models. Each of the pre-specified effect modifiers will be modeled as an interaction term between the intervention variable (binary) and the effect modifier variable (binary). The test of the null hypothesis that the interaction term’s coefficient is equal to 0 will indicate whether there is evidence that the effectiveness of the intervention varies according to the proposed modifier.

Towards implementation fidelity, comparisons focusing on differences between the intervention and usual care arms will be made using generalized linear mixed models with a logistic link and a random intercept for provider to account for correlation within providers. Comparisons in time, focusing on whether there is a difference between arms in the time a provider spends addressing medication-related issues, will be made using linear mixed models, with a random intercept for provider to account for correlation within providers.

For analysis of program implementation and replication costs, our primary measures of cost will be the average amount of time of clinic staff devoted to the pMTM intervention and usual care and the average cost per patient for pMTM and usual care. The average time will be calculated for each category of clinic staff, including the pharmacist, as the mean time measured in the audio recordings plus the mean time reported in the annual surveys. The average cost per patient will convert the average time measures to dollars using Bureau of Labor Statistics data and add in the cost of other clinic resources divided by the number of patients. We will also conduct a sensitivity analysis using different time measures based on the distribution of the time measures across the audio recordings and survey responses.

For all qualitative data, we will employ qualitative content analysis throughout the periods of data collection and analysis. This is appropriate as our goal is to explore the participants’ experiences, focusing on their perceptions of the pMTM intervention and the feasibility and acceptability of the intervention. To achieve this, we will use an inductive coding process in which two or more research team members independently develop codes and their definitions through reading the transcripts.
The team will discuss their respective codes to develop a consolidated codebook. The study team will then independently apply the codebook to the next set of transcripts, and then meet and reconcile their codebooks and coded data. This process will continue until a final codebook is agreed upon. The final codebook will be applied to the remaining transcripts. Coded transcripts will be entered into Atlas.ti version 9.1 for analysis, and we will develop themes that capture the major concepts about feasibility, acceptability, and barriers/facilitators of the pMTM intervention.
In the event of missing data, we will examine the data to determine if omission varies by study arm. However, our approach using mixed effects regression modeling will provide accurate estimates and inference in the presence of missing data under certain assumptions. We will check these assumptions and, if necessary, perform sensitivity analyses to quantify the effect of missing outcome data on our results. All outcomes will be analyzed on an intention-to-treat basis.
The overall effectiveness of the intervention will be assessed using a multiple degree-of-freedom test with a null hypothesis of no difference between study arms at 90 days post-randomization. Based on our prior studies of pediatric medication regimen complexity, we will adjust for potential confounders including patient age, number of complex chronic conditions, and recent acute healthcare utilization. All quantitative analyses will be performed in Stata 17.0 (College Station, TX). We will use a 2-sided significance level of 0.05 for all hypothesis testing; thus, the type-I error rate for the assessment of overall effectiveness is fixed at 5%. Standard errors and 95% confidence intervals will also be reported.
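A minimal sketch of the adjusted primary model is shown below, written in Python with statsmodels purely for illustration (the protocol specifies Stata 17.0 for the actual analyses). The data frame, variable names, and toy values are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analytic dataset: one row per child at the 90-day follow-up.
df = pd.DataFrame({
    "mrp_count": [2, 0, 1, 1, 3, 2, 4, 1],    # MRPs at 90 days (toy values)
    "arm":       [1, 1, 1, 1, 0, 0, 0, 0],    # 1 = pMTM, 0 = usual care
    "age":       [6, 11, 3, 9, 7, 12, 4, 8],  # example adjustment covariate
})

# Poisson GLM with log link: exp(coefficient on 'arm') is the MRP rate ratio;
# testing that coefficient against 0 tests the null of no arm difference.
fit = smf.glm("mrp_count ~ arm + age", data=df,
              family=sm.families.Poisson()).fit()
print(fit.summary())

# Effect modification would add an interaction, e.g.
# "mrp_count ~ arm * high_literacy + age", and test the interaction coefficient.
```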
The overall effectiveness of the intervention will be evaluated based on the primary outcome measure, total MRP count. Based on our previous medication safety studies in SCC, we anticipate enrolling 80% of eligible participants and collecting data from ≥ 80% of enrolled participants at the 90-day follow-up. We will approach 463 potential participants and enroll 371 to achieve a final analytic sample size of 296 children and their parents. This will provide > 90% power at the 2-sided 0.05 significance level to detect a 1.0 difference between study arms in MRP count, which is sufficient to detect clinically meaningful changes demonstrated by our pilot data. If there is some degree of contamination between the intervention arms due to clinicians seeing patients in both arms, the study will maintain 80% power to detect a significant mean difference in the primary outcome; this assumes a dilution of the treatment effect of 15% (i.e., that the difference in mean outcomes between treatment arms is attenuated to 0.85). These calculations assume a standard deviation in the MRP outcome of 2.6 as determined through prior work in this area. The proposed sample size will also provide adequate power to detect clinically important changes in quantitative secondary outcomes at the 2-sided 0.05 significance level. Assuming a correlation of 0.4 for each outcome within patients, the study will have 80% power to detect a difference between study arms in mean change of (a) PRO-Sx symptom scores by 3.1 points and (b) counts of acute healthcare utilization by 0.8–1.1 visits. To achieve implementation fidelity aims using audiotaped visits, additional sample size and power calculations were performed. With a total of 50 audiotaped visits per study arm, the study will have 80% power to detect a 0.23 difference in proportions if fidelity in the pMTM group is 90%. The study will also have 80% power to detect a mean difference of 2.8 min if the standard deviation of the length of the conversation is 5 min. Correlation within providers will be accounted for by a mixed effects model’s random intercept.
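The stated power figures can be reproduced approximately with a standard two-sample calculation, treating the 1.0-count difference and the 2.6 standard deviation as a standardized effect. This is a back-of-the-envelope check under a t-test approximation, not the protocol's exact method.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 1.0 / 2.6  # detectable MRP difference / assumed SD

power = analysis.solve_power(effect_size=effect_size, nobs1=148,  # 296 analyzed
                             alpha=0.05, alternative="two-sided")
print(f"power at full effect ~ {power:.2f}")        # ~0.91, i.e. > 90%

# Worst-case contamination attenuates the effect to 85% of its size:
diluted = analysis.solve_power(effect_size=0.85 * effect_size, nobs1=148,
                               alpha=0.05, alternative="two-sided")
print(f"power under 15% dilution ~ {diluted:.2f}")  # ~0.80
```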
This project will produce a variety of data types across the five years of the project. All study data will be collected by trained research personnel during each study phase. Study data will be collected and analyzed from 4 primary sources, including (1) EHR data, (2) prospectively collected patient- and parent-reported data, (3) study visit data, and (4) transcripts from parent and provider interviews. Clinical data will be extracted from the EHR and/or patient charts. Throughout the trial, EHR data will be queried for utilization, pharmacy, and clinical outcome data. We do not anticipate the collection of any paper documents. Raw data will be transformed using REDCap data management tools and the subsequent processed dataset used for statistical analysis. REDCap is a secure, web-based application designed to support data entry, validation and management. Designated research staff will review REDCap data monthly to ensure data completeness and quality. To protect research participant identities and based on ethical and legal considerations, only de-identified individual data will be made available for sharing. All study data will be retained for a minimum preservation time of 3 years. The preservation time will be extended such that resulting publications have been publicly available for at least 12 months before retiring any data. Data will be made available upon request to the larger research community as soon as possible or at the time of associated publication.
All investigators will have access to the trial’s final dataset. There are no contractual agreements that limit such access. The investigators intend to publish results for all pre-specified primary and secondary outcomes in the peer-reviewed literature, including publication of the study protocol and access to statistical code upon request for review. Dissemination is key to ensuring that any evidence-based practices elucidated from our study can result in substantial improvements in management of pediatric polypharmacy beyond the study’s immediate scope. Study materials, tools, and resources will be developed so that they may be easily adapted to other settings, with particular focus on creation of an implementation and adaptation guide and online training module. Should the pMTM intervention prove effective, we intend to leverage ongoing research partnerships and collaborate with additional sites to test the pMTM intervention on a broader scale.
The study’s principal investigator (JAF) will have overall responsibility for the Data Safety and Monitoring Plan and for participant safety monitoring. As we are studying only the pediatric implementation of MTM, an evidence-based practice recommended and widely provided for adult and geriatric enrollees with polypharmacy, the risks to human subjects are minimal. Furthermore, any medication-related optimizations made as part of the pMTM intervention are implemented based on joint decision making between the patient, parent, and the PCP during the routine clinical visit. Although minimal risks to human subjects are anticipated and a formal data safety monitoring board is not required, we will take robust precautions to monitor study participants for signals of adverse events or unanticipated problems during the study according to AHRQ requirements.
Optimal health for the priority population of CMC often depends on the chronic use of multiple medications in the outpatient setting. In all populations, MRPs resulting from polypharmacy can lead to potentially devastating outcomes, and CMC are indeed more vulnerable to MRPs. For example, in a study of 144 million pediatric emergency department visits, CMC were approximately 5 times more likely to experience an ADE-related emergency visit. In the outpatient setting, CMC may also have undertreated symptoms, receive suboptimal pharmacotherapy, or experience preventable adverse effects. While pediatric polypharmacy is prevalent, current polypharmacy management strategies are fragmented and reactive, and medication safety initiatives remain a high priority for pediatric complex care programs. The medication-related and overall health outcomes associated with an MTM program for pediatric patients are unknown, particularly as these relate to CMC. We propose that a pMTM intervention by pediatric pharmacists could, through patient-centered medication regimen simplification and tailored caregiver support, address MRPs and result in increased parental confidence and medication understanding, thereby improving medication safety and effectiveness.

Real and potential limitations of the study do exist. First, enrollment plans were established in alignment with our prior medication safety studies, in which enrollment occurred at a rate of approximately 100 patient-parent pairs per year. If recruitment is slower than planned, we will work with the local family advisory council to alter our recruitment protocol. Also, if the intervention and study instruments are expanded to other languages during the study period, we will include these additional populations. Second, because participant blinding cannot be achieved for the pMTM intervention, participants who do not receive the intervention may leave the study early, potentially biasing results. We will provide small incentives to retain study participants. Also, all study personnel participating in assessment of outcomes, data analysis, and safety monitoring will be blinded. Third, as described above, the risk of contamination is low, but our total sample size accounts for a worst-case scenario of a 15% reduction in treatment effect from contamination. Finally, while we include CMC from a large urban and rural catchment area, this may not be representative of all CMC. To inform generalizability, we will compare enrolled CMC with national data.

The pMTM study is the first randomized controlled trial to evaluate a centralized, coordinated, and comprehensive approach to medication management in CMC with polypharmacy. The results of this trial will quantify the impact of the pMTM intervention on medication safety, effectiveness, and overall medication complexity. Additionally, this trial will examine the impact of pMTM on subsequent acute healthcare utilization by CMC. Through the described systematic approach, the results of this trial will inform the pediatric medical community on the value and effectiveness of pMTM towards optimization of medication therapy among CMC with polypharmacy.
Trial status
We anticipate that trial recruitment will begin in August 2023 and will be completed by September 2027. The trial protocol is currently active in its original version without revision.
Additional file 1. SPIRIT 2013 Checklist: Recommended items to address in a clinical trial protocol and related documents.
The use of text-mining software to facilitate screening of literature on centredness in health care
Centredness in health care, i.e. care which takes its starting point in the patient perspective and is co-constructed and managed in partnership between patients and professionals, has been adopted in current health care discourse in Europe and there is an increasing call for its implementation worldwide. The lack of a clear and uniform definition and conceptualisation of centredness in health care is noticeable. However, this has been discussed as a strength, as the contextualisation of a concept is seen as crucial for successful implementation. Nevertheless, the lack of a coherent conceptualisation provides special challenges for literature reviews that explore such a broad topic of research. When decision-makers, practitioners and researchers are unable to review all existing knowledge, there is an obvious risk of misinformation due to a lack of synthesis of all the relevant research.

Challenges in screening research on centredness in health care

When searching for literature as one of the first steps in conducting a literature review, using a combination of several terms, including index terms and free text words, is most often ideal. This is an important measure to make sure that all relevant literature is retrieved. When focusing on the literature on centredness in health care, this first step presents several challenges. Firstly, various terms are used in connection with centredness, for example, person, patient, client, and family. Secondly, centredness in health care is closely related to and overlaps other fields of research, which themselves involve considerable volumes of publications (e.g. shared decision-making and narrative medicine). Thirdly, only one medical subject heading (MeSH), i.e. “patient-centred care”, exists in relation to the larger field. This MeSH term was introduced to PubMed in 1995 and is defined as: ‘Design of patient care wherein institutional resources and personnel are organized around patients rather than around specialized departments’. Hence, this term relates mainly to the organisation of care and not its practice and conceptualisation. Also, despite being the only MeSH term for centredness in health care, it is not widely used (only 21,000 hits in the database PubMed) and does not capture the breadth of research literature available. In order to screen research literature within a reasonable time frame and with the project resources available, the aforementioned challenges can lead to reviews being restrictive in their approach—only using one or a few terms, a limited time frame, or delimiting the screening to a particular population and/or health care context. Even if this is understandable from a pragmatic perspective, the risk is that literature reviews only focus on select parts of the actual field of research and thus do not provide the available evidence. The difficulty in providing an overall picture of the field becomes clear, for example, when examining one current review, one white paper and an edited volume on person-centred care. Despite overlapping rationales of the publications, there is only minor or no overlap of the included studies. This indicates incompleteness in the syntheses of person-centred research, with the risk of presenting fragmented parts or even only a segment of the larger research field.
However, this example is not surprising, since thorough searches in major databases related to centredness in health care end up in more than 90,000 unique citations (which will be further described in detail in our example below). Thus, synthesising this particular field of research involves multi-level challenges, if screening is to be performed manually, which was most likely the case in all of the three review examples described above. Even if the lack of overlap between publications cannot only be explained by the use of terms, it is a fact that some reviewers choose to use only one term in literature searches, some a couple, while others use several terms. How these choices are made is rarely explained in the literature and is therefore an additional complication. Moreover, according to Hughes and colleagues, for example, conceptual differences between constructs are minor and the main difference in terminology depends on the context and patient group in focus. The use of terms does not always correspond with a conceptual basis and several terms are often used in the same publication.

Using text-mining functions in the screening process

A way of tackling the challenge of retrieving an abundance of citations is to use text-mining functions to semi-automate the screening process. Text-mining can be defined as ‘the process of discovering knowledge and structure from unstructured data (i.e. text)’ (p. 2). The use of text-mining within citation screening often entails a classification or prioritisation/ordering of retrieved citations in some way. This process typically involves an iterative approach in which reviewers manually screen titles and abstracts of a set of citations and then use these results to train a statistically predictive classification model to probabilistically identify and order citations by likelihood of relevance. Text-mining has been used within larger systematic review communities, such as the Cochrane Collaboration, for many years. However, it is likely to be increasingly used by smaller working groups (such as ours) as well. Examples of the use of such functions, manually built and tailored for specific review projects, have shown promising results. Such project teams do, however, include expertise in language technology or text-mining. The development of ‘ready-made’ software available to researchers (not requiring expertise in text-mining) has also rapidly developed during the last couple of years. There are at least fifteen tools incorporating text-mining technologies which are available for abstract and title screening of retrieved citations. The level of uptake for non-experts in text-mining, i.e. researchers, is a question under debate. For current non-users of text-mining, aspects which can hinder uptake have been described, for example, the attitude and technological knowledge in a research group (i.e. staff integration), influence from others in the systematic review community (methodological criticism), and possible barriers to organisational and technical integration of software with currently used IT systems. It has been estimated that the screening burden, i.e. the complete sample of studies that needs to be screened to include all relevant records, can be reduced by between 40 and 90%. However, using text-mining to decide a cut-off or threshold beyond which no additional citations require screening is, even if tested for specific as well as broader topics, not widely used due to the risk of lowering recall.
Nevertheless, what these functions can clearly assist with is the earlier identification of the most relevant citations, which can improve the workflow of the complete review.
Scoping reviews can be described as ‘a preliminary assessment of potential size and scope of available research literature’ (p. 101). This type of review is of particular use when the topic has not yet been extensively reviewed or is of a complex or heterogeneous nature. In our group, we wanted to map research on the topic ‘centredness in health care’. Centredness in health care was defined as care in which (1) the will, needs, and desires of people in need of health care are being elicited and acknowledged and (2) the people in need of health care, health care professionals and other people of importance are working in a collaborative partnership. A search strategy was developed in several steps using index terms and free text words related to centredness in health care (see Additional files and ). Relevant records were identified by searching the electronic databases: PubMed, Scopus, PsycINFO, Cumulative Index to Nursing and Allied Health Literature (CINAHL) and Web of Science. Language restriction was English, but no time restriction was applied. To be included in the review, the main aim of the records needed to focus on centredness in health care and the term used needed to be defined and in concordance with our stipulated definition of centredness in health care. The search resulted in the retrieval of 94,236 citations (after removal of duplicates).

In the application of text-mining, we followed the general approach described by Sawatzky et al., which involves first selecting an initial random sample large enough to train a project-tailored classifier model: in our case 5455 records from the database searches. The sample of 5455 records was screened manually (by two reviewers independently against inclusion and exclusion criteria) based on titles and abstracts. Records were classified as “included”, “maybe” or “excluded”. All records labelled “maybe” were screened in full text and then classified as “included” or “excluded”. This specific step was taken to ensure that records labelled as “included” were in fact relevant. The classified records from the manual screening were then used to train a predictive classifier model that was applied to the remaining citations from the database search.

Manually building a classifier model based on single-word frequencies

We first tried manually building a classifier model. The inspiration for doing this and not employing ready-made software was our experience of successfully building such models in previous work, and our lack of knowledge in using ready-made software in the project group at that point in time. This manually built model, developed by expert language technologists, was built on single-word frequencies (more information on this model can be found in Fig. ). However, progressively increasing the accuracy of the manual model proved time-consuming, and the screening burden remained too large to ignore. This pushed us to consider ready-made software for screening purposes.

Building a classifier model based on tri-grams in ready-made software

We decided to test the functionality of the EPPI-reviewer, but this was not straightforward. Although tutorials and support are available for EPPI-reviewer users, the perceived amount of effort needed to use the program was a bit discouraging at the time. Nevertheless, we managed to construct a bespoke classifier model in the program. Like the manual model, this model is built on word frequencies.
The difference is that it uses a tri-gram ‘bag of words’ approach, meaning that, in addition to listing single words, word pairs and triplets of words are also recognised and counted for each record. Our model was trained with the results from the random sample screening of the 5455 records, identical to the ones used in the previous step using the manually built classifier model. After this, the EPPI-built model was applied to the complete sample of records.

Pilot comparison

To have a clear rationale for this methodological change in the project, we conducted a pilot comparison between the two project-tailored classifier models, the first one built manually by expert language technologists, and the second one constructed in the program EPPI-reviewer (see Fig. ). The 1000 highest-ranking records were retrieved and manually screened for both models. In the manual model, 172 records were included at title and abstract level, 707 excluded and 121 marked as ‘maybes’. When reading these ‘maybes’ in full text, 63 records were included and 58 excluded. As a result, of the 1000 records, 235 were included and 765 excluded, meaning that in total, 23.5% of the sample was included. In the EPPI model, of the 1000 records screened at title and abstract level, 642 were included, 127 excluded and 231 marked as ‘maybes’. When reading these ‘maybes’ in full text, 192 records were included and 39 excluded. Of the 1000 records, 834 were included and 166 excluded, meaning that in total, 83.4% of the sample was included.

When comparing the two models, of the 235 records which were included using the manually built predictive model for screening, 166 were also ranked as highly eligible by the model built in EPPI-reviewer. The remaining 69 included records from the manual model were not ranked as highly eligible by EPPI. Nevertheless, the ranking in EPPI resulted in 669 other records being included. For our purposes, the classifier model built in EPPI-reviewer showed promise in identifying relevant citations earlier in the process, as compared to a manually built classifier. In the manually built model, even if the fraction of positive cases was assumed to be several times higher than for a random sample when ranking citations, it was not expected that the top-ranking citations would ever include more positive (eligible for inclusion) than negative cases (not eligible for inclusion). This was due to the small amount of textual data included in the model (title and abstract) and the skewed distribution (more records labelled as excluded than included in the initial random sample). This expectation was found to be true for the manually built model (based on single-word frequencies) but not for the model in EPPI-reviewer (based on tri-grams), which showed a higher fraction of positive than negative cases. There is no comparative data available on timeframes or human resources used for screening, but additional rounds of screening from at least two people would be necessary in order for the manual model to identify the same number of included studies as the model built in EPPI-reviewer. Additionally, no formal analysis of the accuracy of the two models was performed.
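A generic version of such a tri-gram 'bag of words' ranking can be assembled with scikit-learn, as sketched below. This is not the EPPI-reviewer implementation; the toy records and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: manually screened titles/abstracts with inclusion labels.
labelled_texts = [
    "person-centred care intervention in chronic heart failure",
    "shared decision making and partnership in oncology consultations",
    "crop rotation effects on soil nitrogen availability",
    "patient-centred communication in primary care",
]
labels = [1, 1, 0, 1]  # 1 = included at screening, 0 = excluded

# Count uni-, bi- and tri-grams, mirroring a tri-gram bag-of-words model,
# and fit a simple probabilistic classifier on top.
ranker = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
ranker.fit(labelled_texts, labels)

# Rank unscreened citations by predicted probability of inclusion so that
# the records most likely to be eligible are screened first.
unscreened = [
    "client-centred goal setting in stroke rehabilitation",
    "machining tolerances in gearbox assembly",
]
scores = ranker.predict_proba(unscreened)[:, 1]
for prob, text in sorted(zip(scores, unscreened), reverse=True):
    print(f"{prob:.2f}  {text}")
```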
The problem of delimiting database searches will not diminish in the future—rather, the opposite is more likely, as the overall number of research publications will increase. Further, new terms and combinations of already implemented terms in connection to centredness in health care might be used. As Park and Thomas discuss, it is important to consider the specific functions required for a particular review. However, the selection of suitable text-mining functions, as well as their precision for a specific review project, is challenging for a lay text-mining user (an ordinary researcher). In this commentary, we have discussed challenges associated with screening literature in a field of research with diffuse conceptual boundaries and used an example of our own journey in testing text-mining functions with literature on centredness in health care. The use of text-mining functions in ready-made software, such as the ones in EPPI-reviewer, seems truly promising for large scoping reviews such as ours, on topics with diffuse conceptual boundaries and large volumes of citations retrieved through systematic database searches.
Additional file 1. Search terms. Additional file 2. Search syntax.
Integrating consumer perspectives into a large-scale health literacy audit of health information materials: learnings and next steps
Study setting and design

NPS MedicineWise is a national consumer-centred Australian not-for-profit organisation that promotes the safe and wise use of medicines and other health technologies. The NPS MedicineWise website contains online versions of official consumer medicine information (CMI) as well as online resources to support safe and appropriate use of medicines and health technologies, education about various health conditions and tools to support health behaviours and informed decision-making (e.g. action plans and patient decision aids). This study focuses on an audit of existing consumer resources, education and tools (collectively described as resources for this article) developed by NPS MedicineWise through the federally funded Quality Use of Diagnostics, Therapeutics and Pathology Program (Australian Government Department of Health and Aged Care). The audit did not include CMIs as these are developed externally by medicine sponsors and manufacturers in accordance with Therapeutic Goods Administration (TGA) regulations for registered medicines.

Ethical approval was obtained from the University of Sydney Human Ethics Committee (Project number 2022/153). This committee ensures that research is conducted within the guidelines set out in the Australian National Statement on Ethical Conduct in Human Research (2007) – Updated 2018. After reading the participant information statement, interested participants then indicated informed consent via completion of the survey. Consumers were involved throughout project planning and implementation. The audit process comprised 4 stages (Fig. ), each described in further detail below.

Stage 1: Selecting a sample of resources

The NPS MedicineWise audit identified 147 individual consumer resources including web-based articles, downloadable factsheets and shared decision-making tools, and videos. Five consumers attended a workshop (Workshop A) to identify a sample of these resources (n = 49) for further auditing. To facilitate this task, the consumers were presented with data summarising the resources, including general descriptive data (e.g. health topic; resource type such as standard written content, audio-visual or fact sheet) and user data (e.g. unique visits, time spent on page). In addition, consumers were presented with data showing specific health literacy skills addressed in each resource. NPS MedicineWise previously collaborated with the Consumers Health Forum of Australia (Australia’s leading advocacy group for consumer health care issues) to develop Health Literacy Quality Use of Medicines Indicators. The indicators encompass 5 domains of health literacy skills relevant to quality use of medicines: individual health literacy, understanding quality use of medicines, engaging with health professionals, reading medicine information, and accessing further information. Development of the indicators (herein referred to as ‘health literacy skills’) was informed by the literature, consumer-led online discussion forums with 185 consumers, and survey responses from 1,503 consumers. In selecting the 49 resources for further auditing, consumers and staff were asked to consider the need for a variety of resources, including those with low and high webpage visits, and low and high coverage of health literacy skills. If consumers identified other reasons for prioritising a resource for further auditing, these were also integrated into the selection process.
Stage 2: Health literacy assessment of the sample

Study authors prioritised health literacy assessment tools if they were widely used within health literacy research and practice, provided numeric output, and would be feasible to implement. A combination of objective and subjective assessment tools was sought, with subjective assessments carried out by four of the consumers who selected the resources in Stage 1. The tools are underpinned by a universal precautions approach to health literacy, which argues that all patients and caregivers benefit from health information that is easier to understand.

Patient Education Materials Assessment Tool (PEMAT)

The PEMAT was selected because it is a validated and widely used tool to assess the health literacy demands of a given resource. The tool consists of 26 items and provides assessment of two domains. The first domain, understandability, refers to how easily readers of varying health literacy levels can process and explain a text’s key messages. It comprises five topics: content; word choice and style; medical terms; numbers; organisation; layout and design. The second domain, actionability, relates to how easily a reader can identify what they can do. Assessments for each domain are presented as a percentage, with scores ≥ 70% considered adequate. Consumers received PEMAT training including practising on three ‘test’ resources, with the opportunity to reflect on the task as a group and ask further questions. Two consumers then independently rated each resource using the PEMAT. Once all resources were assessed, any discrepancies were resolved through discussion between the pair of consumers.

Sydney Health Literacy Lab (SHeLL) Health Literacy Editor

The SHeLL Health Literacy Editor (the Editor) is an automated online tool that provides real-time feedback on the complexity of health information. It was selected because it could provide objective assessment beyond grade reading score. The two additional assessments used in this study were complex language and passive voice. Grade reading scores are widely used in health literacy research to estimate text complexity in relation to school grade reading levels. The Editor uses a readability formula called the Simple Measure of Gobbledygook (SMOG). This formula is a more reliable, robust, and conservative estimate of grade reading score compared to other readability formulas. In Australia, health literacy guidelines recommend that information is written at a grade 8 reading level or lower. The complex language score reports the proportion (as a percentage) of words in the text being assessed that are flagged by the program as ‘complex’. This includes acronyms, any words for which a simpler alternative has been identified, based on public health and medical thesauruses, and any words that are flagged as ‘uncommon’ in English, according to a database of more than 270 million words. Although there are no specific targets for complex language, lower scores are considered easier to understand as they contain fewer complex words. For this project, a target of < 15% complex language was used. The passive voice score indicates the number of passive voice constructions in the resource (e.g. passive voice: the medicine was given to the patient; active voice: the doctor gave the medicine to the patient). In line with the PEMAT, resources should use no more than 1 instance of passive voice.
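To make the SMOG assessment concrete, the sketch below applies the published SMOG formula, grade = 1.043 × sqrt(polysyllables × 30 / sentences) + 3.1291, with a crude vowel-run syllable counter. The counter is an approximation and the Editor's own implementation will differ; SMOG is also formally defined for samples of 30 or more sentences, so very short texts are extrapolated.

```python
import re
from math import sqrt

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels (approximation only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_grade(text: str) -> float:
    """SMOG grade = 1.043 * sqrt(polysyllables * 30 / sentences) + 3.1291."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.043 * sqrt(polysyllables * (30 / len(sentences))) + 3.1291

sample = ("Take one tablet each morning. Tell your doctor if you feel dizzy. "
          "Keep this medicine away from children.")
print(f"Approximate SMOG grade: {smog_grade(sample):.1f}")
```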
NPS MedicineWise staff assessed grade reading score, complex language, and passive voice for the 49 online resources using the Editor. Data were collated in preparation for Stage 3.

Stage 3: Workshops to review and interpret audit results, and identify priority areas

NPS MedicineWise staff and consumers were invited to attend two online workshops. The aim of these workshops was to present the results of Stages 1 and 2 of the audit and facilitate discussions to establish recommendations for revising, creating, and removing online content. The five consumers involved in Workshop A (Stage 1) were invited to take part in Stage 3, in addition to other NPS MedicineWise consumers. Workshop content and activities were designed in collaboration with and facilitated by author MC, a consumer representative with a long-standing relationship with NPS MedicineWise and chair of the NPS MedicineWise consumer advisory group. Materials were distributed prior to the workshops to provide background reading and audio-visual content explaining the project and audit findings. The first of these workshops (Workshop B) focused on presenting the background to the study and audit findings, with the goal of interpreting the audit findings collectively, as a group. The second (Workshop C) focused on identifying potential areas for improvement, with small groups looking at specific resources.

Stage 4: Critical reflections on the audit process

Attendees from the latter workshops (B and C) were invited to take part in semi-structured interviews. Interview questions asked for feedback on the health literacy audit methods and suggestions for further improvement. After obtaining consent, author JA interviewed participants via Zoom individually or in small focus groups. Participants could comment on any part of the health literacy audit. Audio data were transcribed and feedback collated. Participants were interviewed between 25th May 2022 and 9th June 2022.
NPS MedicineWise is a national consumer-centred Australian not-for-profit organisation that promotes the safe and wise use of medicines and other health technologies. The NPS MedicineWise website contains online versions of official consumer medicine information (CMI) as well as online resources to support safe and appropriate use of medicines and health technologies, education about various health conditions and tools to support health behaviours and informed decision-making (e.g. action plans and patient decision aids). This study focuses on an audit of existing consumer resources, education and tools (collectively described as resources for this article) developed by NPS MedicineWise through the federally funded Quality Use of Diagnostics, Therapeutics and Pathology Program (Australian Government Department of Health and Aged Care). The audit did not include CMIs as these are developed externally by medicine sponsors and manufacturers in accordance with Therapeutic Goods Administration (TGA) regulations for registered medicines. Ethical approval was obtained from the University of Sydney Human Ethics Committee (Project number 2022/153). This committee ensures that research is conducted within the guidelines set out in the Australian National Statement on Ethical Conduct in Human Research (2007) – Updated 2018. After reading the participant information statement, interested participants then indicated informed consent via completion of the survey. Consumers were involved throughout project planning and implementation. The audit process comprised 4 stages (Fig. ), each described in further detail below. Stage 1: Selecting a sample of resources The NPS MedicineWise audit identified 147 individual consumer resources including web-based articles, downloadable factsheets and shared-decision making tools, and videos. Five consumers attended a workshop (Workshop A) to identify a sample of these resources ( n = 49) for further auditing. To facilitate this task, the consumers were presented with data summarising the resources, including general descriptive data (e.g. health topic; resource type such as standard written content, audio-visual or fact sheet) and user data (e.g. unique visits, time spent on page). In addition, consumers were presented with data showing specific health literacy skills addressed in each resource. NPS MedicineWise previously collaborated with the Consumers Health Forum of Australia (Australia’s leading advocacy group for consumer health care issues) to develop Health Literacy Quality Use of Medicines Indicators . The indicators encompass 5 domains of health literacy skills relevant to quality use of medicines: individual health literacy, understanding quality use of medicines, engaging with health professionals, reading medicine information, and accessing further information. Development of the indicators (herein referred to as ‘health literacy skills’) was informed by the literature, consumer-led online discussion forums with 185 consumers, and survey responses from 1,503 consumers. In selecting the 49 resources for further auditing, consumers and staff were asked to consider the need for a variety of resources, including those with low and high webpage visits, and low and high coverage of health literacy skills. If consumers identified other reasons for prioritising a resource for further auditing, these were also integrated into the selection process. 
Stage 2: Health literacy assessment of the sample Study authors prioritised health literacy assessment tools if they were widely used within health literacy research and practice, provided numeric output, and would be feasible to implement. A combination of objective and subjective assessment tools was sought, with subjective assessments carried out by four of the consumers who selected the resources in stage 1. The tools are underpinned by a universal precautions approach to health literacy, which argues that all patients and caregivers benefit from health information that is easier to understand . Patient Education Materials Assessment Tool (PEMAT) The PEMAT was selected because it is a validated and widely-used tool to assess the health literacy demands of a given resource. The tool consists of 26 items and provides assessment of two domains. The first domain, understandability, refers to how easily readers of varying health literacy levels can process and explain a text’s key messages. It comprises five topics: content; word choice and style; medical terms; numbers; organisation; layout and design. The second domain, actionability, relates to how easily a reader can identify what they can do. Assessments for each domain are presented as a percentage, with scores ≥ 70% considered adequate. Consumers received PEMAT training including practising on three ‘test’ resources, with the opportunity to reflect on the task as a group and ask further questions. Two consumers then independently rated each resource using the PEMAT. Once all resources were assessed, any discrepancies were resolved through discussion between the pair of consumers. Sydney Health Literacy Lab (SHeLL) Health Literacy Editor The SHeLL Health Literacy Editor (the Editor) is an automated online tool that provides real-time feedback on the complexity of health information . It was selected because it could provide objective assessment beyond grade reading score. The two additional assessments used in this study were complex language and passive voice. Grade reading scores are widely used in health literacy research to estimate text complexity in relation to school grade reading levels. The Editor uses a readability formula called the Simple Measure of Gobbledygook (SMOG) . This formula is a more reliable, robust, and conservative estimate of grade reading score compared to other readability formulas [ – ]. In Australia, health literacy guidelines recommend that information is written at a grade 8 reading level or lower . The complex language score reports the proportion (as a percentage) of words in the text being assessed that are flagged by the program as ‘complex.’ This includes acronyms, any words for which a simpler alternative has been identified, based on public health and medical thesauruses, and any words that are flagged as ‘uncommon’ in English, according to a database of more than 270 million words. Although there are no specific targets for complex language, lower scores are considered easier to understand as they contain fewer complex words. For this project, a target of < 15% complex language was used. The passive voice score indicates the number of passive voice constructions in the resource (e.g. passive voice: the medicine was given to the patient; active voice: the doctor gave the medicine to the patient ). In line with the PEMAT, resources should use no more than 1 instance of passive voice. 
NPS MedicineWise staff assessed grade reading score, complex language, and passive voice for the 49 online resources using the Editor. Data were collated in preparation for Stage 3.

Stage 3: Workshops to review and interpret audit results, and identify priority areas

NPS MedicineWise staff and consumers were invited to attend two online workshops. The aim of these workshops was to present the results of Stages 1 and 2 of the audit and facilitate discussions to establish recommendations for revising, creating, and removing online content. The five consumers involved in Workshop A (Stage 1) were invited to take part in Stage 3, in addition to other NPS MedicineWise consumers. Workshop content and activities were designed in collaboration with, and facilitated by, author MC, a consumer representative with a long-standing relationship with NPS MedicineWise and chair of the NPS MedicineWise consumer advisory group. Materials were distributed prior to the workshops to provide background reading and audio-visual content explaining the project and audit findings. The first of these workshops (Workshop B) focused on presenting the background to the study and the audit findings, with the goal of interpreting the audit findings collectively, as a group. The second (Workshop C) focused on identifying potential areas for improvement, with small groups looking at specific resources.

Stage 4: Critical reflections on the audit process

Attendees from the latter workshops (B and C) were invited to take part in semi-structured interviews. Interview questions asked for feedback on the health literacy audit methods and suggestions for further improvement. After obtaining consent, author JA interviewed participants via Zoom, individually or in small focus groups. Participants could comment on any part of the health literacy audit. Audio data were transcribed and feedback collated. Participants were interviewed between 25 May 2022 and 9 June 2022.
Results

Stage 1: Selecting a sample of resources

Data about each of the 147 resources were presented to consumers (Appendix). Of these, 47 (32.0%) provided general information about quality use of medicines and 29 (19.7%) were about pain and pain medicines. The remaining categories covered topics such as heart health, COVID-19, dementia, and bone health. For resources with available user data, the median page visits per resource was 1,662 in 2019–2020 (interquartile range (IQR) = 3,113) and 1,604 in 2020–2021 (IQR = 2,845). Median time spent on a resource was 2 min 41 s in 2019–2020 (IQR = 1 min 54 s) and 2 min 33 s in 2020–2021 (IQR = 2 min 6 s).

A summary of how frequently health literacy skills appeared is presented in Table . Across all resources (N = 147), the health literacy skills that featured most often were those that encouraged users to ask health professionals questions about their medicines (n = 100, 68.0%), think about the benefits and risks of a medicine (n = 94, 63.9%), seek advice from a health professional before starting a medicine (n = 86, 58.5%), and read about medicine side effects on medicine labels (n = 85, 57.8%). The health literacy skills that featured least often (in less than 10% of resources) were those relating to medicine expiry dates, disposal, storage, cost, and addictiveness; taking others' prescription medicines; and advice to have a consistent health professional. On average, each resource covered 17 of the 25 health literacy skills (SD = 3.9).

During Workshop A, consumers identified 49 resources for more detailed audit. An additional resource had already been identified by NPS MedicineWise staff and used as an example to support discussions during this workshop. The detailed audit included resources from each key health topic available on the NPS MedicineWise website, and across all formats (e.g. standard written content and audio-visual formats). Table shows that each health literacy skill featured at least once in the selected resources. Throughout the selection process, consumers and staff used the summary data in conjunction with broader criteria, e.g. making sure that resources related to different lifespan stages (e.g. childhood), specific topics of interest (e.g. managing migraine), and COVID-19.

Stage 2: Health literacy assessment of the sample

Consumer ratings of PEMAT items are presented in Table . Overall, 42 of the resources (85.7%) had adequate understandability (domain scoring is sketched below). Within this domain, all resources were rated as presenting information in a logical sequence (100%) and having informative headers (100%). Almost all resources scored highly on items related to word choice and style (range 94%–98%), and for most resources the visual aids (when present) were clear and uncluttered (97%) and reinforced the written content (92%). Few resources provided a summary (27.9%), and only one third used visual aids whenever possible (32.5%). Of the 49 resources, about half had adequate actionability (n = 26, 53.1%). Although all resources identified at least one action for the user and addressed the reader directly, very few provided tangible tools (n = 15, 37.5%) or visual aids to help users act on instructions (n = 10, 25.0%).

Based on median SHeLL Health Literacy Editor assessment scores, a typical text was written at about a grade 12 reading level and used the passive voice 6 times. About one in five words in a typical text was considered complex (19%) (Table ).
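For context on how the adequacy figures above are derived, here is a minimal sketch of PEMAT domain scoring: items are rated agree (1) or disagree (0), not-applicable items are excluded from the denominator, and the resulting percentage is compared against the 70% threshold. The item wordings and ratings below are hypothetical.

```python
from typing import Mapping, Optional

def pemat_domain_score(ratings: Mapping[str, Optional[int]]) -> float:
    """PEMAT domain score: 100 * agreed items / applicable items.

    Items are 1 (agree), 0 (disagree), or None (not applicable);
    None items are excluded from the denominator.
    """
    applicable = [v for v in ratings.values() if v is not None]
    return 100 * sum(applicable) / len(applicable)

# Hypothetical understandability ratings for a single resource.
understandability = {
    "purpose is evident": 1,
    "uses common, everyday language": 1,
    "uses visual aids whenever possible": 0,
    "provides a summary": 0,
    "content is in a logical sequence": 1,
    "numbers are clear": None,  # N/A: the resource contains no numbers
}

score = pemat_domain_score(understandability)
print(f"Understandability: {score:.0f}% -> "
      f"{'adequate' if score >= 70 else 'below target'}")
```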
Stage 3: Workshops to review and interpret audit results, and identify priority areas

In Workshop B, study authors presented the results of the health literacy audit (Stage 2). Twenty-five attendees reflected on the results as a whole and discussed them in further detail in small groups. This workshop comprised 12 consumers, including the 5 consumers from Workshop A; 12 staff; and 1 health literacy researcher. Four of the attendees were also study authors (1 consumer, 2 staff, 1 health literacy researcher). Demographic characteristics of consumers are shown in Table . In addition, consumers represented either direct lived experience or a close personal connection to culturally and linguistically diverse communities, Aboriginal and Torres Strait Islander communities, younger people, carer roles, LGBTQI+, disability, homelessness, and people living with chronic conditions. Staff who attended the latter workshops (B and C) included those at executive and management levels.

Discussions centred on how to interpret the audit results to identify priorities for revising or adapting existing content. This included time allocated to identifying potential 'gaps' that new content could address (for example, more specific target audiences, and content areas or health literacy skills that could be more prominent). In Workshop C, attendees formed four small groups. Each group was given two resources assessed as having poor actionability, grade reading score, complex language, and passive voice. Each group was asked to reflect on how their specific resources could be further improved for use by consumers.

Figure depicts the three key priority areas identified at the end of Workshop C. The first two priority areas were more closely related to the PEMAT and SHeLL Editor results, whilst the third related more closely to discussions about potential gaps in the resources. Workshop discussions helped shape these priority areas. For example, the health literacy assessments indicated that the health information was often too complex (see Stage 2). Consumers discussed the importance of offering simple information alongside more detailed information. They suggested that layering information could achieve this goal, as well as using audio-visual formats for more complex concepts. Similarly, the PEMAT assessments from Stage 2 identified that many resources had poor actionability because they lacked tangible tools or visual aids. Consumers emphasised that tangible tools and visual aids would have limited utility if the purpose of a resource was unclear to readers, including the context in which it should be used.

Stage 4: Critical reflections and feedback on the audit process

Three staff and eight consumers took part in the interviews, including the four consumers involved in the PEMAT assessments. Participants appreciated the opportunity to be involved in the audit and highlighted four key ways to further improve the audit process (Table ).
Discussion

This paper presents a novel method for conducting large-scale consumer-centred health literacy audits. Consumers were involved throughout the process, from project planning and identifying which resources would undergo health literacy assessment, to conducting the health literacy assessments, interpreting results, and identifying next steps. Three key areas for future action were identified: make resources easier to understand and act on; consider the readers' context, needs, and skills; and improve inclusiveness and representation. Qualitative interviews highlighted that the audit method could be further improved by addressing issues related to diverse representation, providing greater opportunity for unstructured feedback, using a simpler subjective health literacy assessment tool, setting clear expectations about the project rationale and anticipated outcomes, and simplifying how audit data were presented.

Findings from this study add to the published literature about how to conduct a health literacy audit of a large existing database of health information resources. Previously, Alpert and colleagues conducted an audit that prioritised assessment of high-traffic health information resources (i.e. high page visits) within a US patient portal. The authors used data from a validated health literacy assessment tool to identify key overarching strategies to improve the quality of the patient portal's health information. Building on this approach, the current study involved consumers throughout the process. These methods recognise the importance of understanding how health literacy needs and strengths relate to an organisation's specific context, services, and actions, and the importance of partnering with consumers to deliver patient-centred health initiatives that have meaningful impact in the community.

Interviews also highlighted the need for a more consumer-friendly health literacy assessment tool. Although consumers perceived some value in the PEMAT's systematic and comprehensive approach, ultimately they felt the tool was too lengthy, 'academic', and inadvertently restricted the type of feedback they could provide. In theory, the PEMAT was designed for use by 'lay' people as well as health literacy experts, and many of its items assess aspects of the text that are best suited to consumer feedback (e.g. 'the material uses common, everyday language'). However, in practice, PEMAT assessments are rarely conducted by consumers. Further, to our knowledge this is the first study to report on the tool's acceptability to consumers. Other existing health literacy assessment tools, such as the CDC's Clear Communication Index, are likely to face similar issues, as they were not purpose-designed for consumers. Further work is needed to design and validate a quantitative health literacy assessment tool that applies a systematic and comprehensive approach to health literacy assessment but is easier to use and more acceptable to consumers.

This study has several strengths in addition to strong consumer engagement. First, the health literacy audit incorporated a combination of subjective and objective health literacy assessments, including objective assessments that extend beyond grade reading score. This provided richer, more detailed quantitative data about the resources.
Ultimately, our findings demonstrate that consumer input is essential but alone may not be sufficient for ensuring that health literacy needs are met, as many of the existing resources did not adhere to health literacy guidelines even though consumers had been involved in their development. Second, the audit data reported on the extent to which resources supported health literacy skills relevant to quality use of medicines. This invited greater discussion about the organisation's role in community capacity-building, an aspect of organisational health literacy that is often overlooked.

One of the key limitations was the perceived lack of diversity amongst consumers. In Australia, there are several priority groups that do not receive or cannot easily access health information or health care. Meaningful partnerships with people from these communities are not only ethical; they are essential for developing and implementing equitable health literacy initiatives. Lack of diversity among health consumers is a common issue, particularly with regard to culturally and linguistically diverse communities. In this study, workshop attendees were of varied ages, locations, and education levels, and many had direct or close personal connections to various priority groups. However, consumers discussed the need for greater diversity amongst workshop attendees. As such, the outputs of the workshops may have limited applicability to the various priority groups. Additional workshops with specific priority groups could help identify each group's unique health literacy needs and strengths.

Another limitation was that the workshops were conducted online because of the COVID-19 pandemic. Although this format has some advantages (e.g. reducing barriers related to travel or disability), it may also have contributed to perceptions that Workshop B was overwhelming and reduced opportunities to connect and build rapport. Lastly, there was a 6-month delay between the workshops and the interviews. This may have resulted in low participation rates and the difficulty some participants had remembering details of the audit.

Since project completion, the organisation has taken several steps to act on findings from this audit and continue its strong consumer-centred approach. For example, consumers have led dissemination of findings at a research conference and continue to be involved in reviewing and updating the audited resources. The SHeLL Editor and PEMAT were embedded into standard document development and review processes within the organisation, with consumers contributing to staff training in the use of the PEMAT. Lastly, NPS MedicineWise strengthened partnerships with several peak bodies representing minority groups in efforts to increase representation from diverse groups. These are practical examples of organisational health literacy actions that can inform the upcoming Australian National Health Literacy Strategy. In this study we focused on NPS MedicineWise's direct-to-consumer health information; health literacy audits of other content may benefit from engaging additional relevant stakeholders, for example health professionals and relevant non-government and government organisations.
Conclusions

This study reports novel methods for a consumer-centred, large-scale health literacy audit. Findings highlight the clear value of involving consumers in assessing resources and interpreting audit data. For future iterations, we recommend developing a consumer-centred health literacy assessment tool, increasing the diversity of consumer voices, and setting clear goals and expectations for each stage of the audit.
Additional file 1: Supplementary Table 1. Descriptive characteristics of all resources, N = 147. Supplementary Table 2. Descriptive health literacy characteristics of sample resources, n = 49.
Metabolically versatile psychrotolerant Antarctic bacterium
Background

Siderophores, including those produced by bacteria, are chelating compounds with a high affinity for iron. Their main biological function is to improve the bioavailability and uptake of this crucial element under iron-limited conditions. Bacterial siderophores can also play other biological roles, including non-iron metal transport, toxic metal sequestration, signaling, and protection from oxidative stress. They have therefore gained the attention of various branches of industry seeking to harness their potential in numerous applications. Bacterial siderophores are considered plant-growth-promoting (PGP) agents. Their role is to enhance the bioavailability of iron in the soil, which is crucial for proper plant nutrition, as well as to stimulate soil microbiota and to biocontrol phytopathogens. Due to their PGP properties, bacterial siderophores have the potential to be used as soil biostimulants and/or as components of biofertilizers.

Despite this high application potential, many aspects of siderophores production remain inefficient, limiting their broader use. Firstly, high production efficiency is a prerequisite for industrial-scale manufacturing; although siderophores biosynthesis is common across microbial taxa, the production rates and efficiency of many strains may be insufficient. Secondly, the optimization of culture conditions for siderophores-producing bacteria (SPB) is highly relevant, including the composition and pH of the medium. The choice of carbon, nitrogen, and/or phosphorus sources strongly influences the bacterial metabolic profile and affects siderophores production efficiency. For large-scale applications, a medium for siderophores production should promote a significant yield of these compounds while also being cost-efficient. Another important issue is the cost of heating or cooling during cultivation, since bacterial siderophores are usually produced by mesophilic or psychrophilic microorganisms. Finally, formulation of a siderophores-based product often requires downstream processing, such as purification (e.g., liquid–liquid or solid-phase extraction, gel filtration, or High Performance Liquid Chromatography, HPLC), which makes the production process more complex and expensive.

Siderophores production could be facilitated and streamlined by, e.g., the use of inexpensive substrates for bacterial growth. Since many SPB require supplementation of the medium with expensive amino acids (e.g., l-asparagine) to achieve significant levels of siderophores production, their low-cost production remains a challenging task. Furthermore, the medium for siderophores production has to be nearly completely devoid of iron, because iron represses siderophores biosynthesis. For this reason, low-cost alternatives to media components, such as waste materials, are mostly unsuitable for siderophores production due to their variable composition and contamination with iron. Therefore, media used for siderophores production should be based on easily accessible and low-cost pure synthetic substrates, e.g., inorganic salts.
Since metabolites produced by SPB alongside siderophores can possess other PGP traits (e.g., production of phytohormones or organic acids), purification of the final product is not required in the context of agricultural applications, provided that the siderophores accompanying metabolites (SAM) have no phytotoxic effect. Moreover, the concentrations of siderophores obtained in bacterial cultures are environmentally relevant and sufficient for PGP effects. Therefore, omitting downstream processes could not only lower the production cost but also enrich the final product in various plant-beneficial compounds. For this reason, SPB strains exhibiting complex and diverse metabolism could be excellent platforms for products with a broad range of PGP properties. In this context, numerous members of the Pseudomonas genus have the potential to be used for agricultural purposes, since they are not only efficient producers of siderophores but can also biosynthesize various secondary metabolites with PGP properties. This metabolic versatility reflects the diversity of Pseudomonas, which, owing to various physiological and genetic properties, are able to thrive in a broad range of environments, including extreme conditions.

In our previous paper we described Pseudomonas sp. ANT_H12B, a psychrotolerant siderophores producer which exhibited PGP properties with regard to alfalfa (Medicago sativa L.). Due to these characteristics, ANT_H12B could potentially be used for manufacturing biostimulating agricultural products; however, its biotechnological potential has not been fully explored, especially regarding the optimization of siderophores production and the potential PGP role of other accompanying metabolites.

The main goal of this study was to elucidate the potential of siderophores and accompanying secondary metabolites produced by the psychrotolerant strain Pseudomonas sp. ANT_H12B for biostimulation of plant growth. To achieve this aim, the following specific goals were accomplished: (i) genomic and phenotypic analysis of the metabolic potential of Pseudomonas sp. ANT_H12B in order to design cost-efficient media for siderophores production, (ii) experimental evaluation of siderophores production media, including various carbon and nitrogen sources, (iii) investigation of the composition of siderophores accompanying metabolites (SAM) produced on various microbial media, and (iv) assessment of the impact of siderophores and SAM on the rate and efficiency of plant germination. The presented studies were performed in the context of the potential application of siderophores produced by Pseudomonas sp. ANT_H12B for large-scale agricultural purposes. The knowledge gained about optimizing the yield and cost of siderophores production, based on the example of the ANT_H12B strain, may be useful for estimating the actual biotechnological potential of secondary metabolites produced by representatives of the Pseudomonas genus.
Methods

Bacterial strain and plant seeds

The bacterial strain used in this study was Pseudomonas sp. ANT_H12B (GenBank assembly accession number: GCA_008369325.1), isolated from Antarctic soil samples at King George Island (Antarctica; GPS coordinates: 62°09.6010′S, 58°28.4640′W). The strain exhibits various PGP features, including siderophores biosynthesis (pyoverdine and achromobactin), phosphate solubilization, and indole acetic acid biosynthesis. The Pseudomonas sp. ANT_H12B genome consists of 6,276,261 base pairs (58.57% GC content) and contains 6,168 genes. Untreated plant seeds were used in this study for germination tests. The selected seed cultivars were characterized by their relevance to agriculture and moderate germination rate/efficiency. The seeds of beetroot (Beta vulgaris var. conditiva cv. Patryk) and pea (Pisum sativum L. cv. Iłówiecki) were obtained from the Enterprise of Horticulture and Nursery (PNOS), Ożarów Mazowiecki, Poland, and tobacco (Nicotiana tabacum L. var. Xanthi) was propagated in the in-house growing chamber facilities to obtain a sufficient number of seeds.

Bioinformatic analysis

Genomic DNA extraction (cetyl trimethylammonium bromide/lysozyme method), sequencing (Illumina MiSeq platform), and basic genomic analysis (RAST, PATRIC and KEGG services) of Pseudomonas sp. ANT_H12B were performed and described in our previous work. In this study, additional genomic analysis was used to characterize the genetic background of the phenotypic profile of Pseudomonas sp. ANT_H12B and assess the versatility of its metabolism in the context of siderophores production. As part of the genomic analysis, genes/pathways responsible for carbon metabolism were identified. Bioinformatic analysis of the ANT_H12B genome was performed using MicrobeAnnotator (v.2.0.5) software, and the KO numbers were then mapped to KEGG metabolic pathways and manually curated. The presence of missing enzymes in incomplete metabolic pathways was manually verified using the annotation of the genome deposited in the NCBI database. In addition, a genome search was performed based on the MetaCyc database and the available scientific literature.

Phenotypic profiling of Pseudomonas sp. ANT_H12B

Phenotype Microarrays (Biolog Inc., USA) were used to examine the metabolic potential of Pseudomonas sp. ANT_H12B. The Phenotype Microarray (PM) assays involved panels for carbon (PM01 and PM02: 190 C sources), nitrogen (PM03: 95 N sources), as well as phosphorus and sulfur usage (PM04: 59 P and 35 S sources). PM assays were performed according to the standard protocols recommended by the manufacturer for gram-negative bacteria and described by Gharaie et al. All assays were performed in triplicate using an OmniLog device (Biolog Inc., USA). All data were collected by OmniLog PM System software (Biolog Inc., USA).

Optimization of medium chemical composition

Bacterial inoculum preparation for siderophores production

To prepare the bacterial inoculum for siderophores production, Pseudomonas sp. ANT_H12B was cultivated overnight in lysogeny broth (LB) medium at 20 °C with rotary shaking set to 150 rpm. Next, 50 ml of bacterial culture was centrifuged (8000 rpm, 5 min), washed with 0.85% NaCl solution to remove any residues of LB broth, and centrifuged again. This procedure was repeated twice. The final inoculum was prepared by discarding the supernatant and suspending the obtained biomass in 50 ml of 0.85% NaCl solution.

Selection of media composition and experimental set-up

Based on the phenotypic and genomic profiling of Pseudomonas sp. ANT_H12B, various media designed for siderophores production were prepared. The selected substrates included cost-effective compounds commonly used in industrial/agricultural applications. As a reference medium for siderophores production, the GASN medium (7 g L−1 glucose, 2 g L−1 l-asparagine monohydrate, 0.96 g L−1 Na2HPO4, 0.44 g L−1 KH2PO4, and 0.2 g L−1 MgSO4 × 7H2O) was used. The C:N ratio in the GASN medium is approximately 7:1. The other media used in this study were designed as modified versions of the GASN medium, using various carbon (glucose, glycerol, ethanol, citric acid) and nitrogen sources (ammonium sulfate, ammonium chloride, ammonium nitrate, l-asparagine), keeping the C:N ratio at 7:1. The concentrations of phosphates and sulfates in every medium were identical to those in the GASN medium. The detailed composition of the media, regarding carbon and nitrogen sources, is presented in Table . Media containing various carbon and nitrogen sources (CSA, CASN, EtASN, EtSA, GASN, GCl, GNO, GlASN, GlSA) in seven C:N ratio variants (1:2, 2:1, 3:1, 5:1, 7:1, 10:1, 20:1) were used to examine their influence on siderophores production. The 7:1 ratio (Table ) was regarded as the reference, and the other ratios were prepared by increasing or decreasing the concentration of the carbon source while keeping the N concentration unchanged (see the sketch below). Bacteria were cultivated for 3 days in the respective media (pH 7.0) at 10 °C with rotary shaking set to 150 rpm. The initial culture OD600nm was set at 0.06. Conditions were selected according to the optimization experiments described in Additional file . All experiments were performed in triplicate, using 96-well microplates. The three best variants from every microplate experiment were selected for verification at increased culture volume. For this purpose, bacteria were cultivated for 3 days in conditions identical to the respective microplate assay, the only difference being the volume of medium used (200 ml). Measurements of microorganism quantity (CFU ml−1), pH, and siderophores concentration (CAS assay) were taken every 24 h of the experiment.
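As an illustration of how the carbon-source doses for the C:N ratio variants might be computed, the following Python sketch works from elemental mass fractions. The 1.7 g/L nitrogen-source concentration is a placeholder, and counting only the main carbon source (ignoring, for instance, carbon contributed by l-asparagine) is a simplifying assumption, not necessarily the authors' exact procedure.

```python
# Elemental mass fractions (g element per g compound), from molar masses.
CARBON_FRACTION = {"glucose": 0.400, "glycerol": 0.391,
                   "ethanol": 0.522, "citric acid": 0.375}
NITROGEN_FRACTION = {"ammonium sulfate": 0.212, "ammonium chloride": 0.262,
                     "ammonium nitrate": 0.350}

def carbon_source_dose(c_source: str, n_source: str,
                       n_source_g_per_l: float, cn_ratio: float) -> float:
    """Grams per litre of carbon source for a target elemental C:N ratio,
    keeping the nitrogen-source concentration fixed."""
    n_grams = n_source_g_per_l * NITROGEN_FRACTION[n_source]
    c_grams = cn_ratio * n_grams
    return c_grams / CARBON_FRACTION[c_source]

# Placeholder nitrogen-source dose; the seven ratios match the study design.
for ratio in (0.5, 2, 3, 5, 7, 10, 20):
    dose = carbon_source_dose("glycerol", "ammonium sulfate", 1.7, ratio)
    print(f"C:N {ratio}:1 -> {dose:.2f} g/L glycerol")
```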
Determination of siderophores productivity of Pseudomonas sp. ANT_H12B

To perform a screening estimation of the efficiency of siderophores production in bacterial cultures in the various media, CAS (chrome azurol S) reagent was used, according to the spectrophotometric method for measuring overall siderophores concentration described by Schwyn and Neilands. All measurements were performed in triplicate. The bacterial cultures were centrifuged (8000 rpm for 5 min), and the supernatants were added in a 1:1 ratio to the CAS reagent and incubated in darkness for an hour. An automated microplate reader was used to measure the absorbance at 630 nm. A standard curve was obtained using deferoxamine mesylate salt (Sigma-Aldrich), which also served, at a concentration of 0.025 mM, as a positive control. Sterile medium was used as a negative control (see the sketch below).

To determine the concentration of pyoverdine, HPLC analyses were conducted. These analyses were performed on metabolites obtained using selected media (GASN, GCl, GSA, GlSA, GlASN, and CSA) after 3 days of cultivation. Bacterial cultures were centrifuged (8000 rpm, 5 min); the supernatants were then separated from the biomass and stored in sterile 50 ml tubes at 4 °C for further use. The liquid fractions were transferred to individual tubes, and 40 µl of a FeCl3 solution (1 M) was added to each tube. The HPLC analyses were carried out using the procedure described by Bultreys et al. Commercially available pyoverdine from a Pseudomonas fluorescens strain (Sigma Aldrich, USA) was used as a standard.
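To show how a CAS screening readout might be converted into a siderophore concentration, here is a minimal sketch of a deferoxamine standard curve with inverse prediction. All absorbance values are invented and a simple linear fit is assumed; in the CAS assay, absorbance at 630 nm falls as siderophores strip iron from the dye.

```python
import numpy as np

# Hypothetical deferoxamine mesylate standards for the CAS assay:
# concentration (µM) versus absorbance at 630 nm.
standard_um = np.array([0, 5, 10, 15, 20, 25])
standard_a630 = np.array([0.92, 0.78, 0.63, 0.50, 0.36, 0.23])

# Linear fit: A630 = slope * concentration + intercept (slope is negative).
slope, intercept = np.polyfit(standard_um, standard_a630, 1)

def siderophore_concentration(a630: float) -> float:
    """Invert the linear standard curve to estimate concentration (µM)."""
    return (a630 - intercept) / slope

for sample_a630 in (0.70, 0.45):
    conc = siderophore_concentration(sample_a630)
    print(f"A630 = {sample_a630:.2f} -> {conc:.1f} µM (deferoxamine equivalents)")
```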
Chemical analysis of SAM produced by Pseudomonas sp. ANT_H12B

The qualitative and semi-quantitative characterization of SAM was performed using GC–MS analysis. Analyses were performed on metabolites obtained using all studied media after 3 days of cultivation. Bacterial cultures were centrifuged (8000 rpm, 5 min); the supernatants were then separated from the biomass and stored in sterile 50 ml tubes at 4 °C for further use. All experiments were performed in triplicate.

Extraction of organic compounds from the aqueous phase

Organic compounds were extracted from 100 ml of the cell-free aqueous phase of the bacterial cultures and chemical control samples using 25 ml of chloroform in a separatory funnel for 3 min. This procedure was repeated three times. The chloroform extracts were dried with anhydrous Na2SO4, and the solvent was evaporated under an N2 stream. Samples were then derivatized with 0.5 ml of BSTFA:TMCS, 99:1 (Supelco, USA), for 30 min at 70 °C. A blank sample was prepared according to the same procedure.

Analysis of extractable organic compounds: gas chromatography

The separation of organic compounds was performed using an Agilent 7890A Series Gas Chromatograph interfaced with an Agilent 5973c Network Mass Selective Detector and an Agilent 7683 Series Injector (Agilent Technologies, USA). A 5 µl sample was injected with a 1:5 split (sample:carrier gas) by 0.3% SD onto an HP-5MS column (30 m × 0.25 mm I.D., 0.25 µm film thickness, Agilent Technologies, USA) using He as the carrier gas at 1 ml min−1. The ion source was maintained at 250 °C; the GC oven was programmed with a temperature gradient starting at 100 °C (held for 3 min) and gradually increased to 300 °C (held for 2 min) at 6 °C min−1. Mass spectrometry analysis was carried out in electron-impact mode at an ionizing potential of 70 eV. Mass spectra were recorded from m/z 40 to 800 (0–39 min).

Selection, identification, and classification of organic compounds

Peaks whose area was not less than 0.1% of the total area of the total ion current chromatogram were selected for identification. Identification was performed with an Agilent Technologies Enhanced ChemStation (G1701EA ver. E.02.00.493) and The Wiley Registry of Mass Spectral Data (version 3.2, Copyright 1988–2000 by Palisade Corporation, 8th Edition with Structures, Copyright 2000 by John Wiley and Sons, Inc.) using a 3% cutoff threshold. Peaks representing organic compounds whose mass spectra matched the reference mass spectra at 80% or higher were identified. The remaining organic compounds, showing lower compliance (< 80%), were assigned only to major classes of organic compounds based on the presence of characteristic and dominating fragmentation ions (aromatic hydrocarbons: m/z 65, 77, 78, 79; aliphatic hydrocarbons: m/z 43, 57, 71, 85, 99; alcohols: m/z 45, 59, 73, 87; aldehydes: m/z 44, 58, 72; carboxylic acids: m/z 43, 45, 57, 59, 60, 71, 73, 85, 87). Only those organic compounds present in the extracts of all three repetitions of each sample were selected for further analysis (a sketch of these selection rules follows).
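The peak selection and classification rules just described can be summarised in a short sketch. The peak data and library hit names are illustrative only, and the class-assignment step is reduced to a placeholder string (a real implementation would inspect the characteristic fragment ions listed above).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Peak:
    area_pct: float    # peak area as % of total ion current chromatogram
    match_pct: float   # best library spectral match (%)
    library_name: str  # top hit from the spectral library

def classify(peak: Peak) -> Optional[str]:
    """Apply the reporting rules: drop peaks below 0.1% area, accept
    library hits at >= 80% match, otherwise defer to class assignment
    from characteristic fragment ions."""
    if peak.area_pct < 0.1:
        return None                      # below the reporting threshold
    if peak.match_pct >= 80:
        return peak.library_name         # confident identification
    return "class assigned from fragment ions (match < 80%)"

peaks = [Peak(0.05, 95, "2-phenylethanol"),   # too small to report
         Peak(1.8, 91, "2-phenylethanol"),    # confident identification
         Peak(0.6, 55, "unknown")]            # classified, not identified
print([classify(p) for p in peaks])
```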
The effect of SAM on seeds germination

The influence of SAM on the rate and efficiency of seed germination was investigated. For this purpose, pea, beetroot, and tobacco seeds were pre-soaked for 30 min in 100 ml of: (i) metabolite solutions produced by Pseudomonas sp. ANT_H12B on GASN, GCl, GSA, GlSA, GlASN, or CSA medium according to the procedure described in paragraph 4.1, (ii) sterile GASN, GCl, GSA, GlSA, GlASN, or CSA medium, or (iii) distilled water (control). After 30 min the seeds were drained, and 25 seeds were placed on a glass Petri dish containing lignin soaked with 125 ml of: (i) metabolites produced by Pseudomonas sp. ANT_H12B on the studied media supplemented with Knopp nutrient solution (3 mM Ca(NO3)2, 1.5 mM KNO3, 1.2 mM MgSO4, 1.1 mM KH2PO4, 0.1 mM EDTA-Fe, 5 μM CuSO4, 2 μM MnSO4 × 5H2O, 2 μM ZnSO4 × 7H2O, 15 nM (NH4)6Mo7O24), (ii) sterile GASN, GCl, GSA, GlSA, GlASN, or CSA medium supplemented with Knopp nutrient solution, or (iii) Knopp nutrient solution alone. To ensure a comparable level of nutrients in the obtained metabolites and the initial media, the amount of carbon source was reduced by half in the sterile media variant, according to its experimentally estimated consumption during siderophores production. Each variant was performed in four repetitions. The seeds were incubated in the dark for 7 days at 20 °C. Every day the number of germinating seeds was counted. The germination percentage (GP) was calculated using the following equation:

$$GP[\%] = \frac{\text{total number of seeds germinated}}{\text{total number of seeds per petri dish}} \times 100$$

Statistical analysis

Statistical analysis was performed using RStudio 2022.02.2 software. One-way analysis of variance (ANOVA) at p ≤ 0.05 was used to test the significance of differences between groups in the optimization and germination experiments. Tukey Honestly Significant Difference (HSD) tests at p ≤ 0.05 were used to test pairwise differences between groups. The results are presented in graphs produced with ggplot2 v3.3.5.
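The study's statistics were run in RStudio; purely to illustrate the GP calculation and the ANOVA/Tukey HSD workflow, here is an equivalent Python sketch with invented germination counts (scipy and statsmodels are assumed to be available).

```python
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def germination_pct(germinated: int, total: int = 25) -> float:
    """GP[%] = germinated seeds / total seeds per petri dish * 100."""
    return 100 * germinated / total

# Invented germinated-seed counts: four dishes of 25 seeds per treatment.
counts = {
    "water control":  [18, 20, 19, 17],
    "sterile medium": [19, 21, 20, 18],
    "ANT_H12B SAM":   [22, 23, 21, 24],
}
gp = {name: [germination_pct(c) for c in cs] for name, cs in counts.items()}

f_stat, p_value = f_oneway(*gp.values())  # one-way ANOVA across treatments
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Flatten the data for pairwise Tukey HSD comparisons.
values = [v for vals in gp.values() for v in vals]
labels = [name for name, vals in gp.items() for _ in vals]
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```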
The bacterial strain used in this study was Pseudomonas sp. ANT_H12B (GenBank assembly accession number: GCA_008369325.1) isolated from the Antarctic soil samples at King George Island (Antarctica; GPS coordinates: 62 09.6010 S, 58 28.4640 W) . Strain exhibits various PGP features, including siderophores biosynthesis (pyoverdine and achromobactin), phosphate solubilization, and indole acetic acid biosynthesis. Pseudomonas sp. ANT_H12B genome consists of 6 276 261 base pairs (58.57% GC content) and contains 6168 genes . Untreated plant seeds were used in this study for germination tests. Selected seeds cultivars were characterized by their relevance for agriculture and moderate germination rate/efficiency. The seeds of beetroot ( Beta vulgaris var. conditiva cv. Patryk), pea ( Pisum sativum L. cv. Iłówiecki) were obtained from Enterprise of Horticulture and Nursery (PNOS), Ożarów Mazowiecki, Poland, and tobacco ( Nicotiana tabacum L. var, Xanthi ) was propagated in the in-house growing chambers facilities to obtain sufficient seeds number.
Genomic DNA extraction (cetyl trimethylammonium bromide /lysozyme method), sequencing (Illumina MiSeq platform) and basic genomic analysis (RAST, PATRIC and KEGG services) of Pseudomonas sp. ANT_H12B were performed and described in our previous work . In this study additional genomic analysis was used to characterize the genetic background of the phenotypic profile of Pseudomonas sp. ANT_H12B and assess the versatility of its metabolism in the context of the siderophores’ production. In the frame of genomic analysis, genes/pathways responsible for carbon metabolism were identified. Bioinformatics analysis of the ANT_H12B genome was performed using MicrobeAnnotator (v.2.0.5) software, and then the KO numbers were mapped to KEGG metabolic pathways and manually curated. The presence of missing enzymes in incomplete metabolic pathways was manually verified using the annotation of the genome deposited in the NCBI database. In addition, a genome search was performed based on the MetaCyc database and available scientific literature.
Pseudomonas sp. ANT_H12B Phenotype Microarrays (Biolog Inc., USA) were used to examine the metabolic potential of Pseudomonas sp. ANT_H12B. The Phenotype Microarrays (PM) assays involved panels for carbon (PM01 and PM02—190 of C sources), nitrogen (PM03—95 of N sources), as well as phosphorus and sulfur usage (PM04—59 of P and 35 of S sources). PM assays were performed according to the standard protocols recommended by the manufacturer for gram-negative bacteria and described by Gharaie et al. . All assays were performed in triplicates using an OmniLog device (Biolog Inc., USA). All data were collected by OmniLog PM System software (Biolog Inc., USA).
Bacterial inoculum preparation for siderophores production To prepare bacterial inoculum for siderophores production, Pseudomonas sp. ANT_H12B was cultivated overnight in lysogeny broth (LB) medium at 20 °C with rotary shaking set to 150 rpm. Next, 50 ml of bacterial culture was centrifuged (8000 rpm, 5 min), washed with 0.85% NaCl solution to remove any residues of LB broth, and again centrifuged. This procedure was repeated twice. The final inoculum was prepared by discarding the supernatant and suspending obtained biomass in 50 ml of 0.85% NaCl solution. Selection of media composition and experimental set-up Based on Pseudomonas sp. ANT_H12B phenotypic and genomic profiling, various media designed for siderophores production were prepared. Selected substrates included cost-effective compounds commonly used in industrial/agricultural applications. As a reference medium for siderophores production the GASN medium (7 g L −1 glucose, 2 g L −1 L-asparagine monohydrate, 0.96 g L −1 Na 2 HPO 4 , 0.44 g L −1 KH 2 PO 4 , and 0.2 g L −1 MgSO 4 × 7H 2 O) was used . C:N ratio in GASN medium is approximately 7:1. Other media used in this study were designed as modified versions of the GASN medium, using various carbon (glucose, glycerol, ethanol, citric acid) and nitrogen sources (ammonium sulfate, ammonium chloride, ammonium nitrate, l -asparagine), keeping C:N ratio at 7:1 level. The concentrations of phosphates and sulfates in every media were identical to those in the GASN medium. Detailed composition of used media, regarding carbon and nitrogen sources, is presented in Table . Media containing various carbon and nitrogen sources (CSA, CASN, EtASN, EtSA, GASN, GCl, GNO, GlASN, GlSA) in seven C:N ratio variants (1:2, 2:1, 3:1, 5:1, 7:1, 10:1, 20:1) were used to examine their influence on siderophores production. Ratio 7:1 (Table ) was regarded as reference, and other ratios were prepared by increasing or decreasing concentrations of carbon source, while keeping N concentration unchanged. Bacteria were cultivated for 3 days in respective media (pH 7.0) at 10 °C with rotary shaking set to 150 rpm. Initial culture OD 600nm was set at 0.06. Conditions were selected according to optimization experiments described in Additional file . All experiments were performed in triplicates, using 96-well microplates. The three best variants from every microplate experiment were selected for verification in increased culture volume. For this purpose, bacteria were cultivated for 3 days in conditions identical to the respective microplate assay, with the only difference in the volume of used medium (200 ml). Measurement of microorganisms quantity (CFU ml −1 ), pH and siderophores concentration (CAS assay) were taken every 24 h of the experiment.
To prepare bacterial inoculum for siderophores production, Pseudomonas sp. ANT_H12B was cultivated overnight in lysogeny broth (LB) medium at 20 °C with rotary shaking set to 150 rpm. Next, 50 ml of bacterial culture was centrifuged (8000 rpm, 5 min), washed with 0.85% NaCl solution to remove any residues of LB broth, and again centrifuged. This procedure was repeated twice. The final inoculum was prepared by discarding the supernatant and suspending obtained biomass in 50 ml of 0.85% NaCl solution.
Based on Pseudomonas sp. ANT_H12B phenotypic and genomic profiling, various media designed for siderophores production were prepared. Selected substrates included cost-effective compounds commonly used in industrial/agricultural applications. As a reference medium for siderophores production the GASN medium (7 g L −1 glucose, 2 g L −1 L-asparagine monohydrate, 0.96 g L −1 Na 2 HPO 4 , 0.44 g L −1 KH 2 PO 4 , and 0.2 g L −1 MgSO 4 × 7H 2 O) was used . C:N ratio in GASN medium is approximately 7:1. Other media used in this study were designed as modified versions of the GASN medium, using various carbon (glucose, glycerol, ethanol, citric acid) and nitrogen sources (ammonium sulfate, ammonium chloride, ammonium nitrate, l -asparagine), keeping C:N ratio at 7:1 level. The concentrations of phosphates and sulfates in every media were identical to those in the GASN medium. Detailed composition of used media, regarding carbon and nitrogen sources, is presented in Table . Media containing various carbon and nitrogen sources (CSA, CASN, EtASN, EtSA, GASN, GCl, GNO, GlASN, GlSA) in seven C:N ratio variants (1:2, 2:1, 3:1, 5:1, 7:1, 10:1, 20:1) were used to examine their influence on siderophores production. Ratio 7:1 (Table ) was regarded as reference, and other ratios were prepared by increasing or decreasing concentrations of carbon source, while keeping N concentration unchanged. Bacteria were cultivated for 3 days in respective media (pH 7.0) at 10 °C with rotary shaking set to 150 rpm. Initial culture OD 600nm was set at 0.06. Conditions were selected according to optimization experiments described in Additional file . All experiments were performed in triplicates, using 96-well microplates. The three best variants from every microplate experiment were selected for verification in increased culture volume. For this purpose, bacteria were cultivated for 3 days in conditions identical to the respective microplate assay, with the only difference in the volume of used medium (200 ml). Measurement of microorganisms quantity (CFU ml −1 ), pH and siderophores concentration (CAS assay) were taken every 24 h of the experiment.
Quantification of siderophores produced by Pseudomonas sp. ANT_H12B

For a screening estimation of the efficiency of siderophores production in bacterial cultures in the various media, the CAS (chrome azurol S) reagent was used, following the spectrophotometric method for measuring the overall siderophores concentration in samples described by Schwyn and Neilands . All measurements were performed in triplicate. The bacterial cultures were centrifuged (8000 rpm, 5 min), and the supernatants were added in a 1:1 ratio to the CAS reagent and incubated in darkness for an hour. An automated microplate reader was used to measure the absorbance at 630 nm. A standard curve was obtained using deferoxamine mesylate salt (Sigma-Aldrich), which, at a concentration of 0.025 mM, also served as a positive control. Sterile medium was used as a negative control.

To determine the concentration of pyoverdine, HPLC analyses were conducted. These analyses were performed on metabolites obtained using selected media (GASN, GCl, GSA, GlSA, GlASN, and CSA) after 3 days of cultivation. Bacterial cultures were centrifuged (8000 rpm, 5 min), and the supernatants were separated from the biomass and stored in sterile 50 ml tubes at 4 °C for further use. The liquid fractions were transferred to individual tubes, and 40 µl of a FeCl3 solution (1 M) was added to each tube. The HPLC analyses were carried out using the procedure described by Bultreys et al. . Commercially available pyoverdine from a Pseudomonas fluorescens strain (Sigma Aldrich, USA) was used as a standard.
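Since the CAS assay reports siderophore levels only relative to a calibration, the short Python sketch below illustrates the inversion of a deferoxamine standard curve. The absorbance values are placeholders rather than measured data, and a linear fit is one common, but not the only, way to model the curve.

```python
# Illustrative sketch: converting CAS-assay absorbance (A630) into an
# apparent siderophore concentration via a deferoxamine standard curve.
# The numbers below are placeholders, not data from this study.
import numpy as np

std_conc_uM = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])   # deferoxamine
std_a630 = np.array([0.95, 0.80, 0.66, 0.52, 0.38, 0.25])    # mock readings

# CAS absorbance decreases as siderophores strip Fe from the dye,
# so the calibration line has a negative slope.
slope, intercept = np.polyfit(std_conc_uM, std_a630, deg=1)

def siderophore_uM(a630_sample: float) -> float:
    """Apparent siderophore concentration from a sample A630 reading."""
    return (a630_sample - intercept) / slope

print(round(siderophore_uM(0.45), 1))  # ~18 uM for this mock curve
```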
GC–MS analysis of SAM produced by Pseudomonas sp. ANT_H12B

The qualitative and semi-quantitative characterization of SAM was performed using GC–MS analysis. Analyses were performed on metabolites obtained using all studied media after 3 days of cultivation. Bacterial cultures were centrifuged (8000 rpm, 5 min), and the supernatants were separated from the biomass and stored in sterile 50 ml tubes at 4 °C for further use. All experiments were performed in triplicate.

Extraction of organic compounds from the aqueous phase

Organic compounds were extracted from 100 ml of the cell-free aqueous phase of the bacterial cultures and of the chemical control samples using 25 ml of chloroform in a separatory funnel for 3 min. This procedure was repeated three times. The chloroform extracts were dried with anhydrous Na2SO4, and the solvent was evaporated under an N2 stream. Samples were then derivatized with 0.5 ml of BSTFA:TMCS, 99:1 (Supelco, USA), for 30 min at 70 °C. A blank sample was prepared according to the same procedure.

Analysis of extractable organic compounds—gas chromatography analysis

The separation of organic compounds was performed using an Agilent 7890A Series Gas Chromatograph interfaced with an Agilent 5973c Network Mass Selective Detector and an Agilent 7683 Series Injector (Agilent Technologies, USA). A 5 µl sample was injected with a 1:5 split (sample:carrier gas) by 0.3% SD onto an HP-5MS column (30 m × 0.25 mm I.D., 0.25 µm film thickness, Agilent Technologies, USA), using He as the carrier gas at 1 ml min−1. The ion source was maintained at 250 °C; the GC oven was programmed with a temperature gradient starting at 100 °C (held for 3 min) and gradually increased at 6 °C min−1 to 300 °C (held for 2 min). Mass spectrometry analysis was carried out in electron-impact mode at an ionizing potential of 70 eV. Mass spectra were recorded from m/z 40 to 800 (0–39 min).

Selection, identification, and classification of organic compounds

Peaks with an area of not less than 0.1% of the total area of the total ion current chromatogram were selected for identification. Identification was performed with an Agilent Technologies Enhanced ChemStation (G1701EA ver. E.02.00.493) and The Wiley Registry of Mass Spectral Data (version 3.2, Copyright 1988–2000 by Palisade Corporation, 8th Edition with Structures, Copyright 2000 by John Wiley and Sons, Inc.) using a 3% cutoff threshold. Peaks representing organic compounds whose mass spectra matched the reference mass spectra at 80% or higher were identified. The remaining organic compounds with lower compliance (<80%) were assigned only to major classes of organic compounds based on the presence of characteristic and dominating fragmentation ions (aromatic hydrocarbons: m/z 65, 77, 78, 79; aliphatic hydrocarbons: m/z 43, 57, 71, 85, 99; alcohols: m/z 45, 59, 73, 87; aldehydes: m/z 44, 58, 72; carboxylic acids: m/z 43, 45, 57, 59, 60, 71, 73, 85, 87). Organic compounds present in the extracts of all three repetitions of each sample were selected for further analysis.
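The peak-triage rules above (0.1% area cutoff, ≥80% spectral match for identification, fragment-ion classification otherwise) are mechanical enough that a small sketch may clarify them. The data structures, peak records, and helper names below are illustrative stand-ins of ours, not the ChemStation workflow.

```python
# Illustrative sketch of the peak-triage rules described above; not the
# ChemStation implementation. Peak records below are invented stand-ins.
CLASS_IONS = {  # characteristic, dominating fragmentation ions (m/z)
    "aromatic hydrocarbon": {65, 77, 78, 79},
    "aliphatic hydrocarbon": {43, 57, 71, 85, 99},
    "alcohol": {45, 59, 73, 87},
    "aldehyde": {44, 58, 72},
    "carboxylic acid": {43, 45, 57, 59, 60, 71, 73, 85, 87},
}

def triage(peaks, total_area):
    """peaks: dicts with 'name', 'area', 'match' (0-100) and 'top_ions'."""
    results = []
    for p in peaks:
        if p["area"] < 0.001 * total_area:   # below 0.1% of TIC area: skip
            continue
        if p["match"] >= 80:                 # confident library hit
            results.append((p["name"], "identified"))
            continue
        # Otherwise assign the class whose ion set overlaps the peak's
        # dominant fragment ions the most.
        best = max(CLASS_IONS, key=lambda c: len(CLASS_IONS[c] & p["top_ions"]))
        results.append((best, "class only"))
    return results

peaks = [
    {"name": "hexadecanoic acid", "area": 12.0, "match": 93, "top_ions": {43, 60, 73}},
    {"name": "unknown", "area": 0.9, "match": 55, "top_ions": {44, 58, 72}},
    {"name": "trace peak", "area": 0.05, "match": 90, "top_ions": {57}},
]
print(triage(peaks, total_area=100.0))
# [('hexadecanoic acid', 'identified'), ('aldehyde', 'class only')]
```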
The influence of SAM on the rate and efficiency of seed germination was investigated. For this purpose, pea, beetroot, and tobacco seeds were pre-soaked for 30 min in 100 ml of: (i) solutions of metabolites produced by Pseudomonas sp. ANT_H12B on GASN, GCl, GSA, GlSA, GlASN or CSA medium according to the procedure described in paragraph 4.1, (ii) sterile GASN, GCl, GSA, GlSA, GlASN or CSA medium, or (iii) distilled water (control). After 30 min the seeds were drained, and 25 seeds were placed on a glass petri dish containing lignin soaked with 125 ml of: (i) metabolites produced by Pseudomonas sp. ANT_H12B on the studied media supplemented with Knopp nutrient solution (3 mM Ca(NO3)2, 1.5 mM KNO3, 1.2 mM MgSO4, 1.1 mM KH2PO4, 0.1 mM EDTA–Fe, 5 μM CuSO4, 2 μM MnSO4 × 5H2O, 2 μM ZnSO4 × 7H2O, 15 nM (NH4)6Mo7O24), (ii) sterile GASN, GCl, GSA, GlSA, GlASN or CSA media supplemented with Knopp nutrient solution, or (iii) Knopp nutrient solution. To ensure a comparable level of nutrients between the obtained metabolites and the initial media, the amount of carbon source was reduced by half in the sterile media variant, according to the experimentally estimated usage during siderophores production. Each variant was performed in four repetitions. The seeds were incubated in the dark for 7 days at 20 °C. The number of germinated seeds was counted every day. Germination percentage (GP) was calculated using the following equation :

$$GP\,[\%]=\frac{\text{total number of seeds germinated}}{\text{total number of seeds per petri dish}}\times 100$$
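As a small worked example, the GP formula and the daily counting scheme can be expressed in a few lines of Python; the counts below are invented placeholders used only to show the calculation.

```python
# Illustrative GP calculation for one petri dish (25 seeds, counted daily).
# Counts are invented placeholders, not data from this study.
SEEDS_PER_DISH = 25

def germination_percentage(germinated: int, total: int = SEEDS_PER_DISH) -> float:
    """GP[%] = germinated / total * 100."""
    return 100.0 * germinated / total

daily_germinated = [0, 2, 9, 15, 18, 20, 21]   # cumulative counts, days 1-7
gp_by_day = [germination_percentage(n) for n in daily_germinated]
print([round(gp, 1) for gp in gp_by_day])
# [0.0, 8.0, 36.0, 60.0, 72.0, 80.0, 84.0]
```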
Statistical analysis was performed using RStudio 2022.02.2 software . One-way analysis of variance (ANOVA) at p ≤ 0.05 was used to test the significance of differences between groups in the optimization and germination experiments. To test pairwise differences between groups, Tukey's Honestly Significant Difference (HSD) test was used at p ≤ 0.05. The results were presented in graphs produced with ggplot2 v3.3.5 .
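For readers who prefer a scripted equivalent, the same two-step test (one-way ANOVA followed by Tukey HSD) looks roughly like this in Python with SciPy and statsmodels. The analysis in this study was done in R, so this is a functional analogue rather than the original script, and the measurements are randomly generated placeholders.

```python
# Functional analogue (Python) of the R analysis described above:
# one-way ANOVA, then Tukey HSD for pairwise comparisons. Placeholder data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
media = ["GASN", "GSA", "GCl"]
# e.g. siderophore concentrations (uM), three replicates per medium
values = {m: rng.normal(loc, 15, size=3) for m, loc in zip(media, (430, 505, 495))}

f_stat, p_value = f_oneway(*values.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value <= 0.05:
    endog = np.concatenate(list(values.values()))
    groups = np.repeat(media, 3)
    print(pairwise_tukeyhsd(endog, groups, alpha=0.05))
```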
Analysis of genomic potential to use various carbon sources

Genomic analysis of Pseudomonas sp. ANT_H12B performed in our previous studies showed that this strain can obtain energy by metabolizing carbohydrates through glycolysis, the oxidative and non-oxidative phases of the pentose phosphate cycle, Entner-Doudoroff glucose catabolism, d-galactonate degradation, and glycogen degradation, as well as by beta-oxidation of fatty acids and degradation of acylglycerols . To estimate the full metabolic potential of the strain showcased in phenotype profiling, an in-depth genomic analysis was carried out to identify the genetic determinants associated with carbon conversion pathways. The obtained data indicated the ability to degrade various compounds, e.g., (i) monosaccharides (e.g., xylose, transformed by xylose isomerase to d-xylulose and in subsequent steps to ribulose-5-phosphate, an intermediate of the pentose phosphate pathway) and disaccharides (d-trehalose, transformed by trehalase to glucose), (ii) sugar alcohols (d-mannitol, transformed by mannitol 2-dehydrogenase to d-fructose, and glycerol, transformed by glycerol kinase to glycerone-phosphate; both products can enter glycolysis), (iii) amino sugars (n-acetyl-d-glucosamine, transformed by the n-acetylglucosamine PTS system to N-acetylglucosamine-6-phosphate and in three further steps to d-fructose-6-phosphate, a glycolysis intermediate), (iv) alcohols (dihydroxyacetone, transformed by a multiphosphoryl transferase to glycerone-phosphate), and (v) carboxylic acids (including fumaric acid, transformed to acetyl-CoA by fumarate hydratase in the first step and malate synthase in the second step, and acetic acid, transformed by acetate-CoA ligase). Genomic analysis also revealed a remarkable ability to use amino acids as a carbon source, including the majority of proteinogenic amino acids (70.00%). l-Alanine and l-serine could be transformed into pyruvate (by an alanine-synthesizing transaminase and l-serine ammonia-lyase, respectively) and eventually enter the Krebs cycle. Most other proteinogenic amino acids usable as a C source could enter the Krebs cycle after transformation into its intermediates: oxoglutarate, oxaloacetate, or acetyl-CoA. Moreover, several dipeptidases were identified in Pseudomonas sp. ANT_H12B. A list of the enzymes identified for carbon compounds is included in the Additional file .

Phenotype profile of Pseudomonas sp. ANT_H12B regarding the use of various C, N, S, and P sources

During a PM assay, a wide array of 395 compounds was tested as sources of essential nutrients for bacterial growth (Fig. ). We estimated growth on each compound using the maximum curve height (mch) parameter . The threshold for a positive growth result was calculated separately for all carbon (PM01 and PM02), nitrogen (PM03), and phosphorus and sulfur sources (PM04). For each nutrient source, the positive threshold was set at 5.00% of the maximal mch value obtained in the given experiment (see the sketch below). The obtained results showed the broad metabolic potential of Pseudomonas sp. ANT_H12B, which exhibited growth on 52.11% of carbon, 94.74% of nitrogen, 93.22% of phosphorus, and 100.00% of sulfur sources tested in this study. Among all tested nutrients, the use of carbon by Pseudomonas sp. ANT_H12B was the most selective (Fig. A). Positive growth signals were observed for 99 of the 190 tested carbon sources. Particular compound groups were more preferential for bacterial growth than others.
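The positive-growth call itself is simple arithmetic; a minimal sketch follows, with invented mch values and helper names of our own rather than Biolog/PM nomenclature.

```python
# Illustrative positive-growth call from Phenotype Microarray curves:
# a source scores positive when its maximum curve height (mch) reaches
# at least 5% of the largest mch in that experiment. Placeholder values.
def positive_sources(mch_by_source: dict, frac: float = 0.05) -> set:
    threshold = frac * max(mch_by_source.values())
    return {s for s, mch in mch_by_source.items() if mch >= threshold}

pm01 = {"glucose": 210.0, "ethanol": 4.0, "citric acid": 160.0, "amide X": 0.5}
hits = positive_sources(pm01)
print(sorted(hits), f"{100 * len(hits) / len(pm01):.1f}% positive")
# ['citric acid', 'glucose'] 50.0% positive
```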
The strain was able to grow using all tested fatty acids and esters as C sources. It also showed the ability to use the majority of tested d-monosaccharides (83.33%), amino acids (70.00%), carboxylic acids (65.00%), nucleosides (60.00%), and l-monosaccharides (57.15%). ANT_H12B was less adapted to using amino sugars (33.33%), sugar alcohols (33.33%), trisaccharides (33.33%), disaccharides (30.00%), amines (20.00%), and polymers (18.00%). Growth of the strain was not observed in the presence of amides. Pseudomonas sp. ANT_H12B was able to metabolize the vast majority of the examined nitrogen sources (Fig. B). Growth was exhibited on 90 of the 95 tested compounds (94.74%), belonging to various chemical groups, including peptides (100.00%), proteinogenic L-amino acids (100.00%), proteinogenic D-amino acids (100.00%), non-proteinogenic amino acids (100.00%), amino sugars (100.00%), nucleosides (83.33%), amines (81.81%), and inorganic compounds (75.00%). The phenotypic assay showed the versatility of phosphorus (Fig. C) and sulfur (Fig. D) metabolism in Pseudomonas sp. ANT_H12B. All 30 tested sulfur sources were suitable for bacterial growth, with no distinctive difference in growth rate between organic and inorganic compounds. Regarding phosphorus, the strain used 55 of the 59 tested P sources. Positive results were observed for all nucleoside phosphate samples, 90.00% of other organic compounds, and 85.71% of inorganic compounds.

Optimization of medium composition for siderophores production—screening tests

Pseudomonas sp. ANT_H12B was able to grow using the majority of tested carbon (glucose, glycerol, citric acid) and nitrogen (l-asparagine, ammonium sulfate, ammonium chloride, ammonium nitrate) sources (Fig. A). Only the use of ethanol as a carbon source resulted in inhibited growth. The C:N ratio significantly influenced the observed bacterial growth (ANOVA test F = 64.52, p-value = 2.14 × 10−9). Pairwise statistical analysis showed no significant differences between the 20:1, 10:1, 7:1, and 5:1 ratios, at which the most intense bacterial growth was observed. Media composition strongly influenced the pH of the culture (Fig. B). Inorganic nitrogen sources (ammonium sulfate, chloride, and nitrate) combined with glucose, glycerol, or ethanol resulted in acidification of the media. The lowest pH was observed with the use of glucose, reaching 4–5 after 3 days. The pH in media with glycerol and ethanol was slightly higher, reaching 6–6.5 after 3 days. The magnitude of acidification increased with a rising C:N ratio. l-Asparagine as a nitrogen source and citric acid as a carbon source were alkalizing factors. The highest pH (>8) was observed when citric acid (CASN, CSA) or glycerol (GlASN) was used as the carbon source. In these media, the pH was not significantly affected by changes in the C:N ratio. The alkalization rate was lower with the use of glucose (GASN) and ethanol (EtSA), and the pH dropped to neutral (EtSA) or even slightly acidic (GASN) with increasing C:N ratio. Siderophores production above 200 μM was observed with the use of the majority of tested carbon (glucose, glycerol, citric acid) and nitrogen (l-asparagine, ammonium sulfate, ammonium chloride) sources (Fig. C). Only in media with ethanol as the carbon source or ammonium nitrate as the nitrogen source were significantly lower siderophores concentrations (below 100 μM) observed compared to the other tested variants. Therefore, the EtSA, EtASN, and GNO media were excluded from further testing.
The C:N ratio was also a factor significantly influencing siderophores production (ANOVA test F = 10.43, p-value = 4.52 × 10−10). The three most optimal C:N ratios for siderophores production were selected using pairwise statistical tests: 2:1, 3:1, and 5:1 for the GASN, GSA, GCl, GlSA, and GlASN media; 3:1, 5:1, and 7:1 for the CASN medium; and 5:1, 7:1, and 10:1 for the CSA medium. For every tested medium, the 1:2 C:N ratio was the least efficient variant, in which siderophores production was strongly inhibited and fell below 100 μM.

Optimization of medium composition for siderophores production—verification tests

Five media (GCl, GSA, GlSA, GlASN, CSA) that promoted the highest siderophores production in the optimization tests, together with the GASN medium as a reference, were selected for verification tests at a larger culture volume. Based on the optimization test results, the three optimal C:N ratio variants were chosen for every tested medium. Significant differences in siderophores production after 3 days were observed between media (ANOVA test F = 63.12, p-value = 4.44 × 10−9). Pairwise analysis revealed that the highest siderophores biosynthesis was associated with GSA and GCl (on average 503.00 μM and 496.00 μM, respectively). The second group comprised media with lower efficiency of siderophores production: GASN, GlASN, and GlSA (on average 431.00 μM, 407.00 μM, and 401.00 μM, respectively). The lowest amount of siderophores was produced with the use of CSA (on average 329.00 μM). Among the media containing glucose (GASN, GCl, GSA), significantly higher siderophores production was observed with C:N ratios of 3:1 and 5:1. In the other media, the C:N ratio did not affect siderophores production. According to these results, the media variants allowing maximal siderophores production at the lowest C:N ratios (GASN 3:1, GSA 3:1, GCl 3:1, GlSA 2:1, GlASN 2:1, and CSA 5:1) were selected for further tests.

The effect of medium composition on the pyoverdine concentration

Metabolites produced on the selected media were analyzed using HPLC to estimate the exact concentration of pyoverdine in each sample. The results showed patterns similar to the CAS assay, with concentrations of 512.60 μM in GSA, 509.30 μM in GCl, 450.20 μM in GASN, 380.40 μM in GlASN, 271.40 μM in GlSA, and 223.50 μM in CSA medium. Pyoverdine production was also calculated as product yield in terms of biomass (μM siderophores g−1 biomass): 950.19 in GCl, 945.76 in GSA, 819.83 in GlASN, 773.54 in GASN, 710.47 in GlSA, and 359.32 in CSA (Fig. ).
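The biomass-normalized yield reported above is a simple quotient of concentration over biomass. The short Python sketch below reproduces that arithmetic; the biomass figures are back-calculated from the reported yields purely for illustration and should not be read as measured values.

```python
# Illustrative arithmetic for biomass-normalized pyoverdine yield:
# yield (uM per g biomass) = concentration (uM) / biomass (g per litre).
# Biomass figures are back-calculated for illustration, not measured data.
pyoverdine_uM = {"GSA": 512.60, "GCl": 509.30, "GASN": 450.20}
biomass_g_per_l = {"GSA": 0.542, "GCl": 0.536, "GASN": 0.582}  # hypothetical

for medium, conc in pyoverdine_uM.items():
    yield_per_g = conc / biomass_g_per_l[medium]
    print(f"{medium}: {yield_per_g:.2f} uM siderophores per g biomass")
# GSA: 945.76, GCl: 950.19, GASN: 773.54 (matching the reported yields)
```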
Chemical characteristics of SAM produced by Pseudomonas sp. ANT_H12B

GC/MS analyses were performed to identify SAM with potential plant growth-promoting traits. The production media were selected according to the results of the optimization experiments (GSA 3:1, GCl 3:1, GlASN 2:1, GlSA 2:1, CSA 5:1, and GASN 3:1 as the reference). Among the detected compounds, fatty acids, other organic acids, alcohols, esters, indolic acid and its derivatives, sugars, nitrogen-containing organic compounds, sulfur-containing organic compounds, hydrocarbons, and aromatic compounds were observed (Table ). A semi-quantitative estimate of the total concentration of each group was made, and the specific compounds identified under each classification were described (with a minimum probability match of 70%). Fatty acids were the dominant group of compounds in the metabolites obtained from every medium except CSA, with concentrations ranging from 40.48 mg L−1 (CSA) to 225.96 mg L−1 (GASN). The majority of the detected fatty acids were long-chain (14–18 carbon atoms). Both unsaturated (9-octadecenoic acid, cis-13-octadecenoic acid, cis-9-hexadecenoic acid, cis-9-octadecenoic acid, trans-13-octadecenoic acid, trans-9-octadecenoic acid) and saturated (3-hydroxymyristic acid, 3-trimethylsiloxymyristic acid, n-pentadecanoic acid, hexadecanoic acid, tetradecanoic acid, octadecanoic acid) fatty acids were observed. The most abundant long-chain fatty acids, detected in all samples, were hexadecanoic acid, cis-9-hexadecenoic acid, and dodecanoic acid. Medium-chain fatty acids (6–12 carbon atoms) were also identified, including saturated (3-hydroxydecanoic acid, dodecanoic acid, hexanoic acid, nonanoic acid) and unsaturated (2-octenoic acid) compounds. In the metabolites obtained from the CSA medium, the dominant group of compounds was organic acids other than fatty acids (118.90 mg L−1), mainly due to the presence of 1,2,3-propanetricarboxylate. Organic acids other than fatty acids were also detected in significant amounts in the metabolites from the GASN medium (48.95 mg L−1) and in much smaller quantities in GlASN (8.37 mg L−1), GSA (4.14 mg L−1), and GlSA (2.51 mg L−1). Indoleacetic acid (IAA) and its derivatives were identified in all tested media. The highest concentrations of IAA were measured in the GASN (0.026 mg L−1) and GlASN (0.013 mg L−1) media. In media containing an inorganic nitrogen source, IAA concentrations were lower (GSA: 0.0075 mg L−1, GCl: 0.0025 mg L−1, GlSA: 0.0054 mg L−1, CSA: 0.0026 mg L−1).

The effect of SAM on seed germination

Seed germination tests were performed to reveal the effect of SAM on plant (beetroot, pea, and tobacco) growth. The media for metabolites production were selected based on the best results obtained in the optimization experiments regarding siderophores production. The seed germination percentage (GP) was similar among all tested variants for the first two days in beetroot and pea and for the first three days in tobacco. On the third day of the experiment, the GP of metabolite-treated pea was higher than in the control and sterile-media variants (by 33.16% and 14.16%, respectively), and the number of germinated seeds was significantly higher (ANOVA test F = 12.35, p-value = 0.013, Fig. A). For metabolite-treated beetroot, on the third day the GP was 11.00% higher than the control and 14.16% higher than in the sterile media, and the number of germinated seeds was significantly higher (ANOVA test F = 24.74, p-value = 0.0025; Fig. B). A similar effect was observed for tobacco on day 4, with a higher GP in metabolite-treated seeds than in the control and sterile-media variants (by 35.16% and 43.83%, respectively) and a significantly higher number of germinated seeds (ANOVA test F = 7.16, p-value = 0.028, Fig. C).
Phenotype profile of Pseudomonas sp. ANT_H12B

The results of the genomic and phenotypic analyses showed the metabolic versatility of Pseudomonas sp. ANT_H12B. We identified various enzymes involved in the metabolism of organic compounds, which provide the ability to use them as carbon sources in the main pathways of energy metabolism. Many of these organic compounds are particularly abundant in the Antarctic environment; thus, the ability to use them is an important adaptation of Pseudomonas sp. ANT_H12B for survival in harsh regions. For example, trehalose, mannitol, and glycerol are frequently found in Antarctic soil because, as compatible solutes, one of their profound biological roles is osmo- and cryoprotection, which is especially important in cold environments [ – ]. These compounds could be used as C sources by ANT_H12B due to the presence of genes encoding, e.g., trehalase, mannitol 2-dehydrogenase, and glycerol kinase. We confirmed the genome-based hypothesis of Pseudomonas sp. ANT_H12B metabolic versatility during the PM tests. The strain exhibited great metabolic flexibility, which could be regarded as outstanding among members of the genus Pseudomonas. Pseudomonas sp. ANT_H12B was able to use 52.10% of the tested carbon sources. Other members of the genus Pseudomonas obtained lower results in their respective tests, including environmental strains: e.g., Pseudomonas putida from vineyard soils was able to use 30.50% of C sources , eight Pseudomonas strains isolated from the rhizosphere were able to use 18.10–23.60% of C sources , and the clinical isolate Pseudomonas stutzeri was able to use 26.80% of C sources . Pseudomonas sp. ANT_H12B shared with other Pseudomonas strains the ability to efficiently use organic acids, while exceeding their capability of metabolizing carbohydrates and amino acids. The carbon source usage of Pseudomonas sp. ANT_H12B was also remarkable compared to other soil microorganisms, including Rhodococcus (37.90–38.95% of C sources) , Rhizobium (35.80%) , and Sinorhizobium meliloti (40.00%) . Pseudomonas sp. ANT_H12B was also able to use the vast majority of nitrogen sources (94.70%), which exceeded the abilities of Pseudomonas stutzeri (77.90%) , Rhizobium (approximately 54.30%) , Rhodococcus (approximately 65.30%) , and Sinorhizobium meliloti strains (approximately 88.00%) . We demonstrated the ability of Pseudomonas sp. ANT_H12B to use peptides and amino acids both as carbon and as nitrogen sources, confirming the results obtained during the genomic analysis, in which we identified many genes encoding enzymes involved in amino acid and protein metabolism. Such an ability is characteristic of many psychrotolerant bacteria, since the main nitrogen input to soil in polar environments is in the form of proteins or short peptides, whose decomposition is slower due to low temperatures . Short peptides are among the biggest contributors to the soil-dissolved nitrogen pool of polar environments. Microbial communities can take them up directly and subject them to further decomposition inside the cells [ – ]. Pseudomonas sp. ANT_H12B exhibited adaptation to these conditions, possessing several dipeptidases and enzymes that allow further transformation of amino acids into main metabolic pathway intermediates and their use as C and N sources.

The efficiency of siderophores production by Pseudomonas sp. ANT_H12B

Bacterial siderophores production varies significantly.
Some bacteria biosynthesize only approximately 10 µM of siderophores in culture media (Azotobacter vinelandii) , while others exhibit a production rate of 1.6 mM (Streptomyces olivaceus) . However, the concentration of siderophores in culture media usually ranges between 100 and 200 µM . This diversity is driven by various factors, e.g., culture conditions, medium composition, and bacterial taxonomy . Significant differences can also be observed in closely related microorganisms, even within the same species: e.g., three different strains of Azotobacter vinelandii produce siderophores at concentrations of 10 , 140 , or 360 µM . Members of the Pseudomonas genus are described as efficient producers of the greenish-pigmented siderophore pyoverdine . However, diversity of pyoverdine production is also observed within this taxon. Moderate pyoverdine producers obtain concentrations of 25–80 µM , while the more efficient strains described in the literature reach 260–300 µM . In this context, Pseudomonas sp. ANT_H12B, producing concentrations as high as 510 μM, can be regarded as a very efficient bacterial siderophores producer, with a pyoverdine biosynthesis rate outstanding among Pseudomonas bacteria. Moreover, we performed a detailed HPLC analysis to confirm pyoverdine production qualitatively and quantitatively. Unfortunately, in many studies, pyoverdine concentration is estimated only with the CAS assay, which is valuable as a screening method but lacks precision in siderophores quantification, since chelating compounds other than siderophores can affect its results . Temperature is one of the most critical factors influencing bacterial culture dynamics, and it also significantly impacts the efficiency of siderophores production. Several studies have reported that the optimal temperature for siderophores production is often similar to the optimal or sub-optimal temperature for bacterial growth . The majority of described siderophores producers are mesophilic bacteria with a preference for moderate temperatures in the range of 25–37 °C [ , , ]. Although many psychrotolerant or psychrophilic microorganisms have been described as siderophores producers, specific data about their productivity and characteristics are scarce and describe this process only qualitatively. In our study, we characterized siderophores production at low temperatures in more detail, including the biotechnological aspects of culture conditions and a quantitative approach. The results showed that Pseudomonas sp. ANT_H12B, an example of a psychrotolerant microorganism, exhibits very efficient pyoverdine production over a broad range of temperatures (4–22 °C). This flexibility could benefit biotechnological use, since extensive temperature control is not required . The composition of the growth medium, particularly the carbon and nitrogen sources, plays a crucial role in siderophores production . Carbon, being a major component of biomass, significantly affects genetic and physiological processes, leading to varied qualitative and quantitative composition of the produced metabolites [ – ]. In the case of siderophores production, many microorganisms exhibit a preference for gluconeogenic substrates (organic acids), especially those from the Pseudomonas genus . It has been proposed that gluconeogenic substrates increase metabolic fluxes toward the Krebs cycle, providing an increased supply of pyoverdine intermediates .
However, contrary to those observations, Pseudomonas sp. ANT_H12B exhibited the highest siderophores production when a glycolytic substrate (glucose) was used. This finding suggests that the metabolic profiles of psychrotolerant bacteria can differ significantly, even in microorganisms from a similar taxon . Pseudomonas strains generally prefer organic acids due to the regulation of catabolite repression and the absence of phosphofructokinase, an important enzyme of the glycolytic pathway . However, in the genome of Pseudomonas sp. ANT_H12B, we identified a phosphofructokinase gene, indicating that carbon metabolism in this strain differs from that described in most Pseudomonas strains from moderate climates. Further studies of Pseudomonas sp. ANT_H12B metabolism, physiology, and genetics could reveal more about the specifics of psychrotolerant soil microorganisms. Nitrogen sources in the medium did not strongly influence siderophores production by Pseudomonas sp. ANT_H12B. Several studies have shown that adding amino acids as a nitrogen source, e.g., L-asparagine or glutamic acid, could improve siderophores production in Pseudomonas strains . In the case of Pseudomonas sp. ANT_H12B, both organic and inorganic nitrogen sources resulted in efficient siderophores production. The efficiency of siderophores production has also been linked to the pH of the culture. A decrease in medium pH has been shown to correlate with a reduction in siderophores concentration, as siderophores are labile in acidic environments [ , , ]. Higher pyoverdine biosynthesis has been associated with neutral to slightly alkaline conditions [ , , ]. In our study, we observed a different pattern of pyoverdine production by Pseudomonas sp. ANT_H12B, which was not inhibited by low medium pH. Moreover, we obtained the highest rates of pyoverdine production, confirmed by HPLC analysis, using the GCl and GSA media, which resulted in significant acidification of the culture conditions.

Plant growth-promoting properties of siderophores and SAM

Pyoverdine production could be regarded as the most important PGP activity of Pseudomonas sp. ANT_H12B, due to the efficiency of this process and its high plant-stimulating potential. Pyoverdine is described as one of the most important siderophores in the agricultural context [ , , , , , ]. Pyoverdine could significantly improve plant nutrition, since Fe-pyoverdine complexes provide iron to various plants more efficiently than Fe-EDTA complexes . In field experiments with pea (Pisum sativum), pyoverdine improved the plant supply not only of iron but also of other nutrients (e.g., Zn or Mg) . Moreover, pyoverdine efficiently provides iron to plants with various Fe-uptake strategies [ , , ]. However, this effect can be observed only under iron-limited soil conditions. In our study, we elucidated the role of the other metabolites produced during the biosynthesis of pyoverdine. The overall effect of SAM was tested by using them as a priming agent for pea, tobacco, and beetroot seeds in germination tests. The results showed not only the lack of phytotoxicity of SAM but also stimulation of the seed germination percentage. A positive role of various metabolites has also been shown in other seed germination experiments: e.g., priming of triticale (Triticale hexaploide L.) seeds with melatonin increased the germination rate by 57.67% . In other experiments, it was shown that treatment of seeds with gibberellic acid and/or indole acetic acid (IAA) improved germination parameters and the subsequent cultivation of Masson pine (Pinus massoniana) and Aspilia africana .
The PGP potential of bacterial metabolites was also studied in germination tests of pepper and maize, where treatment with cell-free supernatant from Bacillus sp. AS19 significantly improved the process . Among the chemical compounds identified in SAM was IAA, which could play a major role in the improvement of seed germination. IAA is a plant hormone of the auxin class, which has been linked to the regulation of various plant physiological processes, including growth and development. The main biological roles of IAA include the regulation of cell division, elongation, and differentiation [ – ]. We observed biosynthesis of IAA with the use of every tested medium, obtaining the best results with the L-asparagine-containing media GASN and GlASN. The addition of amino acids has been shown in other papers to be a factor potentially increasing bacterial biosynthesis of IAA [ – ]. Exogenous bacterial IAA in the rhizosphere positively impacts plant growth, mainly through stimulation of root development, which improves nutrient uptake . Moreover, foliar application of IAA resulted in increased leaf area and plant height . Application of IAA at a concentration of 0.017 mg L−1 (similar to the concentrations found in SAM) was also demonstrated to promote seed germination in experiments with Masson pine (Pinus massoniana) . The PGP potential of SAM could be broader than the direct stimulation of seed germination, due to the presence of plant-beneficial compounds whose activity could be exhibited in soil conditions, e.g., organic acids (OA), which we also identified in SAM. This group of compounds is important in sustainable agriculture, since it is associated with various PGP properties . OA could directly improve plant fitness due to their involvement in nutrient acquisition, e.g., by solubilization and mobilization of micronutrients and macronutrients [ – ]. Phosphorus and potassium are vital plant macronutrients and are present in soil in large amounts. However, they mainly exist in the form of insoluble minerals, and their bioavailability is limited . It has been shown that various bacterial carboxylic acids, e.g., propionic acid, lactic acid, 2-ketogluconic acid, citric acid, tartaric acid, acetic acid, oxalic acid, glycolic acid, succinic acid, malonic acid, and fumaric acid, are involved in the solubilization of P and K in soils [ , , , ]. OA could also improve the soil bioavailable zinc pool, a nutrient whose deficiency is one of the most widespread among plants . In the GC/MS analysis, we showed that the majority of compounds found in SAM produced by Pseudomonas sp. ANT_H12B cultures belong to the subgroup of organic acids, specifically fatty acids. These compounds are very important in the cold adaptation processes of bacteria [ – ]. Bacteria growing in low-temperature conditions change their membrane chemical composition, increasing the amount of unsaturated fatty acids (UFA) [ , , ]. A higher UFA content enables the cell to maintain increased membrane fluidity, which is crucial at lower temperatures and prevents membrane destabilization [ , , , ]. The application of fatty acids in agriculture could be beneficial, as the antifungal activities of these compounds have been described. Antagonistic activities of various fatty acids (butyric acid, caproic acid, caprylic acid, capric acid, lauric acid, myristic acid, palmitic acid, oleic acid, and linoleic acid) have been shown against phytopathogenic fungi .
In other studies, bacteria from the Bacillus and Pseudomonas genera inhibited the mycelial growth of Fusarium solani through the emission of volatile compounds, including fatty acids . The biocontrol properties of fatty acids could also be useful against eukaryotic parasites, as the antagonistic activity of nine compounds from this group has been shown against the nematode Meloidogyne incognita, which is responsible for root infections . SAM could also increase overall soil quality and fertility through an indirect mode of action, by stimulating soil microorganisms, which could eventually improve plant health and growth. Several studies have shown that the application of OA to soil is associated with increased diversity of microbial communities, positively affecting PGP microorganisms [ , , ]. Nitrogen-fixing bacteria (NFB), which reduce atmospheric nitrogen into ammonia, are an important PGP group of microorganisms that contribute to soil fertility by increasing the availability of this crucial element . Many NFB preferentially use OA as a carbon source; thus, their numbers and activity could be increased by the addition of OA to soil . Additionally, other groups of compounds found in SAM, e.g., sugars and alcohols, could also be beneficial for microbial growth and contribute to soil microbiome quality . One of the most important properties of SAM was their variable pH, which ranged from 4.5 to 8.5 depending on medium composition. Despite this variability, we were able to identify compounds with PGP properties, e.g., siderophores, auxins, or organic acids, in every studied SAM. pH is one of the most important factors affecting soil properties, both abiotic (e.g., nutrient availability) and biotic (microbiome composition and activity) . Accordingly, different plant species have varying optimal soil pH ranges, from acidic to alkaline. In this context, the variability of SAM pH could be advantageous in agriculture, since it allows for the development of tailored products to meet the specific needs of different crops .
Pseudomonas sp. ANT_H12B Results of genomic and phenotypic analysis showed the metabolic versatility of Pseudomonas sp. ANT_H12B. We identified various enzymes involved in the metabolism of organic compounds, which provide the ability to use them as a carbon source in main pathways of energy metabolism. Many of these organic compounds are particularly abundant in the Antarctic environment, thus the ability to use them is an important adaptation of Pseudomonas sp. ANT_H12B to survive in harsh regions. For example, trehalose, mannitol, and glycerol are frequently found in Antarctic soil because, as a compatible solute, one of their profound biological roles is osmo- and cryo-protection, especially important in cold environments [ – ]. These compounds could be used as a C source by ANT_H12B due to the presence of genes encoding e.g., trehalase, mannitol 2-dehydrogenase, and glycerol kinase. We confirmed the genomic-based hypothesis of Pseudomonas sp. ANT_H12B metabolic versatility during PM tests. This strain exhibited great metabolic flexibility, which could be regarded as outstanding among other members of the genus Pseudomonas . Pseudomonas sp. ANT_H12B was able to use 52.10% of tested carbon sources. Other members of the genus Pseudomonas obtained lower results during their respective tests, including environmental strains e.g., Pseudomonas putida from vineyard soils was able to use 30.50% of C sources , eight Pseudomonas strains isolated from rhizosphere were able to use 18.10–23.60% of C sources and clinical isolate Pseudomonas stutzeri, was able to use 26.80% of C sources . Pseudomonas sp . ANT_H12B shared with other Pseudomonas strains the ability for the efficiently use organic acids, while exceeding their capability of metabolizing carbohydrates and amino acids. The carbon source usage of Pseudomonas sp. ANT_H12B was also remarkable compared to other soil microorganisms , including Rhodococcus (37.90–38.95% of C sources) , Rhizobium (35.80%) and Sinorhizobium meliloti (40.00%) . Pseudomonas sp. ANT_H12B was also able to use a vast majority of nitrogen sources (94.70%), which exceeded the ability of Pseudomonas stuzeri (77.90%) , Rhizobium (approximately 54.30%) , Rhodococcus (approximately 65.30%) and Sinorhizobium meliloti strains (approximately 88.00%) . We demonstrated the ability of Pseudomonas sp. ANT_H12B to use peptides and amino acids both as a carbon and nitrogen source, confirming results obtained during genomic analysis, in which we identified many genes encoding enzymes involved in amino acid and protein metabolism. Such ability is characteristic of many psychrotolerant bacteria since the main nitrogen input to soil in polar environments is in the form of proteins or short peptides, which decomposition is slower due to low temperatures . Short peptides are one of the biggest contributors of the soil-dissolved nitrogen pool of polar environments. Microbial communities can directly take them up and subject them to further decomposition inside the cells [ – ]. Pseudomonas sp. ANT_H12B exhibited adaptation to these conditions possessing several dipeptidases and enzymes that allow for further transformation of amino acids to main metabolic pathways intermediates and use them as C and N sources.
Pseudomonas sp. ANT_H12B Bacterial siderophores production varies significantly. Some bacteria can biosynthesize in culture media approximately 10 M of siderophores ( Azotoacter vinelandii) , while others exhibit a production rate of 1.6 mM ( Streptomyces olivaceus ) . However, the concentration of siderophores in culture media usually ranges between 100 and 200 µM . This diversity is driven by various factors e.g., culture conditions, medium composition, and bacterial taxonomy . Significant differences can also be observed in closely related microorganisms, even in the same species, e.g., three different strains of Azotobacter vinelandii producing siderophores in concentrations of 10 , 140 , or 360 µM . Members of the Pseudomonas genus are described as efficient producers of greenish-pigmented siderophores – pyoverdine . However, the diversity of pyoverdine production is also observed within this taxon. Moderate pyoverdine producers can obtain concentrations of 25-80 µM , while more efficient strains described in the literature are able to obtain 260-300 µM . In this context, Pseudomonas sp. ANT_H12B producing as high as 510 μM, can be regarded as a very efficient bacterial siderophores producer, with its outstanding pyoverdine biosynthesis rate among other Pseudomonas bacteria. Moreover, we performed detailed HPLC analysis to confirm qualitatively and quantitatively pyoverdine production. Unfortunately, in many studies, pyoverdine concentration is estimated only using CAS assay, which is valuable as a screening method, but it lacks precision in siderophores quantification since chelating compounds other than siderophores could affect its results . Temperature is one of the most critical factors influencing bacterial culture dynamics, and it also significantly impacts siderophores' production efficiency. It has been reported in several studies that the optimal temperature for siderophores production is often similar to optimal or sub-optimal for bacterial growth . The majority of described siderophores producers are mesophilic bacteria with a preference for moderate temperatures in the range of 25–37 °C [ , , ]. Although many psychrotolerant or psychrophilic microorganisms has been described as siderophores producers, the specific data about their productivity and characteristics are scarce and describe this process only in a qualitative approach. In our study, we characterized siderophores production in low temperatures in more detail, including biotechnological aspects of culture conditions and a quantitative approach. Results showed that Pseudomonas sp. ANT_H12B, an example of a psychrotolerant microorganism, exhibits very efficient pyoverdine production in a broad range of temperatures (4–22 °C). This flexibility could benefit biotechnological use since extensive temperature control is not required . The composition of the growth medium, particularly the carbon and nitrogen sources, plays a crucial role in siderophores production . Carbon, being a major component of biomass, significantly affects genetic and physiological processes, leading to varied qualitative and quantitative composition of produced metabolites [ – ] . In the case of siderophores production, many microorganisms exhibit a preference for gluconeogenic substrates (organic acids), especially those from the Pseudomonas genus . It has been proposed that gluconeogenic substrates increase metabolic fluxes toward the Krebs Cycle, providing an increased supply of pyoverdine intermediates . 
However, contrary to those observations, Pseudomonas sp. ANT_H12B exhibited the highest siderophore production when a glycolytic substrate (glucose) was used. This finding suggests that the metabolic profiles of psychrotolerant bacteria can differ significantly, even in microorganisms from the same taxon . Pseudomonas strains generally prefer organic acids because of catabolite repression and the absence of phosphofructokinase, an important enzyme of the glycolytic pathway . However, in the genome of Pseudomonas sp. ANT_H12B we identified a phosphofructokinase gene, indicating that carbon metabolism in this strain differs from that described in most Pseudomonas strains from moderate climates. Further studies of the metabolism, physiology, and genetics of Pseudomonas sp. ANT_H12B could reveal more about the specifics of psychrotolerant soil microorganisms. Nitrogen sources in the medium did not strongly influence siderophore production by Pseudomonas sp. ANT_H12B. Several studies have shown that adding amino acids as a nitrogen source, e.g., L-asparagine or glutamic acid, can improve siderophore production in Pseudomonas strains . In the case of Pseudomonas sp. ANT_H12B, both organic and inorganic nitrogen sources resulted in efficient siderophore production. The efficiency of siderophore production has also been linked to the pH of the culture. A decrease in medium pH has been shown to correlate with a reduction in siderophore concentration, as these compounds are labile in acidic environments [ , , ]. Higher pyoverdine biosynthesis has been associated with neutral to slightly alkaline conditions [ , , ]. In our study we observed a different pattern: pyoverdine production by Pseudomonas sp. ANT_H12B was not inhibited by low medium pH. Moreover, we obtained the highest rate of pyoverdine production, confirmed by HPLC analysis, using the GCl or GSA medium, which resulted in significant acidification of the culture.
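For completeness, spectrophotometric pyoverdine estimates of the kind quoted above follow directly from the Beer-Lambert law, c = A / (epsilon * l). A minimal Python sketch is given below; the extinction coefficient and absorbance value are illustrative assumptions, not values reported in this study.

def pyoverdine_uM(a400: float, epsilon_m_cm: float = 19000.0, path_cm: float = 1.0) -> float:
    # Beer-Lambert law: concentration (mol/L) = absorbance / (epsilon * path length),
    # converted to µM. The default epsilon is an assumed illustrative value,
    # not one measured or cited in this study.
    return a400 / (epsilon_m_cm * path_cm) * 1e6

print(f"{pyoverdine_uM(0.95):.0f} uM")  # ~50 µM for A400 = 0.95 under these assumptions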
Pyoverdine production can be regarded as the most important PGP activity of Pseudomonas sp. ANT_H12B, owing to the efficiency of this process and its high plant-stimulating potential. Pyoverdine is described as one of the most important siderophores in the agricultural context [ , , , , , ]. Pyoverdine can significantly improve plant nutrition, since Fe-pyoverdine complexes provide iron to various plants more efficiently than Fe-EDTA complexes . In field experiments with pea (Pisum sativum), pyoverdine improved the plant supply not only of iron but also of other nutrients (e.g., Zn or Mg) . Moreover, pyoverdine efficiently provides iron to plants with various Fe-uptake strategies [ , , ]. However, this effect could be observed under iron-limited soil conditions. In our study we also elucidated the role of the other metabolites produced during pyoverdine biosynthesis. The overall effect of SAM was tested by using them as priming agents for pea, tobacco, and beetroot seeds in germination tests. The results showed not only a lack of SAM phytotoxicity but also stimulation of the seed germination percentage. A positive role of various metabolites has also been shown in other seed germination experiments, e.g., priming of triticale (Triticale hexaploide L.) seeds with melatonin increased the germination rate by 57.67% . In other experiments, treatment of seeds with gibberellic acid and/or indole acetic acid (IAA) improved germination parameters and the subsequent cultivation of Masson pine (Pinus massoniana) and Aspilia africana . The PGP potential of bacterial metabolites was studied in germination tests of pepper and maize, where treatment with cell-free supernatant from Bacillus sp. AS19 significantly improved the process . Among the chemical compounds identified in SAM was IAA, which could play a major role in the improvement of seed germination. IAA is a plant hormone of the auxin class that has been linked with the regulation of various plant physiological processes, including growth and development. The main biological roles of IAA include the regulation of cell division, elongation, and differentiation [ – ]. We observed biosynthesis of IAA with every tested medium, obtaining the best results with the L-asparagine-containing media GASN and GlASN. The addition of amino acids has been shown in other papers to potentially increase bacterial biosynthesis of IAA [ – ]. Exogenous bacterial IAA in the rhizosphere positively impacts plant growth, mainly through stimulation of root development, which improves nutrient uptake . Moreover, foliar application of IAA has resulted in increased leaf area and plant height . Application of IAA at a concentration of 0.017 mg L −1 (similar to the concentrations found in SAM) was also demonstrated to promote seed germination in experiments with Masson pine (Pinus massoniana) . The PGP potential of SAM could be broader than direct stimulation of seed germination, owing to the presence of plant-beneficial compounds whose activity would be exhibited in soil conditions, e.g., organic acids (OA), which we also identified in SAM. This group of compounds is important in sustainable agriculture, since it is associated with various PGP properties . OA can directly improve plant fitness through involvement in nutrient acquisition, e.g., by solubilization and mobilization of micronutrients and macronutrients [ – ]. Phosphorus and potassium are vital plant macronutrients and are present in the soil in large amounts.
However, they mainly exist in the form of insoluble minerals, and their bioavailability is limited . Various bacterial carboxylic acids, e.g., propionic acid, lactic acid, 2-ketogluconic acid, citric acid, tartaric acid, acetic acid, oxalic acid, glycolic acid, succinic acid, malonic acid, and fumaric acid, have been shown to be involved in the solubilization of P and K in soils [ , , , ]. OA could also improve the bioavailable zinc pool of the soil; zinc deficiency is one of the most widespread deficiencies among plants . In the GC/MS analysis, we showed that the majority of compounds found in the SAM produced by Pseudomonas sp. ANT_H12B cultures belong to a subgroup of organic acids, namely fatty acids. These compounds are very important in the cold adaptation of bacteria [ – ]. Bacteria growing at low temperatures change the chemical composition of their membranes, increasing the amount of unsaturated fatty acids (UFA) [ , , ]. A higher UFA content enables the cell to maintain increased membrane fluidity, which is crucial at lower temperatures and prevents membrane destabilization [ , , , ]. The application of fatty acids in agriculture could be beneficial, as antifungal activities of these compounds have been described. Antagonistic activities of various fatty acids (butyric acid, caproic acid, caprylic acid, capric acid, lauric acid, myristic acid, palmitic acid, oleic acid, and linoleic acid) have been shown against phytopathogenic fungi . In other studies, bacteria from the Bacillus and Pseudomonas genera inhibited the mycelial growth of Fusarium solani through the emission of volatile compounds, including fatty acids . The biocontrol properties of fatty acids could also be useful against eukaryotic parasites, as antagonistic activity of nine compounds from this group has been shown against the nematode Meloidogyne incognita , which is responsible for root infections . SAM could also increase overall soil quality and fertility in an indirect mode of action, by stimulating soil microorganisms, which could eventually improve plant health and growth. Several studies have shown that the application of OA to soil is associated with increased diversity of microbial communities, positively affecting PGP microorganisms [ , , ]. Nitrogen-fixing bacteria (NFB), which reduce atmospheric nitrogen to ammonia, are an important PGP group of microorganisms that contribute to soil fertility by increasing the availability of this crucial element . Many NFB preferentially use OA as a carbon source; thus, their numbers and activity could be increased by the addition of OA to soil . Additionally, other groups of compounds found in SAM, e.g., sugars and alcohols, could also benefit microbial growth and contribute to soil microbiome quality . One of the most important properties of the SAM was their variable pH, which ranged from 4.5 to 8.5 depending on medium composition. Despite this variability, we identified compounds with PGP properties, e.g., siderophores, auxins, and organic acids, in every studied SAM. pH is one of the most important factors affecting soil properties, both abiotic (e.g., nutrient availability) and biotic (microbiome composition and activity) . Accordingly, different plant species have different optimal soil pH ranges, from acidic to alkaline. In this context, the variability of SAM pH could be advantageous in agriculture, since it allows the development of tailored products meeting the specific needs of different crops .
With the use of genomic and phenotypic analysis, we optimized the siderophore production process by developing five novel media compositions with various carbon and nitrogen sources, which improve the cost-efficiency of production. The metabolites produced with each medium shared a high concentration of pyoverdine but varied significantly in pH, enabling their use in different soil and plant contexts. In particular, we identified a high concentration of pyoverdine in acidic media (pH < 5), which is unique in siderophore research, as these compounds typically degrade under low pH conditions. We also demonstrated that during siderophore production on each newly designed medium, other PGP compounds were produced, e.g., auxins, organic acids, and fatty acids, and we showed their growth-stimulating potential in germination tests of pea, beetroot, and tobacco seeds. Our findings indicate that unpurified siderophore solutions containing accompanying PGP metabolites (SAM) could form the basis of plant-stimulating bioproducts, since they not only reduce production costs but also provide the added value of various PGP compounds. Our study highlights the importance of using metabolically versatile bacteria, such as Pseudomonas sp. ANT_H12B, to harness the full PGP potential of microbes for agriculture.
Additional file 1. Optimization of physicochemical and biological conditions for efficient siderophores production. Additional file 2. List of enzymes involved in carbon metabolism identified in Pseudomonas sp. ANT_H12B genome.
|
Retrospective analysis of 217 fatal intoxication autopsy cases from 2009 to 2021: temporal trends in fatal intoxication at Tongji center for medicolegal expertise, Hubei, China
|
df8e3c09-bd27-44f4-8dec-1153010947e7
|
10150053
|
Forensic Medicine[mh]
|
Intoxication is a global public health concern. In 2015, accidental intoxications caused 86,400 fatalities worldwide, at a rate of 1.2 per 100,000 people ( ). More than 90% of intoxication-related deaths occur in lower-middle-income countries ( ). China is an emerging, agriculturally based nation; pesticide intoxication is therefore one of the most prevalent forms of intoxication in China. However, the pattern of intoxication has changed considerably owing to rapid urbanization and regulatory prohibitions on the use of particular toxicants ( ). It is therefore anticipated that the characteristics of intoxications and the hazardous compounds linked to deaths may have changed. The tremendous economic growth and changing lifestyles in China over the last few decades may also have influenced intoxication events. Our data may provide evidence on the outcome of interventions and suggest further measures to address intoxication. The retrospective data reported in this study could be a valuable resource for forensic pathologists and police officers dealing with intoxication cases, because there are currently no official statistics on autopsy data of intoxication deaths in China. They can also serve as a reference for identifying temporal trends in intoxication events and for creating public health intervention strategies. Our research group previously reported intoxication deaths at the Tongji Center for Medicolegal Expertise in Hubei (TCMEH) from 1999 through 2008 ( ); here, those results are contrasted with the intoxication cases recorded in the most recent 13 years (2009–2021).
2.1. Study setting and case sources TCMEH is a forensic institution affiliated with the Department of Forensic Medicine, Tongji Medical College, Huazhong University of Science and Technology, in Wuhan, Hubei. The institution accepts cases for investigation from Hubei and surrounding provinces, such as Henan, Hunan, Jiangxi, and Fujian. We retrospectively examined 4,753 autopsy documents recorded between January 2009 and December 2021 at TCMEH. From these cases, we comprehensively evaluated the autopsy records, histopathology, toxicology test reports, case information, and scene evidence of the deceased, and identified 217 cases with intoxication as the primary cause of death as the subjects of this retrospective analysis, after excluding cases with conflicting causes of death or insufficient information. The next of kin of the deceased provided written informed consent. Data analyzed in this study were obtained from TCMEH with the approval of the Tongji Medical College Ethics Committee at Huazhong University of Science and Technology. 2.2. Toxicological analysis For all 217 cases, toxicological analyses were performed either at our toxicological laboratory or at the toxicological laboratories of the national, provincial, and local public security agencies, using methods such as GC-MS/MS, LC-MS/MS, HS-GC, gold immunochromatographic assay, and UV-visible spectrophotometry. To determine the potential presence of various toxicants and the exposure pathways, samples of urine, blood (heart blood or peripheral blood), liver, kidney, stomach wall, and stomach contents were obtained. Specific biological materials were examined in a few unique cases; for instance, when skin-contact intoxication was suspected, the local skin was tested for the toxicant. The use of these samples for toxicological testing is in line with the public safety industry standards of the People's Republic of China. Toxicological analysis provides good evidence, but the cause of death must be determined in conjunction with the autopsy report and the circumstances of the case. The blood concentration standards used for diagnosing fatal intoxications were in accordance with the pertinent national/industrial standards or based on the lethal dose data published in national textbooks ( , ). Depending on the nature of the toxicant or the epidemiological characteristics of the case, toxicants were grouped into nine classes as follows: pesticides (rodenticides, insecticides, and herbicides); prescription medications; illicit drugs (narcotic drugs and addiction-inducing psychotropic substances); alcohol; toxic plants and animals; metal salts; combined intoxication; other compounds (e.g., nitrite, succinylcholine, and cyanide); and unidentified toxicants. Because of technological limitations, some early cases lacked qualitative toxicant data. 2.3. Causes and manner of death assessment The possibility of mechanical asphyxiation, mechanical trauma, or disease was ruled out during autopsy, pathological investigation, and toxicological analysis. To establish that intoxication was the primary cause of death in each case, two forensic medical specialists thoroughly examined the data, including the case briefing, clinical histories, autopsy records, and toxicology findings. A third forensic examiner then reviewed the final report before it was released. In China, the police, not the medical examiner, determine the manner of death.
The medical examiner's determination of the cause of death, the case investigation, autopsy reports, and the results of the toxicological study were all considered by the police when determining the manner of death. 2.4. Statistical analysis Microsoft Excel 2016 was used to organize and summarize the data and generate the figures. IBM SPSS Statistics for Windows, version 27.0 (IBM Corp., Armonk, NY, USA) was used to describe the data, which are presented as mean ± standard deviation. Differences were evaluated with the χ² test, with P < 0.05 considered significant.
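To make the test concrete, the Python sketch below (using scipy purely for illustration; the paper itself used SPSS) reproduces one of the period comparisons reported in the Results from case counts reconstructed out of the published percentages, so it should be read as an illustrative check rather than the original computation.

# Chi-square test on a 2x2 period-by-toxicant table. Counts are reconstructed
# from the reported percentages (organophosphorus deaths: 23/218 in 1999-2008
# vs. 39/217 in 2009-2021), so treat this as an illustrative check only.
from scipy.stats import chi2_contingency

table = [[23, 218 - 23],   # 1999-2008: organophosphorus vs. other intoxication deaths
         [39, 217 - 39]]   # 2009-2021
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # chi2 = 4.902, p = 0.027, matching the Results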
3.1. Incidence and trends A total of 4,753 death cases, including 217 in which intoxication was the principal cause of death, were accepted at TCMEH between 2009 and 2021. The yearly number of deaths due to intoxication ranged from 4 to 30 (average, 17), with the highest (6.4%) and lowest (1.9%) percentages of intoxication fatalities in 2013 and 2019, respectively. Compared with 1999–2008 (which recorded 218 intoxication deaths among 2,416 autopsies) ( ), the proportion of forensic autopsy cases attributable to intoxication fell by 4.4 percentage points. shows the total number of autopsy cases and fatal intoxication cases caused by various toxicants each year. 3.2. Sex and age distribution Overall, 132 males (60.8%) and 85 females (39.2%) died from intoxications. The median age of the deceased was 36 years (mean, 36.0 ± 16.7; range, 7 months to 75 years; excluding the eight anonymous corpses). The mean ages were 37 ± 16.1 years for men and 36 ± 18.1 years for women. displays the age distribution of intoxication-related fatalities in 2009–2021 and 1999–2008. From 2009 to 2021, fatal intoxication was most frequent among individuals aged 30–39 years (24.4% of cases), followed by those aged 40–49 years (18.4% of cases). displays the age and sex distribution of fatal intoxication cases from 1999 to 2021. 3.3. Routes of exposure The routes of exposure in both this and the previous study ( ) are listed in ; oral ingestion was the most common exposure route (66.4%), followed by inhalation (21.7%) and injection (5.1%). There were no significant differences between the 1999–2008 and 2009–2021 data. 3.4. Toxic agents This study grouped all toxicants into nine classes and further categorized them into various subclasses. Compared with the previous report period ( ), shows the number and percentage of intoxication cases according to these classes and subclasses. Rodenticides, insecticides, and herbicides are types of pesticide. Methanol and ethanol are examples of alcohol. Amphetamines, heroin, and morphine are classified as illicit drugs. Combined intoxication refers to exposure to two or more types of toxicants. Other compounds included succinylcholine, nitrite, and cyanide. We classified arsenide and barium chloride as metal salts. In this study, pesticides were the main cause of intoxication mortality (33.2% of deaths), with organophosphorus compounds accounting for the largest share of pesticide deaths (54.2%). In 1999–2008, rodenticides caused the highest number of deaths (43, 19.7%), and pesticides overall (37.6%) represented a significant cause of intoxication fatalities. Comparing 1999–2008 with 2009–2021, the percentages of deaths due to organophosphorus compounds (10.6 vs. 18.0%, χ² = 4.902, p = 0.027), ethanol (10.1 vs. 18.0%, χ² = 5.602, p = 0.018), amphetamines (0.0 vs. 4.6%, χ² = 8.333, p = 0.004), and phosphine rodenticides (0.0 vs. 4.6%, χ² = 8.333, p = 0.004) increased, whereas the percentages of deaths due to tetramine (17.9 vs. 2.8%, χ² = 26.823, p < 0.001) and carbon monoxide (16.5 vs. 8.3%, χ² = 6.756, p = 0.009) decreased. The distribution of intoxications due to the various toxic substances according to sex is shown in . Alcohol (n = 28, 21.2%), insecticides (n = 23, 17.4%), illicit drugs (n = 16, 12.1%), and other compounds (n = 16, 12.1%) caused fatal intoxications more often in males, whereas insecticides (n = 24, 28.2%), alcohol (n = 14, 16.5%), and rodenticides (n = 12, 14.1%) were most frequent in females ( ). 3.5.
Manner of death Among the 217 intoxication cases, 131 (60.4%) were accidental deaths, 57 (26.3%) suicides, 14 (6.5%) homicides, and 15 (6.9%) undetermined. The manner of death according to the type of intoxication is presented in . The intoxication types to which accidental deaths were most commonly attributable were alcohol (n = 42, 32.1%), illicit drugs and prescription medicines (n = 22, 16.8%), and toxic plants and animals (n = 16, 12.2%). Pesticides accounted for the largest proportion of intoxication deaths due to suicide (45, 78.9%). The extensive analysis of data from 1999 to 2021 revealed a strong correlation between the cause and the manner of death. Accidental intoxication deaths are usually unmotivated, whereas suicides and homicides are considered motivated. Pesticides were used significantly more often in suicide and homicide cases than in accidental cases (p < 0.0001). By contrast, accidental deaths were significantly associated with alcohol overdose (p < 0.0001), intoxication by toxic animals and plants (p = 0.002), and combined intoxication (p = 0.023). All the results are summarized in .
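The manner-by-toxicant breakdowns above come from cross-tabulating the case records; a minimal pandas sketch of that step follows (the example records are hypothetical, not study data).

# Tabulate manner of death against toxicant class from case-level records.
# The three records below are invented examples for illustration.
import pandas as pd

cases = pd.DataFrame({
    "toxicant": ["pesticide", "alcohol", "pesticide"],
    "manner": ["suicide", "accident", "homicide"],
})
table = pd.crosstab(cases["toxicant"], cases["manner"], margins=True)
print(table)
# scipy.stats.chi2_contingency(table.iloc[:-1, :-1]) would then test the association.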
This study recorded 217 intoxication autopsy cases at TCMEH from 2009 to 2021. There were 4,753 autopsy cases in 2009–2021 versus 2,416 in 1999–2008 ( ). Overall, the proportion of intoxication deaths decreased from 9.0% in 1999–2008 ( ) to 4.7% in 2009–2021. This decrease (p < 0.0001) may be attributable to the rise in the number of autopsies and a decline in deaths from intoxication by certain substances, especially carbon monoxide and tetramine, between 2009 and 2021. 4.1. Sex and age Compared with our previous 1999–2008 report ( ), by 2009–2021 alcohol (n = 28) had surpassed insecticides (n = 23) as the leading cause of intoxication-associated deaths in males. Rodenticides accounted for more deaths in females than in males (p = 0.045), whereas toxic plants and animals accounted for a greater proportion of intoxication deaths among males than among females (p = 0.023) ( ). Males tend to be more exposed to alcohol because of socialization and work pressure in China, given the traditional family structure in which men work outside the home and women care for the family ( ). For these reasons, men are more inclined to consume ethanol and are thus more susceptible to ethanol intoxication ( ). Differences were also observed in the distribution of the manner of death between the sexes: men died from accidents at a higher rate than women (p = 0.005), whereas women were more likely to die by suicide than men (p = 0.006) ( ). In preventing female suicide, the issue of domestic violence cannot be disregarded. Family and marriage conflicts in China frequently involve domestic violence, and more than 99% of those who engage in domestic violence during these conflicts are men ( ). It is therefore essential to strengthen the protection of women's rights and interests, and legal aid agencies should intensify efforts to promote the law and to educate and encourage women to use the legal system to their advantage. 4.2. Toxic agents 4.2.1. Pesticide intoxications Similar to findings from Shenyang, China ( ), pesticides caused 33.2% of all intoxication-related deaths during this study period. Of the 72 deaths caused by pesticides, 47, 5, and 20 were due to insecticide, herbicide, and rodenticide exposure, respectively. Most of these deaths were suicides (45, 62.5%), followed by accidents (13, 18.1%) and homicides (6, 8.3%). Of the 39 cases of organophosphorus insecticide intoxication, 18 were due to oral ingestion of dichlorvos. The primary reason for this is the availability of numerous highly hazardous and relatively inexpensive organophosphorus insecticides worldwide ( ). The World Health Organization (WHO) estimates that there are approximately one million cases of pesticide intoxication annually, resulting in ~20,000 deaths globally ( ). According to a nationally representative survey of suicide mortality in India, pesticides are commonly used as suicide tools ( ), mostly insecticides, as in China; since 2001 (except for 2005), the decline in India's pesticide suicide rate has accelerated ( ). China's agricultural industry has outlawed the production and sale of highly dangerous pesticides, such as methomyl. However, since some of these pesticides are still on the market, the number of pesticide suicides has not decreased significantly. In the present study, paraquat exposure resulted in four deaths, which had not been observed in the prior period; two of these cases were murders.
Similar cases have been reported in other regions of China ( ). In one of our cases, the suspect repeatedly applied small quantities of paraquat to the deceased's undergarments. As a result, the clinical onset of the illness was slow, and it was difficult to identify paraquat in the body at the late stage because of its metabolism; this, together with the concealed method of the crime, made the murder challenging to solve. In July 2014, China revoked the registration and manufacturing licenses for liquid paraquat, permitting production only for export; domestic sale and use were discontinued in July 2016, and its soluble gel has been prohibited since September 2020 ( ). Compared with 1999–2008 ( ), the proportion of deaths due to rodenticide intoxication was substantially lower in the present study (9.2% in 2009–2021 vs. 19.7% in 1999–2008, χ² = 11.207, p = 0.001), mainly owing to the decline in tetramine intoxications (2.8% in 2009–2021 vs. 17.9% in 1999–2008, χ² = 26.823, p < 0.001). In 2013, China's Ministry of Agriculture demanded a "designated area" for selling highly dangerous pesticides and for acquiring pesticides with recognizable brand names. Simultaneously, the government seized and cleared away tetramine stocks and strictly enforced the policy to prevent harm from tetramine ( ). However, the proportion of phosphine rodenticides increased (from 0.0% in 1999–2008 to 4.6% in 2009–2021, χ² = 10.282, p = 0.001). Phosphide intoxication is the fourth most prevalent rodenticide intoxication in the United States, and aluminum phosphide is widely used in developing nations ( ). Notably, 10 of the 20 deaths attributed to rodenticide intoxication in this study resulted from unintentional phosphine inhalation. Aluminum phosphide and zinc phosphide react with water in the air and with hydrochloric acid in the stomach to produce phosphine gas, which is highly poisonous ( – ). According to our research, phosphine was frequently associated with household intoxication: the deceased belonged to four distinct families, each with more than two victims. These accidental fatalities resulted from the indoor use and inappropriate storage of solid phosphides. Our research also showed that children had a lower tolerance for phosphine inhalation than adults: 9 of the 10 deaths occurred in children under 13 years old, the youngest being 7 months old. According to previous studies, many phosphine intoxication victims are children ( ), and children are more sensitive to phosphine intoxication ( ); even among child victims, the younger the victim, the more severe the symptoms. Improving the packaging of phosphine rodenticides and keeping them out of the reach of children would prevent accidental exposure. The government and schools should also enhance pesticide safety education for children. 4.2.2. Alcohol We observed 39 deaths (25 males and 14 females) due to ethanol intoxication. The proportion of ethanol intoxication deaths increased significantly, from 10.1% in 1999–2008 ( ) to 18.0% in 2009–2021 (p = 0.018), which may be related to rising social and life pressures. Conroy and Visser reported that not drinking alcohol is construed as strange behavior ( ). Most ethanol-related fatalities occurred in males (64.1%), and the youngest victim was only 17 years old.
WHO's 2018 Global Status Report on Alcohol and Health reveals that more than 3 million people die annually due to alcohol, with more than three-quarters of those deaths occurring in men ( ). The acetaldehyde dehydrogenase gene mutation rate in the Asian population is higher than that in European and American populations ( ); Asians are therefore more susceptible to severe intoxication and even death. According to the fifth edition of Forensic Toxicology, the blood concentration indicating ethanol intoxication is 100 mg/dL, whereas the lethal blood concentration is 400–500 mg/dL ( ). Because ethanol can be produced by postmortem microbial action, a blood ethanol/n-propanol concentration ratio >20 indicates that the individual consumed alcohol during their lifetime. The average blood ethanol concentration of those who died from ethanol intoxication was 434.06 ± 133.5 mg/dL (range: 199.1–843.5 mg/dL). Among the 39 deaths from ethanol intoxication, 21 had blood ethanol concentrations of 400 mg/dL or more; in the remaining 18 cases, the blood ethanol concentration exceeded the level for severe intoxication. On examination, these 18 decedents variously had coronary heart disease, cardiomyopathy, chronic alcoholism, or alcoholic coma, but ethanol intoxication was their leading cause of death. One decedent in our practice, with severe fatty infiltration of the atrio-ventricular node, died after consuming alcohol with a measured blood ethanol concentration of only 199.1 mg/dL, indicating large individual variation. Furthermore, we observed variations in blood ethanol concentrations among the deceased, which might be related to the interval between death and the toxicological test, as well as to individual differences ( ). 4.2.3. Illicit drugs and prescription medicines In China, illicit drug-related cases are mainly handled by public security agencies, so our center receives relatively few of them. Twenty overdose cases involving illicit drugs, including narcotics (heroin and morphine) and psychoactive substances (methamphetamine and ketamine), were identified. There were 10 deaths caused by amphetamine overdose in 2009–2021 (including combined intoxication with amphetamines and other illicit drugs; χ² = 8.333, p = 0.004), an emerging phenomenon, at a mean age of 37 ± 8.8 years. Amphetamines are synthetic, addictive, mood-altering drugs used illegally as stimulants ( ). According to WHO, illicit drug misuse is a major concern among high school students ( ). Notably, no deceased individuals were reported to have used more than two substances concurrently between 1999 and 2008; nevertheless, five such cases were discovered in our most recent investigation. This indicates that, despite China's rigorous anti-drug policies, the targeted substances are still being obtained illegally. The average age of individuals who died from drug intoxication was 36.3 (range: 23–66) years, and 89.5% were aged 20–49 years. The synergistic effects of various drugs and the assessment of their lethal doses are of particular importance to forensic toxicologists; these issues require not only objective toxicology reports but also the empirical judgment of forensic scientists. There were eight prescription-medication-related deaths: seven involving sedative-hypnotic medications, such as phenothiazine, clozapine, diphenhydramine, and amitriptyline, and one involving insulin overdose via injection.
The psychological tolerance of individuals declines gradually as society develops, and the incidence of drug intoxication rises for various reasons, including family and social factors, survival pressure, and emotional stress. A retrospective analysis of acute intoxication cases at the emergency department of the First Hospital of China Medical University from 1997 to 2003 revealed that sedative-hypnotic medications accounted for 30.3% of medication intoxications, which warrants further study ( ). 4.2.4. Other compounds and metal salts In this category, five of the six cyanide-related deaths and five succinylcholine-related deaths were linked to animal hunting. These two chemicals are frequently used to produce "poison darts" for illegal animal hunting but can also be used for murder. The government should increase patrols and early warning systems to combat the unlawful sale of slingshots and poison darts and to crack down on the illegal hunting and sale of wildlife. All four nitrite intoxication deaths were food-related, and three of the victims were under 10 years old (7 months, 2 years, and 6 years). This shows that children are more likely than adults to mistakenly consume foods with high nitrite levels and that children have a higher mortality risk from intoxication. This may be due to body weight and food intake: children consume more food per unit of body weight than adults, making them more vulnerable to nitrite intoxication. To ensure food cleanliness and safety, the appropriate departments should improve food oversight while increasing public awareness of proper food handling and preservation techniques. 4.2.5. Toxic plants and animals In the present study, the 16 deaths due to toxic animals or plants included 10 cases of aconitine intoxication, two of strychnine intoxication, two of brucine intoxication, and two from snakebites. Consistent with the previous study's findings, aconitine intoxication remained the leading cause of death in this category ( ). All 10 cases were due to improper use, also consistent with the previous report ( ). Aconite is used in traditional Chinese medicine; improper or unprocessed preparation, overdose, and internal consumption of forms meant for external use are the common causes of intoxication ( ). To prevent such accidents, we recommend that the Market Supervision Administration and other relevant departments strengthen the supervision and management of the production, processing, and use of toxic herbal medicines, and prohibit the private production of medicinal wine for consumption and sale. Additionally, we encourage the general population to obtain herbal medications from formal Chinese hospitals rather than private clinics. 4.2.6. CO From 2009 to 2021, 18 cases of CO intoxication occurred, with an average blood carboxyhemoglobin (HbCO) concentration of 61.3 ± 12.0% (range: 39–83.4%). Compared with the 1999–2008 data (36 cases, 16.5%), the number of deaths due to CO intoxication at our institution decreased significantly (p = 0.009). Three situations commonly lead to CO intoxication in China: gas water heaters in the bathroom (the most common source), burning coal for warmth while sleeping with the windows closed, and sleeping in a vehicle with the air conditioning running. Natural gas and solar energy have increasingly replaced gas water heaters in several northern regions, resulting in a gradual decrease in CO intoxications.
Deaths caused by CO intoxication were mostly accidental, consistent with reports from northeastern China ( ). When individuals keep their windows closed in winter to block out the cold, they are more likely to poison themselves accidentally with CO. In one of our cases, a father left his 5-year-old son in the passenger seat of a small car for about 40 min without turning off the engine or opening the windows; on his return, the child showed no signs of life. Parents should remember that leaving children in a car is never a good idea; if they must, they should at least make sure the engine is turned off and the windows are cracked open for air circulation before leaving. We urge market supervision departments to tightly regulate the manufacturing and sale of gas stoves and gas water heaters and not to overlook CO in vehicles. Centralized heating may effectively lower the risk of CO intoxication, and we advocate its expansion. 4.2.7. Combined intoxication In total, 11 accidental deaths occurred due to combined intoxication by different agents. Four combinations were found: CO and alcohol (1 case), CO and illicit drugs (2 cases), alcohol and prescription medications (2 cases), and alcohol and illicit drugs (6 cases). Lee showed that illicit drug users with concomitant alcohol abuse have a significantly higher mortality rate than the general population ( ). According to previous reports ( ), death from CO combined with illicit drug intoxication may occur because illicit drugs increase the metabolic rate and oxygen consumption of the body, thus exacerbating the lethality of CO intoxication. Therefore, recognizing that illicit drugs, alcohol, and CO predispose to increased susceptibility to death, especially potentially preventable death, could help develop preventive measures. 4.3. Manner of death According to , the most common type of poison used in suicides was pesticides (45 cases, 78.9%). Implementing community psychological counseling and support systems would allow a better understanding of the emotional dynamics of suicidal individuals. Several studies have linked suicidal ideation and behavior to the availability of lethal means ( ), and strict control of the means of suicide would effectively reduce the suicide rate. In homicide cases, "other compounds" (succinylcholine, n = 4; concentrated sulfuric acid, n = 1; and cyanide, n = 1) and pesticides (n = 6) were the most common. The manner of death can reflect the motivation for intoxication: pesticides are more likely to be implicated in motivated deaths, such as suicide and homicide, whereas deaths caused by alcohol, CO, illicit drugs, prescription medicines, and toxic animals and plants are almost always accidental. As concluded by Moebus and Bödeker, reducing access to pesticides has a positive effect on reducing the incidence of pesticide intoxication ( ). However, a community cluster-randomized trial of household pesticide lock-up storage in rural Asia found that locking up pesticides did not reduce suicide mortality ( ). Therefore, banning highly toxic pesticides at the production source appears to be the only effective way to reduce deaths from suicide by pesticides. Regarding CO and alcohol, safety education should be further developed, with a focus on appropriate use.
For prescription medicines, it is important to emphasize following medical advice and avoiding overdose; likewise, for herbal medicines, we advocate obtaining them from regular Chinese hospitals. Lastly, the government should take severe measures against illicit drug-related criminal activities and enhance the status and role of human intelligence in anti-drug efforts. Regarding limitations, the data used were from a single forensic pathology center where intoxication-related deaths are investigated, and they do not cover the whole of China. Nevertheless, this retrospective analysis may reflect the intoxication situation in central China to a certain extent. It provides valuable information for medical examiners and police officers handling intoxication cases, as well as useful suggestions for improving public safety policies.
Compared with our previous 1999–2008 report ( ), by 2009–2021, alcohol ( n = 28) had surpassed insecticides ( n = 23) as the leading cause of intoxication-associated deaths in males. Rodenticides ( p = 0.045) accounted for more deaths in females than in males. Toxic plants and animals ( p = 0.023) accounted for a greater proportion of intoxication deaths among males than that among females ( ). Males tend to be more exposed to alcohol because of socialization and work pressure in China due to the traditional family structure where men work outside the home and women care for the family ( ). Since men are more inclined to use ethanol because of these reasons, they are more susceptible to ethanol intoxications ( ). Differences were also observed in the distribution of the manner of death among the sexes. Our research shows that men die from accidents at a higher rate than women ( p = 0.005), whereas women are more likely to die by suicide than men ( p = 0.006) ( ). In preventing female suicide, the issue of domestic violence cannot be disregarded. Family marriage conflicts in China frequently involve domestic violence, and more than 99% of those who engage in domestic violence during these conflicts are men ( ). Therefore, it is essential to strengthen the protection of women's rights and interests, and legal aid agencies should intensify efforts to promote the law and educate and inspire women to use the legal system to their advantage bravely and effectively.
4.2.1. Pesticide intoxications Similar to the Shenyang, China findings ( ), pesticides led to 33.2% of all intoxication-related deaths during this study period. Overall, 47, 5, and 20 of the 72 deaths caused by pesticides were due to insecticide, herbicide, and rodenticide exposure, respectively. Most deaths were due to suicide (45, 62.5%), followed by accidents (13, 18.1%) and (6, 8.3%) homicides. Of the 39 cases of organophosphorus insecticide intoxications, 18 were due to oral administration of dichlorvos. The primary reason for this is the existence of numerous highly hazardous and relatively inexpensive organophosphorus insecticides worldwide ( ). The World Health Organization (WHO) estimates that there are approximately one million cases of pesticide intoxications annually, resulting in ~20,000 deaths globally ( ). According to a nationally representative survey of suicide mortality in India, pesticides are commonly used as suicide tools ( ), with the majority using insecticides, as in China. Since 2001 (except for 2005), the decline in their pesticide suicide rate has accelerated ( ). China's agricultural industry has outlawed the production and sale of highly dangerous pesticides, such as methomyl. However, since some pesticides are still on the market, the number of pesticide suicides has not decreased significantly. In the latest study, paraquat exposure resulted in four deaths, unlike that reported in the prior study. Additionally, two of the cases involved murder. There have been similar complaints in other regions of China ( ). In one of our cases, the suspect applied it to the deceased's undergarments by applying a small quantity each time. As a result, the clinical onset of the sickness was sluggish, and it was difficult to identify the components of paraquat in the body at the late stage due to the body's metabolism; this, together with the concealed methods of the crime, made it challenging to solve such murder. Since July 2014, China has revoked the registration and manufacturing license for liquid paraquat, and has permitted production only for export. The domestic sale and usage were discontinued in July 2016, and its soluble gel has been prohibited since September 2020 ( ). Compared to that in 1999–2008 ( ), the proportion of deaths due to rodenticide intoxications was substantially reduced in the present study (9.2% in 2009–2021 vs. 19.7% in 1999–2008, χ 2 =11.207, p = 0.001), mainly owing to the decline in tetramine intoxications (2.8% in 2009–2021 vs. 17.9% in 1999–2008, χ 2 =26.823, p < 0.001). In 2013, China's Ministry of Agriculture demanded a “designated area” for selling highly dangerous pesticides and acquiring pesticides with recognizable brand names. Simultaneously, the government seized and cleared up tetramine and strictly enforced the policy to avoid harm from tetramine ( ). However, the proportion of phosphine rodenticides has increased (from 0.0% in 1999–2008 to 4.6% in 2009–2021, χ 2 = 10.282, p = 0.001). Phosphate intoxication is the fourth most prevalent rodenticide intoxication in the United States, whereas aluminum phosphide is widely used in developing nations ( ). Notably, 10 of the 20 deaths attributed to rodenticide intoxications in this study were the result of unintentional phosphine inhalation. Aluminum phosphide and zinc phosphide react with water in the air and hydrochloric acid in the stomach to produce phosphine gas, which is highly poisonous ( – ). 
4.2.1. Pesticides

Similar to findings from Shenyang, China ( ), pesticides caused 33.2% of all intoxication-related deaths during this study period. Overall, 47, 5, and 20 of the 72 pesticide-related deaths were due to insecticide, herbicide, and rodenticide exposure, respectively. Most were suicides (45, 62.5%), followed by accidents (13, 18.1%) and homicides (6, 8.3%). Of the 39 cases of organophosphorus insecticide intoxication, 18 were due to oral ingestion of dichlorvos. The primary reason for this is the availability of numerous highly hazardous and relatively inexpensive organophosphorus insecticides worldwide ( ). The World Health Organization (WHO) estimates approximately one million cases of pesticide intoxication annually, resulting in ~20,000 deaths globally ( ). According to a nationally representative survey of suicide mortality in India, pesticides are commonly used as suicide tools ( ), with the majority using insecticides, as in China. Since 2001 (except for 2005), the decline in India's pesticide suicide rate has accelerated ( ). China's agricultural industry has outlawed the production and sale of highly dangerous pesticides, such as methomyl. However, since some of these pesticides remain on the market, the number of pesticide suicides has not decreased significantly. In the present study, paraquat exposure resulted in four deaths, unlike in the prior study, and two of these cases were homicides. Similar cases have been reported in other regions of China ( ). In one of our cases, the suspect repeatedly applied small quantities of paraquat to the deceased's undergarments. As a result, the clinical onset was insidious, and the body's metabolism made it difficult to identify paraquat components at a late stage; this, together with the concealed method of the crime, made the murder challenging to solve. Since July 2014, China has revoked the registration and manufacturing licenses for liquid paraquat, permitting production only for export. Domestic sale and use were discontinued in July 2016, and its soluble gel has been prohibited since September 2020 ( ). Compared to 1999–2008 ( ), the proportion of deaths due to rodenticide intoxication was substantially lower in the present study (9.2% in 2009–2021 vs. 19.7% in 1999–2008, χ2 = 11.207, p = 0.001), mainly owing to the decline in tetramine intoxications (2.8% in 2009–2021 vs. 17.9% in 1999–2008, χ2 = 26.823, p < 0.001). In 2013, China's Ministry of Agriculture mandated "designated areas" for selling highly dangerous pesticides and required that purchased pesticides carry recognizable brand names. Simultaneously, the government seized and cleared tetramine stocks and strictly enforced the policy to avoid harm from tetramine ( ). However, the proportion of phosphine-releasing rodenticides has increased (from 0.0% in 1999–2008 to 4.6% in 2009–2021, χ2 = 10.282, p = 0.001). Phosphide intoxication is the fourth most prevalent rodenticide intoxication in the United States, whereas aluminum phosphide is widely used in developing nations ( ). Notably, 10 of the 20 deaths attributed to rodenticide intoxication in this study resulted from unintentional phosphine inhalation. Aluminum phosphide and zinc phosphide react with water in the air and with hydrochloric acid in the stomach to produce phosphine gas, which is highly poisonous ( – ).
According to our research, phosphine was frequently associated with household intoxication: the deceased belonged to four distinct families, each with more than two victims. These accidental fatalities resulted from the indoor use and inappropriate storage of solid phosphides. Our data also suggest that children tolerate phosphine inhalation less well than adults: 9 of the 10 deaths occurred in children under 13 years old, the youngest being 7 months old. According to previous studies, many phosphine intoxication victims are children ( ), and children are more sensitive to phosphine intoxication ( ). Among child victims, the younger the child, the more severe the intoxication symptoms. Improving the packaging of phosphine rodenticides and keeping them out of the reach of children would prevent accidental exposures. The government and schools should also enhance pesticide safety education for children.
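The period-to-period comparisons reported above (e.g., rodenticide deaths: 9.2% of cases in 2009–2021 vs. 19.7% in 1999–2008) rest on 2×2 chi-square tests of proportions. A minimal sketch of such a test follows; the absolute 1999–2008 counts are assumptions back-calculated from the reported percentages (36 CO cases at 16.5% implies ~218 total cases), so the statistic will approximate rather than exactly reproduce the published χ2 values.

```python
from scipy.stats import chi2_contingency

# Rodenticide deaths vs. all other intoxication deaths, by period.
# 2009-2021: 20 of ~217 cases (from the text).
# 1999-2008: ~43 of ~218 cases (assumed, back-calculated from 19.7%).
table = [[20, 217 - 20],
         [43, 218 - 43]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```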
4.2.2. Alcohol

We observed 39 deaths (25 males and 14 females) due to ethanol intoxication. The proportion of ethanol intoxication deaths increased significantly, from 10.1% in 1999–2008 ( ) to 18.0% in 2009–2021 (p = 0.018), which may be related to rising social and lifestyle pressures; Conroy and Visser reported that not drinking alcohol is construed as strange behavior ( ). Most ethanol-related fatalities occurred in males (64.1%), and the youngest victim was only 17 years old. WHO's 2018 Global Status Report on Alcohol and Health reveals that more than 3 million people die annually due to alcohol, with more than three-quarters of those deaths occurring in men ( ). The acetaldehyde dehydrogenase gene mutation rate in Asian populations is higher than that in European and American populations ( ); Asians are therefore more susceptible to severe intoxication and even death. According to the fifth edition of Forensic Toxicology, ethanol intoxication begins at a blood concentration of about 100 mg/dL, whereas the lethal blood concentration is 400–500 mg/dL ( ). Because ethanol can also be produced by postmortem microbial action, a blood ethanol/n-propanol concentration ratio >20 is taken to indicate that the individual consumed alcohol during life. The average blood ethanol concentration of those who died from ethanol intoxication was 434.06 ± 133.5 mg/dL (range: 199.1–843.5 mg/dL). Among the 39 deaths from ethanol intoxication, 21 had blood ethanol concentrations of 400 mg/dL or more; in the remaining 18 cases, the concentration exceeded the level for severe intoxication. These 18 decedents had underlying conditions such as coronary heart disease, cardiomyopathy, chronic alcoholism, and alcoholic coma, but ethanol intoxication was nonetheless their leading cause of death. One decedent in our practice, who had severe fatty infiltration of the atrio-ventricular node and died after consuming alcohol, had a measured blood ethanol concentration of only 199.1 mg/dL, indicating large individual variation. Furthermore, the variation in blood ethanol concentrations among the deceased might be related to the interval between death and toxicological testing, as well as to individual differences ( ).
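The interpretive thresholds above (intoxication from ~100 mg/dL, a lethal range of 400–500 mg/dL, and an ethanol/n-propanol ratio >20 indicating antemortem ingestion) amount to a simple decision rule. A minimal sketch follows; the function name and cut-offs mirror the text and are illustrative, not a validated forensic protocol.

```python
def interpret_blood_ethanol(ethanol_mg_dl: float, n_propanol_mg_dl: float) -> str:
    """Rough interpretation of a postmortem blood ethanol result.

    Thresholds follow the discussion above: a ratio >20 suggests antemortem
    ingestion rather than postmortem microbial production; 400-500 mg/dL is
    the usual lethal range; ~100 mg/dL marks the onset of intoxication.
    """
    # If n-propanol was detected and the ratio is low, postmortem
    # ethanol production cannot be excluded.
    if n_propanol_mg_dl > 0 and ethanol_mg_dl / n_propanol_mg_dl <= 20:
        return "possible postmortem ethanol production; interpret with caution"
    if ethanol_mg_dl >= 400:
        return "within the usual lethal range"
    if ethanol_mg_dl >= 100:
        return "severe intoxication possible; consider comorbidities"
    return "below the usual intoxication threshold"

# Example: the decedent with AV-node fatty infiltration (199.1 mg/dL)
print(interpret_blood_ethanol(199.1, 1.0))
```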
4.2.3. Illicit drugs and prescription medicines

In China, illicit drug-related cases are mainly handled by public security organs, so our institution sees relatively few of them. We identified twenty overdose cases involving illicit drugs, including narcotics (heroin and morphine) and psychoactive substances (methamphetamine and ketamine). There were 10 deaths caused by amphetamine overdose, an emerging phenomenon, in 2009–2021 (including combined intoxication with amphetamines and other illicit drugs; χ2 = 8.333, p = 0.004), at a mean age of 37 ± 8.8 years. Amphetamines are synthetic, addictive, mood-altering drugs used illegally as stimulants ( ). According to WHO, illicit drug misuse is a major concern among high school students ( ). Notably, no deceased individuals were reported to have used more than two drugs concurrently between 1999 and 2008; nevertheless, five such cases were discovered in our most recent investigation. This indicates that, despite China's rigorous anti-drug policies, the targeted substances are still being obtained illegally. The average age of individuals who died from drug intoxication was 36.3 (range: 23–66) years, and 89.5% were aged 20–49 years. The synergistic effects of multiple drugs and the assessment of their lethal doses are of particular importance to forensic toxicologists; these issues require not only objective toxicology reports but also the empirical judgment of forensic scientists. There were eight prescription medication-related deaths: seven involving sedative-hypnotic medications, such as phenothiazine, clozapine, diphenhydramine, and amitriptyline, and one involving insulin overdose via injection. As pressures from family, society, survival, and emotional stress mount, individual psychological resilience gradually declines and the incidence of drug intoxication rises. From 1997 to 2003, a retrospective analysis of acute intoxication cases at the First Hospital of China Medical University's emergency department revealed that sedative-hypnotic medications accounted for 30.3% of medication intoxications, which warrants further study ( ).
4.2.4. Other compounds and metal salts

In this category, five of the six cyanide-related deaths and five succinylcholine-related deaths were linked to animal hunting. These two chemicals are frequently used to produce "poison darts" for illegal animal hunting but can also be used for murder. The government should increase patrols and early-warning systems to combat the unlawful sale of slingshots and poisoned darts and to crack down on the illegal hunting and sale of wildlife. All four nitrite intoxication deaths were food-related, and three of the victims were aged under 10 (7 months, 2 years, and 6 years). This shows that children are more likely than adults to mistakenly consume foods with high nitrite levels and that children carry a higher mortality risk from such intoxications. This may be due to body weight and food intake: because children consume more food per unit of body weight than adults, they are more vulnerable to nitrite intoxication. To ensure food cleanliness and safety, the appropriate departments should improve food oversight while increasing public awareness of proper food handling and preservation techniques.
4.2.5. Toxic plants and animals

In the present study, the 16 deaths due to toxic animals or plants comprised 10 aconitine intoxications, two strychnine intoxications, two brucine intoxications, and two snakebites. Consistent with the previous study's findings, aconitine intoxication remained the leading cause of death in this category ( ). All 10 cases were due to improper use, also consistent with the previous report ( ). Aconite is used in traditional Chinese medicine, and improper preparation, overdose, and internal consumption of preparations intended for external use are the common causes of intoxication ( ). To prevent such accidents, we recommend that the Market Supervision Administration and other relevant departments strengthen the supervision and management of the production, processing, and use of toxic herbal medicines, and prohibit the private production of medicinal wine for consumption and sale. Additionally, we encourage the general population to obtain herbal medications from licensed Chinese hospitals rather than private clinics.
4.2.6. CO

From 2009 to 2021, 18 cases of CO intoxication occurred, with an average blood carboxyhemoglobin (HbCO) concentration of 61.3 ± 12.0% (range: 39–83.4%). Compared with the 1999–2008 data (36 cases, 16.5%), the number of deaths due to CO intoxication at our institution has decreased significantly (p = 0.009). Three conditions commonly lead to CO intoxication in China: gas water heaters in the bathroom are the most common source, followed by burning coal for warmth while sleeping, and sleeping with the windows closed or in a vehicle while using air conditioning. Natural gas and solar energy have increasingly replaced gas water heaters in several northern regions, resulting in a gradual decrease in CO intoxications. Deaths caused by CO intoxication were mostly accidental, consistent with reports from northeastern China ( ). When individuals keep their windows closed in winter to block out the cold, they are more likely to accidentally poison themselves with CO. In one of our cases, a father left his 5-year-old son in the passenger seat of a small car and got out for about 40 min without turning the car off or opening the windows; on his return, the child showed no signs of life. Parents should remember that leaving children in a car is never a good idea, and if they must, they should at least ensure that the engine is turned off and the windows are cracked open for air circulation before leaving. We urge market supervision departments to tightly regulate the manufacturing and sale of gas stoves and gas water heaters, and not to overlook CO exposure in vehicles. Centralized heating effectively lowers the risk of CO intoxication, and we advocate its expansion.
4.2.7. Combined intoxication

In total, 11 accidental deaths occurred due to combined intoxication with different agents, in four combinations: CO and alcohol (1 case), CO and illicit drugs (2 cases), alcohol and prescription medications (2 cases), and alcohol and illicit drugs (6 cases). Lee showed that illicit drug users with concomitant alcohol abuse have a significantly higher mortality rate than the general population ( ). According to previous reports ( ), death from CO combined with illicit drug intoxication may occur because illicit drugs increase the body's metabolic rate and oxygen consumption, thereby accelerating death from CO intoxication. Recognizing that illicit drugs, alcohol, and CO jointly increase susceptibility to death, especially potentially preventable death, could therefore help in developing preventive measures.
According to , the most common type of poison used in suicides was pesticides (45 cases, 78.9%). Implementing community psychological counseling and support systems would allow a better understanding of the emotional dynamics of suicidal patients. Several studies have linked suicidal ideation and behavior to the availability of lethal means ( ), and strict control of suicide tools will effectively reduce the suicide rate. In homicide cases, "other compounds" (succinylcholine, n = 4; concentrated sulfuric acid, n = 1; and cyanide, n = 1) and pesticides (n = 6) were the most common. The manner of death can reflect the motivation for intoxication: pesticides are more often implicated in intentional deaths, such as suicide and homicide, whereas deaths caused by alcohol, CO, illicit drugs, prescription medicines, and toxic animals and plants were, in our series, almost always accidental. As concluded by Moebus and Bödeker, reducing access to pesticides has a positive effect on reducing the incidence of pesticide intoxications ( ). However, a community cluster-randomized trial of household pesticide lock-up storage in rural Asia found that locking up pesticides did not reduce suicide mortality ( ). Therefore, only banning highly toxic pesticides at the source of production is effective in reducing deaths from suicide by pesticide ingestion. Regarding CO and alcohol, safety education should be further developed, with a focus on appropriate use. For prescription medicines, it is important to emphasize following medical advice and not overdosing; the same applies to herbal medicines, which we advocate obtaining from a regular Chinese hospital. Lastly, the government should take severe measures against illicit drug-related criminal activities and enhance the status and function of human intelligence in anti-drug efforts. Regarding limitations, the data were drawn from a single forensic pathology laboratory where intoxication-related deaths are investigated and do not cover the whole of China. Nevertheless, this retrospective analysis may reflect the intoxication situation in central China to a certain extent, providing valuable information for medical examiners and police officers handling such intoxication cases and useful suggestions for improving public safety policies.
Compared with the 1999–2008 period, the percentage of intoxication-related deaths evaluated at TCMEH dropped in 2009–2021. Effective tetramine control and increased safety standards for residential gas use in China are likely responsible for the notable decline in the percentages of rodenticide and CO intoxication cases. Insecticides were the leading cause of death, followed by alcohol and illicit drugs. We propose strengthening the supervision of organophosphorus pesticides and urging pesticide regulatory departments, within their scope of responsibility, to effectively supervise and manage pesticide production, transportation, and sales. Increases in alcohol intoxication fatalities have been observed. Furthermore, amphetamine-related deaths have emerged in recent years, and the use of multiple illicit drugs has increased. This suggests that, although the government is taking severe measures against the transportation and distribution of illicit drugs, the current anti-drug intelligence mechanism may be imperfect. The emergence of some "non-contact" illicit drug transactions poses new challenges for anti-drug efforts, requiring the development of a sound and sustainable system. Forensic professionals face new challenges from evolving fatal intoxication trends, such as the toxicology of combined intoxication, methods for detecting novel poisons, and the identification of intoxication-related homicides that would otherwise go undetected. Notably, our retrospective study included two deaths from snakebites; forensic professionals should therefore be alert to minor skin lesions when conducting post-mortem examinations.
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Written informed consent was obtained from the individual(s), and minor(s)' legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.
LLih and WY searched for available studies and completed the manuscript. LLia made pictures and tables. JM, LX, HH, LLia, and LZ provided assistance for the whole research process. LQ designed the study, guided the writing of the paper, and made revisions. All authors have read and agreed to the published version of the manuscript.
Cardiomyocyte electrophysiology and its modulation: current views and future prospects
62de78f8-055e-4f9b-bd19-d8d89cb201a2
10150219
Physiology[mh]
1. Classical experiments: Silvio Weidmann (1921–2005)

The heart is the most important and prominent biological oscillator and is critical to most multicellular animal life. Its functional disruption causes death or disease. Understanding both normal and abnormal cardiomyocyte physiology is thus of fundamental scientific and clinical importance. It involves mechanisms operating at multiple cellular levels, ranging from the cell membranes and their molecular and cellular signalling machinery, through function in entire atrial and ventricular chambers and their conducting and pacing tissue, to systemic modulation by central and peripheral nervous and endocrine mechanisms. Much of this area and its application date from Silvio Weidmann's (1921–2005) pioneering experiments. This article and the Phil. Trans. R. Soc. issue it introduces, prefaced by DiFrancesco & Noble, fall close to and celebrate Weidmann's 100th birthday.

Weidmann was first to record accurate cardiomyocyte action potentials (APs), the functional basis of cardiac electrophysiological activation, in the 1950s, employing the recently invented Ling–Gerard glass microelectrodes. He demonstrated and clarified the contributions of Na + and K + currents, I Na and I K , to the initiation and conduction of excitation and its subsequent repolarization and recovery from refractoriness. Ventricular, atrial and Purkinje cardiomyocyte APs showed relatively rapid (less than 1 ms) upstrokes whose amplitude, in contrast to background resting potentials, depended upon extracellular [NaCl]. This implicated a selective transient Na + permeability reflecting a local anaesthetic-sensitive, inward voltage-dependent I Na paralleling findings in nerve. The subsequent, more gradual, AP recoveries to the resting potential varied in timescale and waveform between atria, and ventricles and Purkinje fibres with their prolonged plateau phases. Membrane impedance determinations identified the recoveries with inward rectifying rapid outward K + current, I Kr . Following recovery, Purkinje fibres additionally showed depolarizing pacemaker currents, potentially leading to re-excitation and repetitive activity. Weidmann's work then anticipated connexin gap junction-mediated AP propagation and relationships between membrane voltage, extracellular Ca 2+ and contraction.

2. Cardiac arrhythmias: a major public health problem

These early observations were key to the development of the cardiac electrophysiological field and the continuing productive and constructive dialogue between its fundamental science and clinical applications bearing on normal and abnormal cardiac activity. The latter results in the major public health problem of cardiac arrhythmias, a leading cause of clinical mortality and morbidity, second in incidence only to all cancers combined. Sinus node disorders (SND) form the major indication for pacemaker implantation worldwide. Atrial fibrillation (AF) affects 1:10 adults aged >60 years, increasing stroke incidence and all-cause mortality. Ventricular arrhythmias precipitating sudden cardiac death (SCD) are a major cause of mortality in cardiac failure and in associated metabolic, including common diabetic and ischaemic, conditions. The early cardiac electrophysiological studies led to the classical Singh–Vaughan Williams classification scheme, simultaneously classifying the physiological targets governing cardiac rhythm and the then known cardiotropic drugs ( a (i)). It provided widely useful clinical guidelines.
Here, Class I drugs targeted I Na , reducing AP phase 0 slopes and overshoots, paralleling Weidmann's findings, and varyingly affecting AP duration (APD) and effective refractory period (ERP). Class II β-adrenergic inhibitors slowed sino-atrial node (SAN) pacing and atrioventricular node (AVN) conduction. Class III voltage-gated K + channel blockers delayed AP phase 3 repolarization, lengthening ERPs. Class IV L-type Ca 2+ channel inhibitors reduced cardiac, particularly SAN and AVN, rate and conduction.

3. Modern developments in the field

Subsequent cardiac electrophysiological studies greatly advanced our understanding of events underlying pacing, electrical activity and its propagation through specialized conducting tissue into successive atrial, ventricular and conducting regions, at the molecular and cellular as well as the systems levels. These studies demonstrated and characterized extensive numbers of novel ion channel, ion transport and receptor protein molecules. Many such insights, particularly their translation to roles in normal and arrhythmic activity at the systems level, suggesting novel pharmacological and therapeutic applications, came from monogenically modified murine platforms. Murine and human hearts share dual right- and left-sided circulations, distinct structurally homologous atria and ventricles, and pacing or conducting SAN, AVN and atrioventricular (AV) bundles. They did show differences in size, heart rate, L-type Ca 2+ current ( I CaL ) and transient outward K + current ( I to ) contributions, and consequent APD. Nevertheless, major features of AP depolarization and conduction, transmural conduction velocities, relationships between APDs and ERPs, and differences in transmural APD heterogeneities remain conserved. Finally, single cardiomyocyte isolations from these preparations permitted cellular-level experimental studies.

In the current theme issue, Salvage et al., Remme, Terrar, Jung et al. and He et al. review subsequent findings emerging from such genetic platforms; Anderson et al. implicate circadian variations in sympathetic actions on pacemaker ion channel gene transcription in diurnal cardiac rate variations in wild-type (WT) murine hearts. Complementary theoretical reconstructions then predict the physiological end-effects of the changes observed (Alrabghi et al.; Hancox et al.). More recently, genetically modified induced pluripotent stem cell (iPSC) platforms have shown promise, likely as cellular rather than systems models, lacking the anatomically related in vivo conducting (Purkinje cell) and contractile (cardiomyocyte) tissue organization involved in initiating and maintaining cardiac arrhythmias. Many available human pluripotent stem cell-derived cardiomyocyte (hiPSC-CM) monolayers show immature embryonic-like, as opposed to human adult atrial/ventricular, myocardial functional and structural phenotypes, limiting their translational utility. They showed low resting membrane potentials, low/absent I K1 , low membrane capacitances, immature AP profiles and slow electric impulse propagation velocities, and their generation has primarily focused on ventricular rather than atrial phenotypes. However, Ahmad et al. describe hiPSC-CMs with AP properties and acetylcholine (ACh)-activated I K expression characteristic of atrial cells.
iPSCs have also been explored as possible models for normal and disease-related changes in ion channel expression, Ca 2+ homeostatic phenotypes, neurocardiac interactions and cardiac hypertrophic change (see: Chen et al.; Zhou et al.; Li et al. and Langa et al., respectively). Finally, direct human clinical electrophysiological studies continue to generate important scientific and translational insights into cardiac arrhythmic phenomena. Thus, recent electrocardiographic and electrical mapping studies distinguished potential roles of focal, Purkinje system activity from rotor activity in initiating and maintaining electrophysiologically and pharmacologically distinct polymorphic ventricular tachycardic (VT) or fibrillatory subtypes. These findings have potential implications for the clinical management of post-myocardial infarction sudden cardiac arrest.

This theme issue discusses novel targets and their actions on excitable activity at multiple levels of cardiac functional organization established in this subsequent work, as outlined in this introductory review, using standard texts as starting point ( b ). Thus normal and arrhythmic activity ( b (i)) immediately arises from ( b (ii)) surface membrane ion channels and their interactions underlying automaticity and pacemaking, and AP excitation, propagation and recovery (§§4 and 5 below). These membrane-level events initiate and are modulated by (iv) cellular-level feed-forward and feedback effects of excitation–contraction coupling and its Ca 2+ -mediated triggering (§6). Both these are modulated by (iii) G-protein-mediated autonomic inputs and the central nervous system circadian rhythms that these may transmit (§7). Of increasing interest are the longer-term regulatory mechanisms related to (v) metabolic feedback (§8) and other upstream target modulators (§9) causing potentially pathological electrophysiological and structural remodelling. All these regulatory events ultimately bear on surface membrane ion channel function in (ii), through which the arrhythmic outcomes emerge. These article sections are keyed to the individual articles in this Phil. Trans. theme issue.

4. Ion channels contributing to cardiomyocyte surface membrane excitation

Normal cardiac rhythm requires a normal, regular, SAN automaticity. Inward, hyperpolarization-induced cyclic-nucleotide-activated channel (HCN)-mediated I f and other ionic currents combine with electrogenic Na + /Ca 2+ exchange (NCX) contributions driven by store Ca 2+ release (§6). Together these drive a time-dependent membrane potential depolarization from background resting levels to the Ca 2+ channel threshold. The resulting excitation initiates Na + current and consequent AP excitation at the outer rim of the SAN. Donald & Lakatta review recent discoveries bearing on the coupled-clock system from the cellular level, within the context of a complex cellular SAN organization. This pacing is modulated by adrenergic or cholinergic SAN pacemaker stimulation or inhibition (§7 below). Altered SAN automaticity causing abnormal or altered AP generation can arise from SAN malfunction, SND, or altered background diastolic or resting potentials. Abnormal automaticity can also arise with abnormal AVN or Purkinje tissue pacemaker activity when spontaneous impulses are generated in pathologically partially depolarized fibres, and can even involve normally non-automatic atrial and ventricular muscle. These latter circumstances can cause an automatic, often tachycardic, firing distinct from SAN activity.
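The pacemaker mechanism just outlined can be caricatured with simple arithmetic: the diastolic depolarization slope, set largely by I f and the coupled Ca 2+ clock, determines the time taken to drift from the maximum diastolic potential to the Ca 2+ channel threshold, and hence the firing rate. A deliberately simplified sketch with illustrative values follows (a linear voltage ramp is assumed; real SAN depolarization is non-linear).

```python
def sinus_rate_bpm(slope_mV_per_s: float, mdp_mV: float = -60.0,
                   threshold_mV: float = -40.0, ap_duration_s: float = 0.15) -> float:
    """Pacemaker rate from a linearized diastolic depolarization.

    The slope (set largely by I f and the Ca2+ clock) fixes how long the
    membrane takes to drift from maximum diastolic potential (MDP) to the
    Ca2+ channel threshold; all numbers here are illustrative assumptions.
    """
    diastolic_interval_s = (threshold_mV - mdp_mV) / slope_mV_per_s
    return 60.0 / (ap_duration_s + diastolic_interval_s)

print(f"{sinus_rate_bpm(25.0):.0f} bpm")  # baseline slope
print(f"{sinus_rate_bpm(50.0):.0f} bpm")  # steeper slope, e.g. adrenergic drive
```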
The ensuing APs form the functional unit of cardiomyocyte excitable activity. These are driven by a sequence of inward ( a ) and outward ( b ) currents mediating successive rapid depolarizing (phase 0), early repolarizing (phase 1), brief atrial ( c ) and prolonged ventricular ( d ) plateau (phase 2), late repolarization (phase 3) and electrically diastolic (phase 4) phases. Inward I Na activation initiates the propagated AP phase as well as the remaining sequence of electrical events. Genetic evidence for loss or gain of I Na function correlates with the pro-arrhythmic human Brugada (BrS) and long-QT3 (LQTS3) syndromes, respectively. Recent findings reviewed here further report feedback actions on I Na activation (Salvage et al.) and potentially pro-arrhythmic late I NaL currents (Liu et al.) exerted by further, downstream, excitation–contraction coupling (§5) and metabolic events (§7). All these effects were recapitulated in loss- or gain-of-function genetic murine models affecting Nav1.5 and RyR2 function, and metabolic activation. Furthermore, electrophysiological aberrations and arrhythmic tendency in the BrS and LQTS3 models were similarly accentuated or relieved by flecainide, and ameliorated or accentuated by quinidine, findings with potential translational significance. Remme reviews complex Nav1.5 functional and distribution patterns involving particular subcellular cardiomyocyte subdomains, as well as non-canonical, non-electrogenic Nav1.5 actions with structural, potentially cardiomyopathic and pro-arrhythmic, effects. Finally, Nav1.5 does occur in other cell types, including various extracardiac tissues. Conversely, cardiomyocytes may express other than Nav1.5 subtypes.

AP conduction involves local circuit currents through connexin channels connecting adjacent cardiomyocytes. Their magnitudes are determined by maximum rates of AP depolarization, (dV/dt)max, themselves dependent upon membrane capacitance and cytosolic resistance. The resulting AP propagation produces a coherent wave of excitation followed by refractoriness, of wavelength λ. This propagates through gap junction connexin and possible ephaptic connections between successive SAN, atrial, AV, Purkinje and endocardial and epicardial ventricular cardiomyocytes. The wavelength λ is normally sufficiently long to prevent re-excitation of recovered tissue behind the wave. Abnormal conduction slowing, shortening λ, can follow functional reductions in I Na or anatomical changes altering tissue electrical resistance or the functional or anatomical conducting pathway (§7; ). These can also produce heterogeneities in refractoriness and conduction in the conducting circuit. These heterogeneities can vary with time and previous impulse activation, and produce either total or unidirectional conduction block. Finally, at the temporal rather than spatial level, ERPs extend beyond each AP. They can increase with Na + channel inhibition, delaying the point at which a critical proportion of Na + channels have recovered, or with AP prolongation. These changes potentially create re-entrant substrate perpetuating triggering events into sustained arrhythmias. These can involve spatial conduction heterogeneities, exemplified by transmural gradients across the ventricular wall, or temporal heterogeneities with abnormal AP recovery reflecting altered relative timings between AP recovery, refractoriness and repolarization reserve. Thus, discrepancies between ERP and AP recovery times occur in LQTS.
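The re-entry condition sketched above can be made concrete: the excitation wavelength is the product of conduction velocity and effective refractory period, λ = CV × ERP, and a circuit can sustain re-entry only if its path length exceeds λ. A minimal sketch, with illustrative rather than measured parameter values:

```python
def wavelength_mm(cv_mm_per_ms: float, erp_ms: float) -> float:
    """Excitation wavelength: conduction velocity x effective refractory period."""
    return cv_mm_per_ms * erp_ms

def sustains_reentry(path_length_mm: float, cv_mm_per_ms: float, erp_ms: float) -> bool:
    """A re-entrant circuit is possible only if the path outlasts the wavelength."""
    return path_length_mm > wavelength_mm(cv_mm_per_ms, erp_ms)

# Illustrative values: CV ~0.5 mm/ms and ERP ~200 ms give lambda ~100 mm;
# halving CV (e.g. with reduced I Na) halves lambda, so shorter circuits re-enter.
print(sustains_reentry(80.0, 0.5, 200.0))   # False: wavefront meets refractory tissue
print(sustains_reentry(80.0, 0.25, 200.0))  # True: slowed conduction permits re-entry
```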
Arrhythmias arising from isolated decay or block of impulse conduction can also occur in the absence of re-entrant pathways. Thus, a sino-atrial (SA) conduction block permits escape of a supraventricular or ventricular focus which generates abnormal impulses. Similar phenomena can follow delayed or blocked AV conduction. Different ion channels offer complementary contributions to AP characteristics with differing effects on heart rhythm, reflected in turn in different modes of action of particular anti-arrhythmic drugs. Drugs acting on I Na alter the AP depolarization phase 0. Of these, Class Ia drugs bind to the Nav1.5 open state with τ ≈ 1–10 s dissociation time constants, inhibiting AV conduction and increasing ERPs, additionally increasing APD by a concomitant I K block. Class Ib agents bind preferentially to the Nav1.5 inactivated state, from which their more rapid τ ≈ 0.1–1.0 s dissociation minimizes their actions through successive cardiac cycles. Class Ic drugs bind to inactivated channels with a slow τ > 10 s dissociation giving a use-dependent channel block, slowing AV conduction, but little affecting APD. A new Class Id blocks pro-arrhythmic late Na + current ( I NaL ) in LQTS3, in pathological bradycardic and ischaemic conditions, and in cardiac failure. Class Id drugs shorten APD and increase refractoriness and repolarization reserve.
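These class-dependent dissociation time constants explain use-dependent block: drug-bound channels recover between beats with time constant τ, so slow-dissociating Class Ic agents accumulate block at physiological rates while fast-dissociating Class Ib agents do not. A toy sketch under simple assumptions (a fixed fractional binding per beat and exponential unbinding in diastole; the numbers are illustrative only):

```python
import math

def steady_state_block(tau_s: float, cycle_len_s: float = 1.0,
                       bind_per_beat: float = 0.2, beats: int = 200) -> float:
    """Fraction of Na+ channels drug-blocked after repetitive pacing.

    Each beat binds a fixed fraction of unblocked channels; between beats,
    block decays exponentially with the drug's dissociation time constant tau.
    """
    b = 0.0
    for _ in range(beats):
        b += bind_per_beat * (1.0 - b)          # binding during the AP
        b *= math.exp(-cycle_len_s / tau_s)     # recovery during diastole
    return b

for label, tau in [("Class Ib, tau ~0.5 s", 0.5), ("Class Ia, tau ~5 s", 5.0),
                   ("Class Ic, tau ~15 s", 15.0)]:
    print(f"{label}: steady-state block ~{steady_state_block(tau):.0%} at 1 Hz")
```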
5. Ion channels contributing to cardiomyocyte surface membrane recovery

AP depolarization activates further channels, both initiating contraction and restoring the resting membrane potential. The consequent AP waveforms vary with cell type: atrial cells show shorter APs than ventricular cells ( c,d ). Ca 2+ channel (Cav1.2) activation, localized within the transverse tubules and detailed in the next section, contributes to the phase 2 plateau. In certain cardiomyocyte types, such as SAN and AVN cells (see §4), this rather than Nav1.5 initiates excitable activity. Ca 2+ channel abnormalities can also cause arrhythmic phenotypes. Zeng et al. associate variants of pro-arrhythmic J wave syndromes, also found with loss of Nav1.5 function, with loss-of-Ca 2+ -channel-function CACNB2b-S143F and CACNA1C-G37R mutations.

AP repolarization, ultimately restoring the resting potential, is driven by a range of outward K + currents ( b ), for which a wide range of new K + channel subtypes have been described. Of these, transient outward Kv4.3 and Kv4.2-mediated I to currents drive the early phase 1 AP repolarization terminating phase 0 depolarization. The prominent I to , together with atrial-specific Kv1.5 (KCNA5)-mediated ultra-rapid I Kur and the GIRK1- and GIRK4-mediated ACh-sensitive I KACh , results in the shorter atrial than ventricular APD. Gain-of-function Kv4.3 and Kv4.2 mutations have been implicated in AF. Alrabghi et al. model human atrial cells in computational reconstructions of atrial tissue and intact atria, to replicate reductions in APD, plateau, ERP and consequent λ, enhancing AP re-entry and facilitating AF. In ventricular myocytes, Kv11.1 (HERG or KCNH2)-mediated I Kr rapidly activates with phase 0 AP depolarization. It then rapidly inactivates over AP phases 0–2. Phase 3 repolarization then re-activates I Kr , permitting outward phase 3 and early phase 4 currents terminating the plateau. By contrast, Kv7.1 (KCNQ1)-mediated I Ks activates more slowly over phase 2, becoming a major persistent phase 3 K + conductance. Kir2.1, Kir2.2 and Kir2.3 (KCNJ2, KCNJ12 and KCNJ4) mediate inwardly rectifying I K1 . This produces a reduced K + conductance at voltages greater than −20 mV in phases 0–2 while producing outward currents with repolarization to less than −40 mV late in phase 3. It also stabilizes phase 4 diastolic resting potentials. Cardiomyocyte resting potentials are further stabilized by background K 2P 2.1 (KCNK2, expressing K 2P currents) and the normally small adenosine triphosphate (ATP)-sensitive Kir6.2 (KCNJ11) mediating I KATP . However, the latter can be activated by reduced intracellular ATP levels. Finally, Li et al. review effects of further, small-conductance Ca 2+ -activated K + (SK) channels on excitability in both normal and pathological conditions.

Loss-of-function K + channel abnormalities are associated with pro-arrhythmic long-QT syndromes (LQTS). Computational analysis (Hancox et al.) conversely implicates gain of K + channel function involving I Kr , I Ks and I K1 in short-QT syndrome (SQTS). The latter also predisposes to atrial and ventricular arrhythmias and SCD. Protein expressional and functional changes related to I Ks have been closely associated with ventricular arrhythmias. Chen et al. reveal a novel role of the ubiquitin-like-modifier leukocyte antigen F-associated transcript 10 (FAT10) in regulating K + channels by competing for Kv7.1 ubiquitination. This protects against pro-arrhythmic hypoxia-induced decreases in I Ks . FAT10 itself protects against myocardial ischaemia. Recent pharmacological targeting of a significant number of these novel K + currents includes new non-selective K + channel inhibitors and drugs directed towards the atrial-specific I Kur , I Kr and I KATP .

6. Ca 2+ homeostasis and excitation–contraction coupling

( ) summarizes the significant progress suggesting reciprocal relationships between membrane excitation and excitation–contraction coupling mechanisms ( a–d ). Transverse tubular L-type Ca 2+ current, I CaL , producing the AP phase 2 plateau ( a,b ), results in extracellular Ca 2+ entry, causing a local cytosolic [Ca 2+ ] elevation in possible Ca 2+ microdomains formed by membranes bounding the transverse tubule–sarcoplasmic reticular (T-SR) junctions. This drives feed-forward ryanodine receptor (RyR2)-mediated sarcoplasmic reticular (SR) Ca 2+ release ( d ). RyRs are additionally regulated by intracellular factors exemplified by the FK506 binding proteins, FKBP12 and FKBP12.6, though their detailed action is debated. Richardson et al. report time- and concentration-dependent effects of FKBP12 on previously FKBP12/12.6-depleted RyR2 channels, suggesting negative co-operativity in their FKBP12 binding, potentially significant in regulating RyR-mediated Ca 2+ signalling. Genetic gain of RyR2 or loss of calsequestrin function is associated with the pro-arrhythmic condition catecholaminergic polymorphic ventricular tachycardia (CPVT), experimentally recapitulated in murine hearts carrying genetically altered RyR2 or calsequestrin-2. The resulting further bulk cytosolic [Ca 2+ ] elevation ( e ) activates troponin, initiating mechanical activity. Ca 2+ release normally terminates with membrane repolarization. Cytosolic [Ca 2+ ] then returns to its resting level through cardiac SR membrane Ca 2+ -ATPase (SERCA2)-mediated Ca 2+ re-uptake and sequestration by SR calsequestrin, and through surface membrane NCX-mediated cytosolic Ca 2+ extrusion into the extracellular space in exchange for extracellular Na + , whose electrogenicity has been implicated in both abnormal rhythm and normal SAN pacing (see §4; Donald & Lakatta).
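The electrogenicity of NCX invoked above follows from its stoichiometry: on the standard assumption of 3 Na + in per Ca 2+ out, each cycle carries one net inward charge, and the exchanger's reversal potential is E_NCX = 3E_Na − 2E_Ca. A minimal sketch computing these Nernst-based quantities; the ion concentrations are typical textbook values, not measurements from the studies cited here.

```python
import math

R, T, F = 8.314, 310.0, 96485.0  # J/(mol K), K, C/mol

def nernst_mV(z: int, conc_out_mM: float, conc_in_mM: float) -> float:
    """Nernst equilibrium potential for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical resting concentrations (assumed): Na+ 140/10 mM, Ca2+ 1.8/0.0001 mM
e_na = nernst_mV(1, 140.0, 10.0)
e_ca = nernst_mV(2, 1.8, 1e-4)
e_ncx = 3 * e_na - 2 * e_ca  # reversal potential for 3Na+ : 1Ca2+ exchange
print(f"E_Na ~{e_na:.0f} mV, E_Ca ~{e_ca:.0f} mV, E_NCX ~{e_ncx:.0f} mV")
# At diastolic potentials negative to E_NCX, forward-mode exchange (Ca2+ out,
# 3 Na+ in) carries a net inward, depolarizing current - the basis of I ti.
```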
The cycles of increase followed by restoration of cytosolic Ca 2+ concentration, and therefore of contraction, are normally synchronized with the membrane events associated with the AP. Alterations in these excitation–contraction coupling processes potentially exert pro-arrhythmic effects. Among the feedback effects on their initiating membrane events ( a ), membrane potential after-depolarization events could elicit triggered activity should their amplitude be sufficient to initiate regenerative Na + or Ca 2+ channel excitation ( b ).

First, altered I CaL could predispose to pro-arrhythmic early after-depolarization (EAD) phenomena late in phase 2 or early in phase 3 of the AP, in turn causing extrasystolic membrane excitation. These events typically occur under bradycardic conditions, when altered balances of inward I Na or I Ca and outward I K prolong the AP. This permits I CaL reactivation, which in turn triggers an extrasystolic AP, potentially precipitating torsades de pointes. This is particularly likely under acquired or genetic conditions of increased APD, exemplified by experimental hypokalaemia or LQTS. Secondly, elevated diastolic cytosolic [Ca 2+ ] following abnormally increased I CaL or RyR2 Ca 2+ sensitivity can itself trigger propagating waves of spontaneous SR Ca 2+ release asynchronous with the normal membrane excitation cycles, further elevating cytosolic [Ca 2+ ] ( c ). These can result in delayed after-depolarization (DAD) events that follow full AP repolarization. These are driven by transient inward currents, I ti , resulting from an electrogenic NCX activity enhanced by the elevated cytosolic [Ca 2+ ] produced by the abnormal diastolic SR Ca 2+ release. NCX itself may contribute to SAN automaticity through its depolarizing electrogenic effects (see §4; ). Thirdly, Terrar reviews contributions from further intracellular organelles, including lysosomes and mitochondria, to timing and Ca 2+ store-based modulation involving further cADP-ribose-, nicotinic acid adenine dinucleotide phosphate (NAADP)- and inositol tris-phosphate (IP 3 )-mediated signalling to intracellular organelles. These further modulations of Ca 2+ homeostasis may contribute additional arrhythmic mechanisms, often similarly acting through NCX. Fourthly, elevated cytosolic [Ca 2+ ] may also downregulate Na + channel expression and function, compromising AP initiation and/or conduction velocity ( d ). Salvage et al. review this action, likely involving Ca 2+ /calmodulin (Ca 2+ -CaM) and apo-CaM interactions with binding sites on the III–IV linker and the C-terminal domain of Nav1.5. Such mechanisms appear to operate through a wide range of physiological situations. They could also modify the expression of other ion channels, exemplified by Li et al. in the calmodulin kinase II (CaMKII)-mediated modifications in Ca 2+ -activated K + (SK2) channel expression under conditions of cardiac hypertrophy, in addition to CaMKII actions in increasing I NaL (Liu et al.) ( e ). Finally, Zhou et al. report a further possible level of RyR2–Na + channel interaction in iPSCs carrying the clinically pro-arrhythmic RYR2-A1855D variant. Their resulting phenotype, with premature spontaneous SR Ca 2+ transients, Ca 2+ oscillations and increased APDs, was accentuated by a co-existent SCN10A-Q1362H variant that by itself conferred no specific phenotype.

These advances broadened the potential therapeutic anti-arrhythmic options. Ca 2+ channel blockers can act as non-selective surface membrane Ca 2+ channel inhibitors.
There are also phenylalkylamine and benzothiazepine Cav1.2 and Cav1.3 channel-mediated I CaL inhibitors. One RyR2 blocker, flecainide, has found recent use in the monotherapy of CPVT. Future explorations could target (a) further surface membrane L- and/or T-type Ca 2+ channels, (b) intracellular RyR-Ca 2+ channels, (c) SERCA2 activity, (d) ion exchange, particularly Na + –Ca 2+ exchange processes, and (e) phosphorylation levels of cytosolic Ca 2+ -handling proteins, including CaMKII inhibitors and p21 activated kinase 1 (Pak1) modulators (see §§7 and 9).

7. Autonomic G-protein-mediated modulation

The physiological processes of cardiac pacing, ion current activation in AP generation, and the excitation–contraction coupling that initiates myofilament activity are modulated by the cardiac autonomic, sympathetic and parasympathetic innervation ( f,g ). This releases transmitters and co-transmitters binding to receptors often coupled with guanine nucleotide-binding (G-) proteins. The latter G-protein-coupled receptors (GPCRs) activate regulatory biochemical cascades with complex and multiple inotropic, chronotropic and lusitropic effects upon cardiac function. hiPSC-derived co-culture systems permitting closer examination of neurocardiac interactions are under development. Li et al. report one such optimized system replicating many anatomical and pathophysiological features of both the individual and combined cardiomyocyte and innervating components, mimicking physiological responses in other mammalian systems.

Sympathetic nervous system terminals are widely distributed through different cardiac regions, where they release noradrenaline ( f ). Sympathetic activation also triggers adrenal medullary adrenaline release into the circulation. Both transmitters bind to surface membrane β 1 - and β 2 -adrenergic receptors. Of these, the cardiomyocytes express β 1 -adrenergic receptors whose activation triggers widespread actions. Noradrenaline binding activates the stimulatory G-protein G s . Its G α subunit binds guanosine triphosphate (GTP) and is released from the receptor and the βγ-subunit. The G α subunit then activates adenylyl cyclase, enhancing cyclic 3′,5′-adenosine monophosphate (cAMP) production and increasing cellular cAMP levels. First, cAMP combines with, and maintains open, HCN channels, particularly in SAN cells, increasing pacemaker current I f and heart rate. Secondly, cAMP activates protein kinase A (PKA), which exerts widespread strategic phosphorylation actions. The latter include exciting Nav1.5, Kv11.1 and Kv7.1, respectively mediating rapid inward I Na and subsequent outward I Kr and I Ks . PKA also enhances phosphorylation of the C-terminal tail regions of Cav1.2 L-type Ca 2+ channels, increasing their open probability and increasing both amplitude and duration of the ventricular AP plateau. It also accelerates SAN pacemaker potentials. The consequent increased net Ca 2+ entry into the cell increases the rate and force of muscle contraction in subsequent beats. PKA-mediated phosphorylation of RyR2 reduces binding of its regulatory ligand FKBP12, which normally stabilizes its closed state. This dissociation increases the Ca 2+ sensitivity of RyR2, enhancing Ca 2+ -induced Ca 2+ release. Further, PKA-mediated phosphorylation of phospholamban (PLN) relieves its inhibition of SERCA2-mediated re-uptake of previously released cytosolic Ca 2+ , enhancing diastolic SR Ca 2+ store re-loading.
Thirdly, of the isoforms of the cAMP-dependent exchange proteins directly activated by cAMP (Epac), Epac2 activates CaMKII, increasing RyR2-mediated SR Ca 2+ release. Epac1 activation induces programmes of hypertrophic, morphological and cytoskeletal changes. These accompany increased protein synthesis and induction of cardiac hypertrophic markers mediated by Ca 2+ -dependent calcineurin activation. Tomek & Zaccolo describe cellular compartmentation mechanisms in which such diverse cAMP actions might take place. In addition, different sympathetic responses amongst cardiomyocyte types are exemplified by the differing electrophysiological properties and responses to noradrenaline of pulmonary vein compared with left atrial cardiomyocytes. These may contribute to atrial ectopy.

Parasympathetic, inhibitory, nerve fibre activity slows heart rates and decreases contractile force. The underlying transmitter, ACh, acts through cardiac muscarinic (M 2 ) receptors. ACh–receptor binding activates the coupled G-protein G i2 . These actions occur in SAN, AVN or atrial myocardium in both the presence and absence, but in ventricular tissue only in the presence, of pre-existing adrenergic challenge. The G α subunit binds GTP and splits off from the receptor and its G βγ -subunit. G βγ subunits open inward rectifying I KACh or I KAdo channels, particularly in supraventricular tissue, by acting on their GIRK1 and GIRK4 components. This occurs particularly in the SAN but also in atria and ventricles. The dissociated G iα binds to and inhibits adenylate cyclase (AC). This reduces cAMP production in pacemaker cells, correspondingly reducing their I CaL and I f . G i activation may also upregulate protein phosphatase (PP2A) activity. This likely takes place through a reaction sequence involving cell division control protein 42 homologue (Cdc42)/Ras-related C3 botulinum toxin substrate 2 (rac2) and Pak1. PP2A dephosphorylates PKA-phosphorylated proteins at the same serine/threonine phosphorylation sites. It therefore reverses PKA effects on L-type Ca 2+ channels, RyR2s and the SERCA2a inhibitor PLN. The cardioprotective effects of Pak1 may thus involve increased PP2A activity additional to its potentially strategic remodelling actions discussed in §9 (He et al.; Jung et al.). Recent studies have closely examined its actions in increasing SERCA activity. Finally, adenine nucleotides act as excitatory postganglionic sympathetic co-transmitters on metabotropic P2Y receptors. The resulting adenosine (A 1 ) receptor activation activates protein kinase C (PKC) through phospholipase C-mediated production of diacylglycerol. PKC acts on voltage-gated Na + and K + channels, L-type Ca 2+ channels and RyR2.

These G-protein-linked systems show significant amplification. Activating a single β-adrenergic receptor activates many G-proteins. Each then activates an enzyme molecule, in turn producing many cAMP molecules. Each activated PKA molecule then phosphorylates several Ca 2+ channels. Correspondingly, activating one muscarinic receptor produces many G βγ subunits. This opens many GIRK1 channels.
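The cascade amplification just described can be made quantitative by multiplying per-stage gains. The gains below are purely illustrative placeholders; the text states only that each stage activates "many" downstream molecules.

```python
# Hypothetical per-stage gains in the beta-adrenergic cascade (illustrative only):
# 1 receptor -> G proteins -> cAMP (via adenylyl cyclase) -> PKA -> channels
stage_gains = {"G proteins activated per receptor": 20,
               "cAMP molecules per activated cyclase": 100,
               "channels phosphorylated per PKA molecule": 5}

amplification = 1
for stage, gain in stage_gains.items():
    amplification *= gain
    print(f"{stage}: x{gain} (cumulative x{amplification})")
# One agonist-bound receptor can thus influence thousands of effector molecules.
```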
Closer characterization of such signalling pathways in iPSC cells is a relatively new area of study. Ahmad et al. describe differentiated human iPSCs resembling an atrial phenotype, with the expected electrophysiological and Ca 2+ signalling properties and specific transcripts, responsive to adrenergic stimulation, therefore permitting studies of such effects.

Recent results implicate a normal, continuous diurnal ion channel remodelling at the level of SAN pacemaking, driven by sympathetic, though not parasympathetic, actions coupling central nervous system suprachiasmatic nuclear circadian rhythms to rhythms within the heart itself. These actions were initially attributed to beat-to-beat autonomic transmitter-mediated modulation of specific ion channel activity. A greater adrenal medullary catecholamine release and cardiac catecholamine content might then explain higher awake than asleep resting heart rates. However, recent evidence implicates a periodic transcriptional cardiac remodelling, varying ion channel abundances and their consequent ionic current densities, in such diurnal heart rate variations. Anderson et al. discuss this particularly for the HCN channel, exploring possible mechanisms for these findings. About 44% of the sinus node transcriptome, including many important cardiac ion channels, displays a circadian rhythm. This non-canonical sympathetic action was reflected in chronic but not acute pharmacological autonomic blockade inhibiting both this circadian rhythm and the related ion channel transcription. This could involve cAMP response element action promoting the key clock genes, such as Per1 and Per2.

The elaboration of adrenergic and cholinergic cardiac actions through fuller understanding of G-protein signalling allows the original Vaughan Williams Class II to be broadened to include G-protein actions in general. These have translated to therapeutic advances in the form of new selective and non-selective adrenergic antagonists, as well as adenosine receptor and cholinergic muscarinic receptor modulators. Possible future targets may arise from the numerous (approx. 150) further orphan GPCRs. There are now new non-selective β- and selective β 1 -adrenergic receptor inhibitors, muscarinic M 2 receptor inhibitors and activators, and adenosine A 1 receptor activators.

8. Cardiomyocyte energetics and excitable properties

More recently reported processes affecting longer-term cellular energetics and tissue structure remodelling are also implicated in cardiac arrhythmias. These actions complement the more established acute effects of the specific ion channels described above. They are often associated with hypoxic conditions generally, hypertrophic or fibrotic change, cardiac failure, ischaemia-reperfusion, and biochemical conditions including obesity, insulin resistance and type 2 diabetes. The resulting oxidative stress and longer-term structural, fibrotic, hypertrophic and inflammatory changes occur upstream of the membrane-level electrophysiological processes. Normal cardiomyocyte function in human hearts depends on a number of energy-intensive processes consuming kilogram quantities of ATP daily. Approximately 30–40% of this cellular ATP is expended maintaining ionic gradients and efficient Ca 2+ cycling ( a,b ). Approximately 90% of the ATP consumption is replenished by the extensive cardiomyocyte mitochondrial network. Arrhythmic disorders, particularly AF, have been associated with the metabolic stress accompanying metabolic syndrome. Animal models show abnormal mitochondrial structure early following AF induction. Cardiomyocyte mitochondria from human AF patients show increased DNA damage, structural abnormalities and evidence of impaired function. Atrial tissue from chronic AF patients also shows altered transcription of mitochondrial oxidative phosphorylation-related proteins.
Arrhythmic disorders, particularly AF, have been associated with the metabolic stress accompanying the metabolic syndrome. Animal models show abnormal mitochondrial structure early following AF induction. Cardiomyocyte mitochondria from human AF patients show increased DNA damage, structural abnormalities and evidence of impaired function. Atrial tissue from chronic AF patients also shows altered transcription of mitochondrial oxidative phosphorylation-related proteins. Decreased mitochondrial complex II/III activity has been reported in permeabilized atrial fibres from patients who developed post-operative AF, corresponding to decreased expression of the gene cluster for mitochondrial oxidative phosphorylation. Finally, right atrial tissue from cardiac surgery patients with an AF history also demonstrated downregulated electron transport chain activity and proton leakage.

Mitochondrial dysfunction destabilizes the inner membrane potentials required to drive the electron transport chain, compromising ATP generation. The consequent ATP depletion, or rising adenosine diphosphate (ADP), first increases opening probabilities of sarcolemmal K-ATP (sarcKATP) channels. This shortens APDs and consequently ERPs, predisposing to re-entrant arrhythmia. It hyperpolarizes cell membrane potentials, compromising cell excitability and AP propagation ( c ).

Secondly, excessive energetic demand, compromised vascular oxygen supply or pathological energetic disorders associated with mitochondrial dysfunction also increase reactive oxygen species (ROS) production. Normally occurring low ROS levels modulate the activity of a range of signalling molecules or act as signals themselves. These either transiently alter the activity of proteins, or produce more sustained effects through altering transcription factors and gene expression. ROS influence cardiomyocyte excitability, and atrial and ventricular arrhythmic tendency, effects reduced by allopurinol or ascorbate antioxidant challenge. Increased ROS production could underlie shortened atrial ERPs and initiation of AF with rapid pacing. Right atrial appendages of AF patients show increased markers of oxidative stress. Dysregulated ROS production may also reduce cardiac Na + channel expression. In addition, reduced (NADH) or oxidized (NAD + ) nicotinamide adenine dinucleotides, reflecting cell oxidative state, respectively inhibit and enhance Nav1.5 activity, despite normal overall Nav1.5 expression, affecting AP conduction. ROS also reduce connexin-43 (Cx43) trafficking and function and the consequent cell–cell coupling. Oxidative stress may also influence cardiomyocyte I K , sarcolemmal K ATP channels and I Ca .

Thirdly, oxidative stress may influence Ca 2+ homeostasis. ROS oxidize RyR2, increasing SR Ca 2+ leak and therefore cytosolic [Ca 2+ ] i ; such oxidation altered intracellular Ca 2+ cycling in ageing rabbit ventricular myocytes, effects reversed by a mitochondria-specific ROS scavenger. Oxidative stress also reduces SERCA-mediated Ca 2+ re-uptake. CaMKII may also be redox-sensitive, with oxidation resulting in kinase activity similar to that of auto-phosphorylated CaMKII: pharmacological CaMKII inhibition prevented H 2 O 2 -induced ventricular arrhythmias. ROS also oxidize and activate PKA. ROS may further be linked to cardiac fibrosis through fibroblast activation and production of transforming growth factor-β (TGF-β) (§9). Finally, both CaMKII and ROS could increase I NaL (Liu et al.).
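Among these mechanisms, the metabolic sensitivity of sarcKATP channels described above lends itself to a simple quantitative caricature. In the Python sketch below, open probability falls with [ATP] along a Hill-type curve and APD is mapped linearly onto the resulting current; the half-maximal concentration, Hill coefficient and APD limits are all illustrative assumptions rather than fitted values.

```python
# Schematic Hill-type model of sarcKATP open probability versus [ATP],
# and its qualitative effect on action potential duration (APD).
# K_half, the Hill coefficient and the APD mapping are illustrative assumptions.

def p_open_katp(atp_mM, k_half=0.5, hill=2.0):
    """Open probability rises as cytosolic ATP falls."""
    return 1.0 / (1.0 + (atp_mM / k_half) ** hill)

def apd_ms(atp_mM, apd_max=300.0, apd_min=120.0):
    """Schematic: more K-ATP current -> shorter APD, and hence shorter ERP."""
    return apd_max - (apd_max - apd_min) * p_open_katp(atp_mM)

for atp in (5.0, 2.0, 1.0, 0.5, 0.2):
    print(f"[ATP] = {atp:4.1f} mM  P_open = {p_open_katp(atp):.3f}  APD ~ {apd_ms(atp):5.1f} ms")
```

Running the loop shows APD collapsing as ATP falls toward the assumed half-maximal point, the qualitative behaviour underlying the re-entrant predisposition described above.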
Several transcriptional coactivators regulate mitochondrial mass and function ( d ). Of these, the peroxisome proliferator-activated receptor (PPAR) γ coactivator-1 (PGC-1) family, including PGC-1α and PGC-1β, is highly expressed in oxidative tissues, including heart, brain, skeletal muscle and kidney. Either PGC-1α or PGC-1β suffices to activate gene regulatory programmes increasing cellular energy production capacity. PGC-1 protein expression increases with a number of upstream signals linking cellular energy stores and external stimuli, including cold exposure, fasting and exercise, matching mitochondrial activity to cellular energy requirements. PGC-1s act through numerous nuclear receptor targets including PPARα, PPARβ and oestrogen-related receptor alpha (ERRα). PGC-1α also coactivates nuclear respiratory factor-1 (NRF-1) and -2 (NRF-2). The latter modulate expression of the nuclear-encoded transcription factor Tfam, essential for replication, maintenance and transcription of mitochondrial DNA. They also regulate expression of other proteins required for mitochondrial function, including respiratory chain subunits. PPARα is also a key regulator of genes involved in mitochondrial fatty acid oxidation. ERRα is an important regulator of mitochondrial energy transduction pathways, including fatty acid oxidation and oxidative phosphorylation. In cardiac cells, PGC-1α interaction with NRF-1, ERRα and PPARα also increases mitochondrial biogenesis. Forced PGC-1 expression in cultured cardiomyocytes induced expression of nuclear genes encoding mitochondrial proteins involved in other energy production pathways, including the tricarboxylic acid cycle, and nuclear and mitochondrial genes encoding components of the electron transport chain and oxidative phosphorylation complex. PGC-1 proteins, through these interactions, thus exert multi-level regulation of cellular mitochondrial function and metabolism as a whole. PGC-1s fall in obesity, insulin resistance, type II diabetes mellitus and ageing, in parallel with mitochondrial dysfunction.

Mice deficient in both Pgc-1α and Pgc-1β develop a low cardiac output state and conduction system disease, dying before weaning. Ablating either PGC-1α or PGC-1β produces a milder phenotype, permitting physiological study. Pgc-1α −/− hearts have normal baseline contractile function but develop cardiac failure with increased afterload. Pgc-1β −/− hearts showed similarly normal baseline features but blunted heart rate responses compared with WT hearts following adrenergic challenge. They also showed an increased arrhythmic propensity. Langendorff-perfused Pgc-1β −/− hearts demonstrated APD alternans and more frequent episodes of VT in response to programmed electrical stimulation. Single-cell studies revealed alterations in the expression of a number of ion channels, as well as evidence of spontaneous diastolic Ca 2+ transients, previously associated with pro-arrhythmic after-depolarizations.

Chronic studies of the effects of mitochondrial impairment on the development of pro-arrhythmic phenotypes compared young (12–16 weeks) and aged (older than 52 weeks) Pgc-1β −/− mice with age-matched WT. Following β 1 -adrenergic challenge, chronotropic incompetence in intact animals suggested SND, and a paradoxical negative dromotropic response suggested AVN dysfunction. Sharp microelectrode AP recordings in both atria and ventricles of Langendorff-perfused Pgc-1β −/− hearts during programmed electrical stimulation demonstrated arrhythmic phenotypes progressing with age. This accompanied reduced (dV/dt) max , prolonged AP latencies, reduced APD, and a consequently reduced AP wavelength ( λ ) correlating with Pgc-1β −/− arrhythmogenicity. These findings could be accounted for by loose patch-clamp demonstrations of reduced I Na but not of I K in Pgc-1β −/− atrial and ventricular preparations. Finally, the Pgc-1β −/− hearts showed accelerated fibrotic change with age (see §9).
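The arrhythmic logic of a reduced wavelength follows from the classical relation λ = CV × ERP: the shorter the wavelength, the more re-entrant circuits a given tissue mass can sustain. The sketch below applies this relation; the WT and knockout values are illustrative assumptions for demonstration, not the measured data.

```python
# Classical re-entry wavelength: lambda = conduction velocity x effective refractory period.
# The numerical values below are illustrative assumptions, not the study's measurements.

def wavelength_mm(cv_mm_per_ms, erp_ms):
    return cv_mm_per_ms * erp_ms

hearts = {
    "WT (illustrative)":         {"cv": 0.50, "erp": 60.0},  # mm/ms, ms
    "Pgc-1b -/- (illustrative)": {"cv": 0.35, "erp": 45.0},  # slower conduction, shorter ERP
}

for name, p in hearts.items():
    lam = wavelength_mm(p["cv"], p["erp"])
    print(f"{name:27s} lambda = {lam:5.1f} mm")

# A shorter wavelength leaves room for more re-entrant circuits in the same
# tissue mass, consistent with the increased arrhythmic propensity.
```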
9. Cardiac remodelling and excitable properties

Remodelling of molecular and physiological processes, as well as of cardiac structure, can occur over all timescales and involve any cardiac region(s). There have been recent suggestions implicating non-canonical sympathetic actions in normal diurnal variations in ion channel expression (§7). SAN pacemaking can also be remodelled in disease. Logantha et al. report altered SAN ion channel-, Ca 2+ -handling- and fibrosis-related gene expression and implicate these in the SAN dysfunction of a rat pulmonary arterial hypertension model. Investigations of detailed mechanisms are in their infancy. He et al. review one line of investigation exploring possible protective signalling actions of PAK1, possibly through altering Cav1.2/Cav1.3 ( I CaL )-mediated Ca 2+ entry, RyR2-mediated SR Ca 2+ release and CaMKII-mediated transcriptional regulation of SERCA2a and NCX. Conversely, Jung et al. demonstrate that PAK1 deficiency promotes atrial arrhythmogenesis under adrenergic stress conditions, likely through posttranslational and transcriptional modifications of key molecules, including RyR2 and CaMKII, critical to Ca 2+ homeostasis.

Longer-term cardiac remodelling involving anatomical, fibrotic and/or hypertrophic change can also occur in cardiac disease processes. The nature of the possible mechanisms is here exemplified by a simplified summary of angiotensin II (AngII) action through its angiotensin receptor type 1 (ATR 1 ). Although classically implicated in systemic blood pressure regulation and Na + and H 2 O homeostasis, ATR 1 activation also stimulates the inflammatory cell recruitment, angiogenesis, cellular proliferation, and accumulation of extracellular matrix (ECM) associated with cardiac hypertrophy and fibrosis. These actions may involve a local cardiac renin–angiotensin system (RAS), thought also to exist in other organs, including blood vessels, brain, kidney, liver and skin. Tissue RASs are functionally autonomous systems of known importance in fibrotic change. They also exert longer-term actions on surface electrophysiological ( a,b ) and Ca 2+ homeostatic activity ( c ), through potential actions of fibrotic and hypertrophic change on AP conduction ( d ).

ATR 1 s act through both G-protein-related (G α q/11 , G α 12/13 and G βγ ) and non-G-protein-related signalling pathways ( e ), and thence through multiple oxidase and kinase signalling pathways ( f ). These include the serine/threonine kinases CaMKIII and protein kinase C (PKC), and the mitogen-activated protein kinases (MAPK) extracellular signal-regulated protein kinase 1/2 (ERK1/2), c-Jun NH 2 -terminal kinase (JNK) and p38 mitogen-activated protein kinase (p38MAPK). Signalling can also involve receptors, including platelet-derived growth factor (PDGF), epidermal growth factor receptor (EGFR) and insulin receptors, and the non-receptor tyrosine kinases Src, Janus kinase/signal transducer and activator of transcription (JAK/STAT) and focal adhesion kinase (FAK). ATR 1 -mediated NAD(P)H oxidase activation following PKC activation leads to ROS generation, implicated in cardiomyocyte hypertrophy. The PKC activation also mediates a galectin-3-dependent fibrosis in HL-1 cells. AngII- or ROS-mediated CaMKII activation, in addition to enhancing phosphorylation of protein targets related to excitation–contraction coupling and cell survival, also did so for transcription factors driving hypertrophic and inflammatory gene expression.
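The structural endpoint of this profibrotic drive can be caricatured as a synthesis–degradation balance for ECM, with AngII-type signalling raising synthesis and suppressing degradation (the latter echoing the MMP/TIMP regulation discussed below). The Python sketch is exactly that caricature; all rate constants and drive values are illustrative assumptions.

```python
# Minimal ECM synthesis/degradation balance under a profibrotic drive:
#   d(ECM)/dt = k_syn * (1 + a*drive) - k_deg * (1 - b*drive) * ECM
# All rate constants and the 'drive' values are illustrative assumptions.

def simulate_ecm(drive, t_end=200.0, dt=0.1,
                 k_syn=1.0, k_deg=0.05, a=2.0, b=0.5):
    ecm = k_syn / k_deg            # start at the unstimulated steady state
    t = 0.0
    while t < t_end:               # simple forward-Euler integration
        synthesis = k_syn * (1.0 + a * drive)
        degradation = k_deg * (1.0 - b * drive) * ecm
        ecm += (synthesis - degradation) * dt
        t += dt
    return ecm

for drive in (0.0, 0.5, 1.0):
    print(f"profibrotic drive = {drive:.1f} -> ECM (steady state, a.u.) ~ {simulate_ecm(drive):.1f}")
```

Because the drive acts on both arms of the balance, modest signalling changes produce disproportionate ECM accumulation, one way of seeing how sustained AngII exposure translates into fibrosis.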
Activation of the MAPKs ERK1/2, p38MAPK and JNK has been implicated in cell growth and hypertrophy. It is also implicated in cardiac fibrosis through increasing gene transcription for procollagen I, procollagen III and fibronectin, and TGF-β, with TGF-β also directly activated by AngII–ATR 1 binding. The family of TGFs in turn critically regulates tissue homeostasis and repair, immune and inflammatory responses, ECM deposition, and cell differentiation and growth. TGF-β1, expressed in almost all tissues, is the most prevalent member. TGF-β1 overexpression, acting canonically through Smad and non-canonically and synergistically through ERK1/2, JNK and p38MAPK signalling, is a key contributor to fibrosis in most tissues. TGF-β1 stimulates myofibroblast differentiation and the synthesis of ECM proteins, and their preservation, by inhibiting matrix metalloproteinases (MMPs) and inducing synthesis of tissue inhibitors of metalloproteinases (TIMPs). TGF-β1 has been demonstrated to induce fibroblast proliferation, in turn leading to atrial fibrosis, SND and AF. AngII acts both by itself and in synergy with TGF-β1 to induce fibrosis; its fibrogenic effects have also been linked to its activation of TGF-β1 signalling. Amongst non-receptor tyrosine kinases, JAK-STAT signalling has been implicated in cardiac hypertrophy and remodelling under conditions of pressure overload and ischaemic pathology. Langa et al. discuss emerging data from variant cTnT-I79N +/− hiPSC-CM cells implicating upregulated Notch signalling elements, particularly in hypertrophic (HCM) and dilated (DCM) cardiomyopathy, elements potentially constituting future therapeutic targets in their own right.

Fibrotic change could be implicated in AF through its action in reducing, and increasing heterogeneities in, AP conduction velocity, thereby compromising the integrity of AP propagation wavefronts. AF also accompanies some Na + channelopathies. Therapeutic exploration within this area has thus far targeted remodelling processes rather than their consequent electrophysiological properties. This is exemplified by the now-available angiotensin-converting enzyme inhibitors and angiotensin receptor blockers, aldosterone receptor antagonists, 3-hydroxy-3-methyl-glutaryl-CoA reductase inhibitors (statins), and n-3 (ω−3) polyunsaturated fatty acids. Nevertheless, anti-arrhythmic drugs in this class may be possible. Thus PAK1, a key cardiomyocyte regulator of ion channel activity, Ca 2+ homeostasis and cardiac contractility, may offer cardioprotective actions through inhibiting maladaptive, pro-arrhythmic, hypertrophic remodelling and progression in cardiac failure, actions of possible therapeutic utility (He et al.; see also §§6 and 7).

10. Cycles of physiological discovery and their clinical translation

The developments outlined here extend Weidmann's initial key electrophysiological studies and Vaughan Williams's classification of cardiac drugs and of physiological and therapeutic targets, and have resulted in the development of novel therapeutic classification schemes. The updating by a Working Group of the European Society of Cardiology provided a more complete, flexible pathophysiological framework predicting pro-arrhythmic circumstances, often termed the Sicilian Gambit. However, this did not seek or find extensive use as a formal classification scheme.
A more recent reclassification of pharmacological targets and anti-arrhythmic agents related the more recently characterized ion channels, transporters, receptors, intracellular Ca 2+ -handling and cell-signalling molecules to their physiological, and potential and actual therapeutic, actions. These were organized by strategic aspects of cardiac electrophysiological function, paralleling the coverage in this Phil. Trans. B theme issue ( a (ii), b ). In so doing it was possible also to classify both existing and potential cardiac drugs, and currently acceptable and potential sites of drug action. This classification also sought to facilitate future development of investigational new anti-arrhythmic drugs. It expanded and updated the established Singh–Vaughan Williams classes, in particular introducing target classes encompassing the longer-term processes in §§8 and 9. It added I NaL components, with implications for long QT syndrome type 3 (LQTS3), to Class I. A broadened Class II dealt more fully with G-protein signalling, and an expanded Class III incorporated subsequently discovered K + channel subtypes. A much increased Class IV encompassed recent findings on Ca 2+ homeostasis and excitation–contraction coupling. New classes recognized SAN automaticity (Class 0), mechanically sensitive (Class V) and gap junction channels (Class VI), and longer-term energetic changes and structural remodelling (Class VII). The revised scheme thus provided a simple working model for cardiomyocyte function in which arrhythmia followed abnormal cardiac electrophysiological activation, linking particular therapies with then-known mechanistic targets (referenced in ).

The physiological sciences have long worked in a succession of cycles involving mutually reinforcing interactions between laboratory and clinic. Identification of a clinical problem, particularly its aetiology, epidemiology, diagnosis and natural history, or of novel physiological phenomena, prompts development of experimental models for the related disease process. These can augment mechanistic and clinically translatable understanding that remains incomplete even for common and important arrhythmic conditions such as AF (Hu et al.). The resulting physiological insights then prompt clinical tests and explorations for management and treatment. In turn, feedback of the outcomes of these continues the iterative cycles of experimental and clinical testing, activities currently termed translational medicine, for which some current efforts have been recently summarized (see supplementary file in ). The particular cycle of efforts represented in this present issue might then prompt further attempts at usefully determining physiological targets for investigational new drugs and other interventions directed at cardiac arrhythmic disease.
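As a compact reference, the broadened class structure just described can be tabulated directly; the Python sketch below merely restates the classes named above as a data structure (the HCN example under Class 0 is an inferred illustration, not stated in the classification itself).

```python
# Compact summary of the modernized anti-arrhythmic drug classes named in the text.

MODERNIZED_CLASSES = {
    "0":   "SAN automaticity (e.g. HCN/pacemaker current modulators; inferred example)",
    "I":   "Na+ channels, expanded to include late Na+ current (I_NaL; LQTS3)",
    "II":  "Autonomic/G-protein signalling (adrenergic, muscarinic, adenosine)",
    "III": "K+ channels, including subsequently discovered subtypes",
    "IV":  "Ca2+ homeostasis and excitation-contraction coupling",
    "V":   "Mechanically sensitive channels",
    "VI":  "Gap junction channels",
    "VII": "Longer-term energetic changes and structural remodelling",
}

for cls, target in MODERNIZED_CLASSES.items():
    print(f"Class {cls:>3}: {target}")
```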
Guidelines for Qualifications of Neurodiagnostic Personnel: A Joint Position Statement of the American Clinical Neurophysiology Society, the American Association of Neuromuscular & Electrodiagnostic Medicine, the American Society of Neurophysiological Monitoring, and ASET—The Neurodiagnostic Society
1. Neurodiagnostic Assistant (NDA)
○ Job responsibilities (Tables and )
Continuously monitors patients undergoing cEEG recording for safety, either in room or via remote video monitoring; possesses knowledge of monitoring system and camera controls; alerts nursing and/or technologist staff when clinical seizures or other paroxysmal events occur; may communicate with patients and bedside staff to obtain information about events; and may document observations. The NDA is not qualified to analyze EEG data. May assist ND technologists as needed (restocking supplies, electrode removal, disinfection, etc.). Completes hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues.
○ Education/certification
High school diploma or equivalent. Successful completion of the ASET online course, LTM 100, titled Introduction to LTM for EMU Personnel.
○ Experience
No previous experience required; no less than 20 hours of observation in an EMU or ND laboratory under the direction of a credentialed ND technologist (R. EEG T., CLTM, or NA-CLTM). (The registries for EEG and EP, and the certifications for IONM, LTM, ANS testing, and MEG are registered by ABRET—Neurodiagnostic Credentialing and Accreditation as follows: R. EEG T., R. EP T., CNIM, CLTM, CAP, and CMEG.) Competency assessments include, but are not limited to, recognition of clinical seizures and other clinical paroxysmal events, ictal testing procedures, measures to reduce risk of falls, and seizure first aid.
○ Supervision (Table )
General technical supervision by an ND Technologist III or above.
○ Ongoing education/maintenance of competency
Should attend relevant educational offerings and be required to demonstrate ongoing competence.

2. Neurodiagnostic Technologist I
(Grandfather clause*: Any ND technologist practicing in the ND field before December 31, 2021, shall be considered grandfathered with respect to ND education and shall be deemed to have met the existing ND education requirement as outlined in Section 3 (Table 2).)
○ Job responsibilities
This is a transitional position, and a new hire is expected to obtain credentials within 5 years. Performs routine testing under supervision; writes a descriptive technical analysis for QA purposes only. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues.
○ Education/certification
Associate degree or higher is preferred, or enrollment in a CAAHEP-accredited ND program. For NCS Technologist I, may have 6 months of personal supervision training under a Technologist III or higher with direct supervision of an EDX physician.
○ Experience
No specific previous experience required; must meet hospital standards for all patient care workers. Competencies should at minimum include those specified by ASET's National Competency Skill Standards, AANEM's skill standards for NCS, and/or ABEM's eligibility and application requirements.
○ Supervision (Table )
Direct technical supervision by an ND Technologist III or above is required. May be permitted to perform routine testing under indirect technical supervision after successful completion of all required competencies as established by a Technologist III or higher and the laboratory medical and technical supervisors. Regular quality assessments of technical skills must be performed and documented at least yearly.
For EEG, EPs, and ANS testing, works under indirect supervision of an interpreting provider who can be immediately present either electronically or in person. For NCS, works under personal physician supervision.
○ Ongoing education/maintenance of competency
Should attend relevant in-house educational offerings and be required to demonstrate ongoing competence through an in-house developed program. Should obtain a minimum of 15 hours of education in ND each year covering all modalities performed by the technologist.

3. Neurodiagnostic Technologist II
○ Job responsibilities
This is a transitional position, and new hires should obtain a credential within 3 years of hire. Performs routine testing under general supervision; writes a technical descriptive analysis for QA purposes only. For NCS Technologist II, 12 months of full-time (or equivalent) practical training in performing NCS under direct supervision of an EDX physician. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues.
○ Education/certification
Meets eligibility requirements set by credentialing bodies, i.e., ABRET, AAET, and/or ABEM, to take a credentialing examination. Associate degree or higher is preferred, or graduate of a CAAHEP-accredited ND program.
○ Experience
Twelve or more months of experience working in a patient care environment, with supervised experience in performing the primary testing modality. Competencies should at minimum include those specified by ASET's National Competency Skill Standards and/or AANEM's skill standards for NCS, as appropriate.
○ Supervision (Table )
General technical supervision. Reports to an ND Technologist III or above. Works under supervision of an interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. Regular quality assessments of technical skills must be performed and documented at least yearly.
○ Ongoing education/maintenance of competency
A minimum of 15 credits should be obtained every 3 years, covering all modalities performed by the technologist.

4. Neurodiagnostic Technologist III
○ Job responsibilities
Performs routine as well as more advanced testing (per program guidelines); recognizes clinically significant events and patterns; follows policy and procedures regarding critical test results; communicates with team members; writes a technical descriptive analysis. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues.
R. EEG T.—Performs clinical EEG in the adult, pediatric, and neonatal populations. Also performs studies in ICUs.
R. EP T.—Demonstrates proficiency in the acquisition and recognition of basic EP waveforms relevant to the EP modality being tested. Includes VEP, BAEP, and SSEP.
R.NCS.T. or CNCT—Performs NCS; recognizes clinically significant events and follows facility policy and procedures regarding critical test results.
CAP—Performs basic and advanced ANS testing procedures independently with a high degree of technical proficiency; recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them; and describes normal and abnormal clinical manifestations observed during the testing.
○ Education/certification
ABRET, AAET, or ABEM credential required.
Associate degree or higher is preferred, or graduate of a CAAHEP-accredited ND program.
○ Experience
Meets qualifications and requirements of Technologist II, is credentialed, and meets all education requirements set forth by ABRET, AAET, or ABEM.
○ Supervision (Table )
Works under general technical supervision as specified in the departmental policy and procedure manual. Regular quality assessments of technical skills must be performed and documented at least yearly. Works under supervision of an interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision.
○ Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years, covering all modalities performed by the technologist. This is a minimum requirement and is superseded by individual credential requirements as set forth by ABRET, AAET, and ABEM.

5. IONM Neurodiagnostic Technologist I
○ Job responsibilities
This is a trainee-level position and is considered transitional. It is expected that new hires will obtain CNIM certification within 5 years. Helps set up monitoring equipment while assuring patient safety. Communicates effectively with team members. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues.
○ Education/certification
R. EEG T. or R. EP T. or a bachelor's degree.
○ Experience
Six or more months of experience working in a patient care environment. For individuals entering the field with a bachelor's degree, patient experience requirements will be determined by their employer.
○ Supervision (Table )
Requires direct technical supervision. Works under supervision of an interpreting provider who can be immediately present either electronically or in person. Regular quality assessments of technical skills must be performed and documented at least yearly.
○ Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements.

6. Neurodiagnostic Technology Specialist I
○ Job responsibilities
Includes all of those required for a Neurodiagnostic Technologist III but exhibits additional critical thinking skills. Able to recognize critical values in critically ill patients of all ages and report the values to the appropriate medical personnel.
○ Education/certification
Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree is preferred. Current credentials required from ABRET, AAET, or ABEM.
○ Experience
Meets all requirements of experience and qualifications as specified for Technologist level III in the ND field, plus an additional 1 year of experience in one of the advanced modalities listed below in Sections 6a–6e.
○ Supervision (Table )
Works under general technical supervision. Works under supervision of an interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. Regular quality assessments of technical skills must be performed and documented at least yearly.
○ Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and will be superseded by individual credential requirements and/or maintenance of certification requirements.

6a. Neurodiagnostic Technology Specialist I LTME (R. EEG T.)
Specific experience
R. EEG T. Three or more years of experience in the ND field, including 1 year of experience in LTM for epilepsy.
Specific job responsibilities
Recognizes critical values, significant clinical events, and EEG patterns, and reports them to the appropriate medical personnel. Prepares, organizes, and summarizes data for physician review.

6b. Neurodiagnostic Technology Specialist I ICU cEEG (R. EEG T.)
Specific experience
R. EEG T. Three or more years of experience in the ND field, including 1 year of experience in ICU/cEEG monitoring.
Specific job responsibilities
Recognizes significant clinical events and EEG patterns; provides alerts as detailed in the departmental policy and procedure manual. Prepares, organizes, and summarizes data for physician review.

6c. Neurodiagnostic Technology Specialist I IONM (CNIM)
Specific experience
CNIM©. Minimum of 1 year of experience in an IONM setting.
Specific job responsibilities
Able to apply electrodes and obtain high-quality waveforms independently. Able to recognize changes and communicate them to the team as specified in the departmental policy and procedure manual. Able to troubleshoot common problems in IONM recordings.

6d. Neurodiagnostic Technology Specialist I NCS (R.NCS.T. or CNCT)
Specific experience
Bachelor's degree preferred. CNCT or R.NCS.T. required, plus training in performing advanced NCS. A minimum of 4 years as CNCT or R.NCS.T. performing NCS in the patient setting, with at least a total of 5 years of experience in performing NCS; may have experience in the ICU. Technologists may perform pediatric studies.
Specific job responsibilities
Able to perform basic and advanced NCS procedures independently, including pediatric NCS, repetitive nerve stimulation, and autonomic studies, with a high degree of technical proficiency; can perform studies in routine and ICU settings; with additional training may perform neuromuscular ultrasound. Recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them. Describes normal and abnormal clinical manifestations observed during the testing. Uses critical thinking and clinical expertise to determine the need for further NCS testing as needed to assist with interpretation.

6e. Neurodiagnostic Technology Specialist I MEG (CMEG-eligible)
Specific experience
Meets CMEG examination requirements set forth by ABRET, including completion of a MEG certificate program. Three or more years of experience in the field of ND, including at least 6 months of supervised clinical and hands-on experience in an active MEG center. Experience of 75 MEGs for epilepsy; knows the 10–20 International System of Electrode Placement. Twenty-five MEG evoked potentials, including three or more of the five EP scans: auditory, language evoked, motor evoked, sensory evoked, and visually evoked. Experience troubleshooting the system, including filling the liquid helium MEG system.
Specific job responsibilities
Recognizes significant clinical events and EEG patterns; demonstrates competency in operational routines, including helium filling (if applicable), tuning procedures (as applicable), standard testing procedures, troubleshooting, artifact prevention and elimination, and data storage, and sufficient understanding of source localization to preprocess routine clinical data for analysis by a physician magnetoencephalographer.
7. Neurodiagnostic Technology Specialist II
○ Job responsibilities
Generally similar to the Neurodiagnostic Technology Specialist I descriptions, but provides more detailed preliminary reports and more detailed data review (as specified in departmental policy and procedures) to the interpreting provider. Able to provide a higher level of teaching and training for other technologists.
○ Education/certification
Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree is preferred.
○ Experience
Minimum of 5 years of experience, of which 3 years are postcredential. NCS Specialist II requires a minimum of 5 years as a CNCT or R.NCS.T., with 6 years of experience, including ICU experience. Advanced modality requirements for experience and qualifications are listed below in Sections 7a–7e.
○ Supervision (Table )
Works under general technical supervision as specified in the departmental policy and procedure manual. Works under supervision of an interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision.
○ Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and may be superseded by the requirements of credentialing boards.

7a. Neurodiagnostic Technology Specialist II LTME (Long-Term Video EEG Monitoring) (CLTM)
Specific education/certification
CLTM©
Specific job responsibilities
Assists in development of, and monitoring of adherence to, policies and procedures for LTME; assists other ND technologists in LTME.

7b. Neurodiagnostic Technology Specialist II ICU/cEEG (Continuous EEG Monitoring in the Intensive Care Unit) (CLTM)
Specific education/certification
CLTM©
Specific job responsibilities
Assists in development of, and monitoring of adherence to, policies and procedures for ICU/cEEG; assists other ND technologists in ICU/cEEG.

7c. Neurodiagnostic Technology Specialist II IONM (CNIM)
Specific education/certification
CNIM©
Specific job responsibilities
Assists in development of, and monitoring of adherence to, policies and procedures for IONM; assists other ND technologists in IONM.

7d. Neurodiagnostic Technology Specialist II NCS (R.NCS.T. or CNCT)
Specific education/certification
Meets all qualifications of NCS Specialist I. Bachelor's degree required. A minimum of 5 years as a CNCT or R.NCS.T. performing NCS in the patient setting, with at least a total of 6 years of experience in performing NCS; may have ICU experience. (Grandfather clause: Technologists who do not hold a bachelor's degree or higher and who meet all the requirements of an NCS Specialist I may be considered for NCS Specialist II if they have a minimum of 10 years of continuous experience in performing NCS, a minimum of 8 years as a CNCT or R.NCS.T., a minimum of three faculty engagements in the NCS field, and at least two reference letters from ABEM physicians (Table 2).)
Specific job responsibilities
Assists in development of, and monitoring of adherence to, policies and procedures for NCS. Demonstrated ability to train others in the principles and practice of NCS, including technologists, residents, and fellows.

7e. Neurodiagnostic Technology Specialist II MEG (CMEG)
Specific education/certification
CMEG. Three or more years of experience in the ND field, specifically EEG, and 2 years of experience in MEG.
Specific job responsibilities
Performs digitization for co-registration to MRI; performs initial MEG spontaneous recording with concurrent EEG recording; understands placement and recording of evoked field trials (SEF, VEF, MEF, AEF, and LEF); implements nontraditional activation procedures as required (or as ordered by the attending physician); performs initial filtering and review of MEG/EEG data; performs preprocessing and localization of interictal activity; reviews initial localization with the physician; localizes evoked field data (for review by the physician); and archives and retrieves MEG data.

8. NeuroAnalyst (Formerly Advanced Long-Term EEG Monitoring Analyst) (CLTM, with NA-CLTM Preferred)
○ Job responsibilities
Monitors (on-site or remotely), evaluates, annotates, and classifies ictal, interictal, and paroxysmal events from EEG/video data. Recognizes physiologic and nonphysiologic artifacts. Writes detailed descriptions of EEG patterns, seizure semiology, and ictal and interictal abnormalities, and selects representative EEG samples. Acts as a physician extender in collaboration with the supervising physician and other health care staff. If the NeuroAnalyst is working in an EMU, they must be able to perform the following duties: all duties and responsibilities, including typical and special considerations, for routine and advanced EEG/ECoG; extensive knowledge in neuroanesthesia and its application to neuromonitoring; and all aspects of invasive implants preoperatively, intraoperatively, and postoperatively, including, but not limited to, electrode setup, montage creation/verification, troubleshooting, hook-up and discontinuation, and stimulation for cortical mapping.
○ Education/certification
Holds credentials in EEG (R. EEG T.) and LTM (CLTM), with the NeuroAnalyst (NA-CLTM) credential preferred. Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree is preferred.
○ Experience
Minimum of 5 years of experience in LTM in the ambulatory setting, EMU, and/or critical care, postcertification in LTM.
○ Supervision (Table )
Works under general supervision of the neurodiagnostic technical lab supervisor or the neurodiagnostic lab director and the interpreting physician.
○ Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements.

9. Neurodiagnostic Technical Lab Supervisor
○ Overview
Each laboratory requires technical supervision. These qualifications refer only to the issues specifically related to supervision of technical activities. The laboratory supervisor may take on additional responsibilities as dictated by hospital administrative policies and organization.
○ Job responsibilities
Provides direct supervision and education to other technologist levels; oversees day-to-day operations; is responsible for maintaining policies and procedures; and handles QA program development and implementation in conjunction with the medical and technical laboratory directors.
○ Education/certification
Must have a minimum of one credential in ND technology, two or more preferred, in the area supervised. Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree is preferred.
○ Experience
Minimum of 5 years of experience in ND.
○ Supervision (Table )
Works under the neurodiagnostic technical lab director and with the medical director.
For clinical studies, works under supervision of an interpreting provider who can be immediately present either electronically or in person.
○ Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements.

10. Neurodiagnostic Education Specialist
○ Overview
Functions in the role of educator, facilitator, change agent, consultant, and leader for professional development.
○ Job responsibilities
Designs and implements competency and educational activities for ND personnel, including annual competency programs, orientation, continuing education, and professional development within a collaborative practice framework. Develops new employees to meet job requirements. Assists those who are not credentialed in preparing for board examination. Coordinates continuing education and competency activities for staff.
○ Education/certification
Graduate of an accredited baccalaureate program, preferably in ND, or higher education. Must have a minimum of one ND-related credential, two or more preferred. The credential should be specific to the modality for which education is being provided.
○ Experience
Minimum of 5 years of experience in ND; previous teaching experience preferred.
○ Supervision (Table )
Works under the neurodiagnostic technical lab director.
○ Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements.

11. Neurodiagnostic Technical Lab Director
○ Overview
This position can be held either by an ND professional with additional management training or experience, or by a non-ND manager, typically with experience in other diagnostic services. There are situations in which the administrative leadership of the CNP department may not, for the purposes of timekeeping, recordkeeping, and basic personnel management, have specific ND technology training. In that case, there must be a technologist at the level of Neurodiagnostic Technologist III or above who can provide technical supervision.
○ Job responsibilities
Works with hospital administration and the laboratory medical director to make personnel and budgetary decisions. Involved with marketing efforts. Serves as a liaison across departments when necessary. May also assume responsibility for productivity and financial viability, patient safety, and accreditation of the laboratory, among other high-level functions that contribute to the success of the department in support of the employer's mission.
○ Education/certification
A minimum of a bachelor's degree in health sciences; if the job description includes performing ND studies, must have at least one ND credential.
○ Experience
Minimum of 5 years of experience; 3 years of previous supervisory experience is recommended.
○ Supervision (Table )
Works with hospital administration and the medical director. If the job description includes performing clinical ND studies, works under general supervision of an interpreting provider who can be immediately present either electronically or in person.
○ Ongoing education/maintenance of competency
A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by other individual credential requirements.
○Job responsibilities (Tables and ) Continuously monitors patients undergoing cEEG recording for safety, either in room or via remote video monitoring; possesses knowledge of monitoring system and camera controls; alerts nursing and/or technologist staff when clinical seizures or other paroxysmal events occur; may communicate with patients and bedside staff to obtain information about events; and may document observations. The NDA is not qualified to analyze EEG data. May assist ND technologists as needed (restocking supplies, electrode removal, disinfection, etc). Completes hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification High school diploma or equivalent. Successful completion of the ASET online course, LTM 100, titled Introduction to LTM for EMU Personnel. ○Experience No previous experience; no less than 20 hours of observation in an EMU or ND laboratory under the direction of a credentialed ND technologist (R. EEG T., CLTM, or NA-CLTM) (The registries for EEG and EP, and the certifications for IONM, LTM, ANS testing, and MEG are registered by ABRET—Neurodiagnostic Credentialing and Accreditation as follows: R. EEG T., R. EP T., CNIM, CLTM, CAP, and CMEG). Competency assessments including, but not limited to, recognition of clinical seizures and other clinical paroxysmal events, ictal testing procedures, measures to reduce risk of fall, and seizure first aid. ○Supervision (Table ) General technical supervision by an ND Technologist III or above. ○Ongoing education/maintenance of competency Should attend relevant educational offerings and be required to demonstrate ongoing competence. ○Job responsibilities This is a transitional position, and a new hire is expected to obtain credentials within 5 years. Performs routine testing under supervision; writes a descriptive technical analysis for QA purposes only. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification Associate degree or higher is preferred or enrollment in a CAAHEP-accredited ND program. , For NCS Technologist I, may have 6 months of personal supervision training under a Technologist III or higher with direct supervision of EDX physician. ○Experience No specific previous experience required; must meet hospital standards for all patient care workers. Competencies should at minimum include those specified by ASET's National Competency Skill Standards, AANEM's skill standards for NCS, and/or ABEM's eligibility and application requirements. ○Supervision (Table ) Direct technical supervision by a ND Technologist III or above is required. May be permitted to perform routine testing under indirect technical supervision after successful completion of all required competencies as established by a Technologist III or higher and the laboratory medical and technical supervisors. Regular quality assessments of technical skills must be performed and documented at least yearly. For EEG, EPs, and ANS testing, works under indirect supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under personal physician supervision. 
○Ongoing education/maintenance of competency Should attend relevant in-house educational offerings and be required to demonstrate ongoing competence through an in-house developed program. Should obtain a minimum of 15 hours of education in ND each year covering all modalities performed by the technologist. ○Job responsibilities This is a transitional position and new hires should obtain credential within 3 years of hire. Performs routine testing under general supervision; writes a technical descriptive analysis for QA purposes only. For NCS Technologist II, 12 months of full-time (or equivalent) practical training in performing NCS under direct supervision of EDX physician. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification Meets eligibility requirements set by credentialing bodies, i.e., ABRET, – AAET, and/or ABEM, to take a credentialing examination. Associate degree or higher is preferred, or graduate of a CAAHEP-accredited ND program. , ○Experience Twelve or more months of experience working in a patient care environment with supervised experience in performing primary testing modality. Competencies should at minimum include those specified by ASET's National Competency Skill Standards and/or AANEM's skill standards for NCS, as appropriate. ○Supervision (Table ) General technical supervision. Reports to ND Technologist III or above. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 15 credits should be obtained every 3 years, covering all modalities performed by the technologist. ○Job responsibilities Performs routine, as well as more advanced testing (per program guidelines); recognizes clinically significant events and patterns; follows policy and procedures regarding critical test results; communicates with team members; writes a technical descriptive analysis. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. R. EEG T. —Performs clinical EEG in the adult, pediatric, and neonatal populations. Also performs studies in ICUs. R. EP T. —Demonstrates proficiency in the acquisition and recognition of basic EP waveforms relevant to EP modality being tested. Includes VEP, BAEP, and SSEP. R.NCS.T. or CNCT , —Performs NCS; recognizes clinically significant events and follows facility policy and procedures regarding critical test results. CAP —Performs basic and advanced ANS testing procedures independently with a high degree of technical proficiency; recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them; and describes normal and abnormal clinical manifestations observed during the testing. ○Education/certification ABRET, AAET, or ABEM credential required. Associate degree or higher is preferred, or graduate of a CAAHEP-accredited ND program. , ○Experience Meets qualifications and requirements of Technologist II, is credentialed, and meets all education requirements set forth by ABRET, AAET, or ABEM. 
○Supervision (Table ) Works under general technical supervision as specified in departmental policy and procedure manual. Regular quality assessments of technical skills must be performed and documented at least yearly. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years covering all modalities performed by the technologist. This is a minimum requirement and is superseded by individual credential requirements as set forth by ABRET, AAET, and ABEM. ○Job responsibilities This is a trainee-level position and is considered transitional. It is expected that new hires will obtain CNIM certification within 5 years. Helps set up monitoring equipment while assuring patient safety. Communicates effectively with team members. Has hospital training to alert supervisor and/or activate hospital systems, such as rapid response, cardiac arrest, etc., per established protocols when encountering patient clinical issues. ○Education/certification R. EEG T. or R. EP T. or a bachelor's degree ○Experience Six or more months of experience working in a patient care environment. For individuals entering the field with a bachelor's degree, patient experience requirements will be determined by their employer. ○Supervision (Table ) Requires direct technical supervision. Works under supervision of interpreting provider who can be immediately present either electronically or in person. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements. ○Job responsibilities Includes all of those required for a Neurodiagnostic Technologist III but exhibits additional critical thinking skills. Able to recognize critical values in critically ill patients of all ages and report the values to the appropriate medical personnel. ○Education/certification Associate degree or graduate of a CAAHEP-accredited ND program, , bachelor's degree is preferred. Current credentials required from ABRET, AAET, or ABEM. ○Experience Meets all requirements of experience and qualifications as specified in Tech level III in the ND field that includes an additional 1 year of experience in one of the advanced modalities listed below in Sections 6a–6e. ○Supervision (Table ) Works under general technical supervision. Works under supervision of interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision. Regular quality assessments of technical skills must be performed and documented at least yearly. ○Ongoing education/maintenance of competency A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and will be superseded by individual credential requirements and/or maintenance of certification requirements. 6a. Neurodiagnostic Technology Specialist I LTME (R. EEG T.) Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in LTM for epilepsy. Specific job responsibilities Recognizes and reports critical values to the appropriate medical personnel, significant clinical events, and EEG patterns. Prepares, organizes, and summarizes data for physician review. 
6b. Neurodiagnostic Technology Specialist I ICU cEEG (R. EEG T.) Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in ICU/cEEG monitoring. Specific job responsibilities Recognizes significant clinical events and EEG patterns; provides alerts as detailed in departmental policy and procedure manual. Prepares, organizes, and summarizes data for physician review. 6c. Neurodiagnostic Technology Specialist I IONM (CNIM) Specific experience CNIM © Minimum of 1 year of experience in an IONM setting. Specific job responsibilities Able to apply electrodes and obtain high-quality waveforms independently. Able to recognize changes and communicate such with team as specified in the departmental policy and procedure manual. Able to troubleshoot common problems in IONM recordings. 6d. Neurodiagnostic Technology Specialist I NCS (R.NCS.T. or CNCT) Specific experience Bachelor's degree preferred. CNCT or R.NCS.T. required, plus training in performing advanced NCS. A minimum of 4 years as CNCT or R.NCS.T. performing NCS in the patient setting, with at least a total of 5 years of experience in performing NCS and may have experience in the ICU. Technologists may perform pediatric studies. Specific job responsibilities Able to perform basic and advanced NCS procedures independently, including pediatric NCS, repetitive nerve stimulation, and autonomic studies with a high degree of technical proficiency; can perform studies in routine and ICU settings; with additional training may perform neuromuscular ultrasound. Recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them. Describes normal and abnormal clinical manifestations observed during the testing. Uses critical thinking and clinical expertise to determine the need for further NCS testing as needed to assist with interpretation. 6e. Neurodiagnostic Technology Specialist I MEG (CMEG-eligible) Specific experience Meets CMEG examination requirements set forth by ABRET, including completion of MEG certificate program. Three or more years of experience in the field of ND, which includes at least 6 months of supervised clinical and hands-on experience in an active MEG center. Experience of 75 MEGs for epilepsy; know the 10 to 20 International System of Electrode Placement. Twenty-five MEG evoked potentials including three or more of the five EP scans: auditory, language evoked, motor evoked, sensory evoked, and visually evoked. Experience to trouble shoot the system, including filling liquid helium MEG system. Specific job responsibilities Recognizes significant clinical events and EEG patterns; demonstrates competency in operational routines, including helium filling (if applicable), tuning procedures (as applicable), standard testing procedures, troubleshooting, artifact prevention and elimination, and data storage, and sufficient understanding of source localization to preprocess routine clinical data for the analysis by a physician magnetoencephalographer. Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in LTM for epilepsy. Specific job responsibilities Recognizes and reports critical values to the appropriate medical personnel, significant clinical events, and EEG patterns. Prepares, organizes, and summarizes data for physician review. Specific experience R. EEG T. Three or more years of experience in the ND field that includes 1 year of experience in ICU/cEEG monitoring. 
Specific job responsibilities Recognizes significant clinical events and EEG patterns; provides alerts as detailed in departmental policy and procedure manual. Prepares, organizes, and summarizes data for physician review. Specific experience CNIM © Minimum of 1 year of experience in an IONM setting. Specific job responsibilities Able to apply electrodes and obtain high-quality waveforms independently. Able to recognize changes and communicate such with team as specified in the departmental policy and procedure manual. Able to troubleshoot common problems in IONM recordings. Specific experience Bachelor's degree preferred. CNCT or R.NCS.T. required, plus training in performing advanced NCS. A minimum of 4 years as CNCT or R.NCS.T. performing NCS in the patient setting, with at least a total of 5 years of experience in performing NCS and may have experience in the ICU. Technologists may perform pediatric studies. Specific job responsibilities Able to perform basic and advanced NCS procedures independently, including pediatric NCS, repetitive nerve stimulation, and autonomic studies with a high degree of technical proficiency; can perform studies in routine and ICU settings; with additional training may perform neuromuscular ultrasound. Recognizes physiologic and nonphysiologic artifacts and takes appropriate steps to eliminate them. Describes normal and abnormal clinical manifestations observed during the testing. Uses critical thinking and clinical expertise to determine the need for further NCS testing as needed to assist with interpretation. Specific experience Meets CMEG examination requirements set forth by ABRET, including completion of MEG certificate program. Three or more years of experience in the field of ND, which includes at least 6 months of supervised clinical and hands-on experience in an active MEG center. Experience of 75 MEGs for epilepsy; know the 10 to 20 International System of Electrode Placement. Twenty-five MEG evoked potentials including three or more of the five EP scans: auditory, language evoked, motor evoked, sensory evoked, and visually evoked. Experience to trouble shoot the system, including filling liquid helium MEG system. Specific job responsibilities Recognizes significant clinical events and EEG patterns; demonstrates competency in operational routines, including helium filling (if applicable), tuning procedures (as applicable), standard testing procedures, troubleshooting, artifact prevention and elimination, and data storage, and sufficient understanding of source localization to preprocess routine clinical data for the analysis by a physician magnetoencephalographer. ○Job responsibilities Generally similar to Neurodiagnostic Technology Specialist I descriptions but provides more detailed preliminary reports and more detailed data review (as specified in departmental policy and procedures) to the interpreting provider. Able to provide higher level of teaching and training for other technologists. ○Education/certification Associate degree or graduate of a CAAHEP-accredited ND program, , bachelor's degree is preferred. ○Experience Minimum of 5 years of experience, of which 3 years are postcredential. NCS specialist II requires a minimum of 5 years as a CNCT or R.NCS.T., with 6 years of experience, including ICU experience. Advanced modality requirements for experience and qualifications are listed below in Sections 7a–7e. ○Supervision (Table ) Works under general technical supervision as specified in departmental policy and procedure manual. 
Works under supervision of an interpreting provider who can be immediately present either electronically or in person. For NCS, works under direct physician supervision.
○Ongoing education/maintenance of competency: A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and may be superseded by the requirements of credentialing boards.
7a. Neurodiagnostic Technology Specialist II LTME (Long-Term Video EEG Monitoring) (CLTM)
Specific education/certification: CLTM.
Specific job responsibilities: Assists in the development of, and monitoring of adherence to, policies and procedures for LTME; assists other ND technologists in LTME.
7b. Neurodiagnostic Technology Specialist II ICU/cEEG (Continuous EEG Monitoring in the Intensive Care Unit) (CLTM)
Specific education/certification: CLTM.
Specific job responsibilities: Assists in the development of, and monitoring of adherence to, policies and procedures for ICU/cEEG; assists other ND technologists in ICU/cEEG.
7c. Neurodiagnostic Technology Specialist II IONM (CNIM)
Specific education/certification: CNIM.
Specific job responsibilities: Assists in the development of, and monitoring of adherence to, policies and procedures for IONM; assists other ND technologists in IONM.
7d. Neurodiagnostic Technology Specialist II NCS (R.NCS.T. or CNCT)
Specific education/certification: Meets all qualifications of NCS Specialist I. Bachelor's degree required. A minimum of 5 years as a CNCT or R.NCS.T. performing NCS in the patient setting, with at least 6 years of total experience in performing NCS; may have ICU experience. (Grandfather clause: Technologists who do not hold a bachelor's degree or higher and who meet all the requirements of an NCS Specialist I may be considered for NCS Specialist II if they have a minimum of 10 years of continuous experience in performing NCS, a minimum of 8 years as a CNCT or R.NCS.T., a minimum of three faculty engagements in the NCS field, and at least two reference letters from ABEM physicians (Table 2).)
Specific job responsibilities: Assists in the development of, and monitoring of adherence to, policies and procedures for NCS. Demonstrated ability to train others in the principles and practice of NCS, including technologists, residents, and fellows.
7e. Neurodiagnostic Technology Specialist II MEG (CMEG)
Specific education/certification: CMEG. Three or more years of experience in the ND field, specifically EEG, and 2 years of experience in MEG.
Specific job responsibilities: Performs digitization for co-registration to MRI; performs initial MEG spontaneous recording with concurrent EEG recording; understands placement and recording of evoked field trials (SEF, VEF, MEF, AEF, and LEF); implements nontraditional activation procedures as required (or as ordered by the attending physician); performs initial filtering and review of MEG/EEG data; performs preprocessing and localization of interictal activity; reviews initial localization with the physician; localizes evoked field data (for review by the physician); and archives and retrieves MEG data.
NeuroAnalyst
○Job responsibilities: Monitors (on-site or remotely), evaluates, annotates, and classifies ictal, interictal, and paroxysmal events from EEG/video data. Recognizes physiologic and nonphysiologic artifacts. Writes detailed descriptions of EEG patterns, seizure semiology, and ictal and interictal abnormalities, and selects representative EEG samples. Acts as a physician extender in collaboration with the supervising physician and other health care staff. If the NeuroAnalyst is working in an EMU, they must be able to perform the following duties: all duties and responsibilities, including special considerations, for routine and advanced EEG/ECoG; application of extensive knowledge of neuroanesthesia to neuromonitoring; and all aspects of invasive implants preoperatively, intraoperatively, and postoperatively, including, but not limited to, electrode setup, montage creation/verification, troubleshooting, hook-up and discontinuation, and stimulation for cortical mapping.
○Education/certification: Holds credentials in EEG (R. EEG T.) and LTM (CLTM), with the NeuroAnalyst (NA-CLTM) credential preferred. Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree preferred.
○Experience: Minimum of 5 years of experience in LTM in the ambulatory setting, EMU, and/or critical care, postcertification in LTM.
○Supervision (Table ): Works under general supervision of the neurodiagnostic technical lab supervisor or the neurodiagnostic lab director and the interpreting physician.
○Ongoing education/maintenance of competency: A minimum of 30 credits should be obtained every 5 years.
This is a minimum requirement and is superseded by individual credential requirements.
○Overview: Each laboratory requires technical supervision. These qualifications refer only to the issues specifically related to supervision of technical activities. The laboratory supervisor may take on additional responsibilities as dictated by hospital administrative policies and organization.
○Job responsibilities: Provides direct supervision and education to other technologist levels; oversees day-to-day operations; responsible for maintaining policies and procedures and for QA program development and implementation in conjunction with the medical and technical laboratory directors.
○Education/certification: Must have a minimum of one credential in ND technology, two or more preferred, in the area supervised. Associate degree or graduate of a CAAHEP-accredited ND program; bachelor's degree preferred.
○Experience: Minimum of 5 years of experience in ND.
○Supervision (Table ): Works under the neurodiagnostic technical lab director and with the medical director. For clinical studies, works under supervision of an interpreting provider who can be immediately present either electronically or in person.
○Ongoing education/maintenance of competency: A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements.
○Overview: Functions in the role of educator, facilitator, change agent, consultant, and leader for professional development.
○Job responsibilities: Designs and implements competency and educational activities for ND personnel, including annual competency programs, orientation, continuing education, and professional development within a collaborative practice framework. Develops new employees to meet job requirements. Assists those who are not yet credentialed in preparing for board examination. Coordinates continuing education and competency activities for staff.
○Education/certification: Graduate of an accredited baccalaureate program, preferably in ND, or higher education. Must have a minimum of one ND-related credential, two or more preferred. The credential should be specific to the modality for which education is being provided.
○Experience: Minimum of 5 years of experience in ND, with previous teaching experience preferred.
○Supervision (Table ): Works under the neurodiagnostic technical lab director.
○Ongoing education/maintenance of competency: A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by individual credential requirements.
○Overview: This position can be held either by an ND professional with additional management training or experience, or by a non-ND manager, typically with experience in other diagnostic services. There are situations in which the administrative leadership of the CNP department may not, for the purposes of timekeeping, recordkeeping, and basic personnel management, have specific ND technology training. In that case, there must be a technologist at the level of Neurodiagnostic Technologist III or above who can provide technical supervision.
○Job responsibilities: Works with hospital administration and the laboratory Medical Director to make personnel and budgetary decisions. Involved with marketing efforts. Serves as a liaison across departments when necessary.
May also assume responsibility for productivity and financial viability, patient safety, and accreditation of the laboratory, among other high-level functions that contribute to the success of the department in support of the employer's mission.
○Education/certification: A minimum of a bachelor's degree in health sciences; if the job description includes performing ND studies, must have at least one ND credential.
○Experience: Minimum of 5 years of experience; 3 years of previous supervisory experience is recommended.
○Supervision (Table ): Works with hospital administration and the Medical Director. If the job description includes performing clinical ND studies, works under general supervision of an interpreting provider who can be immediately present either electronically or in person.
○Ongoing education/maintenance of competency: A minimum of 30 credits should be obtained every 5 years. This is a minimum requirement and is superseded by other individual credential requirements.
NOTE: As may pertain to all higher levels of practitioners, there are individuals who perform, and in some cases interpret, testing under the supervision of a licensed and qualified physician. These individuals do not have a medical or osteopathic doctorate and are referred to as "Advanced Practitioners" or other "Qualified Health Care Providers." This document recommends privilege-based licensure, as well as skills, knowledge, and abilities gained through training, experience, and accredited programs. These are demonstrated by passing board examinations and maintained through continuing education. This document does not supersede applicable state law. These practitioners work within their state's regulatory and/or statutory scope-of-practice guidelines and within institutional credentialing. The scope of practice may differ across states, institutions, and insurance carriers.
12. Audiologist (Lab)
○Job responsibilities: Audiological and vestibular testing and BAEPs, including both the technical and interpretative components related to assessment of the function of the eighth cranial nerve and peripheral hearing apparatus.
○Education/certification: All audiologists must hold an AuD or current board certification.
○Experience: Has performed and interpreted the number of studies required by federal, state, institutional, and/or certifying-organization regulations. The minimum number should be sufficient for the practitioner to have gained mastery of all aspects of testing.
○Supervision (Table ): May work independently or under supervision as specified by federal, state, and hospital regulations. To supervise technologists performing audiological testing within the ND laboratory, the audiologist must have a minimum of 3 years of experience in clinical practice in addition to the AuD.
○Ongoing education/maintenance of competency: Minimum of 50 CEUs spanning 5 or more years, as required for maintenance of certification. This is a minimum requirement and is superseded by other individual credential requirements.
13. Nonphysician (PhD, AuD, FMG) Neurophysiologist Performing IONM
○Job responsibilities: This may include management of personnel and instrumentation that support IONM; technical performance of IONM; IONM planning; and real-time interpretation of IONM under the supervision of a licensed physician who is immediately available, either in person or online, if needed, e.g., for rendering of medical opinions, decisions, and recommendations during surgery.
This physician must be a clinical neurophysiologist trained, qualified, and experienced in IONM, as referenced under Section 18. Responsibilities may also include providing recommendations for obtaining optimal neurophysiological data and preparing the postoperative IONM report.
○Education/certification: Possesses a minimum of an earned doctoral degree in a physical science, life science, or clinical allied health profession from an accredited educational institution. Education must include successful completion of graduate-level training in neurophysiology and anatomy. Must have medical staff privileges for the performance of IONM in all hospitals where practicing. The DABNM is required. (Grandfather clause: PhD neurophysiologists with a minimum of 20 years of experience in IONM are not required to hold the DABNM.)
○Experience: Evidence of continuous experience in IONM, including case logs that document a minimum of 300 cases monitored with primary responsibility for the clinical tasks in which the provider will participate.
○Supervision (Table ): The nonphysician neurophysiologist functions under the supervision of a licensed physician who is immediately available, either in person or online, if needed, e.g., for rendering of medical decisions and recommendations during surgery. This physician must be a clinical neurophysiologist trained, qualified, and experienced in IONM, as referenced in Section 18.
○Ongoing education/maintenance of competency: Maintenance of all credentials required for medical staff privileges in IONM. A minimum of 100 cases per year, averaged over 3 years. Forty-five CEUs in IONM per year, averaged over 5 years.
14. Senior Nonphysician (PhD, AuD, FMG) Neurophysiologist Performing IONM
○Job responsibilities: May perform any of the job responsibilities described for the nonphysician neurophysiologist (Section 13). Available for teaching less experienced providers. The specific responsibilities assigned to each practitioner should be documented by the employer.
○Education/certification: All requirements are the same as for the nonphysician neurophysiologist performing IONM, except that the DABNM credential is required.
○Experience: All requirements are the same as for the nonphysician neurophysiologist, except that at least 7 years of clinical activity in IONM is required.
○Supervision (Table ): The requirements are the same as for the nonphysician neurophysiologist (Section 13).
○Ongoing education/maintenance of competency: The requirements are the same as for the nonphysician neurophysiologist (Section 13).
15. Physicians (MD, DO, or Foreign Equivalent) Who Are Neither Neurologists, Physiatrists, nor Clinical Neurophysiologists
○Job responsibilities: Interprets CNP studies under supervision as discussed below.
○Education/certification: Valid state license to practice medicine in the state in which the study is performed. Completion of an ACGME-accredited residency.
If practicing in a hospital setting, must satisfy the hospital's requirements for medical staff privileges in their specialty area. If the hospital has separate criteria for performing and interpreting neurophysiologic tests, the practitioner must meet those requirements for the particular test performed. A minimum of 6 months of full-time supervised training in the area(s) of neurophysiology practiced; if training is not full time, there should be the equivalent of 6 months of supervised training when totaled. The training should be under the supervision of a board-certified clinical neurophysiologist with expertise in the field of training. It is preferable if this training occurred as part of a program accredited by the institutional graduate medical education committee or by the ACGME. EDX physicians should refer to the AANEM position statement, "Who Is Qualified to Practice Electrodiagnostic Medicine?" Acceptable board certification for the supervising neurophysiologists includes any of the following: ABPN-CN (American Board of Psychiatry and Neurology Clinical Neurophysiology); ABCN (American Board of Clinical Neurophysiology); ABEM (American Board of Electrodiagnostic Medicine); ABNM (American Board of Neurophysiologic Monitoring), for IONM only.
○Experience: Before practicing independently, this physician should have completed, under supervision, the number of studies outlined below for which privileges are being requested: EEG—500 studies; long-term video EEG monitoring—100 studies; EMG/NCS—200 complete EDX evaluations; IONM—100 patients; diagnostic evoked potentials—50 studies, with at least 15 in each modality the practitioner will interpret.
○Supervision (Table ): EEG—a clinical neurophysiologist or neurologist credentialed to interpret EEG studies should be available to review records or help with any questions or complex patients. Long-term video EEG monitoring—should work with a clinical neurophysiologist or neurologist credentialed to interpret these studies who provides ongoing review of each study. EMG/NCS—a neurologist, physiatrist, or clinical neurophysiologist credentialed to interpret these studies should be available to review records or help with questions or complex patients. IONM—should work with a clinical neurophysiologist, neurologist, or physiatrist credentialed to interpret these studies who provides ongoing review of each study. Diagnostic evoked potentials—should work with a clinical neurophysiologist or neurologist who provides ongoing review of each study.
○Ongoing education/maintenance of competency: Must maintain certification in the primary specialty. Must have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Must participate in ongoing QA and quality improvement activities. A board-certified clinical neurophysiologist should be involved in these activities.
16. Neurologist (Without Board Certification in Any Area of CNP) or Physiatrist Certified by Their Respective Boards
○Job responsibilities: Interprets routine studies of the specified type.
○Education/certification: Valid state license to practice medicine. For IONM, EEG, and EPs, a minimum of 6 months of full-time supervised training in these areas; if training is not full time, there should be the equivalent of 6 months of supervised training when totaled. The training should be under the supervision of a board-certified clinical neurophysiologist or neurologist with expertise in the field of training.
It is preferable if this training occurred as part of a program accredited by the institutional graduate medical education committee or by the ACGME. Completion of an ACGME-accredited residency in neurology or physical medicine and rehabilitation is applicable for EDX physicians. If practicing in a hospital setting, should satisfy the hospital's requirements for medical staff privileges in neurology or PMR. If the hospital has separate criteria for performing and interpreting neurophysiologic tests, the practitioner should meet those requirements for the particular test performed. Meets hospital requirements to have medical staff privileges as a neurologist or, for IONM and EDX testing, as a PMR physician.
○Experience: Before practicing independently, the physician should have completed, under supervision in an ACGME- or RCPSC-accredited neurology or PMR residency program, the number of studies outlined below for which privileges are requested: EEG—500 studies; long-term video EEG monitoring—100 studies; EMG/NCS—200 complete EDX evaluations; IONM—100 patients; diagnostic evoked potentials—50 studies, with at least 15 in each modality the practitioner will interpret.
○Supervision (Table ): Supervised by a clinical neurophysiologist who participates in quality assessment and quality improvement activities, including peer review, and is available for consultation regarding complex or difficult cases.
○Ongoing education/maintenance of competency: Must maintain medical staff privileges in neurology; physical medicine and rehabilitation is acceptable for EDX physicians. Must have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Must participate in ongoing QA and quality improvement activities. A board-certified clinical neurophysiologist should be involved in these activities.
17. Clinical Neurophysiologist (MD, DO)
○Job responsibilities: Supervises and interprets general CNP studies in the area of their expertise. Available for consultation with other staff on complex or difficult cases. Participates in QA and quality improvement activities. Involved in ongoing training and education of physicians and technologists.
○Education/certification: Valid state license to practice medicine in the state in which the study is performed. Completion of an ACGME-accredited fellowship in CNP, or equivalent training before the establishment of accredited training programs as recognized by board certification as specified below. Board eligibility or certification by ABPN-CN, ABCN, or ABEM.
○Experience: Should have performed or interpreted, under supervision, at least the number of studies specified in Section 16. At least 3 years in the clinical practice of CNP.
○Supervision (Table ): Supervises studies performed by other providers with less experience or training.
○Ongoing education/maintenance of competency: Must maintain medical staff privileges in CNP as applicable. Must have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Must participate in ongoing QA and quality improvement activities.
18. Subspecialty Neurologist or Physiatrist (MD, DO)
○Job responsibilities: Supervises and interprets general and complex CNP studies in the areas of expertise. Involved in planning QA and quality improvement activities in the ND department. Available for consultation with other staff on complex or difficult cases.
Involved in ongoing training and education of physicians and technologists.
○Education/certification: Valid state license to practice medicine in the state in which the study is performed. Completion of an ACGME-accredited fellowship in CNP, or equivalent training before the establishment of accredited training programs as recognized by board certification as specified below. Board certification by ABPN-CN, ABCN, or ABEM. Completion of an ACGME-accredited residency in physical medicine and rehabilitation or neurology. A minimum of 6 months of full-time supervised training in the area of neurophysiology in which they will practice; if training is not full time, there should be the equivalent of 6 months of supervised training when totaled. The training should be under the supervision of a board-certified clinical neurophysiologist with expertise in the field of training.
○Experience: In addition to the greater number of years of experience, the subspecialist should have performed or interpreted at least twice the number of studies specified for the neurologist or physiatrist (Section 16). Should have at least 5 years of clinical practice in neurophysiology.
○Supervision (Table ): Supervises studies performed by other providers with less experience or training. Available for teaching and supervision of less experienced practitioners.
○Ongoing education/maintenance of competency: Must maintain medical staff privileges/subspecialty privileges in CNP as applicable. Must have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Must participate in ongoing QA and quality improvement activities.
19. Neurodiagnostic Medical Director (MD, DO)
○Job responsibilities: Development and implementation of policies and procedures for the ND laboratory. Supervision and assessment of competency of ND laboratory staff at all levels. Ensures that there are ongoing teaching and educational activities within the department. Supervises quality improvement activities. Works with the technical director/manager in planning for the laboratory, staff, equipment, and budget.
○Education/certification: Valid medical license to practice in the state where supervising studies. Case experience equal to or greater than that required for the subspecialty neurologist or physiatrist (Section 18). Board certified by ABPN or ABPMR. Board certified in at least one area of CNP (ABPN-CN, ABCN, or ABEM). For an AANEM medical director for EDX laboratories or EDX laboratory accreditation, the qualifications of a medical laboratory director shall meet AANEM medical lab director qualifications and AANEM CME requirements: 1. Completed an ACGME or RCPSC neurology or PMR residency. 2. Completed primary board certification in ACGME or RCPSC neurology or PMR. 3. Completed 3 months of training in EDX medicine during a neurology or PMR ACGME or RCPSC residency or fellowship.
○Experience: At least 5 years of professional practice in neurophysiology.
○Supervision (Table ): Department Chair/Vice Chair, Chief Medical Officer, or Section Chief, as governed by the department or medical facility.
○Ongoing education/maintenance of competency: Must maintain medical staff privileges in neurology or PMR, and in CNP. Should have ongoing education in the area practiced, with an average of 15 CME credits annually in the area(s) of CNP practiced, averaged over 3 years. Should be involved in managing ongoing QA and quality improvement activities.
The field of clinical neurophysiology is large, diverse, and in constant evolution, and this document is not a review of the clinical indications or use of neurodiagnostic procedures.
For more information, additional resources are cited in the references.
|
Effective and Practical Complete Blood Count Delta Check Method and Criteria for the Quality Control of Automated Hematology Analyzers
|
23a8545e-175d-44fc-b466-6419d0078ca2
|
10151276
|
Internal Medicine[mh]
|
Delta checks compare current and previous test results to estimate the probability of a significant change. When the difference between current and previous results exceeds predefined criteria, all flagged samples are retested for QC and the cause of the error is identified. A difference exceeding the delta check limits thus provides an opportunity to determine the cause of an error, correct it by retesting, and identify sample mix-ups. A delta check can also be an important component of autoverification procedures that improve laboratory efficiency. Mild (high) delta check limits decrease the retesting rate and turnaround time (TAT) but can miss errors, decreasing the sensitivity of laboratory results; strict (low) delta check limits increase labor intensity. There are numerous reports on delta check methods and limits for chemistry laboratory results. Most studies on delta checks for chemistry items attempted to establish delta check methods and limits (criteria) based on empirically established methods and reported limitations of the reference range (RR)-based delta check method. Some recent studies have used machine learning for delta checks; however, few studies on hematologic tests have been published. Fu, et al. simplified the delta check limitation formulae for data review and reported a new delta check model for automated complete blood counting that improved data validation. Miller, et al. suggested a new mean corpuscular volume (MCV)-based delta check method and limits (>3.0 fL) for hematology laboratories. In Korea, Park, et al. in 1989, Yang, et al. in 1991, and Koo, et al. in 2012 reported their experiences with delta check methods and empirically established delta check criteria for automated hematology analyzers. Although these review articles noted the need for studies on delta check methods for hematology, no such studies have been published to date. Many laboratories have empirically established delta check methods and limits for hematologic tests; an effective, validated delta check method and criteria for hematologic tests are therefore needed. We aimed to establish an effective and practical complete blood count (CBC) delta check method and criteria, using statistics, for the QC of automated hematology analyzers. In addition, we suggest a practical process with a new workflow algorithm for improving validation in the hematology laboratory.
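To make this screening logic concrete, the short Python sketch below applies a simple DPC-based delta check gate of the kind previously used empirically in this laboratory (the 20% Hb limit referenced later in the Discussion); the function name and example values are illustrative assumptions, not part of the study protocol.

```python
def delta_check_flag(current: float, previous: float, limit_pct: float) -> bool:
    """Return True when the delta percent change (DPC) exceeds the limit.

    DPC = |current - previous| / previous * 100%.
    A flagged result is held for retesting/review instead of being
    autoverified; the limit trades error sensitivity against workload.
    """
    dpc = abs(current - previous) / previous * 100.0
    return dpc > limit_pct

# Illustrative values: Hb falling from 13.0 to 10.0 g/dL, previous DPC limit 20%.
if delta_check_flag(current=10.0, previous=13.0, limit_pct=20.0):
    print("delta check exceeded: hold for review and retest")
else:
    print("within limits: eligible for autoverification")
```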
This study was conducted in accordance with the Declaration of Helsinki (2013 revision) and was approved by the Institutional Review Board of Asan Medical Center, Seoul, Korea (#2019-0803).
Data collection
All blood samples for CBC tests were collected using K2EDTA tubes (Becton Dickinson and Company, Franklin Lakes, NJ, USA). Samples were collected from outpatients and inpatients, including patients in the emergency department of Asan Medical Center (a 2,715-bed tertiary hospital with 49 clinical departments or divisions, specialized centers, and departmental specialist clinics). In total, 219,804 samples (151,120 from inpatients and 68,684 from outpatients) were obtained between May 2019 and January 2020. Patients were included regardless of department, age, or medical status. Data on nine CBC items were collected as pairs of previous and current results: white blood cell (WBC) count; the WBC differential counts neutrophil %, lymphocyte %, monocyte %, eosinophil %, and basophil %; Hb; MCV; and platelet count. All EDTA-anticoagulated blood samples were analyzed using a Sysmex XN-20 automated hematology analyzer (Sysmex Co., Kobe, Japan). All data, including delta check time intervals and clinical features, were obtained from the laboratory information system and electronic medical records.
Delta check using five methods
The five delta check methods used in this study were as follows:
(1) Absolute delta difference (ADD) = |current result – previous result|
(2) Delta percent change (DPC) = |current result – previous result| / previous result × 100%
(3) DPC rate (%/day) = |current result – previous result| / previous result / delta interval × 100%
(4) DPC/RR = |current result – previous result| / previous result / RR × 100%
(5) DPC/RR rate (%/day) = |current result – previous result| / previous result / RR / delta interval × 100%
The DPC/RR and DPC/RR rate methods incorporate the RRs of the laboratory items; the RRs used in this study are shown in the accompanying table. The best-performing delta check method and its criteria were adopted for the nine CBC items. (An illustrative computation of these five values is sketched at the end of this section.)
Evaluation of the new delta check method
For the evaluation of the new delta check method and criteria, we used paired CBC data from 42,652 samples (294,588 tests) collected between March 25 and April 7, 2020. Tests and samples yielding results exceeding the delta check criteria were counted. Samples yielding results exceeding the criteria were evaluated according to a new workflow algorithm to identify errors, corrections, and the causes of the errors for Hb and platelet count; for the other CBC items, manual review (peripheral blood smear and stain) and electronic medical records were used.
Statistical analysis
Distributions of the delta check values for the CBC items (Hb, MCV, platelet count, WBC, and five-part WBC differential counts) were obtained using SPSS version 26.0 for Windows (IBM Corp., Armonk, NY, USA). Microsoft Excel 2016 (Microsoft, Redmond, WA, USA) was used to calculate the delta check criteria, the percentage of tests that exceeded the delta check criteria, and the causes of Hb and platelet count results exceeding the delta check criteria.
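To show how the five formulas relate, the following Python sketch computes all five delta check values for one result pair. The ResultPair container and the example numbers are hypothetical, and RR is interpreted here as the width of the item's reference range (consistent with the DD/RR concept cited in the Discussion); the study's actual RR table is not reproduced in this excerpt.

```python
from dataclasses import dataclass

@dataclass
class ResultPair:
    current: float        # current result
    previous: float       # previous result
    interval_days: float  # delta check time interval, in days
    rr_width: float       # width of the item's reference range (assumed meaning of RR)

def five_delta_checks(p: ResultPair) -> dict:
    """Compute the five delta check values defined above for one result pair."""
    add = abs(p.current - p.previous)       # (1) ADD
    dpc = add / p.previous * 100.0          # (2) DPC, %
    dpc_rate = dpc / p.interval_days        # (3) DPC rate, %/day
    dpc_rr = dpc / p.rr_width               # (4) DPC/RR
    dpc_rr_rate = dpc_rr / p.interval_days  # (5) DPC/RR rate, %/day
    return {"ADD": add, "DPC": dpc, "DPC_rate": dpc_rate,
            "DPC_RR": dpc_rr, "DPC_RR_rate": dpc_rr_rate}

# Hypothetical example: Hb 13.0 -> 10.0 g/dL over 2 days; RR width assumed 3.5 g/dL.
print(five_delta_checks(ResultPair(current=10.0, previous=13.0,
                                   interval_days=2.0, rr_width=3.5)))
```

Dividing by the time interval in (3) and (5) converts a change into a rate, which is what allows the same criterion to be applied to inpatients retested daily and outpatients retested after weeks.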
Distributions of delta check time intervals and delta check values according to the five delta check methods
The distributions of delta check time intervals and delta check values for Hb, shown as an example of the nine CBC items, were examined according to the five delta check methods and patient type (outpatient or inpatient). The median delta check time interval for Hb was one day (range, 1–20 days) for inpatients, including emergency department patients, and 21 days (range, 1–222 days) for outpatients. For all nine CBC items, the distribution of delta check values varied among the five methods and between outpatients and inpatients.
Frequencies according to the distribution of delta check values for the nine CBC items and the five delta check methods
The frequency of delta check values exceeding the 99.5th percentile of the delta data distribution was 0.9%–1.1% for the nine CBC items, regardless of the method and patient type; for basophil %, it was 0.6%–1.2%. As test error frequencies are reportedly approximately 1%, it is reasonable to select delta check values at or above the 99.5th percentile of the distribution, corresponding to frequencies of 0.9%–1.1%, as the delta check criteria (limits). Therefore, we adopted the delta check values at the 99.5th percentile of the distributions as the delta check criteria (illustrated in the sketch at the end of this section).
Delta check criteria for the nine CBC items and five delta check methods
The delta check criteria for the nine CBC items varied among the five methods and between outpatients and inpatients. For all nine CBC items, the delta check method based on the DPC/RR rate, which reflects both biological variation and the delta check time interval, performed best and was therefore adopted as the new delta check method. The delta check criteria for each CBC item are provided in the two rightmost columns (in bold font) of the corresponding table.
Analysis of tests and samples producing results exceeding the new delta check criteria for the nine CBC items using the new delta check method
The newly adopted DPC/RR rate-based delta check method was evaluated using 42,652 samples collected from outpatients and inpatients over a two-week period, with a workflow algorithm for Hb and platelet count; for the other CBC items, manual review (peripheral blood smear and stain) and electronic medical records were used. Among the 294,588 tests, 5,008 test results (1.7%) exceeded the delta check criteria. There were 1,318 retests (0.5% of the 294,588 tests) and four resamplings (0.01% of the 42,652 samples). The most common cause of delta check criterion exceedance for Hb and platelet count was transfusion (60.1%), followed by preanalytical errors (6.3%; improper or inadequate samples, diluted samples, and misidentification), operation (3.6%), disease progression (2.1%), blood clots (0.2%), in vitro hemolysis during sample collection (0.1%), and platelet aggregation (0.1%).
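The percentile rule used to set the criteria can be sketched as follows. The data here are synthetic and the resulting numbers are illustrative only, but the code mirrors the procedure of taking the 99.5th percentile of a delta value distribution separately for inpatients and outpatients.

```python
import numpy as np

def percentile_criteria(delta_values, patient_type, q=99.5):
    """Derive a delta check criterion per patient group as the 99.5th
    percentile of the observed delta value distribution for one CBC item."""
    delta_values = np.asarray(delta_values, dtype=float)
    patient_type = np.asarray(patient_type)
    return {group: float(np.percentile(delta_values[patient_type == group], q))
            for group in np.unique(patient_type)}

# Synthetic, right-skewed DPC/RR-rate values (%/day) for one hypothetical item.
rng = np.random.default_rng(0)
values = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
groups = rng.choice(["inpatient", "outpatient"], size=10_000)
print(percentile_criteria(values, groups))  # separate inpatient/outpatient limits
```

In practice, the input would be one array of DPC/RR-rate values per CBC item, yielding the item-specific inpatient and outpatient criteria.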
In Korea, several laboratories have empirically established delta check methods and limits to decrease the TAT and identify errors. Koo, et al. reported that the delta check reduced unnecessary smear slides, rechecking, resampling, retesting, and telephone inquiries, and concentrated workloads at specific times of the day. However, whether empirically established delta check methods and limits are adequate and effective for QC remained unknown. For outpatients, the delta check time interval can range from one day to several years. If the time interval between the previous test and the follow-up test is long, a flagged difference may reflect an altered patient status rather than a real error, even when the sample has a delta check flag. Previous studies focusing on delta check time intervals have suggested that methods based on the DPC rate or DPC/RR rate (%/day) are better than those based on the ADD, DPC, or DPC/RR. The RR is another important factor in the delta check method. Park, et al. suggested a new delta check method based on the ratio of the delta difference to the width of the RR (DD/RR), which yielded more feasible and intuitive selection criteria and better explained changes in results, as it reflects biological variation in both the test items and the clinical features of patients. In this regard, the DPC/RR and DPC/RR rate delta check methods, which include RRs, are better than those based on the ADD, DPC, or DPC rate. Considering patients' disease progression or recovery over time as well as biological variation, inclusion of both the delta time interval and the RR would be ideal. Therefore, both delta check time intervals and RRs should be included when establishing a delta check method and criteria. Accordingly, we adopted a new delta check method based on the DPC/RR rate and established criteria to efficiently identify errors in CBC test results while reducing labor intensity. The error frequency in laboratory systems is approximately 1%. In this study, delta check criteria were obtained at the 99.5th percentile of the distribution of delta check values (absolute delta differences), corresponding to a frequency of 0.9%–1.1%. The delta check criteria differed between inpatients and outpatients for all nine CBC items. This is likely due to factors affecting the delta check calculation, including the delta check time intervals, patients' clinical states, and phlebotomists' technical skills (e.g., adequate sampling and prompt transfer to the laboratory by experienced phlebotomists vs. potentially inadequate sampling and delayed transfer by inexperienced nurses or doctors). Therefore, delta check criteria should be established separately for outpatients and inpatients. Koo, et al. demonstrated that the use of an empirically established delta check method reduced the TAT, decreased retesting and resampling rates, and increased automated reporting rates. In this study, the retesting rate (0.5%) with the newly adopted delta check method (based on the DPC/RR rate) was lower than that (3.2%) with the previous delta check method empirically established in our laboratory (DPC ≥50% for WBCs, platelets, neutrophil %, and lymphocyte %; ≥20% for Hb and MCV; and ≥160% for monocyte %, eosinophil %, and basophil %). The TAT of the newly adopted method (mean, 32 minutes; range, 1–51 minutes) did not differ significantly from that of the previous method (mean, 32 minutes; range, 1–92 minutes), although the maximum TAT of the newly adopted method was lower.
Therefore, had the evaluation period been longer, a reduced TAT might have been observed. The delta check method is primarily aimed at detecting sample misidentification in laboratory QC programs. However, according to Schifman, et al., transfusion and the patient's physical condition and treatment are common causes of delta check criterion exceedance. In a study by He, et al., the most common causes of delta check alerts were changes in the patient's physiological status during treatment and follow-up and interference from hemolysis, lipemia, or icterus. Therefore, a delta check can also help identify disease progression based on patient status. The delta check flag is a critical automated QC tool and is useful for quickly determining the causes of automated analyzer errors through workflow automation, including the detection of hemolysis, transfusion, platelet aggregation, and blood clotting, which are readily flagged by automated analyzers. Workflow automation can be established based on previous studies and laboratory experience. Our study has some limitations. Direct comparisons between the existing and new delta check methods are required. In addition, workflow algorithms for CBC items other than Hb and platelet count remain to be established for the new delta check method. In conclusion, using EDTA blood samples from outpatients and inpatients, we developed an effective and practical delta check method based on the DPC/RR rate, which incorporates RRs and delta check time intervals, and established delta check criteria for nine CBC items. Delta check criteria should be established separately for outpatients and inpatients. Using a new workflow algorithm for Hb and platelet count to identify test errors and corrections, we were able to identify the causes of delta check criterion exceedance and report correct test results.
Case Report: Molecular autopsy underlies COVID-19-associated sudden, unexplained child mortality
Introduction Noonan syndrome (NS) is caused by pathogenic variants of genes encoding components of the RAS/MAPK signaling pathway ( ), including PTPN11, SOS1, and RAF1 ( ). Recently, a leucine-zipper-like transcription regulator 1 (LZTR1) variant was found to be associated with NS using whole-exome sequencing (WES) ( ). In 2021, the prevalence of LZTR1 variants in patients with NS was reported to be 4%–6% ( , ), corresponding to fewer than 50 reported cases ( ). The clinical characteristics of patients with NS harboring LZTR1 variants are similar to those with other NS genotypes, including epicanthal folds, low-set ears, blepharoptosis, webbed neck, pectus excavatum-carinatum, cryptorchidism, short stature, intellectual disability, and cardiac anomalies ( ). Meanwhile, abnormalities in stature, cardiac function, and neurodevelopment differ considerably between these groups of patients ( ). For example, typical characteristics of children with NS include short stature related to growth hormone deficiency; however, only four cases of growth hormone deficiency have been reported in patients with NS harboring LZTR1 variants ( ). Additionally, patients with NS typically present cardiovascular anomalies, with pulmonary stenosis, hypertrophic cardiomyopathy, and atrial septal defects being the most prevalent ( , ). Correspondingly, 79.4% (27/34) of patients with NS harboring LZTR1 variants reported in 2019 had heart disease, the most frequent being hypertrophic cardiomyopathy (71.4%) ( , ). In a study of eight Japanese patients with NS harboring LZTR1 variants, one patient (c.742G>A, p.Gly248Arg, a variant in the Kelch 4 domain) was diagnosed by echocardiography with an anomalous origin of the coronary artery and peripheral pulmonary stenosis ( ). Patients with NS have an increased risk of hematological abnormalities ( – ); in particular, transient myeloproliferative disorder is observed in approximately 10% of pediatric patients ( ). Juvenile myelomonocytic leukemia is another common hematological malignancy; approximately 90% of patients with NS and myelomonocytic leukemia have mutually exclusive PTPN11, NRAS, KRAS, NF1, or CBL variants ( ). However, reports on the association between LZTR1 variants and hematological abnormalities are scarce. Herein, we present a child who might have died due to COVID-19. A molecular autopsy of the patient revealed LZTR1 variant-associated NS with a complex combination of acute lymphoblastic leukemia of the B-cell precursor phenotype (BCP-ALL) and a rare ectopic congenital coronary origin.
Case history and symptoms at presentation A 5-year-old boy, with no known medical history, developed a high fever (37.8°C) 6 days before his death. The patient was not administered any medication as the SARS-CoV-2 antigen test result was negative. On day 3 following fever onset, the patient’s body temperature dropped to below 37°C; however, his parents noticed that his face was abnormally pale. The fever (39.2°C) relapsed on day 5 after the initial episode. On day 6, the patient could barely drink or eat. When the patient’s mother tried to put him to bed, he moaned and convulsed with eyes open; the patient was rushed to the hospital emergency room via an ambulance. He went into cardiac arrest in the ambulance and died in the hospital despite the best efforts of cardiopulmonary resuscitation. The child’s blood analysis revealed severe anemia, thrombocytopenia, and hypercytokinemia, suggesting hemophagocytic lymphohistiocytosis (HLH). The detailed laboratory findings are summarized in .
Autopsy findings An autopsy was performed on the boy within 34 h after his death. Physical examination revealed the following: height, 108 cm; weight, 19.2 kg. No injuries were observed. The child had not been prenatally diagnosed with any disease. The computed tomography scan showed no obvious acute or chronic fractures; furthermore, there were no major findings that pointed to a definite cause of death. The patient's polymerase chain reaction test result was positive for SARS-CoV-2. The autopsy revealed splenomegaly (140 g, normal range: 45–50 g) and hepatomegaly (810 g, normal range: 550–600 g). The heart weighed 122 g, which is within the normal range for a 5-year-old boy. We did not find anomalous positioning, abnormal chamber arrangement, significant ventricular wall thickening, or chamber dilatation. However, an acute-angle take-off of the left coronary artery (LCA) from the non-coronary cusp (NCC) was observed ( ). The left main trunk (LMT) passed through a long course along the Valsalva sinus wall ( ), and a histological section of the LMT revealed eccentric intimal fibrous thickening indicative of approximately 50% stenosis ( ). No other macroscopic or microscopic anomaly, including COVID-19 pneumonia, was found. As the child had severe anemia and thrombocytopenia, further pathological examination was performed. Hematoxylin and eosin staining of the liver revealed diffuse lymphoblast proliferation around the Glisson's capsule ( ). Immunostaining of the liver using standard avidin–biotin immunohistochemical techniques showed positive staining of the cell cytoplasm and cytomembrane for CD34 (a hematopoietic stem cell marker; ), TdT (a marker of precursor lymphoid cells containing B and T cells; ), and CD79a (a pan-B-cell marker; ). These findings were consistent with a pre-B-cell phenotype. The lymphoblasts also infiltrated the lung, spleen, kidney, and pancreas ( ). HLH or related inborn errors of immunity (IEI) were initially considered, and the child was screened for T-cell receptor excision circles (TRECs) and kappa-deleting recombination excision circles (KRECs). In addition, tests for autoantibodies against type I interferon (IFN) were performed. The normal levels of TRECs and KRECs (965 and 817 copies/μg DNA, respectively) indicated no T-cell or B-cell immunodeficiencies. Moreover, the absence of autoantibodies against type I IFN suggested no COVID-19-associated IEI ( ). The complex cardiac and hematological abnormalities suggested the presence of an underlying disease; therefore, we performed WES. Briefly, genomic DNA was extracted using the Wizard® Genomic DNA Purification Kit (Promega, Madison, WI, USA) and fragmented. Exonic sequences were enriched using the xGen Exome Research Panel v2 (Integrated DNA Technologies, Coralville, IA, USA) and SureSelect XT HS Reagents (Agilent Technologies, Santa Clara, CA, USA). The captured fragments were purified and sequenced on a DNBSEQ-G400RS (MGI Tech, Shenzhen, China) using paired-end reads. WES revealed a heterozygous LZTR1 variant (c.1234C>T, p.Arg412Cys). Sanger sequencing confirmed this variant in the nail, heart, brain, and spleen tissues, indicating that it is a germline variant ( ). Since the LZTR1 variant meets the PS1, PM2, PP2, and PP3 criteria, both in silico analysis and evaluation under the ACMG guideline indicate that it is likely pathogenic ( ). We also analyzed the model structure of LZTR1 using AlphaFoldDB ( ).
In the model structure, R412 is located on a loop of the six-bladed beta-propeller domain, presumably forming an intermolecular interaction site, and its side chain forms hydrogen bonds with N410 and D86. Structural stability assessment of the R412C mutant using FoldX showed no significant change (−0.23 kcal/mol), suggesting that the mutation does not significantly affect structural stability ( ). However, it does alter the hydrogen bonding network of the loop structure, which may affect the molecular interactions. This variant has been reported in a few cases of NS ( , ). Experienced geneticists identified mild facial features, such as a broad forehead, blepharoptosis, epicanthal folds, hypertelorism, a short nose, and thick lips. However, no signs of a webbed neck or short stature were noted. Thus, the final diagnosis was NS associated with cryptorchidism, BCP-ALL, and a coronary malformation. Additional blood analysis data are shown in . The results of split-surface general bacterial cultures of blood, cerebrospinal fluid, and lung were negative. Throat swab analyses were negative for respiratory syncytial virus, adenovirus, and antigens of group A Streptococcus, influenza A, influenza B, and human metapneumovirus. Liquid chromatography–mass spectrometry revealed low caffeine concentrations in blood. No other drug was detected.
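The ACMG/AMP combining logic behind the "likely pathogenic" call above can be made explicit. The sketch below implements only the likely-pathogenic tier of the 2015 guideline (the pathogenic and benign tiers are omitted for brevity); PS1 plus PM2 already satisfies the "one strong plus one to two moderate" rule, and the two supporting criteria (PP2, PP3) independently satisfy the "one strong plus at least two supporting" rule. The function name is illustrative, not from the source.

```python
def is_likely_pathogenic(criteria):
    """ACMG/AMP 2015 combining rules for the 'likely pathogenic' tier.

    s, m, and p count the strong (PS1-PS4), moderate (PM1-PM6), and
    supporting (PP1-PP5) evidence criteria that the variant meets.
    """
    s = sum(c.startswith("PS") for c in criteria)
    m = sum(c.startswith("PM") for c in criteria)
    p = sum(c.startswith("PP") for c in criteria)
    return ((s == 1 and 1 <= m <= 2) or   # 1 strong + 1-2 moderate
            (s == 1 and p >= 2) or        # 1 strong + >=2 supporting
            (m >= 3) or                   # >=3 moderate
            (m == 2 and p >= 2) or        # 2 moderate + >=2 supporting
            (m == 1 and p >= 4))          # 1 moderate + >=4 supporting

# The LZTR1 c.1234C>T (p.Arg412Cys) variant in this case:
print(is_likely_pathogenic({"PS1", "PM2", "PP2", "PP3"}))  # True
```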
Discussion In this study, the autopsy of the patient with COVID-19 revealed BCP-ALL and an anomalous origin of the LCA, and WES showed NS with an LZTR1 variant. The severity of NS was relatively mild and, thus, NS was not suspected during the patient's lifetime. Rapid cardiac arrest could have been caused by COVID-19-related fever and fluid imbalance, such as dehydration, combined with the coronary artery anomaly, which may have caused fatal arrhythmia and relative ischemia. An LCA arising from the NCC, as observed in our patient, is considered one of the rarest forms of coronary defects ( ), detected in 0.02% (36 out of 174,262) of cases. Among the reported cases, 18 (50%) have been symptomatic, including 11 (31%) cases of sudden cardiac death. Saji et al. ( ) reported the death of a 13-year-old girl after long-distance running. Similar to our patient, this patient had an LCA with marked intimal thickening. Other studies have also reported children who died suddenly after rigorous physical exercise, and the LCA of these patients also originated from the commissure between the NCC and the left coronary cusp, with a slit-like orifice ( , ). The origin of the LCA in these cases was similar to that observed in our patient, supporting our assumption regarding the mode of death described above. Approximately 85% of childhood ALL cases are of B-cell precursor origin, whereas 15% originate from T cells ( ). However, patients with NS frequently develop juvenile myelomonocytic leukemia and only seldom ALL ( ). Only a few cases of LZTR1 variants in patients with NS and ALL are known. Chinton et al. ( ) described a 2-year-old female NS/ALL patient with a variant (p.Gly248Arg) in the Kelch 4 domain. In a study of American patients with NS harboring LZTR1 variants (n = 23), two patients (c.2220-17C>A, p.Arg210* and c.1678G>C, p.Glu563Gln) were identified as having ALL. One of the two patients developed ALL at 5 years of age, which progressed to acute myeloid leukemia at 7 years of age; the patient died 2 years later. The other patient developed ALL at 3 years of age and remained in remission ( ). In addition, Lztr1 deficiency has been linked to B-cell malignancies in CD19+B220+CD43+ immature B cells in mice ( ). Thus, ALL development in our patient might be related to NS. However, LZTR1 variants associated with ALL are rare and need to be researched further. Molecular autopsy refers to the DNA-based identification of the cause of death. In recent large-scale studies of sudden death in young patients, molecular autopsies were able to uncover a likely or plausible cause of death in 12.6%–28% of cases ( , ). A comprehensive molecular autopsy, similar to that performed on our patient, has the potential to provide more accurate information by identifying genetic causes of unexpected sudden death. Since the value of WES-based molecular autopsy lies not in the identification of variants but in determining their predicted pathogenicity, care must be taken not to erroneously classify ambiguous variants as pathogenic ( ). It should be noted that next-generation sequencing (NGS)-based target gene panel sequencing is useful for identifying the causative gene in a clinically suspected patient without incidental findings, whereas WES can identify the causative gene even in patients without a clinical diagnosis ( ).
Conclusion Herein, we presented a case of sudden child death. The death may have resulted from cardiac complications due to NS with a complex combination of BCP-ALL, COVID-19, and a rare pattern of an anomalous origin of the coronary artery. Our case study could be valuable for pathologists and pediatric practitioners as it emphasizes the significance of molecular autopsy. WES or whole-genome sequencing could be used in the diagnosis or even prevention of sudden child mortality.
The findings of the study are included in the article. Further inquiries can be directed to the corresponding author.
A forensic autopsy was performed on the boy as requested by the public prosecutor. For this type of case report, formal consent is not required. All procedures were performed in accordance with the ethical standards of our institutional research committee and tenets of the 1964 Helsinki Declaration and its later amendments.
Conception and design of the research: KaU, HK. Acquisition of data: KaU, DT, KN, KY, TM. Analysis and interpretation of the data: KaU, TM, AH, YM, SW, TO, NO, SO, KO, KoU, HK. Writing of the manuscript: KaU, HK. Critical revision of the manuscript for intellectual content: TM, YM, HK. All authors contributed to the article and approved the submitted version.
Mapping food surveillance chains through different sectors
Introduction One Health (OH), defined as "an integrated, unifying approach that aims to sustainably balance and optimize the health of people, animals and ecosystems," has become a widely accepted topic in the current debate about disease surveillance and has a significant impact on the related health agenda ( – ). However, the practical application of the OH approach to real-life, existing surveillance systems is not easy. One Health surveillance (OHS) systems are not developed from scratch, and the starting point is usually a combination of different hazard-specific problems, approaches, and objectives across the human, animal, and food safety sectors ( – ). Surveillance systems are complex structures, and making the information gathered by a surveillance system useful for the stakeholders involved is not effortless ( – ). The OH approach necessarily adds complexity to existing surveillance systems and their chains of data flow. This complexity is related to the persistence of silo thinking ( ), which, despite being effective and useful in terms of following up on specific actors and topics, complicates collaborations among actors within each segment of the 'farm-to-fork' chain. European countries have invested in strengthening disease surveillance from an OH perspective, with some successful collaborations, such as the Med.Vet.Net Association and the One Health European Joint Programme (OHEJP), now paving the way forward ( , ). The OHEJP is a partnership between 44 European food, veterinary, and medical laboratories and institutes across Europe and the Med.Vet.Net Association ( ). Among its many activities, including training opportunities and collaborations with the European intergovernmental agencies European Food Safety Authority (EFSA) and European Centre for Disease Prevention and Control (ECDC), the programme supports various research and integrative projects to stimulate the scientific development and integration of surveillance systems from an OH perspective ( ). In MATRIX, one of the OHEJP projects, the aim was to advance the implementation of OHS in practice by building on existing resources, adding value to them, and creating synergies among sectors. The project created practical solutions for European countries to support and advance the implementation of OHS ( ). MATRIX operated with a focus on specific pathogens/hazards (hazard tracks, HT) to ensure that the solutions developed by the project were relevant to their surveillance. The hazards were chosen in 2019, based on the operational priorities of the 19 MATRIX partner institutes across 12 European countries and their OH relevance, namely: Campylobacter, Listeria monocytogenes, Salmonella, and emerging threats, including Hepatitis E virus. Understanding the relationships among the components of a surveillance system is a prerequisite to integrating it. Mapping the components of existing disease surveillance systems is therefore a fundamental step to facilitate their subsequent integration from an OH perspective. As part of the broader objective to identify current examples of best practices and multi-sectorial collaborations across surveillance systems, one of the tasks of MATRIX aimed to map existing surveillance chains across the sectors involved in the surveillance of the project HTs, for at least one country per HT. Since the HTs considered are foodborne pathogens, the investigation followed the 'farm-to-fork' chain approach.
The results of this work are detailed in a document published on Zenodo ( ), the open repository developed under the European OpenAIRE programme. However, the mapping exercise also allowed the identification of both the opportunities and the challenges of this approach to investigating what is already in place in different countries. In this paper, we therefore describe our methodological approach and present two real-life scenarios as case studies. The two scenarios chosen as case studies are the surveillance of L. monocytogenes in dairy products in Norway, and the Salmonella surveillance in pig meat in France. The scenarios concern pathogens that are of importance for human health based on the severity (L. monocytogenes) or the frequency (Salmonella) of the infections. In 2020, listeriosis was the fifth most reported zoonosis (1,876 cases) in Europe, mainly affecting people over the age of 64 ( ). In Norway, the number of annual cases of listeriosis in humans has been increasing gradually. Between 15 and 50 cases have been reported annually during the last decades, including a total of 37 cases in 2020 ( , ). Given the severe symptoms and fatality rate of listeriosis cases, and a high probability of an increased human burden of disease, L. monocytogenes was ranked in the top five groups of biological hazards in a risk ranking and source attribution study carried out by the Norwegian Scientific Committee for Food and Health ( ). In general, the prevalence of L. monocytogenes in food is low, but the bacterium can grow rapidly under optimum conditions of pH, temperature (between 30 and 37°C), and water activity (0.99) ( ). The theoretical minimum conditions for growth are pH 4.3, a water activity of 0.92, and a temperature of −2°C, in either the presence or absence of oxygen ( ). The minimum infectious dose is not known, but dose–response models indicate that the marginal probability of developing invasive listeriosis upon ingestion of one cell of L. monocytogenes per individual is 8 × 10⁻¹² for the general population and 3 × 10⁻⁹ for extremely susceptible subpopulations ( ). Applying this to concentrations of L. monocytogenes in food, these numbers fit with the observation that the estimated probability of illness increases at 1,000 cfu/g for the most vulnerable consumers and at 100,000 cfu/g for adults with no underlying illness, provided that the usual portion size is 100 g of food ( ). When the growth conditions are good or the shelf life of the food is long, a high concentration of the bacterium can be reached before consumption. Foods with growth potential for L. monocytogenes that have a sufficiently long shelf life to exceed the critical concentrations mentioned above are regarded as risk products, unless they are heat-treated or L. monocytogenes is killed by other means before consumption. Contaminated, unpasteurised milk and other food ingredients are only some of the possible sources for the introduction of L. monocytogenes into dairies ( ). L. monocytogenes can enter production facilities and remain for an extended time, even decades, contaminating the food at regular or irregular intervals ( ). In addition, soft and semi-soft maturing cheeses are both examples of risk products for listeriosis. Outbreaks have been observed with cheeses from both pasteurised and unpasteurised milk: the largest in Norway was related to a camembert cheese from a small-scale producer using pasteurised milk ( ). Dairy products are important both economically and culturally in Norway.
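Returning to the dose–response figures quoted above, the sketch below shows how the per-cell risks translate into per-portion risks under a simple exponential dose–response model, P = 1 − exp(−r·N), which reduces to r·N for the very small per-cell risks given. The exact model behind the published figures may differ, so this is an order-of-magnitude illustration only.

```python
import math

def p_illness(conc_cfu_per_g, portion_g, r):
    """Exponential dose-response: P = 1 - exp(-r * N), where N is the
    number of ingested cells and r the marginal per-cell risk."""
    n_cells = conc_cfu_per_g * portion_g
    return 1.0 - math.exp(-r * n_cells)

R_GENERAL = 8e-12      # per-cell risk, general population (from the text)
R_SUSCEPTIBLE = 3e-9   # per-cell risk, extremely susceptible subpopulations

# Risk for a 100 g portion at the critical concentrations cited above:
print(p_illness(1_000, 100, R_SUSCEPTIBLE))  # ~3e-04 (vulnerable consumers)
print(p_illness(100_000, 100, R_GENERAL))    # ~8e-05 (healthy adults)
```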
Norwegian cheeses are, with only a few exceptions, produced and consumed domestically. In 2021, the annual consumption of cheese per person in Norway was 20.35 kg, of which 82% was produced in Norway ( ). The import of cheese was about four times higher than the export ( ). The variety of products from small-scale producers is large and includes both pasteurised and unpasteurised products; however, the majority of dairy products sold come from a few large producers, who produce from pasteurised milk and have extensive internal sampling programmes in place ( ). On the other hand, Salmonella is estimated to be responsible for more than 75 million foodborne infections worldwide each year ( ). In Europe, salmonellosis was the second most frequently reported zoonotic disease, with more than 91,000 cases reported each year until 2018, representing an economic burden of around 3 billion euros ( ). A marked improvement in this epidemiological situation can nevertheless be noted in comparison with the roughly 200,000 human cases reported annually before 2004. The latest joint European zoonosis report from ECDC-EFSA highlighted decreasing numbers of human salmonellosis cases and of Salmonella detections in the food and animal sectors from 2016 to 2020. Nevertheless, this may be partly due to underreporting during the COVID-19 pandemic and Britain's EU departure ( ). However, the number of positive sampling units related to the 'pigs' sector was stable in Europe over the same period (2016–2020). Pig meat and products thereof remained the second-largest source of salmonellosis food-borne outbreaks, with 11 strong-evidence outbreaks in 2020, compared to 37 outbreaks due to eggs and egg products. Numerous Salmonella serovars were detected all along the food chain. Of these, S. Typhimurium, monophasic S. Typhimurium (1,4,[5],12:i:-), and S. Derby belonged to the top five and were primarily related to pig sources ( ). For the reasons above, the second scenario chosen as a case study is the Salmonella surveillance in pig meat in France. In France, 139 of the 1,010 food-borne outbreaks declared in 2020 were attributable to Salmonella (120 confirmed by the presence of Salmonella in food and 19 suspected) ( ). The annual number of illnesses attributable to Salmonella is estimated at 183,000, including 4,110 hospitalizations and 67 deaths ( ). In France, 13 food-borne outbreaks associated with products of porcine origin were identified between 2002 and 2017 ( ). Contaminated raw animal food products are the main source of human infection. Contamination may occur during the processing stages from improper food handling and/or inadequate hygienic measures. Eating behaviours involving the ingestion of raw or undercooked products also pose a risk of infection ( ). The largest share (42%) of reported salmonellosis cases is linked to the consumption of eggs or egg products ( ), but products from the pig and dairy cattle sectors are also recognised as important reservoirs ( ). In pig farming, when an outbreak occurs, symptoms may include diarrhoea and growth delay. In farms with high biosecurity standards, introduced breeding animals and feed are considered the major routes of Salmonella introduction. Contamination of meat products most often occurs during the slaughtering of infected animals, when hygienic practices are lacking. For this reason, active monitoring is in place and is performed by the competent authority.
In 2020, French food business operators (FBOs) performed more than 14,000 official controls at slaughterhouses and found 4.8% (95% CI: 4.4–5.2) of pig carcasses to be contaminated by Salmonella ( ). Nevertheless, integrated surveillance of Salmonella in the pig sector remains needed in France. A shift towards a multi-sectorial approach is currently ongoing with the implementation of a collaborative and multidisciplinary platform dedicated to food chain surveillance ( ). The purpose of this paper is to describe the methodological approach we used to map the components of the existing disease surveillance systems for these two case scenarios, to enable its further application, and to share the lessons learned.
Materials and methods 2.1. Online questionnaires Within the activities of the MATRIX project, a multiple-choice questionnaire was created for each of the four hazards (Salmonella, Campylobacter, L. monocytogenes, and Hepatitis E virus) to gather from national experts in the field the information necessary for mapping the existing food chain surveillance activities. As an adaptation of the approach from 'farm-to-fork' to 'farm-to-patient', each questionnaire was divided into three sections: (I) focusing on animal health aspects (AH), (II) on food safety aspects (FS), and (III) on public health (PH). In each section, the surveillance was assessed by gathering information on actors, sampling context, collected sample types, laboratory methods for diagnosis, available data sources, and cross-sectoral collaboration in place. To ensure that all relevant information was included, eight experts were consulted during the development of the sector-specific questionnaires. The draft version was circulated amongst the MATRIX participants for evaluation and implementation. The MATRIX partners were asked to suggest possible contact persons with expertise in the specific field of interest, from both project partner and non-partner institutions. The identified experts were individually contacted to verify their interest in and availability for taking part in the survey. The final version of the questionnaires was put online on the survey platform Survey Monkey© for dissemination to the relevant experts previously selected. Given the specificity of the information required, a PDF version of the questionnaires was also made available (see , modified with permission from Cito et al., 2022 ( )). 2.2. Mapping template A questionnaire was considered completed when answers from the three involved sectors (AH, FS, PH) were obtained. Upon reception of the three compiled sections, a preliminary evaluation of the results was carried out. Where missing or unclear information emerged, we requested clarifications by re-sending the questionnaire to the reference expert (or to a different one). For this reason, the questionnaires were open for completion for a period of about six months. In order to evaluate and display the collected information, a categorisation was put in place: information was classified as 'data', 'metadata', 'events', 'event producing data (EPD)', and/or 'identified data source (IDS)' ( ). The subsequent step was the identification of the most relevant information for graphic representation on a map. Therefore, the information regarding the actors, the sampling context, the collected sample types, the laboratory methods in use for diagnosis, and the available data sources was highlighted for each section. For the purpose of the task, we designed a mapping template and displayed it using MS PowerPoint© ( ). 2.3. The two case studies One of the main objectives of the MATRIX project was to map the surveillance systems along the food chain. To achieve this objective, we selected a specific food chain to be investigated in detail for each hazard. Combinations that are relevant from the public health point of view were selected, based on a consensus among the MATRIX Consortium on the epidemiological situation in Europe in 2020. Concerning Listeria, the selected food chain was dairy products, given the epidemiological relevance of these products for the transmission of L. monocytogenes to humans.
The investigated country was Norway, because of the economic and cultural importance of dairy products there ( , ). Regarding Salmonella, we decided to assess surveillance activities for the pork meat food chain in France, to avoid overlap with the OHEJP project NOVA ( ), which investigated Salmonella surveillance activities in the poultry food chain. For this reason, some information was already available for poultry, while less existed for the pork meat food chain and the same pathogen.
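As an aside, the classification described in section 2.2 lends itself to a simple data structure. The sketch below is one possible encoding of a single sector of the 'farm-to-patient' map; the project's actual template was a PowerPoint graphic, so the class and field names here are illustrative only, with example values drawn from the Norwegian Listeria results reported below.

```python
from dataclasses import dataclass, field
from enum import Enum

class InfoCategory(Enum):
    """Categories used to classify the questionnaire answers (section 2.2)."""
    DATA = "data"
    METADATA = "metadata"
    EVENT = "event"
    EPD = "event producing data"
    IDS = "identified data source"

@dataclass
class SectorMapping:
    """One sector (AH, FS, or PH) of the surveillance-chain map."""
    sector: str                                   # "AH", "FS", or "PH"
    actors: list[str] = field(default_factory=list)
    sampling_contexts: list[str] = field(default_factory=list)
    sample_types: list[str] = field(default_factory=list)
    lab_methods: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)

# Example: the food-safety segment of the Norwegian Listeria map.
fs_segment = SectorMapping(
    sector="FS",
    actors=["official control auditors", "NRL for Listeria in food (NVI)"],
    sampling_contexts=["official national programme", "own-check programmes"],
    sample_types=["cheese", "24 h samples", "surfaces and equipment"],
    lab_methods=["detection", "enumeration", "WGS"],
    data_sources=["anonymised annual reports"],
)
```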
Results We present below the results collected through the questionnaires on L. monocytogenes in dairy products in Norway and on Salmonella in the pork meat food chain in France, based on the information provided by the experts involved. 3.1. Listeria In Norway, the national and regional surveillance programmes in place are designed to detect illness cases among humans and animals, and non-compliance with food safety criteria in food, adapted to different production routes ( ). 3.1.1. Animals Veterinary technicians and/or private veterinarians carry out surveillance activities in the animal sector and perform outbreak investigations in case of increased mortality. Abortions are investigated, and bulk milk and blood from sick animals are collected. The bulk milk is routinely analysed at large-scale dairies, where the focus is on milk quality and production hygiene indicators rather than on L. monocytogenes specifically. Neurolisteriosis (meningitis) in animals is not a notifiable disease in Norway: clinical cases are not registered systematically, and clinically suspected cases are only rarely confirmed by laboratory diagnosis. The few laboratories involved in the diagnostics of listeriosis in animals work collaboratively at the national level. Even though laboratory results are not shared automatically, information can be made available upon request. The number of confirmed animal cases per region is reported and shared at the national level ( ). 3.1.2. Foods The sampling plans in the official national programmes are designed to cover imported foods and local small-scale dairy products. Large-scale dairies usually have their own sampling programmes. The surveillance of small-scale producers includes the sampling of summer products. In some programmes, '24 h samples' (i.e., samples taken the day after the start of the maturation process) are implemented at farms and small-scale dairies, as several pathogens can be found at their highest concentration at this stage. This kind of sampling enables the rapid detection of anomalies and allows sampling without the loss of the entire cheese. Sampling is also performed at the retail level, in compliance with the microbial criteria in the food legislation. In addition, metadata such as production date, shelf-life date, animal species, whether the product is made of pasteurised or unpasteurised milk, producer, sampling place (address and kind of shop), and sampler can be recorded. For all products, a picture of the product is also collected. Auditors from the official control authorities carry out the sampling and the follow-up of positive samples with the producers. The National Reference Laboratory for Listeria in food, which is represented by the Norwegian Veterinary Institute (NVI), carries out the analysis of L. monocytogenes and other microbes. Detection and enumeration of L. monocytogenes are always included in the analyses. Whole genome sequencing (WGS) has recently been introduced, although it was not fully operational at the time the questionnaire was open for responses. Isolates are stored for further analyses, for instance in case of outbreak investigation or research. Positive results are directly notified to the auditors, to allow rapid outbreak investigations and direct follow-up in case of non-compliance. In addition, all the results are anonymised, categorised, and presented annually or at the end of the programme.
However, the national active surveillance programme for cheese and milk products is adapted intermittently: the foods in focus for surveillance are decided every 1–3 years, based on priority lists of hazards and foods of particular concern. Besides the official surveillance programme, farmers and dairies have their own-check sampling programmes and hazard analysis and critical control points (HACCP) plans in place. Sampling in these cases may include the testing of surfaces, equipment, refrigerators, and water. 3.1.3. Humans Human listeriosis in Norway has been nominatively notifiable in the Norwegian Surveillance System for Communicable Diseases (MSIS) ( ) since 1991 (NIPH, 2022). Age, gender, place of residence, and travel history are among the parameters collected. The official number of cases is updated daily ( ). Medical microbiological laboratories in Norway are obligated to send clinical L. monocytogenes isolates to the National Reference Laboratory for Enteropathogenic Bacteria at the Norwegian Institute for Public Health (NIPH). WGS is performed routinely for confirmation, surveillance, and outbreak purposes (NIPH, 2022). All listeriosis cases are routinely investigated with a trawling questionnaire. When a WGS cluster is detected, epidemiological parameters as well as information from the trawling questionnaire are considered before an outbreak investigation is initiated. During an outbreak investigation, the NIPH works in close collaboration with municipality doctors, the Norwegian Food Safety Authority, and the NVI. 3.2. Salmonella In France, Salmonella surveillance is based on a national system composed of approximately fifteen components or networks ( ). The system covers the entire food chain and most of the populations at higher risk for these pathogens. Surveillance aims at reducing the risk for consumers through earlier detection of Salmonella contamination in the food chain, limiting the economic impact of these contaminations on the production chains, and advancing knowledge. The French Public Health Institute, 'Santé publique France' (SpF), defines a foodborne outbreak at the national level as the occurrence of at least two cases of similar, generally gastrointestinal, symptomatology attributed to the same food origin. The notification of cases has been mandatory since 1987. A notification can lead to investigations through the whole food chain and within different animal and food production sectors ( ). In the past, the pork food chain has been impacted on several occasions by Salmonella contamination ( , ). 3.2.1. Animals In the animal sector, many activities for Salmonella surveillance are implemented at the farm level in France ( ). These are carried out by official control authorities, laboratories, farmers, the industry, private veterinarians or technicians, and, in some cases, research centers or institutions such as universities. In the framework of monitoring programmes, outbreak investigations, or research projects, these actors collect environmental samples, including fecal material, water, and feed, to detect and identify the bacteria by phenotypic or molecular methods. Laboratories implement official methods to serotype all isolates; among this panel, only a subset of the samples is typed in depth by polymerase chain reaction (PCR), SNP analysis, or cgMLST.
All strains isolated in an outbreak context are sequenced with the technical support of the National Reference Laboratory (represented by the French Agency for Food, Environmental and Occupational Health and Safety - ANSES). These surveillance activities (through research) also concern animal movements. The monitoring and control of the application of biosecurity measures are particularly important for both breeding and fattening pig farms. For this reason, additional data, including personnel movements and records of cleaning and sanitation procedures, are collected. The French Pork and Pig Institute (IFIP) stores the collected data at the national and regional levels and shares with other actors information on the coverage of surveillance activities and descriptive epidemiological results. 3.2.2. Foods For the food sector, official control authorities, the private sector, laboratories, and the IFIP predominantly perform activities at the slaughter and processing plants. Carcass swabs sampled at slaughterhouses for official control programmes are collected, along with other samples retrieved from the environment and equipment during monitoring programmes, own-checks, or outbreak investigations. According to the questionnaire responses, at the retail stage minced meat and meat preparations/products are subject to monitoring and research activities, outbreak investigations, official control programmes, and own-checks. In France, sampling conducted within established surveillance programmes aims to investigate exposure to Salmonella spp.
A research unit from Institut Pasteur hosts the French mandate of NRC for Salmonella . This reference laboratory collects strains and data related to human cases confirmed by contaminated blood or faecal material. NRC shares confidential data related to each case with SpF, including the severity of symptoms, and spatial and temporal data. WGS is systematically performed, and results are centralised. Algorithms using this database produce weekly alerts when clusters based on microbiological data occur, and then the NRC informs SpF of these situations. Currently, there is no automatic tool or shared database in place at the national level to allow prompt interaction between human and non-human sectors. To date, the ability to share data mainly depends on the interpersonal connections between scientists working at the reference laboratories (NRC and NRL). In conclusion, the collaboration between sectors exists mostly for foodborne outbreak surveillance and investigation. The exchange of information issued from investigation frameworks is in place between the Regional sanitary authorities in charge of human surveillance (‘Regional health agency’) and of food safety, animal health, and welfare (‘Departmental Directorate for Social Cohesion and Population Protection’). Additionally, information is shared with the national competent authorities to implement adjusted control measures. The NRC and NRL have a central position in the framework, managing laboratory networking, developing, and harmonising analytical methods, and interacting with administrative organisations and professional and technical centers (including research).
Listeria In Norway, the national and regional surveillance programmes in place are designed to detect illness cases among humans and animals, and non-compliance to food safety criteria in food, adapted to different production routes ( ). 3.1.1. Animals Veterinary technicians and/or private veterinarians carry out surveillance activities in the animal sector and perform outbreak investigations in case of increased mortality. Abortions are investigated, and bulk milk and blood from sick animals are collected. The bulk milk is routinely analysed at large-scale dairies, where the focus is on milk quality and production hygiene indicators rather than on L. monocytogenes specifically. Neurolisteriosis (meningitis) in animals is not a notifiable disease in Norway: clinical cases are not registered systematically, and clinical suspects are only rarely confirmed by laboratory diagnosis. The few laboratories that are involved in the diagnostics of listeriosis in animals work collaboratively at the national level. Even though laboratory results are not shared automatically, information can be made available upon request. The number of confirmed animal cases per region is reported and shared at the national level ( ). 3.1.2. Foods The sampling plans in the official national programmes are designed to cover imported foods and local small-scale dairy products. Large-scale dairies usually have their own sampling programmes. The surveillance of small-scale producers includes the sampling of summer products. In some programmes, ‘24 h samples’ (which means sampling the day after the start of the maturation process) are implemented in farms and small-scale dairies, as several pathogens can be found at the highest concentration at this stage. This kind of sampling allows for the rapid detection of anomalies and allows for sampling without the loss of the entire cheese. Sampling is also performed at the retail level, in compliance with the microbial criteria in the food legislation. In addition, metadata like production date, shelf-life date, animal species, whether the product is made of pasteurised or unpasteurised milk, producer, sampling place (address and kind of shop), and sampler can be recorded. For all products, a picture of the product is also collected. Auditors from the official control authorities carry out the sampling and the follow-up of positive samples with the producers. The National Reference Laboratory for Listeria in food, which is represented by the Norwegian Veterinary Institute (NVI), carries out the analysis of L. monocytogenes and other microbes. Detection and enumeration of L. monocytogenes are always included in the analyses. Whole genome sequencing (WGS) is newly applied, while it was not fully operational at the time at which the questionnaire was available for response. Isolates are stored for further analyses, for instance in case of outbreak investigation or research. Positive results are directly notified to the auditors, to allow rapid outbreak investigations and direct follow-up in case of non-compliance. In addition, all the results are anonymised, categorised, and presented annually or at the end of the programme. However, the national active surveillance programme for cheese and milk products is adapted intermittently: the focus foods for surveillance are decided every 1–3 years, based on priority lists for hazards and foods of particular concern. 
Besides the official surveillance programme, the farmers and dairies have their own-check sampling programmes in place, and hazard analysis and critical control points (HACCP) plans. Sampling in these cases may include the testing of surfaces, equipment, refrigerators, and water. 3.1.3. Humans Human listeriosis in Norway has been nominatively notifiable in the Norwegian Surveillance System for Communicable Diseases (MSIS) ( ) since 1991 (NIPH, 2022). Age, gender, place of residence, and travel history are among the parameters collected. The official number of cases is updated daily ( ). Medical microbiological laboratories in Norway are obligated to send clinical L. monocytogenes isolates to the National Reference Laboratory for Enteropathogenic Bacteria at the Norwegian Institute for Public Health (NIPH). WGS is performed routinely for confirmation, surveillance, and outbreak purposes (NIPH, 2022). All listeriosis cases are routinely investigated with a trawling questionnaire. When a WGS cluster is detected, epidemiological parameters as well as information from the trawling questionnaire are considered before the outbreak investigation is initiated. During an outbreak investigation, the NIPH works in close collaboration with municipality doctors, the Norwegian Food Safety Authority, and the NVI.
Animals Veterinary technicians and/or private veterinarians carry out surveillance activities in the animal sector and perform outbreak investigations in case of increased mortality. Abortions are investigated, and bulk milk and blood from sick animals are collected. The bulk milk is routinely analysed at large-scale dairies, where the focus is on milk quality and production hygiene indicators rather than on L. monocytogenes specifically. Neurolisteriosis (meningitis) in animals is not a notifiable disease in Norway: clinical cases are not registered systematically, and clinical suspects are only rarely confirmed by laboratory diagnosis. The few laboratories that are involved in the diagnostics of listeriosis in animals work collaboratively at the national level. Even though laboratory results are not shared automatically, information can be made available upon request. The number of confirmed animal cases per region is reported and shared at the national level ( ).
Foods The sampling plans in the official national programmes are designed to cover imported foods and local small-scale dairy products. Large-scale dairies usually have their own sampling programmes. The surveillance of small-scale producers includes the sampling of summer products. In some programmes, ‘24 h samples’ (which means sampling the day after the start of the maturation process) are implemented in farms and small-scale dairies, as several pathogens can be found at the highest concentration at this stage. This kind of sampling allows for the rapid detection of anomalies and allows for sampling without the loss of the entire cheese. Sampling is also performed at the retail level, in compliance with the microbial criteria in the food legislation. In addition, metadata like production date, shelf-life date, animal species, whether the product is made of pasteurised or unpasteurised milk, producer, sampling place (address and kind of shop), and sampler can be recorded. For all products, a picture of the product is also collected. Auditors from the official control authorities carry out the sampling and the follow-up of positive samples with the producers. The National Reference Laboratory for Listeria in food, which is represented by the Norwegian Veterinary Institute (NVI), carries out the analysis of L. monocytogenes and other microbes. Detection and enumeration of L. monocytogenes are always included in the analyses. Whole genome sequencing (WGS) is newly applied, while it was not fully operational at the time at which the questionnaire was available for response. Isolates are stored for further analyses, for instance in case of outbreak investigation or research. Positive results are directly notified to the auditors, to allow rapid outbreak investigations and direct follow-up in case of non-compliance. In addition, all the results are anonymised, categorised, and presented annually or at the end of the programme. However, the national active surveillance programme for cheese and milk products is adapted intermittently: the focus foods for surveillance are decided every 1–3 years, based on priority lists for hazards and foods of particular concern. Besides the official surveillance programme, the farmers and dairies have their own-check sampling programmes in place, and hazard analysis and critical control points (HACCP) plans. Sampling in these cases may include the testing of surfaces, equipment, refrigerators, and water.
Humans Human listeriosis in Norway has been nominatively notifiable in the Norwegian Surveillance System for Communicable Diseases (MSIS) ( ) since 1991 (NIPH, 2022). Age, gender, place of residence, and travel history are among the parameters collected. The official number of cases is updated daily ( ). Medical microbiological laboratories in Norway are obligated to send clinical L. monocytogenes isolates to the National Reference Laboratory for Enteropathogenic Bacteria at the Norwegian Institute for Public Health (NIPH). WGS is performed routinely for confirmation, surveillance, and outbreak purposes (NIPH, 2022). All listeriosis cases are routinely investigated with a trawling questionnaire. When a WGS cluster is detected, epidemiological parameters as well as information from the trawling questionnaire are considered before the outbreak investigation is initiated. During an outbreak investigation, the NIPH works in close collaboration with municipality doctors, the Norwegian Food Safety Authority, and the NVI.
Salmonella In France, the Salmonella surveillance is based on a national system composed of approximately fifteen components or networks ( ). The system covers the entire food chain and most populations who are more at risk for these pathogens. Surveillance aims at reducing the risk for consumers through earlier detection of contamination by Salmonella in the food chain, limiting the economic impact of these contaminations in the production chains, and advancing knowledge. The French Public Health Institute, named ‘Santé publique France’ (SpF), defines a foodborne outbreak at the national level as the occurrence of at least two cases of similar symptomatology, generally gastrointestinal, which are attributed to the same food origin. The notification of cases has been mandatory since 1987. A notification can lead to investigations through the whole food chain and within different animal and food production sectors ( ). In the past, the pork food chain has been impacted on several occasions by Salmonella contamination ( , ). 3.2.1. Animals In the animal sector, many activities for Salmonella surveillance are implemented at the farm level in France ( ), which are carried out by official control authorities, laboratories, farmers, the industry, private veterinarians or technicians, and eventually research centers or institutions like universities. In the framework of monitoring programmes, outbreak investigations, or research projects, these actors collect environmental samples, including fecal material, water, and feed to detect and identify the bacteria by phenotypic or molecular methods. Laboratories implement official methods to serotype all isolates and, among this panel, only a part of the samples is typed in depth by polymerase chain reaction (PCR), SNPs, or cgMLST. All strains isolated in an outbreak context are sequenced with the technical support of the National Reference Laboratory (represented by the French Agency for Food, Environmental and Occupational Health and Safety - ANSES). These surveillance activities (through research) also concern animal movements. The monitoring and control of the application of biosecurity measures are particularly important, for both breeding and fattening pig farms. For this reason, additional data including personnel movement, and records of cleaning and sanitation procedures, is collected. The French Pork and Pig Institute (IFIP) stores the collected data at national and regional levels, and shares with other actors information on the coverage of surveillance activities and descriptive epidemiological results. 3.2.2. Foods For the food sector, official control authorities, the private sector, laboratories, and the IFIP predominantly perform activities at the slaughter and processing plants. Carcass swabs sampled at the slaughterhouses for official control programmes, are collected with other samples retrieved from the environment and equipment during monitoring programmes, own-checks, or outbreak investigations. Information on the activities performed at the retail stage, provided through the questionnaires, included that minced meat and meat preparations/products are subject to monitoring and research activities, outbreak investigations, official control programmes, and own-check. In France, sampling conducted within established surveillance programmes aims to investigate the exposure to Salmonella spp. 
In addition, sampling is targeted at specific consumer groups (e.g., vulnerable consumers and consumers of a high amount of a particular food) and at import/export. In case of non-compliance, depending on the results of the risk analysis, additional analyses may be carried out on the relevant products. Routinely, laboratories test samples for Salmonella detection by culture-dependent and molecular methods based on PCR. Each isolate is serotyped by the reference method (ISO 6579-3:2017). WGS is performed to type strains that are suspected to be linked to foodborne outbreaks when epidemiological evidence (descriptive or analytical) is limited. The percentage of typed strains depends on the context but represents only a small fraction of the isolated strains. The overall process of testing and reporting may take months to conclude, even though the testing itself is typically rapid. In 2018, the Food Chain Surveillance Platform was created to support surveillance activities and to promote an operational OH approach at the national level. This innovative structure is based on public and private governance. Notably, it coordinates working groups on Salmonella with stakeholders including the IFIP, the Salmonella National Reference Laboratory (NRL), and the National Reference Center (NRC), which are hosted by a research unit from ANSES and by Institut Pasteur, respectively, and numerous partners involved in the French Salmonella surveillance system ( ).
3.2.3. Humans
In France, sporadic cases of salmonellosis are not notifiable. Several actors, from local health authorities to hospital, clinical, reference, and local laboratories, monitor human salmonellosis. In general, consistent data related to case detection are collected on a routine basis, while additional epidemiological data are collected mainly during outbreak investigations. A research unit from Institut Pasteur holds the French NRC mandate for Salmonella. This reference laboratory collects strains and data related to human cases confirmed from blood or faecal samples. The NRC shares confidential data related to each case with SpF, including the severity of symptoms and spatial and temporal data. WGS is systematically performed, and results are centralised. Algorithms using this database produce weekly alerts when clusters based on microbiological data occur, and the NRC then informs SpF of these situations. Currently, there is no automatic tool or shared database in place at the national level to allow prompt interaction between the human and non-human sectors. To date, the ability to share data mainly depends on the interpersonal connections between scientists working at the reference laboratories (NRC and NRL). In conclusion, collaboration between sectors exists mostly for foodborne outbreak surveillance and investigation. The exchange of information arising from investigations is in place between the regional sanitary authorities in charge of human surveillance ('Regional health agency') and of food safety, animal health, and welfare ('Departmental Directorate for Social Cohesion and Population Protection'). Additionally, information is shared with the national competent authorities to implement adjusted control measures. The NRC and NRL hold a central position in the framework, managing laboratory networking, developing and harmonising analytical methods, and interacting with administrative organisations and professional and technical centers (including research).
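To illustrate the weekly alerting step described above, a minimal Python sketch follows: given isolates already assigned to WGS clusters, it flags clusters that accumulate a minimum number of recent cases. The cluster assignments, thresholds, and rolling window are illustrative assumptions, not the NRC's actual pipeline.

from collections import Counter
from datetime import date, timedelta

# Illustrative records: (cluster_id, sampling_date) for sequenced human isolates.
isolates = [
    ("C1", date(2023, 5, 2)),
    ("C1", date(2023, 5, 9)),
    ("C1", date(2023, 5, 12)),
    ("C2", date(2023, 3, 1)),
]

MIN_CASES = 3                # assumed alert threshold
WINDOW = timedelta(days=42)  # assumed rolling window of six weeks

def weekly_alerts(isolates, today):
    """Return cluster ids with at least MIN_CASES isolates inside the window."""
    recent = Counter(cid for cid, d in isolates if today - d <= WINDOW)
    return [cid for cid, n in recent.items() if n >= MIN_CASES]

print(weekly_alerts(isolates, today=date(2023, 5, 15)))  # ['C1']

In practice, such an alert would only trigger the exchange between the NRC and SpF; the epidemiological assessment remains a human decision.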
Discussion
4.1. The online questionnaires
The methodological approach adopted during the MATRIX project included the use of online questionnaires to collect information about the surveillance in place in European countries. Our approach allowed a substantial set of information to be obtained, in terms of both quality and quantity. Although in some cases surveillance activities are regulated by existing European legislation [i.e., control programmes regarding Salmonella ( ), official controls under Regulation 2017/625 ( ) to verify that food complies with the microbiological and process hygiene criteria established by Regulation 2073/2005 ( ), or epidemiological surveillance of communicable diseases ( )], in others there is no harmonised surveillance in the European Union. For this reason, collecting information from the existing European legislation would have captured only a fraction of the overall amount of information gathered by the questionnaires. The questionnaires mainly asked closed questions with multiple-choice answers and checkboxes. This can potentially lead to biases, defined as a 'deviation of results or inferences from the truth, or processes leading to such a deviation' ( ). Such biases may result, in particular, from the design of the questions and questionnaires, and/or from the way they are administered and completed ( ). Semi-structured interviews might have allowed more comprehensive information to be collected. However, conducting interviews would have been more time-consuming and could have introduced a greater risk of bias, given the interviewer's subjectivity. Moreover, the use of questionnaires was a good alternative to in-person workshops, which were not feasible during the period of travel restrictions due to the SARS-CoV-2 pandemic. The questionnaire and the subsequent mapping made it possible to draw up an initial description of the surveillance structure as a starting point for working collectively, and in more detail, on each aspect. When online questionnaires are used to collect information, their implementation can be an involved process, requiring resources with the expertise to design, pilot, and put them online. Both compiling and responding to the questionnaires also require deep knowledge of the subject. Therefore, depending on the experts involved in the compilation and the responses, respectively, biases may be introduced. In addition, splitting the questions according to the three investigated sectors may not be sufficient, because even within the same sector the skills are diversified. As a consequence, it could not be expected that each expert had the expertise to cover all aspects included in a single sector questionnaire (i.e., from the surveillance programmes in place, to the existing information systems, to the laboratory tests used for diagnosis). To mitigate these risks, we first involved a country expert within the OHEJP MATRIX partner institutes and asked them to share the questionnaires with the appropriate experts, who could belong to different agencies. In this way, we gathered information not only from project partners but also from all three sectors involved in the surveillance of the pathogen under investigation.
4.2. Mapping template
The mapping process could be a key step in initiating collaborative work to set up or improve a surveillance system.
It seemed essential to clearly identify the actors involved in the monitoring, their roles, and their positions in the organisation before considering implementation, or possible adaptations and changes, to achieve pre-established consensual objectives ( ). Although some examples of mapping were already available ( ), we designed a new template to display the relevant actors and other data regarding HT-specific surveillance. The key aspect of the mapping is the presentation, in a single figure, of the three investigated sectors and, for each sector, the implemented surveillance activities. In this way, a clear visualisation and a quick comparison of the reported information are possible, and the One Health approach is represented. The three involved sectors were animal health, food safety, and public health. Beyond the food safety area, the OH approach can be applied to many other areas, covering complex health issues and requiring close collaboration across sectors, stakeholders, and countries ( ). Hence, our template can be applied to several different contexts by simply adjusting the underlying structure. Beyond the purpose of the MATRIX project, in which a method to display/map surveillance activities was developed, the same method could be applied to several other scenarios. As a generic approach, the implementation of this template could also facilitate the description of areas within chemical monitoring, for example, after a preliminary adaptation of the questionnaire. In further applications, the mapping approach could cover a whole production sector impacted by several contaminants, or a specific contaminant monitored by multiple production sectors.
4.3. The two case studies
In this study, we emphasised the methodology rather than the data collected using the questionnaire. Considerably more data were collected than are shown on the maps. The complete results are enclosed in a specific deliverable of the MATRIX project ( ). Here, we presented the application of the mapping to L. monocytogenes in Norway and Salmonella in France, as they were representative of two situations in which such information was thoroughly reported. The discussion with the experts on the two case studies highlighted how communication between official partners is generally more efficient when colleagues from different sectors know each other. Direct familiarity and trust can be important added values for successful surveillance and outbreak investigations ( ). The mapping clearly showed that surveillance of the animal and food sectors needs to be specifically designed to capture the production, processing, and use of the food products, covering features such as seasonality, regional differences within a country, and large- versus small-scale production. The mapping method could be particularly useful for a food category with a domestic market and small-scale producers, to follow up with producers who do not have the size or economy to carry out many analyses. The additional value of using this approach, besides building connections and trust among authorities and producers, is to identify conditions that could lead to outbreaks, rather than detecting outbreaks once they have already started.
Having sampling schemes designed for the detection of risk factors within each sector, combined with suitable characterisation analyses and data sharing with other relevant sectors, can result in cost savings and rapid detection of OH challenges, regardless of the original purpose of the surveillance programme. For the food health segment, the focus has been placed on the consumers. It is possible to arrange different surveillance programmes for various vulnerable groups, but this aspect is already targeted by passive surveillance systems, as consumers go to the doctor when they are ill. The human health surveillance programme operates in a similar manner regardless of the food segment covered. Contact between the animal, food, and human sectors is likely to be easier for domestically produced and consumed food, as signalling is more likely to happen between people who know each other and work together on a regular basis than when the animal, food, and human health segments must first be alerted through official channels. However, it is critical to define the specific situations under which other sectors should be alerted, and what information (in terms of data and metadata) should be shared among the different identified actors. Generally, the implementation of the OH approach is easier in the context of an outbreak, since all the involved actors share the common goal of identifying the source of the infection and implementing control measures. The same does not hold during routine surveillance. Therefore, there is a general need for 'traffic lights' and checkpoints defining what to share, when, and why. While it is true that trust is important for sharing and respecting the agreed rules, active communication between sectors is a prerequisite for building trust. Collaborations are established gradually, based on the partners' adherence to a common organisation. A mapping stage could therefore be a prerequisite for establishing a shared and integrative vision of the organisation of surveillance activities, as a ground for further collaborative efforts. As an example, an approach to OH surveillance of listeriosis was suggested as early as 2001 in France but was not followed up by other countries ( ). The current work in France and Norway to improve the efficiency of food hazard surveillance throughout the food chain highlights how long and sensitive, yet ultimately successful, the process is. However, these foodborne hazards are not confined to specific countries but are widespread in Europe and beyond. Because animals, food, and people move between countries, establishing links between country-specific hazard maps would be useful. Likewise, efforts towards an OHS should first be made at the national level and, at some point, linked internationally.
Conclusion
During the MATRIX project, we showed that it is possible to map surveillance chains of foodborne pathogens of One Health relevance across the human health, animal health, and food safety sectors in various European countries, and the methodological approach described in this manuscript is replicable in several contexts. Although many efforts are being made to remove barriers to a better application of the One Health approach, the importance of shifting away from silo thinking should not be underestimated. The methodological approach that we presented can support the identification of new opportunities for integrating OHS, while lifting our heads and looking further than we normally do, as happens during research projects.
The raw data supporting the conclusions of this article will be made available by the authors upon request, without undue reservation.
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants was not required to participate in this study in accordance with the national legislation and the institutional requirements.
LA, GB, PG, TSk, and FC: idea and conceptualization. LA, FC, and TSk: methodology. LA, PG, and FC: data curation. LA, GB, VH, RL, ZN, TSc, TSk, and FC: original draft preparation and revision and editing. FC: supervision. All authors contributed to the article and approved the submitted version.
This work was supported by funding from the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement no. 773830: One Health European Joint Programme.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The wide range of opportunities for large language models such as ChatGPT in rheumatology
ChatGPT is a version of the language model GPT3 (a 'natural language processing' algorithm) that has been specifically trained for improved 'chat' capabilities. Meanwhile, there has already been an update to an even more performant version, GPT4. The fine details behind how it is trained and used have not been made public, but this is what we know so far: the GPT architecture is, as the name suggests, a pretrained AI model that uses a neural network (here, a transformer) to generate new text based on previous texts. The pretraining is done using an 'unsupervised' method. This involves feeding in a huge data set of text from books, newspaper articles, blogs, social media sites such as Reddit, etc and asking GPT to predict the next word in the text. In order for language models such as GPT to understand words (which have little inherent meaning for a computer), they must convert the words into vectors of numerical values, called word embeddings, which typically comprise between 300 and 1500 different values for each word. These values represent different 'traits' of a word, a simplistic example being the 'youngness' versus the 'oldness' of a word. Representing words as values like this enables models like GPT to understand which words are similar even if they use different letters (eg, 'rapid' and 'speedy'). An 'encoder' network is used to convert words into values, and a decoder converts these vectors back into meaningful text. To progress from GPT3 to ChatGPT, a technique called 'reinforcement learning with human feedback' was used. Reinforcement learning involves providing a 'reward' based on desired performance and was also used in game-playing systems such as DeepMind's AlphaZero. For ChatGPT, this involved an intermediate human step, in which different model outputs were ranked by humans in terms of quality. This intermediate human step is important and serves to check the output text for comprehensibility and meaningfulness. Finally, security precautions are taken to ensure that ChatGPT does not produce any potentially harmful content.
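The similarity idea can be made concrete in a few lines of Python. The three-dimensional vectors below are invented for illustration; real embeddings, as noted above, have hundreds to over a thousand dimensions.

import numpy as np

# Toy word embeddings (values invented; real models use 300-1500+ dimensions).
emb = {
    "rapid":  np.array([0.91, 0.10, 0.32]),
    "speedy": np.array([0.88, 0.14, 0.30]),
    "slow":   np.array([-0.85, 0.05, 0.40]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: close to 1 means similar meaning."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(emb["rapid"], emb["speedy"]))  # ~0.999, near-synonyms
print(cosine_similarity(emb["rapid"], emb["slow"]))    # negative, opposite traits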
I have used ChatGPT3 since December 2022; ChatGPT4 was launched while this article was being edited. In the beginning, I used it every day, for text formatting (typos, shortening, summarising, etc), searching for references (caution: confabulations occur, see below), helping to write introductions (not for this one), creating non-scientific medical texts and creating Excel or Python code (rather gimmicky so far). If ChatGPT3 were an employee, my job reference after 2 months would look like this: 'ChatGPT is a proficient text generator. He/she/it reliably assembles information into texts in a wide variety of languages. Spelling and grammar are largely error-free. The language engine tries to recognise connections, but does not always succeed. ChatGPT usually recognises its limitations, for example, when recommending therapies, and refers to medical professionals. The summarisation of texts is reliable. In a literature search, ChatGPT unfortunately so far only covers references up to the year 2021, which is insufficient from a scientific point of view. In some cases, references created by ChatGPT cannot be identified in PubMed. ChatGPT is extremely hard-working, can be used 24/7 and is never sick. However, for the last month or so, there have been recurring symptoms of overwork, especially during the day, but this should stop in the future through payment (so far, ChatGPT works for free). ChatGPT can apparently write its own code and algorithms, but as clinicians we do not quite understand that yet. Our trust in ChatGPT is growing, but we cannot yet have confidential data analysed by it or simply copied into the user interface. We need to talk to its parent OpenAI first and install the code in our clinic to keep the data secure. ChatGPT is well liked by the staff for its skills and diligence, but it has no sense of humour.'
So far, there is no clear evidence on how precisely ChatGPT may advance medicine. To date (March 2023), there are 36 medical publications on PubMed about the topic; 5 of them from 2022 and already 31 from 2023. Most of them are position papers and discussions from the domains of education, medical writing or automation. Many of these publications have been critical. In a recent publication, the journal Science takes a flippant view: 'ChatGPT is fun, but not an author'. Nature takes a more critical view and states in its Daily briefing: 'Science urgently needs a plan for ChatGPT'. A controversy has quickly opened up under titles such as 'ChatGPT- friend or foe?', 'Abstracts written by ChatGPT fool scientists', or 'Chatbots are a double-edged sword'. Other publications see chatbots as a great danger to science or even its 'greatest enemy'. Nature goes a step further and demands that certain basic rules be observed when dealing with ChatGPT. In contrast, no published work exists yet on how ChatGPT specifically helps with medical treatment. There are still no reports that clinical study protocols created by ChatGPT are successful, that ChatGPT improves the quality of medical reports, or that it helps with clinical decisions. In my view, the first area in which scientific evidence on ChatGPT will be generated is therapeutic education and health literacy.
In contrast to research and education, ChatGPT is seen much more positively in automation, for example, at Epic, a market leader in electronic medical records (EMR). As an EMR, Epic collects data from hundreds of millions of patients. Although only partially structured and largely in free text, more medical data come together here than in any medical registry in the world. It is no wonder that Epic, with its 'Cosmos' or 'Better Care' programmes, is training algorithms to learn through AI from the many millions of clinical decisions made, and to make them useful for individual patients. ChatGPT, as a champion of processing and generating free text, comes at just the right time. But before ChatGPT can turn into a clinical decision-support system, it will help on another level. The magic formula is called 'Automation Workflow Systems'. Combining ChatGPT and the EMR may become a weapon against rampant bureaucracy. Once the algorithm has the necessary data from the EMR available (medical history, laboratory, X-ray findings, etc), consultation reports, discharge reports, but also cost credits for health insurance companies, certificates of incapacity to work, etc, could all be created in real time and would only need to be validated. The relief for medical and administrative staff would be enormous. For doctors and nurses alone, several hours of work per week on documentation would be eliminated, time which could be much better used for patient consultations or further training.
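As a minimal sketch of such a workflow step, the snippet below asks a chat model to draft a consultation report from an already de-identified EMR extract, using the official openai Python package; the field names, prompt and model choice are illustrative assumptions, and any draft would still need to be validated by a clinician.

from openai import OpenAI  # assumes the official openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative, de-identified EMR extract; the field names are invented.
emr_extract = {
    "diagnosis": "rheumatoid arthritis, seropositive",
    "crp_mg_l": 18,
    "current_therapy": "methotrexate 15 mg/week",
    "visit_note": "persistent synovitis of MCP 2-3, right hand",
}

prompt = (
    "Draft a short consultation report for the referring physician "
    f"based on these structured data: {emr_extract}"
)

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model
    messages=[{"role": "user", "content": prompt}],
)

# The generated text is only a draft: it must be reviewed and signed off
# by the treating clinician before it enters the medical record.
print(response.choices[0].message.content)

A real deployment would, as stressed above, require a data agreement or a locally installed model so that confidential data never leave the clinic.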
AI for solving specific, often highly repetitive tasks has spread through all industries, including healthcare. Over 500 algorithms are FDA approved, mostly as classification/diagnostic tools in imaging, including one or two rheumatology indications. In the clinic, we have access to a large amount of data and we know the clinical problems we would like the algorithms to solve, preferably with a user interface directly in the EMR. The limiting factor is finding programmers or paying them. Therefore, more and more so-called no-code platforms are coming into use, in which code is generated (eg, in Python) and a user interface is directly included. An even more flexible solution than such platforms could be to have the code written by ChatGPT, with the EMR as the user interface. However, it may be risky to do so without any oversight, as ChatGPT may not appreciate the full context of the code and there is a risk of security vulnerabilities. Oversight from clinicians with programming skills, or from clinical informaticians, will likely be required for the foreseeable future.
While this viewpoint focuses more on the positive aspects of large language models, there are some important risks. For me, the biggest risk is that ChatGPT is unfortunately not always right. Sometimes, ChatGPT confidently states false information. OpenAI has been open about this from the beginning, however, and it has recently been pointed out again explicitly in the chatbot. As mentioned earlier, some of the references provided by ChatGPT during my own literature search could not be found in PubMed or Google and seemed to be confabulated. Academic reference errors, with concrete examples, are also pointed out in the OpenAI API Community Forum. The chatbot sometimes paraphrases, and since it otherwise communicates in a very qualified manner, it is difficult to spot these paraphrases in the text. It is different with a keyword search via Google: there, one at least has the external appearance of the website or the credibility of sources in terms of affiliation, organisation, peer-reviewed publications, etc. In other words, humans are sometimes too gullible towards humans and probably also towards chatbots. This means ChatGPT delivers bite-sized but ultimately unvalidated information to the user. Other critical points relate to AI as a whole, not just the chatbot. Where exactly does the data come from? How transparent is the algorithm? Is there a kind of ranking, like Google's, for the content? And finally, what does ChatGPT actually do with the data? If we feed in our own data, does ChatGPT keep learning and become uncatchable? The other open question is whether these models continue to improve as more parameters are added, or whether it is now other techniques (such as reinforcement learning from human feedback, or more fine-tuning) that will improve performance.
In March 2023, a PubMed search for the terms 'ChatGPT AND rheumatology' still returned 0 results (6 results appeared for oncology and 5 for psychiatry). No scientific publications were found in a Google search for the same terms. Conversely, there are various articles about rheumatology and ChatGPT on social media such as LinkedIn and Twitter, and in podcasts. In a recent TikTok video, the American rheumatologist Dr. Clifford Stermer asked a patient with systemic sclerosis to use ChatGPT to create an insurance letter with scientific references for the coverage of an echocardiogram. While the letter was perfectly written, some of the references were made up by the chatbot. In my opinion, this confirms that, for the moment, ChatGPT is a well-trained language model but not a scientific model. This function will certainly be improved in future versions of ChatGPT, or in language models that are more specifically trained on scientific text, such as SciBERT. In any case, for simple written automation tasks such as insurance letters, ChatGPT will definitely be a game-changer and will save healthcare professionals precious time by taking administrative work off our hands using data from EMRs. The time saved on bureaucratic tasks, such as cost credits to insurers for biological treatments or discharge reports, could be very large.
In rheumatology, there is a high proportion of patients with chronic diseases and long patient journeys, but at the same time a serious shortage of healthcare professionals. Waiting times for a consultation can be immense, and the need for information often cannot be met by general practitioners. Chatbots on the websites of hospitals, private practices, etc can simplify communication and automate at least parts of it. This also applies to telephone chatbots during times when the telephone is not staffed, for example, during breaks. On a more didactic level, chatbots may substantially contribute to health literacy in patients with chronic diseases such as arthritis. Chatbots can educate patients, for example, in the form of weekly interactive interventions on lifestyle, physical exercise, drug adherence, etc. ChatGPT provides reasonable information on questions such as 'What is the best diet when suffering from rheumatoid arthritis?' or 'I have rheumatoid arthritis, what exercises can I do for my swollen joints?'. For other therapeutic questions, such as drug therapy, ChatGPT answers with a safety caveat, but then gives some very general yet quite helpful answers: 'As an AI language model, I am not qualified to provide medical advice. However, there are certain exercises that can help alleviate symptoms of rheumatoid arthritis (RA), including swollen joints….' On the other hand, cognitive-behavioural therapy or mindfulness elements might well be provided by the chatbot, for example, for patients with primary or secondary fibromyalgia. The query 'Can you do a general cognitive behavioural therapy exercise with me?' results in a reasonable answer. Importantly, chatbots can be connected to, or directly assess, patient-reported outcomes on disease activity and quality of life, or digital biomarkers such as biometric data. Via a reward function (as part of reinforcement learning), future chatbot models will thus be able to learn which type of suggested intervention was useful and which was not; a minimal sketch of this idea follows below. ChatGPT thus inevitably moves in the direction of a DIGA (the German term for certified Digital Health Applications, which are reimbursed by health insurances), although this would of course require regulatory aspects to be worked on and evidence to be shown. Of course, this never comes close to a human coach, who may have less knowledge than ChatGPT but more senses, empathy, facial expressions and gestures, or interests and experiences similar to the patients'. In any case, chatbots and human coaches could work together on digital platforms on a broader scale.
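To make the reward-function idea tangible, here is a minimal epsilon-greedy 'bandit' sketch in Python that gradually learns which intervention type tends to improve a patient-reported outcome; the intervention names, exploration rate and reward definition are all illustrative assumptions.

import random

interventions = ["exercise", "diet", "mindfulness"]
counts = {a: 0 for a in interventions}
mean_reward = {a: 0.0 for a in interventions}
EPSILON = 0.1  # illustrative exploration rate

def choose():
    """Occasionally explore a random intervention, otherwise exploit the best so far."""
    if random.random() < EPSILON or all(c == 0 for c in counts.values()):
        return random.choice(interventions)
    return max(interventions, key=lambda a: mean_reward[a])

def update(action, reward):
    """Incrementally update the running mean reward for the chosen intervention."""
    counts[action] += 1
    mean_reward[action] += (reward - mean_reward[action]) / counts[action]

# One simulated weekly cycle: the chatbot suggests an intervention and later
# receives a reward, e.g. the normalised week-on-week change in a PRO score.
action = choose()
update(action, reward=0.4)
print(action, mean_reward)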
A further direction of development is multimodal AI, in which multiple types of data are combined, such as audio (speech), image, text and time-series data. ChatGPT can, for example, work together with speech recognition tools to create reports with automatically inserted prognoses or lay descriptions for patients. Or it could merge with DALL·E, the image generator of OpenAI. The conventional doctor's report could become interactive, understandable for patients, and contain personalised images, for example, of clinical courses or even physiotherapy exercises for the patient. These reports do not need to be static but could be animated by AI, for example, by visually integrating digital biomarkers and individualised predictions of disease activity in arthritis. Without a doubt, the greatest risk in the use of chatbots in rheumatology, as elsewhere, lies in the loneliness of patients, who are particularly dependent on interpersonal exchange. I urge that we use the freed-up time and resources wisely to gain more interpersonal time to talk about therapy, nutrition, stress management, preventive care, ability to work, physical and psychological resources, vaccinations, etc.
Following speech and image recognition, text generation by large language models, including chatbots, is now entering medicine and thus also rheumatology. It will not completely change our clinical routine, but it will make information more easily available and save time, which we will hopefully use wisely. In particular, for chronic diseases such as RA, an immense amount of data accumulates in the course of a patient journey that we neither see nor use. Chatbots will not be able to make clinical decisions for now; for this, they are dependent on the relevant guidelines or the studies that have been carried out. But they will, for example, be able to help design study protocols and make decentralised studies easier to carry out. The basis of all this is education around this new technology among both patients and healthcare professionals.
Performance of the 2022 ACR/EULAR giant cell arteritis classification criteria for diagnosis in patients with suspected giant cell arteritis in routine clinical care
This is the first external validation of the 2022 ACR/EULAR GCA classification criteria for the diagnosis of patients with suspected GCA in routine clinical practice. The new criteria performed adequately in supporting the clinical diagnosis of GCA and improved on the diagnostic accuracy of the 1990 ACR GCA classification criteria.
The 2022 ACR/EULAR GCA classification criteria may be useful to support the diagnosis of GCA in clinical practice. Further studies are necessary to better determine their diagnostic accuracy in different GCA populations.
In 1990, the American College of Rheumatology (ACR) published criteria for the classification of seven types of systemic vasculitis, including giant cell arteritis (GCA). These criteria were meant to assist in the classification of patients for inclusion in clinical trials. However, their potential use for diagnostic purposes has been explored, showing poor sensitivity (Sens). At the time they were developed, non-invasive imaging modalities were not available, so the criteria focused on clinical features, laboratory results and histological findings on temporal artery biopsy (TAB). In addition, these criteria only included features of cranial GCA and did not perform well when classifying patients with large vessel (LV) involvement, commonly termed LV-GCA. Nowadays, vascular imaging modalities have increasingly been incorporated into patient assessments; indeed, they reveal LV-GCA in more than half of suspected GCA cases. Moreover, EULAR recommendations place ultrasound (US) of the temporal (TA) and axillary arteries as the first-line imaging test, and a non-compressible halo sign may replace the need for TAB in patients with high pretest probability. Newer randomised controlled trials have applied additional inclusion criteria for patients with GCA, such as polymyalgia rheumatica, C reactive protein (CRP) or imaging (US, fluorodeoxyglucose (FDG)-positron emission tomography (PET)/CT, MRI or CT), and TAB has been replaced by imaging as the first-line diagnostic test in patients with suspected GCA in clinical practice. Therefore, new classification criteria were needed to better reflect current practice. The 2022 ACR/EULAR GCA classification criteria have recently been published using a very consistent methodology, including both a developmental cohort and a validation cohort, yielding a Sens of 87.0% and a specificity (Spec) of 94.8%. These new criteria incorporate modern imaging techniques, reflecting their growing use in routine care. Although these criteria were developed for the purpose of patient classification in research settings, a comparison of their diagnostic performance with that of the classic 1990 ACR GCA classification criteria in routine care may prove informative, since validation is a continuous process. The primary objective of this study was to examine the performance of the 2022 ACR/EULAR GCA classification criteria for diagnosis in patients with suspected GCA in routine care.
Patients
This retrospective cross-sectional study included patients referred to the US fast-track clinics (FTC) of two academic centres for the screening of possible GCA over a 4-year period (January 2018–January 2022). Patients with suspected GCA were referred for US evaluation by various specialties (rheumatology, internal medicine, emergency care, neurology) within 24–48 hours, per the protocol (excluding weekends, with delays of up to 72 hours). The gold standard for GCA diagnosis was clinical confirmation by the treating clinician after at least 6 months of follow-up. All patients with a final GCA diagnosis over the study period were compared with a cohort of unselected controls with suspected GCA evaluated at the US FTC during a 1-year period. The study was performed under routine clinical practice conditions.
Data collection
All variables included in the 2022 ACR/EULAR GCA classification criteria were collected retrospectively from the electronic medical records, including: demographics; presenting symptoms, such as new-onset headache, scalp tenderness, jaw claudication, visual loss and ocular ischaemia diagnosed by an ophthalmologist; morning stiffness in the shoulders/neck; and abnormal findings on TA examination. Additionally, we collected the proportion of patients who had a diagnosis of polymyalgia rheumatica (PMR) before the US scan. US findings are systematically registered as part of the routine practice of the fast-track clinics. Laboratory tests such as CRP, erythrocyte sedimentation rate (ESR), haemoglobin and platelets, and TAB findings (if available) were also collected. Following EULAR recommendations, TAB was only performed according to clinician criteria in case of uncertainty (negative imaging findings in patients with moderate/high pretest probability, or positive imaging findings in patients with low pretest probability). TAB results were reported as positive or negative for GCA based on the report of pathologists with >5 years of experience. A positive TAB was defined as a biopsy showing vasculitis characterised by a predominance of mononuclear cell infiltration or granulomatous inflammation, with or without the presence of multinucleated giant cells.
Imaging assessments
All patients underwent a US exam of the TA, including the common superficial TA and its parietal and frontal branches, and an LV scan of the carotid, subclavian and axillary arteries. The US exam was performed by three ultrasonographers (EdM, IM and JM-C) with >15, 10 and 5 years of experience in performing vascular US, respectively. We used two US machines: an Esaote MyLab8 (Esaote, Genoa) with 12–18 MHz (for TA) and 6–15 MHz transducers (for LV), and an Esaote MyLabTwice with 10–22 MHz (for TA) and 4–13 MHz transducers (for LV). The presence of a halo and/or compression sign in the TA, or the presence of a halo in LV in the absence of atherosclerosis, was considered sufficient for a positive US examination, in agreement with the OMERACT definitions. In cases of uncertainty, the intima-media thickness was measured to confirm the findings according to published proposed cut-off values. The ultrasonographers were not blinded to the clinical data. An FDG-PET/CT was performed per clinician criteria if necessary for diagnosis, usually in patients with high suspicion of extracranial involvement (fever, constitutional symptoms, bruits or arm claudication) or in patients with a negative US scan but high pretest probability of GCA.
All PET images were assessed by expert nuclear medicine physicians with >5 years of experience, using a Siemens Biograph 6-4R TruePoint PET/CT scanner and a Siemens Biograph Vision PET/CT scanner with 128 slices (Siemens Medical Systems, Knoxville, Tennessee, USA). An arterial FDG uptake higher than the liver uptake was defined as positive. The qualitative FDG uptakes in the aorta, its branches (carotid, axillary and subclavian arteries), and the iliofemoral and cranial arteries were also recorded.
Application of the 2022 ACR/EULAR GCA classification criteria
All clinical variables were scored according to the 2022 classification criteria as follows: morning stiffness in shoulders/neck (+2), sudden visual loss (+3), jaw/tongue claudication (+2), new temporal headache (+2), scalp tenderness (+2), abnormal examination of the TA (+2), ESR ≥50 mm/hour or CRP ≥10 mg/L (+3), positive TAB (if performed) or halo sign on TA US (+5), bilateral axillary involvement on angiography, US or FDG-PET/CT (+2) and FDG uptake throughout the aorta on FDG-PET/CT (+2). The age restriction of ≥50 years was applied. The GCA classification criteria were considered fulfilled when the sum of the 10 items yielded a total score of ≥6.
Statistical analysis
The performance of the new criteria was evaluated in all patients with GCA, as well as in four different patient subsets: 1) isolated cranial GCA, 2) isolated LV-GCA, 3) TAB-proven GCA and 4) all LV-GCA (with or without cranial GCA). Quantitative data are expressed as the mean (SD) and qualitative variables as absolute frequencies (percentages). As we had a very low percentage of missing data, which occurred in a random fashion, a complete-case analysis (listwise deletion) was conducted (the default option in the statistical software package). A χ2 test or Fisher's exact test was used to analyse differences between proportions; Student's t-test was used for comparisons between means. Criterion validity was evaluated using receiver operating characteristic (ROC) curves, with the GCA clinical diagnosis as the external criterion. All tests were two-sided; p values <0.05 were considered statistically significant. SPSS software (V.25.0, IBM, USA) was used for the statistical analysis.
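The scoring rules above translate directly into code. The following minimal Python sketch applies the published item weights, the age ≥50 years entry criterion and the ≥6-point threshold; the variable names are ours, and any real implementation would need validated data extraction from the medical records.

# Item weights of the 2022 ACR/EULAR GCA classification criteria (see text).
WEIGHTS = {
    "morning_stiffness_shoulders_neck": 2,
    "sudden_visual_loss": 3,
    "jaw_or_tongue_claudication": 2,
    "new_temporal_headache": 2,
    "scalp_tenderness": 2,
    "abnormal_temporal_artery_exam": 2,
    "esr_ge_50_or_crp_ge_10": 3,
    "positive_tab_or_halo_on_ta_us": 5,
    "bilateral_axillary_involvement": 2,
    "aortic_fdg_uptake": 2,
}

def classifies_as_gca(age, findings):
    """Apply the age >= 50 years entry criterion and the >= 6 point cut-off."""
    if age < 50:
        return False
    score = sum(w for item, w in WEIGHTS.items() if findings.get(item, False))
    return score >= 6

# Example: new temporal headache (+2), raised CRP (+3) and a halo sign on
# US (+5) give 10 points, so the criteria are fulfilled.
print(classifies_as_gca(74, {
    "new_temporal_headache": True,
    "esr_ge_50_or_crp_ge_10": True,
    "positive_tab_or_halo_on_ta_us": True,
}))  # True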
This retrospective cross-sectional study included patients referred to an US fast-track clinics (FTC) at two academic centres for the screening of possible GCA over a 4-year period (January 2018–January 2022). Patients with suspected GCA were referred for US evaluation by various specialties (rheumatology, internal medicine, emergency care, neurology) within 24–48 hours, per the protocol (excluding weekends, with delays up to 72 hours). The gold standard for GCA diagnosis was clinical confirmation by the treating clinician after at least 6 months of follow-up. All patients with a final GCA diagnosis over the study period were compared with a cohort of unselected controls with suspected GCA evaluated at the US FTC during a 1-year period. The study was performed under routine clinical practice conditions.
All variables included in the 2022 ACR/EULAR GCA classification criteria were collected retrospectively from the electronic medical records, to include: demographics; presenting symptoms, including new-onset headache, scalp tenderness, jaw claudication, visual loss and ocular ischaemia diagnosed by an ophthalmologist; morning stiffness in shoulders/neck and abnormal findings on the TA examination. Additionally, we have collected the proportion of patients who have previously had a diagnosis of polymyalgia rheumatica (PMR) before the US scan. US findings are systematically registered as part of the routine practice of the fast-track clinics. Laboratory tests such as CRP, erythrocyte sedimentation rate (ESR), haemoglobin and platelets and TAB findings (if available) were also collected. Following EULAR recommendations, TAB was only performed according to clinician criteria in case of uncertainty (negative imaging findings in patients with moderate/high pretest probability or positive imaging findings in patients with low pretest probability). TAB results were reported as positive or negative for GCA based on the report of the pathologists, with >5 years of experience. A positive TAB was considered as a biopsy showing vasculitis characterised by a predominance of mononuclear cell infiltration or granulomatous inflammation, with or without the presence of multinucleated giant cells.
All patients underwent a US exam of the TA, including the common superficial TA and its parietal and frontal branches, and an LV scan of the carotid, subclavian and axillary arteries. The US exam was performed by three ultrasonographers (EdM, IM and JM-C) with >15, 10 and 5 years of experience performing vascular US, respectively. We used two US machines: an Esaote MyLab8 (Esaote, Genoa) with 12–18 MHz (TA) and 6–15 MHz (LV) transducers, and an Esaote MyLabTwice with 10–22 MHz (TA) and 4–13 MHz (LV) transducers. The presence of a halo and/or compression sign in the TA, or the presence of a halo in LV in the absence of atherosclerosis, was considered sufficient for a positive US examination, in agreement with the OMERACT definitions. In cases of uncertainty, the intima-media thickness was measured to confirm the findings according to published proposed cut-off values. The ultrasonographers were not blinded to the clinical data. An FDG-PET/CT was performed per clinician criteria if necessary for diagnosis, usually in patients with high suspicion of extracranial involvement (fever, constitutional symptoms, bruits or arm claudication) or patients with a negative US scan but high pretest probability of GCA. All PET images were assessed by expert nuclear medicine physicians with >5 years of experience using a Siemens Biograph 6-4R TruePoint PET/CT Scanner and a Siemens Biograph Vision PET/CT Scanner 128 slices (Siemens Medical Systems, Knoxville, Tennessee, USA). An arterial FDG uptake higher than the liver uptake was defined as positive. The qualitative FDG uptakes in the aorta, its aortic branches (carotid, axillary and subclavian arteries), iliofemoral and cranial arteries were also recorded.
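To make the decision rule concrete, here is a minimal Python sketch of the US positivity logic described above. This is our illustration, not the authors' software: the per-artery IMT cut-offs are hypothetical placeholders standing in for the "published proposed cut-off values", which are not listed in the text.

```python
# Hypothetical IMT cut-offs (mm); the text refers to published values but
# does not list them, so these numbers are placeholders only.
IMT_CUTOFFS_MM = {"temporal": 0.42, "large_vessel": 1.0}

def us_positive(artery, halo, compression_sign=False,
                imt_mm=None, atherosclerosis=False):
    """Decide US positivity for one artery under the rule described above."""
    # halo and/or compression sign is sufficient in the temporal artery
    if artery == "temporal" and (halo or compression_sign):
        return True
    # in large vessels, a halo counts only in the absence of atherosclerosis
    if artery == "large_vessel" and halo and not atherosclerosis:
        return True
    # uncertain finding: fall back to the (assumed) IMT cut-off if measured
    if imt_mm is not None:
        return imt_mm >= IMT_CUTOFFS_MM[artery]
    return False

print(us_positive("large_vessel", halo=True))           # True
print(us_positive("temporal", halo=False, imt_mm=0.5))  # True via IMT fallback
```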
Application of the 2022 ACR/EULAR GCA classification criteria
All clinical variables were scored according to the 2022 classification criteria as follows: morning stiffness in shoulders/neck (+2), sudden visual loss (+3), jaw/tongue claudication (+2), new temporal headache (+2), scalp tenderness (+2), abnormal examination of TA (+2), ESR ≥50 mm/hour or CRP ≥10 mg/L (+3), positive TAB (if performed) or halo sign on TA US (+5), bilateral axillary involvement in angiography, US or FDG-PET/CT (+2) and FDG uptake throughout the aorta on FDG-PET/CT (+2). An age restriction of ≥50 years was applied. GCA classification criteria were considered fulfilled when the sum of the 10 items yielded a total score ≥6.
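The scoring rule above is a simple additive algorithm, so it can be written down directly. The sketch below is ours; the item weights, the age restriction and the ≥6 threshold come from the criteria as itemised above, while the field names are invented for illustration.

```python
# Weights exactly as itemised above; field names are ours.
WEIGHTS = {
    "morning_stiffness_shoulders_neck": 2,
    "sudden_visual_loss": 3,
    "jaw_or_tongue_claudication": 2,
    "new_temporal_headache": 2,
    "scalp_tenderness": 2,
    "abnormal_ta_examination": 2,
    "esr_ge_50_or_crp_ge_10": 3,
    "positive_tab_or_ta_halo_sign": 5,
    "bilateral_axillary_involvement": 2,
    "fdg_uptake_throughout_aorta": 2,
}

def acr_eular_2022(age, items):
    """Return (total score, fulfils classification criteria?)."""
    if age < 50:                       # absolute age requirement
        return 0, False
    total = sum(w for name, w in WEIGHTS.items() if items.get(name, False))
    return total, total >= 6           # classification threshold

# example: new temporal headache (+2) plus TA halo sign (+5) in a 72-year-old
score, fulfilled = acr_eular_2022(72, {
    "new_temporal_headache": True,
    "positive_tab_or_ta_halo_sign": True,
})
assert (score, fulfilled) == (7, True)
```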
Statistical analysis
The performance of the new criteria was evaluated in all patients with GCA, as well as in four different patient subsets: (1) isolated cranial GCA, (2) isolated LV-GCA, (3) TAB-proven GCA and (4) all LV-GCA (with or without cranial GCA). Quantitative data are expressed as the mean (SD) and qualitative variables as absolute frequencies (percentages). As the percentage of missing data was very low and missingness appeared random, a complete case analysis (listwise deletion, the default option in the statistical software package) was conducted. A χ2 test or Fisher's exact test was used to analyse differences between proportions; Student's t-test was used for comparisons between means. Criterion validity was evaluated using receiver operating characteristic (ROC) curves with GCA clinical diagnosis as the external criterion. All tests were two-sided; p values <0.05 were considered statistically significant. SPSS software (V.25.0, IBM, USA) was used for the statistical analysis.
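For readers who want to reproduce this style of analysis, the following hedged sketch transposes the described pipeline (ROC curve with clinical diagnosis as the external criterion, χ2 test for proportions, two-sided t-test) from SPSS to Python. The data generated here are synthetic; only the 2×2 US-findings table reuses counts reported in the results below.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 300)              # 0 = control, 1 = GCA (synthetic)
score = y_true * 4 + rng.normal(4, 2, 300)    # synthetic criteria totals

auc = roc_auc_score(y_true, score)            # criterion validity via ROC
fpr, tpr, thresholds = roc_curve(y_true, score)

# chi-square test for proportions, e.g. positive US findings (GCA vs controls)
us_table = np.array([[183, 5],                # positive US: GCA, controls
                     [5, 126]])               # negative US: GCA, controls
chi2, p_chi, dof, expected = stats.chi2_contingency(us_table)

# two-sided t-test for a continuous variable between the two groups
t_stat, p_t = stats.ttest_ind(score[y_true == 1], score[y_true == 0])
print(f"AUC={auc:.3f}; chi2 p={p_chi:.2e}; t-test p={p_t:.2e}")
```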
Patient characteristics
A total of 319 patients, including all 188 patients with GCA and 131 consecutive non-selected controls (of the 502 patients without GCA) evaluated at our FTCs during the study period, were included for analysis (mean age 76 years, 58.9% females). Clinical, laboratory and imaging findings of patients, with and without GCA, are presented in . Different patient subsets were determined by the treating clinician at the time of diagnosis, based on clinical or imaging findings: 83 patients had isolated cranial GCA and 37 patients had isolated LV-GCA. TAB was performed in 57 patients; 21 (42%) patients with GCA had positive histology findings according to the pathologist's criteria. Controls included 55 (42%) with PMR, 10 (7.6%) with non-specific or tensional headache, 6 (4.6%) with non-vasculitic ocular ischaemia, 5 (3.8%) with fever of unknown origin and 55 (41.9%) with other diagnoses (including cancer, infections, inflammatory arthritis or other forms of vasculitis).
Imaging findings
Positive US findings were found in 183 (97.3%) cases with GCA, and in only 5 (3.8%) controls (p<0.001). Remarkably, 98 (52.1%) patients had US signs of LV-GCA and 32 (17%) isolated LV-GCA, based only on US examination, without considering the findings of other imaging tests. FDG-PET/CT was performed in 99 patients per clinician criteria, with 32 (32.3%) showing positive findings. A total of 30 (40.5%) patients with GCA and FDG-PET/CT had abnormal artery uptake, while only 2 (8%) controls had positive findings (one patient with an IgG4-related disease diagnosis and another with non-vasculitic diffuse infiltrative disease) (p<0.01). Aortic uptake was the most frequent involvement in GCA (33.8%).
Performance of the 2022 ACR/EULAR GCA classification criteria
Overall, the new criteria had a Sens of 92.6% and a Spec of 71.8% for GCA clinical diagnosis, with the AUC measuring 0.928 (95% CI 0.899 to 0.957). The performance of each individual item included in the criteria, with GCA clinical diagnosis as the external criterion, is detailed in . The diagnostic accuracies of the 2022 ACR/EULAR and the 1990 ACR GCA classification criteria in different subsets of patients are shown in . In patients with isolated cranial GCA, the new criteria showed the highest Sens (96.4%), with an AUC of 0.962 (95% CI 0.930 to 0.993), while the group of isolated LV-GCA cases showed a lower Sens: 62.2%, with an AUC of 0.691 (95% CI 0.592 to 0.790). When we included only those patients with biopsy-proven GCA, the Sens was 100%. The 1990 criteria only performed well in the biopsy-proven GCA group (Sens 95.2% and Spec 80.2%), while the Sens was low in the overall GCA population (53.2%), particularly in the isolated LV-GCA group (18.9%), with an AUC of 0.554 (95% CI 0.455 to 0.653). We additionally calculated the accuracy of the criteria in the subgroup of patients who underwent a TAB (negative or positive for GCA). Sens and Spec for the new criteria in this population were 100% and 0%, respectively, and for the 1990 ACR criteria were 72% and 28.6%, respectively. Higher scores of the criteria (≥7 or ≥8, instead of ≥6 points) decreased Sens to 92% and 84.6%, but increased Spec to 74% and 88.5%, respectively, for GCA clinical diagnosis. We further tested the performance of the criteria by including as scoring criteria (+2) bilateral axillary involvement, and any positive imaging findings on US or FDG-PET/CT pertaining to the carotid or subclavian arteries with either unilateral or bilateral involvement.
While the overall Sens of these modified criteria slightly improved when applied to the general GCA population (from 92.6% to 94.7%), Spec findings remained the same. However, in the patient subset presenting isolated LV-GCA, the Sens considerably increased, from 62.2% to 73%.
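The threshold trade-off reported above (Sens falling and Spec rising as the cut-off moves from ≥6 to ≥7 or ≥8) is easy to recompute from per-patient scores. The helper below is an illustration with invented toy data, not the study dataset.

```python
def sens_spec(scores, diagnosis, cutoff):
    """Sensitivity and specificity of 'score >= cutoff' vs clinical diagnosis."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, diagnosis))
    fn = sum(s < cutoff and d for s, d in zip(scores, diagnosis))
    tn = sum(s < cutoff and not d for s, d in zip(scores, diagnosis))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, diagnosis))
    return tp / (tp + fn), tn / (tn + fp)

# toy parallel lists: criteria totals and final clinical GCA diagnosis
scores = [8, 7, 5, 9, 3, 6, 2, 7]
diagnosis = [True, True, True, True, False, False, False, False]

for cutoff in (6, 7, 8):
    sens, spec = sens_spec(scores, diagnosis, cutoff)
    print(f">={cutoff}: Sens={sens:.1%}, Spec={spec:.1%}")
```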
This is the first study evaluating the performance of the 2022 ACR/EULAR GCA classification criteria for diagnosis in patients with suspected GCA within a routine clinical care setting. Our analysis demonstrates that the diagnostic accuracy of the new criteria versus a clinical diagnosis after 6 months of follow-up in routine clinical practice is adequate, and substantially improves on the Sens and Spec of the 1990 ACR classification criteria. The 1990 criteria did facilitate research in vasculitis and have been widely used in clinical trials and observational studies. However, they were designed before the introduction of modern imaging techniques, which have considerably impacted diagnosis and monitoring of the disease. Importantly, these criteria were not designed as a diagnostic tool, although many rheumatologists use them as an aid for diagnostic purposes. The same applies to the 2022 ACR/EULAR criteria, as they were developed to differentiate patients with varying types of medium or LV vasculitis, after excluding potential mimics. However, in the absence of a suitable gold standard for GCA diagnosis, validation of the proposed criteria in other populations is needed. Although not developed for diagnostic purposes, these criteria may also be helpful for guiding treatment decisions in clinical practice. The new classification criteria have been developed using a very consistent methodology, including both a developmental cohort and a validation cohort. As comparators, they included Takayasu (33.5%), other vasculitis that mimic GCA and Takayasu (33.4%) and other mimics of LV vasculitis such as atherosclerosis and unspecific headaches (33.1%). Thus, there was a predominance of vasculitis cases among the comparators. Our study took a different approach: we focused on routine care and analysed patients with suspected GCA referred to FTC. According to our findings, the performance of the criteria when applied to routine care was adequate and improved on traditional criteria across every subtype, especially in the LV-GCA group, in which the 1990 ACR criteria showed very low Sens. Overall, the new criteria had a Sens of 92.6% vs GCA clinical diagnosis, which was higher than that in the original publication. However, the Spec was lower (71.8%), as we included controls evaluated in our FTC, involving symptoms that usually mimic GCA (eg, new headaches), as well as PMR-like symptoms or visual disturbances related to other conditions, all of which led to higher rates of false positives. The low Spec of the new criteria may be problematic when used as diagnostic criteria, as overdiagnosis may lead to unnecessary glucocorticoid treatment. Higher scores (≥7 or ≥8) may be necessary for use as diagnostic criteria, increasing Spec but decreasing Sens. Special consideration should be given to the patient subset with isolated LV-GCA, in whom diagnosis can be challenging due to their non-specific spectrum of symptoms. According to the study by Ponte et al, the Sens of the new criteria in this specific population is quite low (55.7%), which is in line with our own results (Sens 62.2%). Recent studies have shown that including the subclavian arteries in the US examination may improve its Sens and better support a clinical diagnosis.
Interestingly, if we include bilateral axillary involvement as a single criterion, and positive imaging findings on US or FDG-PET/CT in carotid or subclavian arteries with unilateral or bilateral involvement, the Sens increases considerably (from 62.2% to 73%) in this specific patient subset. While encouraging, these new possibilities should be further tested in larger cohorts. Our study has certain limitations. First, its retrospective design and the limited data for some ancillary studies, such as TAB and FDG-PET/CT, which were performed only per clinician criteria, may have introduced selection bias. Second, the limited data for TAB could underestimate the diagnostic accuracy of the 1990 ACR criteria when compared with the new 2022 ACR/EULAR criteria. Third, interobserver reliability was not investigated for this study, but our group has performed reliability analyses in a previous cohort, with ICCs between 0.958 and 0.979. Finally, our cohort of patients with GCA is older and the proportion of men is greater than in other populations, which may suggest a selection bias by the referring clinician. Additionally, we found few patients with abnormal examination of TA, suggesting a possible bias leading to a decrease in the Sens of the criteria. In summary, the performance of the 2022 EULAR/ACR GCA classification criteria, when applied in routine care, proved adequate and may support GCA diagnosis confirmation in tandem with clinician-based criteria. However, these results need to be confirmed in additional populations.
Population-based user-perceived experience of Rheumatic?, a digital symptom-checker in rheumatology
Digital symptom-checkers (SCs), which hold promise for improving rheumatology triage and reducing diagnostic delays, need to be user acceptable, but there is a lack of large-scale user experience studies based on real-world data. Together with patients, clinicians and eHealth experts, we have developed Rheumatic? —a widely used (>44 000 users in 16 months) and freely available online SC targeted to people with rheumatic complaints.
This is the largest (n=12 712) user experience study of a digital SC in rheumatology. The study finds that real-world users’ perception of Rheumatic? is positive, and even though the current version of Rheumatic? does not yet suggest a diagnosis or give care advice, people find it useful in summarising their complaints, and would recommend it to friends and other patients.
This study has contributed crucial end-user feedback towards optimisation of Rheumatic? that is currently being addressed in large prospective studies, including: (1) development of an algorithm for diagnosis and care advice, (2) inclusion of more targeted questions and (3) assessment of symptoms described as free text using artificial intelligence—with the ambition to integrate Rheumatic? in standard healthcare.
Diagnostic delay is a big challenge in rheumatology, and there is a need to accelerate access to specialist care and therapy for people with inflammatory rheumatic diseases (IRDs), as early diagnosis and treatment are key for improving clinical outcome. At the same time, up to 60% of patients with rheumatic complaints visiting rheumatologists do not have IRDs. With an ageing population, this group will grow, together with the cost and burden on the healthcare system. Hence, there is a need to improve rheumatology triage. Here, digital preassessment tools could be helpful. Online symptom-checkers (SCs) are patient-facing diagnostic decision support systems with the potential to reduce diagnostic delays and errors. A handful of studies exploring digital SCs within rheumatology have been performed, yet such tools are not commonly used in routine care, partly due to limited diagnostic accuracy and a lack of large-scale validation studies based on real-world data. We have developed a digital SC called Rheumatic? together with patients. When evaluated in a retrospective multicentre validation study, Rheumatic? demonstrated high discriminative performance in identifying individuals who would develop rheumatoid arthritis in an at-risk population (area under the receiver operating curve (AUC-ROC): 75.3%) and in differentiating IRDs from other musculoskeletal problems in individuals with early joint swelling (AUC-ROC: 79%). However, when clinicians already suspected an autoimmune IRD, Rheumatic? had less discriminative power (AUC-ROC: 53.6%). To optimise the scoring system and further evaluate self-reported symptoms of rheumatic and musculoskeletal diseases (RMDs), Rheumatic? is currently being investigated in a number of ongoing prospective studies, and a public version—providing a symptom overview without diagnostic scores or care advice—is available at https://rheumatic.elsa.science/, in English and Dutch. To date, 44 395 people have completed this public version of Rheumatic?. Fundamental for eHealth tools is that they are user acceptable, as also pointed out by the European Alliance of Associations for Rheumatology. Hence, in this study, we have assessed usability and acceptance of this increasingly used digital SC in a real-world setting.
Study design
Study participants were recruited from an ongoing Dutch longitudinal observational prospective study. Briefly, since July 2021, people with musculoskeletal complaints searching online for information were directed to Rheumatic? via the Dutch Arthritis Association website or through social media campaigns. People who completed Rheumatic? and gave online consent via a tick-box consent form were asked to fill out the user experience survey within 1 week. The study population comprises adults (≥18 years) with musculoskeletal complaints, who are fluent in Dutch and have an email address. Questions regarding diagnoses, interventions and type of care are sent out at 3, 6 and 12 months, and are not reported on here. Study endpoints include: (1) referral to rheumatologist, (2) inflammatory versus non-inflammatory diagnosis and (3) specific diagnosis.
User experience survey
The user experience survey included five questions on usability and acceptability of Rheumatic?, with responses recorded on an 11-point (0–10) rating scale. In addition, an open-ended question concerning participants' own suggestions for improving Rheumatic? was included. See the supplementary material for a more detailed description of the rationale behind the survey questions and response analysis.
Statistics
Data analysis was performed in R, V.4.4.2; the t-test or Wilcoxon rank test was used for normally and non-normally distributed values, respectively; linear regression was used for continuous dependent variables (scores and age groups), with p values calculated for the complete distribution of the tested variable. P values <0.05 were considered significant.
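As an illustration of the analysis described above, the sketch below transposes it from R to Python: a t-test for normally distributed scores, a Wilcoxon rank test (Mann-Whitney U for independent samples) otherwise, and a linear regression of the 0–10 score on the ordered age category. All data are synthetic and the variable names are ours.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age_cat = rng.integers(0, 7, 500)      # ordered age categories (e.g. 18-29 .. 80+)
sex = rng.integers(0, 2, 500)          # 0 = women, 1 = men (toy grouping)
score = np.clip(6 + 0.1 * age_cat + rng.normal(0, 2, 500), 0, 10)

# normally distributed outcome: t-test between groups
t_stat, p_t = stats.ttest_ind(score[sex == 0], score[sex == 1])

# non-normally distributed outcome: rank test (Mann-Whitney U)
u_stat, p_u = stats.mannwhitneyu(score[sex == 0], score[sex == 1])

# linear regression of score on ordered age category
slope, intercept, r, p_lr, se = stats.linregress(age_cat, score)
print(f"t-test p={p_t:.3f}; rank-test p={p_u:.3f}; "
      f"slope={slope:.2f}/category (p={p_lr:.3g})")
```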
Who do we capture with Rheumatic?: baseline characteristics of the study population
By September 2022, 24 271 individuals had completed Rheumatic?. Of these, 24 061 also received the user experience survey, which 53% completed. The response rate was higher among people ≥50 years (63%) compared with people <50 years (38%), p<0.0001. Thus, participants in the user experience survey were somewhat older than the total approached group (71% ≥50 years vs 60%, p<0.0001). The study participants were normally distributed over the different age categories, with a peak at age 50–59, and a majority of women (78%). The proportion of women and men differed between age groups, with 9%–18% men in younger age groups (<60 years), increasing to 32%–47% in older age groups (≥60 years), p<0.0001.
What do the study participants think of Rheumatic?: user experience survey results
Rheumatic? is composed of 17–76 questions (depending on previous answers given), with a median completion time of 10.4 min. When asked 'How appropriate did you find the number of questions?', 61% answered that the number of questions was good (scored 4–6), with more women (62%) being positive than men (57%), p<0.0001. Those who did not find the number of questions appropriate mainly thought Rheumatic? had too many questions (36% scored 7–10). Younger people were more satisfied than older, p<0.0001. A large majority (90%) found the questions in Rheumatic? to be clear (scored ≥6), with no difference between women and men or between age groups. Less than 4% disagreed (scored ≤4). The mean score was 7.8. A majority also found the test useful (78% scored ≥6; mean score 6.8), while 9% did not (scored ≤4). Older people were more satisfied than younger (increasing score of 0.1 points per age category, p<0.0001). Seventy-six per cent (74% women; 80% men) thought the questionnaire gave them an opportunity to describe their complaints well (scored ≥6), while 11% of women and 9% of men did not agree (scored ≤4). The mean score was 6.6 for women and 6.9 for men, p<0.0001, with no difference between age groups. Seventy-four per cent (74% women; 76% men) would recommend Rheumatic? to a friend or other patient (scored ≥6), while 10% would not (scored ≤4). The mean score was 6.9. Older people were more positive than younger (increasing score of 0.09 points per age category, p<0.0001).
Study participants' suggestions to improve Rheumatic?
Twenty-six per cent provided comments on how to improve Rheumatic?. The most common suggestions were to provide more detailed questions, particularly regarding their own complaints (39%), to provide more open-ended questions (28%), and to suggest a diagnosis (14%) or give care advice (8%). Notably, only 2% suggested a reduction in the number of questions.
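The score bands used throughout these results (≥6 agreement, ≤4 disagreement, with the mid-scale 4–6 read as 'good' only for the number-of-questions item) can be summarised with a small helper such as the sketch below; the ratings are invented for illustration.

```python
from collections import Counter

def band(score):
    """Band a 0-10 rating: >=6 agreement, <=4 disagreement, 5 in between."""
    if score >= 6:
        return "agree"
    if score <= 4:
        return "disagree"
    return "neutral"

responses = [8, 7, 3, 6, 10, 4, 5, 9, 2, 7]          # invented 0-10 ratings
counts = Counter(band(s) for s in responses)
n = len(responses)
for label in ("agree", "neutral", "disagree"):
    print(f"{label}: {counts[label] / n:.0%}")
print(f"mean score: {sum(responses) / n:.1f}")
```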
Given that Rheumatic? is increasingly being used by patients and clinicians, we have performed a user experience study among real-world users to explore whether it could and should be improved. To the best of our knowledge, this is the largest user-evaluation study of a digital SC within rheumatology. A great majority of study participants found Rheumatic? useful (78%) and thought the questionnaire gave them an opportunity to describe their complaints well (76%). Three in four would recommend Rheumatic? to friends and other patients. Contrary to what was found in other studies, older people were in some measures more positive than younger, but differences were small. Most of the participants' suggestions for improvement are being addressed in ongoing research, including (1) development of an algorithm for diagnosis and care advice and (2) assessment of symptoms described as free text. The overall survey response rate was 53%, which is higher than generally reported for web surveys. Still, with 47% not completing the survey, we acknowledge the risk of bias. The response rate was highest (63%) among people ≥50 years, and we speculate that this may to some extent be related to the fact that older people are more likely affected by RMDs, and thus possibly more motivated to contribute to the survey. The age distribution reflects the population with RMDs well, and, given a lifetime risk of developing IRDs of 8.4% in women and 5.1% in men, the ratio of women to men was also as expected. The major patient-perceived shortcoming of Rheumatic? was the number of questions; 36% thought there were too many questions. At the same time, many participants suggested adding more specific questions about their own symptoms. This balance between individual needs and generalisability remains a challenge. In a separate study, we will investigate whether particular questions and the total number of questions can be improved. A weakness of the study is that we lack data on socioeconomic status and health literacy; thus, we cannot exclude that socioeconomically disadvantaged groups or people with low health literacy may be underrepresented. We have also not assessed digital literacy. Notably, a key concern in the eHealth era is that the electronic format may exclude people with digital illiteracy and people without access to smartphones or the internet. Moreover, results from the Dutch population may not be applicable to users in other parts of the world. These are important aspects to address in future research. In summary, the good response rate to the user experience survey allows us to conclude that Rheumatic? is well accepted by people with RMD symptoms. Ongoing prospective studies will clarify if the high diagnostic accuracy of Rheumatic?—identified in a retrospective study—can be validated in a real-world setting. In its current form, Rheumatic? offers the increasing number of people googling RMD symptoms a comprehensive summary of complaints as a basis for clinical consultation, generated from a 10 min online questionnaire developed by patients, researchers and clinicians together with eHealth experts.
Neurologic Physiology after Removal of Therapy (NeuPaRT) study: study protocol of a multicentre, prospective, observational, pilot feasibility study of neurophysiology after withdrawal of life-sustaining measures
Current organ donation after circulatory determination of death (DCDD) processes assume, but do not explicitly confirm, permanent loss of brain activity when death is determined 5 min after circulatory arrest. While this assumption is rooted in a strong physiological rationale, a lack of neurophysiological evidence regarding cessation of brain activity in humans contributes to ethical concerns and ongoing mistrust of the DCDD process among healthcare and public stakeholders. Healthcare providers may be uncertain whether waiting 5 min after circulatory arrest is sufficient to declare death in DCDD and to ensure permanent cessation of all brain activity prior to organ retrieval. Furthermore, ensuring protection from suffering is critical to maintaining trust among donor families, healthcare providers and the public. Rigorous scientific evidence to determine when brain activity stops relative to circulatory arrest will help to confirm the safety of existing procedures and promote trust in the DCDD process. A recent international multicentre study confirmed that waiting 5 min after circulatory arrest before death determination is sufficient to ensure permanence of cessation of systemic circulation. Cessation of circulation is necessary to confirm death prior to organ retrieval. The cessation of circulation implies absence of brain function. However, it is not known if this time is sufficient to ensure permanent cessation of brain activity and to avoid donor harm. By objectively confirming when brain activity stops relative to circulatory arrest after withdrawal of life-sustaining measures (WLSM), our study will help inform the appropriate duration for the observation period prior to determination of circulatory death in deceased organ donation that will avoid donor harm and optimise the quality of donated organs. The temporal relationship between cessation of brain function and circulatory arrest may be affected by several patient-level and practice-level factors. Approaches to WLSM may affect cessation of circulation and brain activity, and these practices are known to vary among institutions and geographical regions. For example, at some centres patients are extubated, while at other centres they remain intubated despite the withdrawal of other life-sustaining measures. Early extubation results in earlier hypoxia, which may accelerate cessation of brain activity relative to circulatory arrest. Furthermore, variation in the aetiology of critical illness among different intensive care units (eg, neurological vs trauma vs cardiovascular units) may affect the dying process after WLSM. Thus, the time to arrest of brain activity may vary among institutions. Multicentre research is needed to ensure a representative cross-section of practice and enhance the external validity of research investigating cessation of brain electrical activity. In preparation for a large multicentre study, we will conduct a pilot multicentre feasibility trial to assess the feasibility of recording neurophysiological data in adult patients during the dying process after WLSM at multiple sites. Results of this study will inform the design and conduct of a future large multicentre trial that will elucidate the temporal relationship between cessation of cortical and brainstem activity, cerebral blood flow velocity and circulatory arrest after WLSM in the intensive care unit. By informing DCDD practice, results of a future large trial will promote stakeholder trust and ensure donor protection from harm.
Patient and public involvement
A donor family partner has been involved in this study from the time of application for funding of the multicentre study and continues to contribute to study activities at steering committee meetings. The donor family partner will not be involved in study recruitment, but will be most involved in data interpretation and dissemination, as well as choosing which information to share with the public and the optimal language and format.
Study objectives
This is a multicentre prospective observational cohort feasibility study that will measure cortical and brainstem electrical activity, cerebral blood flow velocity and arterial blood pressure in adult patients during the dying process after WLSM in the intensive care units. Our primary objective is to determine the feasibility of patient accrual for assessing cortical electrical activity and cerebral blood flow velocity measured using electroencephalography (EEG) and transcranial Doppler (TCD) at each site and to identify challenges to patient accrual. Our secondary objectives are to determine: (a) the proportion of patients with complete EEG, TCD and arterial pulse pressure waveforms; (b) the proportion of patients with complete transfer of waveform data to the London Health Sciences Centre (LHSC) site, and challenges to transferring complete waveform data; (c) the time difference between circulatory arrest and cessation of EEG and TCD signals; (d) an estimate of arterial pulse pressure and blood oxygenation at the time of cessation of EEG and TCD signals; (e) accrual of patients who complete evoked potentials and event-related potentials (ERP) at the LHSC site; (f) the time difference between circulatory arrest and cessation of somatosensory evoked potentials (SSEP), brainstem auditory evoked potentials (BAEP) and ERP signals.
Consent
Because participants are not expected to have capacity, written informed consent will be obtained from the legally authorised substitute decision maker/surrogate for the participant. Building on our experience from the DePPart study, the research team will obtain consent only after the clinical healthcare team and surrogate have reached a consensual decision for WLSM. After meeting with the organ donation organisation, the clinical team will seek permission from the surrogate to be approached about a research study. Supports will be provided to the surrogate as required (eg, palliative care medicine, social work, chaplaincy) and the informed consent process will not continue if it causes additional distress for the surrogate, as stated by the surrogate or perceived by the research team. Informed written consent will be obtained by the research team prior to initiation of study procedures.
Participants
This study will enrol patients from the intensive care units at five participating academic centres (LHSC, Foothills Medical Centre in Calgary, the Ottawa Hospital, Kingston Health Sciences Centre and the Centre Hospitalier de l'Université de Montréal) beginning in August 2022 for a duration of 3 years. We will approach the substitute decision maker of consecutive patients who are >18 years old, have a consensual plan for WLSM in the intensive care unit, have an indwelling arterial cannula for monitoring arterial pulse pressure, and whose attending physicians anticipate death within 24 hours of WLSM. Patients fulfilling criteria for death by neurological criteria or with injuries that anatomically preclude neuromonitoring will be excluded.
Continuous video-EEG
EEG will be recorded (10–20 International System, Natus Neuroworks, Oakville, Canada) using the American Clinical Neurophysiology Society guidelines for EEG in suspected cerebral death. Electrode impedances will be maintained within 100–10 000 ohms. Interelectrode distances will be 10 cm. Digital tracings will be read by two certified electroencephalographers at LHSC, blinded to clinical and demographic patient characteristics, at a sensitivity of 2 µV/mm. To mitigate artefacts, we will use a non-cephalic channel and standard video monitoring to exclude sources of artefact in the environment. The video component of the EEG will focus on the participants' bed and will not include other aspects of the room. Video-EEG is standard of care in critical care EEG.
Cerebral blood flow
Cerebral blood flow will be monitored using standard TCD to record flow velocity bilaterally in the middle cerebral arteries. We will use a 2 MHz pulsed probe to identify the middle cerebral arteries. After locating flow, we will secure the Doppler probes in place with a head harness, which will enable researchers to leave the room and provide the family with privacy. While insonation of the carotid and vertebral arteries would enable a more complete assessment of brain blood flow, it would require operator presence and changing the patient's head position throughout the dying process, which would intrude on patient and family privacy. Furthermore, the intermittent nature of these measurements would preclude temporal correlation with EEG.
Haemodynamic monitoring
We will use standard haemodynamic monitors to record arterial pulse pressure using an existing indwelling arterial catheter, ECG and arterial oxygen saturation (SpO2) from a plethysmography pulse oximeter. Data from haemodynamic monitors will be captured from bedside monitors. While bedside monitors differ between sites, we will collate data from different sites/monitors as previously reported.
Event-related, somatosensory and BAEP
Event-related and evoked potentials will be performed in 18 patients at LHSC only. These patients will be enrolled in addition to the cohort of patients undergoing EEG and TCD at LHSC. Standard evoked potential paradigms will follow the American Clinical Neurophysiological Society guidelines for auditory evoked potentials or short-latency SSEP. Briefly, evoked potentials involve the presentation of discrete stimuli (auditory or somatosensory) that repeat at prescribed intervals. We will present a series of repetitive, brief (100–300 μs) auditory or somatosensory stimuli. Auditory stimuli will consist of either clicks or beeps presented into one ear only. Electrodes will be placed on the scalp vertex (Cz according to the 10–20 EEG placement system) and at the earlobes (A1/2) and will record the resultant electrical responses of the entire auditory pathway, known to occur within 10 ms from source generators in the brainstem and as late as 300 ms in higher-order cortical processing areas after stimulus presentation in healthy participants. Somatosensory stimuli will involve electrical median nerve stimulation at the wrist crease unilaterally. The stimulation produces visible abduction of the thumb. Electrodes placed on the scalp at CP3/4 (over primary somatosensory cortical areas) will record the electrical responses of the primary somatosensory system within 20–35 ms after stimulus presentation.
Study protocol
See the accompanying figure for a schematic representation of study procedures.
The research team will apply neuromonitors (EEG, TCD, ERP, SSEP or BAEP) prior to WLSM, start recording and leave the room to provide the family with privacy. In our experience, this setup takes approximately 30 min. For any given patient we will not use more than two neuromonitors (eg, EEG plus TCD). Each neuromonitor will be applied by a trained research technician. First, we will apply EEG and/or ERP/EP electrodes using standard clinical procedures. We will then use TCD probes to identify the middle cerebral arteries. When the appropriate signal is identified, the probes will be fixed and held in place for the duration of the monitoring period using the provided head harness. Where feasible, the research team will take advantage of clinically indicated monitoring already in place at the time of study enrolment. Once neuromonitors are applied, technicians will exit the room and the research team will initiate recordings and collect at least 10 min of baseline data prior to WLSM. There will be no restrictions on families' presence at the bedside as a result of the patient's participation in the study. The research team will not participate in any other aspect of end-of-life care, which will be overseen by the primary care team. The family or healthcare team will be able to stop study procedures at any point during end-of-life care if they no longer wish to participate. The clinical team will withdraw life-sustaining measures in accordance with national guidelines and standard hospital protocols. As per standard clinical practice, the bedside nurse may place bedside monitors in comfort mode to silence alarms, and they will ensure that the full range of possibilities, including the very lowest values, will be visible on the screen. This study is observational and, to prevent changes to the standard of care as a result of neuromonitoring data, families and critical care staff will be blinded to neuromonitoring data by turning away/shielding neuromonitor screens from clinical staff. Data recording will continue for 30 min following circulatory arrest to ensure that we capture permanent cessation of all signals, or will stop at 6 hours after initiation of WLSM. For DCDD donors, recording will stop 5 min after circulatory arrest to enable procedures for organ retrieval surgery. If any monitoring equipment (ie, ECG leads, TCD probes) is detached at the request of staff or surrogates or for the purpose of organ donation, the subject will not be excluded from analysis; we will analyse data up to that point in time and consider this in relation to study feasibility. To enable synchronisation of neurological and haemodynamic data during data analysis, the clocks on neurological monitors will be synchronised with the haemodynamic monitor clock.
Clinical data collection
Participant demographics, admission diagnosis and clinical information will be collected to assess baseline characteristics of the study group. Clinical information will include age, sex, height, weight, admission to critical care diagnosis, Acute Physiology and Chronic Health Evaluation II score, Glasgow Coma Scale, organ donation assessment by the local organ donation organisation, type of neuromonitors used, type/level of invasive/non-invasive mechanical ventilation (if applicable), whether the patient was receiving renal replacement therapy, mechanical circulatory support, and arterial/venous blood gas and serum lactate in the 24 hours prior to WLSM.
We will also record the sedation score in the 12-hour period prior to WLSM (eg, Richmond Agitation-Sedation Scale). Some of these covariates will be used in the exploratory analysis to determine if they affect the temporal relationship between cessation of brain activity and circulatory arrest. In addition to recording haemodynamic and neurological waveform data, we will record the following clinical variables during WLSM and for 30 min following circulatory arrest: hourly cumulative dose of sedative, analgesic, anxiolytic or neuromuscular blocker agents before and after WLSM; hourly cumulative dose of vasopressors and inotropes; time of removal of life-sustaining measures (non-invasive/invasive ventilation, renal replacement therapy, mechanical circulatory support); and details regarding the clinical determination of death (date, time and who determined death).
Data management and validation
All waveform data will be acquired from bedside monitors at each study site. They will be transferred to the LHSC site via secure file transfer. We will verify the completeness of all waveforms for required elements, including duration of recording, inclusion of baseline recording, circulatory arrest, recording for 30 min following determination of death (5 min in DCDD) and the ECG recording required for waveform synchronisation. Waveforms will be adjudicated by two qualified physicians, with a third adjudicator if disagreement arises.
Sample size
To assess patient accrual, our primary feasibility outcome, we plan to recruit patients for a period of 18 months across five sites. We expect to enrol 1 patient/site/month for a total of 90 patients over 18 months. This is based on recruitment achieved during pilot work. If we enrol <9 patients/site after 18 months, we will conclude that the larger study will not be feasible and the study approach will need to be re-evaluated. At the LHSC site, we plan to enrol an additional 1 patient/month for 18 months (total 18 patients) for EP, SSEP and BAEP studies, given the unique technical abilities at this site. Similar enrolment rates were achieved in a single-centre pilot study. To understand feasibility challenges and modify the research plan for a larger study, we will analyse study accrual, complete waveform data and success of data transfer to LHSC as outcomes regardless of the number of patients enrolled.
Data analysis
We will use descriptive statistics to summarise the feasibility outcomes. For categorical variables, frequencies and percentages will be tabulated. For continuous variables, means, medians, SD, IQRs, maximum and minimum will be tabulated. We will use MATLAB to synchronise and process waveform data, and SPSS to compute summary statistics. We will analyse each outcome as follows.
Patient accrual: We will compute the proportion of patients who were eligible for enrolment, were enrolled and completed the full study protocol. We will identify those not enrolled due to lack of research coordinators, EEG/TCD/SSEP/BAEP/ERP technicians or equipment. Accrual will be assessed on a per-site basis.
Waveform data: We will report the number of patients who have complete EEG/TCD/ERP/SSEP/BAEP and arterial pulse pressure data. We will summarise the reasons for all missing or incomplete EEG/TCD/ERP/SSEP/BAEP and arterial pulse pressure data (eg, technician or equipment unavailable; equipment malfunction; other technical challenges; time of WLSM).
Waveform data transfer: We will report the number of patients from non-LHSC sites who successfully transfer all EEG, TCD and arterial pulse pressure data to LHSC. Successful data transfer will be defined as a complete set of data files that is transferred and can be successfully opened for analysis at LHSC.
Time difference between circulatory arrest and cessation of brain activity: In each patient, we will use synchronised waveform data and MATLAB to plot and record the time of first cessation of EEG/TCD/ERP/SSEP/BAEP and arterial pulse pressure signals. We will then calculate the time difference between circulatory arrest and cessation of EEG/TCD/ERP/SSEP/BAEP signals. Both qualitative (visual inspection of raw EEG) and quantitative (coherence analysis) EEG analyses will be performed. We will pool data across patients to calculate the mean and SD for the sample across patients. Given the difference in patient mix and approaches to WLSM between sites, we will stratify our analysis by site. The cessation of each waveform signal will be defined as follows:
EEG signal: Defined based on the American Clinical Neurophysiology Society Guidelines for electrocerebral inactivity as identified by no EEG activity over 2 µV, without resumption of amplitude over 2 µV, when recording from electrode pairs 10 or more cm apart. The exact time of electrocerebral inactivity will be taken as the onset of <2 µV sustained for at least 60 s and will be determined by visual inspection by two adjudicators who are qualified electroencephalographers.
TCD signal: Defined based on previously published definitions as the appearance of Doppler spectra suggesting biphasic oscillating flow, or small systolic spikes of <200 ms duration with a peak systolic velocity of <50 cm/s. The exact time of cessation of cerebral blood flow will be taken as the onset of these criteria being met for at least 60 s and will be determined by two adjudicators qualified in ultrasonography.
Arterial pulse pressure (ie, circulatory arrest): Defined as a pulse pressure of ≤5 mm Hg that persists for at least 60 s. The exact timing of cessation of arterial pulse pressure will be determined by two blinded adjudicators. Discrepancies will be resolved by consensus by a panel of experts in neurocritical care.
Evoked potentials and ERPs: Cessation of brainstem function will be defined as the time of loss of wave V, indicating loss of function within the rostral pons. Cessation of cortical function may be defined as the time of cessation of a 40 Hz auditory steady state response, which is a type of ERP generated in the primary auditory cortex in the supratemporal plane. The time of loss of wave V will be taken as the onset of these criteria being met for at least 60 s and will be determined by two adjudicators.
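The sustained-threshold definitions above lend themselves to a simple onset-detection routine. The sketch below is our illustration, not the study's MATLAB pipeline: a signal is deemed ceased at the onset of the first period in which it stays below its criterion for at least 60 s, and the outcome of interest is the difference between that onset for a brain-activity signal and the time of circulatory arrest.

```python
import numpy as np

def cessation_onset(t, x, threshold, min_duration_s=60.0):
    """Return the time at which x first drops below threshold and stays
    below it for at least min_duration_s; None if never sustained."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    start = None
    for i, below in enumerate(x < threshold):
        if below:
            if start is None:
                start = i
            if t[i] - t[start] >= min_duration_s:
                return t[start]
        else:
            start = None
    return None

# toy 1 Hz recording: EEG amplitude (uV) and arterial pulse pressure (mm Hg)
t = np.arange(0.0, 1800.0)
eeg_uv = np.where(t < 900, 20.0, 1.0)        # falls below 2 uV at t = 900 s
pulse_mmhg = np.where(t < 840, 40.0, 3.0)    # <=5 mm Hg from t = 840 s

t_eeg_stop = cessation_onset(t, eeg_uv, threshold=2.0)
t_arrest = cessation_onset(t, pulse_mmhg, threshold=5.0001)  # criterion is <=5
print(t_eeg_stop - t_arrest)                 # 60.0 s: EEG outlasts arrest here
```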
Data analysis plan
We will analyse each outcome as follows, applying the descriptive statistics described above:
Patient accrual: We will compute the proportion of patients who were (a) eligible for enrolment, (b) enrolled, (c) completed the full study protocol and (d) not enrolled due to lack of research coordinators, EEG/TCD/event-related/SSEP/BAEP technicians, or equipment. A minimum of 80% of patients will be required to have a complete dataset.
Complete waveform data: We will compute the number of patients who have complete EEG/TCD/ERP/SSEP/BAEP and arterial pulse pressure signals. A complete dataset for each signal will be defined as an adequate waveform signal that (a) spans circulatory arrest, (b) includes data for at least 80% of the planned observation period and (c) has a clearly identifiable time of cessation (as defined above).
Waveform data transfer to LHSC: We will compute the number of patients from non-LHSC sites who have successful transfer of all EEG, TCD and arterial pulse pressure data to LHSC. A minimum of 80% successful data transfers will be required.
Time difference between circulatory arrest and cessation of brain activity: In each patient, we will use synchronised waveform data and MATLAB to plot and record the time of first cessation of EEG/TCD/EP/SSEP/BAEP and arterial pulse pressure signals. We will then calculate the time difference between circulatory arrest and cessation of EEG/TCD/EP/SSEP/BAEP signals. We will pool data across patients to determine whether the data fit a normal distribution and to calculate the mean/median and SD/IQR/CI for the sample across patients. Given the difference in patient mix and approaches to WLSM between sites, we will report patient characteristics and cause of death by site and compute the differences between sites. We will test whether the average time differences between sites differ. Data from all sites will be pooled and a meta-analysis will be performed to synthesise the average time differences across sites. We will perform a regression analysis to examine whether factors such as cause of death, approach to WLSM, age, sex and medication exposure influence the time difference. Cessation of each brain activity signal will be defined as outlined in the previous section.
Requests for data sharing should be directed to the principal investigator (TG) and will be considered on a case-by-case basis and with approval from Clinical Trials Ontario and the Western Health Sciences Research Ethics Board. No video will be shared at any time.
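As a hedged sketch of the planned pooled analysis, the code below combines per-site mean time differences with inverse-variance weights and fits a simple regression for one covariate. The study's actual meta-analytic and regression models are not specified in the protocol text, so this is only one plausible reading; all numbers are invented.

```python
import numpy as np
from scipy import stats

# per-site (mean, SD, n) of the arrest-to-cessation time difference, seconds
site_stats = [(45.0, 30.0, 14), (60.0, 40.0, 11), (30.0, 25.0, 9),
              (50.0, 35.0, 10), (55.0, 20.0, 8)]

weights = np.array([n / sd**2 for _, sd, n in site_stats])  # inverse variance
means = np.array([m for m, _, _ in site_stats])
pooled = float(np.sum(weights * means) / np.sum(weights))
pooled_se = float(np.sqrt(1.0 / np.sum(weights)))
print(f"pooled mean difference: {pooled:.1f} s (SE {pooled_se:.1f})")

# exploratory regression of the time difference on one covariate (age)
rng = np.random.default_rng(2)
age = rng.uniform(40, 85, 52)
time_diff = 40 + 0.2 * age + rng.normal(0, 20, 52)
slope, intercept, r, p, se = stats.linregress(age, time_diff)
print(f"age effect: {slope:.2f} s/year (p = {p:.3f})")
```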
We will use standard haemodynamic monitors to record arterial pulse pressure using an existing indwelling arterial catheter, ECG and arterial oxygen saturation (SpO2) from a plethysmography pulse oximeter. Data from haemodynamic monitors will be captured from bedside monitors. While bedside monitors differ between sites, we will collate data from different sites/monitors as previously reported. Event-related and evoked potentials will be performed in 18 patients at LHSC only. These patients will be enrolled in addition to the cohort of patients undergoing EEG and TCD at LHSC. Standard evoked potential paradigms will follow the American Clinical Neurophysiological Society guidelines for auditory evoked potentials or short-latency SSEP. Briefly, evoked potentials involve the presentation of discrete stimuli (auditory or somatosensory) that repeat at prescribed intervals. We will present a series of repetitive, brief (100–300 μs) auditory or somatosensory stimuli. Auditory stimuli will consist of either clicks or beeps presented into one ear only. Electrodes will be placed on the scalp vertex (Cz according to the 10–20 EEG placement system) and at the earlobes (A1/2) and will record the resultant electrical responses of the entire auditory pathway, which in healthy participants arise as early as 10 ms after stimulus presentation from source generators in the brainstem and as late as 300 ms in higher-order cortical processing areas. Somatosensory stimuli will involve electrical median nerve stimulation at the wrist crease unilaterally. The stimulation produces visible abduction of the thumb. Electrodes placed on the scalp at CP3/4 (over primary somatosensory cortical areas) will record the electrical responses of the primary somatosensory system within 20–35 ms after stimulus presentation. See for a schematic representation of study procedures. The research team will apply neuromonitors (EEG, TCD, ERP, SSEP or BAEP) prior to WLSM, start recording and leave the room to provide the family with privacy. In our experience this set-up takes approximately 30 min. For any given patient we will not use more than two neuromonitors (eg, EEG plus TCD). Each neuromonitor will be applied by a trained research technician. First, we will apply EEG and/or ERP/EP electrodes using standard clinical procedures. We will then use TCD probes to identify the middle cerebral arteries. When the appropriate signal is identified, the probes will be fixed and held in place for the duration of the monitoring period using the provided head harness. Where feasible, the research team will take advantage of clinically indicated monitoring already in place at the time of study enrolment. Once neuromonitors are applied, technicians will exit the room and the research team will initiate recordings and collect at least 10 min of baseline data prior to WLSM. There will be no restrictions on families' presence at the bedside as a result of the patient's participation in the study. The research team will not participate in any other aspect of end-of-life care, which will be overseen by the primary care team. The family or healthcare team will be able to stop study procedures at any point during end-of-life care if they no longer wish to participate. The clinical team will withdraw life-sustaining measures in accordance with national guidelines and standard hospital protocols.
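Evoked and event-related potentials of the kind described above are conventionally extracted by averaging many stimulus-locked EEG epochs, so that activity not time-locked to the stimulus averages toward zero. The short Python sketch below is illustrative only: the protocol specifies clinical Natus equipment and MATLAB-based processing, and the sampling rate, epoch window, function name and synthetic data here are assumptions rather than study specifications.

import numpy as np

fs = 1000                    # assumed sampling rate in Hz (not specified by the protocol)
pre_s, post_s = 0.05, 0.30   # assumed epoch window: 50 ms before to 300 ms after each stimulus

def average_evoked_response(eeg, trigger_times):
    """Average stimulus-locked epochs of a 1-D EEG trace to estimate an evoked potential."""
    n_pre, n_post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for t in trigger_times:
        i = int(t * fs)
        if i - n_pre >= 0 and i + n_post <= len(eeg):
            seg = eeg[i - n_pre:i + n_post]
            epochs.append(seg - seg[:n_pre].mean())  # baseline-correct on the pre-stimulus interval
    return np.mean(epochs, axis=0)

# Synthetic demonstration: a small deflection 10 ms after each stimulus, buried in noise,
# becomes visible once many epochs are averaged.
rng = np.random.default_rng(0)
triggers = np.arange(1.0, 61.0, 0.5)           # one stimulus every 500 ms for a minute
eeg = rng.normal(0.0, 5.0, int(62 * fs))
for t in triggers:
    j = int((t + 0.010) * fs)
    eeg[j:j + int(0.005 * fs)] += 2.0           # the simulated evoked response
erp = average_evoked_response(eeg, triggers)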
As per standard clinical practice, the bedside nurse may place bedside monitors in comfort mode to silence alarms, and they will ensure that the full range of values, including the very lowest, remains visible on the screen. This study is observational and, to prevent changes to the standard of care as a result of neuromonitoring data, families and critical care staff will be blinded to neuromonitoring data by turning away or shielding neuromonitor screens from clinical staff. Data recording will continue for 30 min following circulatory arrest, to ensure that we capture permanent cessation of all signals, or will stop 6 hours after initiation of WLSM. For DCDD donors, recording will stop 5 min after circulatory arrest to enable procedures for organ retrieval surgery. If any monitoring equipment (ie, ECG leads, TCD probes) is detached at the request of staff or surrogates or for the purpose of organ donation, the subject will not be excluded from analysis; we will analyse data up to that point in time and consider this in relation to study feasibility. To enable synchronisation of neurological and haemodynamic data during data analysis, the clocks on neurological monitors will be synchronised with the haemodynamic monitor clock. Participant demographics, admission diagnosis and clinical information will be collected to assess baseline characteristics of the study group. Clinical information will include age, sex, height, weight, admission to critical care diagnosis, Acute Physiology and Chronic Health Evaluation II score, Glasgow Coma Scale, organ donation assessment by the local organ donation organisation, type of neuromonitors used, type/level of invasive/non-invasive mechanical ventilation (if applicable), whether the patient was receiving renal replacement therapy or mechanical circulatory support, and arterial/venous blood gas and serum lactate values in the 24 hours prior to WLSM. We will also record sedation score in the 12-hour period prior to WLSM (eg, Richmond Agitation-Sedation Scale). Some of these covariates will be used in the exploratory analysis to determine if they affect the temporal relationship between cessation of brain activity and circulatory arrest. In addition to recording haemodynamic and neurological waveform data, we will record the following clinical variables during WLSM and for 30 min following circulatory arrest: hourly cumulative dose of sedative, analgesic, anxiolytic or neuromuscular blocker agents before and after WLSM; hourly cumulative dose of vasopressors and inotropes; time of removal of life-sustaining measures (non-invasive/invasive ventilation, renal replacement therapy, mechanical circulatory support); and details regarding the clinical determination of death (date, time and who determined death). All waveform data will be acquired from bedside monitors at each study site. They will be transferred to the LHSC site via secure file transfer. We will verify the completeness of all waveforms for required elements including duration of recording, inclusion of baseline recording, circulatory arrest and recording for 30 min following determination of death (5 min in DCDD) and the ECG recording required for waveform synchronisation. Waveforms will be adjudicated by two qualified physicians, with a third adjudicator if disagreement arises. To assess patient accrual, our primary feasibility outcome, we plan to recruit patients for a period of 18 months across five sites. We expect to enrol 1 patient/site/month for a total of 90 patients over 18 months.
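Because an ECG is captured on the haemodynamic system and is also required for waveform synchronisation, a shared ECG segment can be used to check for residual clock offset between devices after the clocks are synchronised. The protocol names MATLAB for waveform processing; as a hedged, language-neutral illustration only, the Python sketch below estimates the offset as the lag maximising the cross-correlation of the two ECG channels (the function name, sampling rate and synthetic signal are all assumptions).

import numpy as np

def estimate_clock_offset(ecg_a, ecg_b, fs):
    """Return the time (s) by which recording a lags recording b,
    estimated from two ECG traces sampled at the same rate fs (Hz)."""
    a = (ecg_a - ecg_a.mean()) / ecg_a.std()
    b = (ecg_b - ecg_b.mean()) / ecg_b.std()
    xc = np.correlate(a, b, mode="full")     # cross-correlation at every possible lag
    lag = np.argmax(xc) - (len(b) - 1)       # lag (in samples) giving the best alignment
    return lag / fs

# Demonstration with an irregular synthetic "QRS" impulse train and a known 2.5 s offset.
fs = 100
rng = np.random.default_rng(1)
beats = np.cumsum(rng.uniform(0.6, 1.0, 75))           # irregular beat times over ~1 min
sig = np.zeros(int(beats[-1] * fs) + fs)
for bt in beats:
    sig[int(bt * fs)] = 1.0
shift = int(2.5 * fs)
print(estimate_clock_offset(sig[:-shift], sig[shift:], fs))   # prints roughly 2.5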
This is based on recruitment achieved during pilot work. If we enrol <9 patients/site after 18 months, we will conclude that the larger study will not be feasible and the study approach will need to be re-evaluated. At the LHSC site, we plan to enrol an additional 1 patient/month for 18 months (total 18 patients) for EP, SSEP and BAEP studies, given the unique technical capabilities at this site. Similar enrolment rates were achieved in a single-centre pilot study. To understand feasibility challenges and modify the research plan for a larger study, we will analyse study accrual, complete waveform data and success of data transfer to LHSC as outcomes regardless of the number of patients enrolled. We will use descriptive statistics to summarise the feasibility outcomes. For categorical variables, frequencies and percentages will be tabulated. For continuous variables, means, medians, SD, IQRs, maximum and minimum will be tabulated. We will use MATLAB to synchronise and process waveform data, and SPSS to compute summary statistics. We will analyse each outcome as follows. Patient accrual: We will compute the proportion of patients who were eligible for enrolment, were enrolled and completed the full study protocol. We will identify those not enrolled due to lack of research coordinators, EEG/TCD/SSEP/BAEP/ERP technicians or equipment. Accrual will be assessed on a per-site basis. Waveform data: We will report the number of patients who have complete EEG/TCD/ERP/SSEP/BAEP and arterial pulse pressure data. We will summarise the reasons for all missing or incomplete EEG/TCD/ERP/SSEP/BAEP and arterial pulse pressure data (eg, technician or equipment unavailable; equipment malfunction; other technical challenges; time of WLSM). Waveform data transfer: We will report the number of patients from non-LHSC sites who successfully transfer all EEG, TCD and arterial pulse pressure data to LHSC. Successful data transfer will be defined as a complete set of data files that is transferred and can be successfully opened for analysis at LHSC. Time difference between circulatory arrest and cessation of brain activity: In each patient, we will use synchronised waveform data and MATLAB to plot and record the time of first cessation of EEG/TCD/ERP/SSEP/BAEP and arterial pulse pressure signals. We will then calculate the time difference between circulatory arrest and cessation of EEG/TCD/ERP/SSEP/BAEP signals. Both qualitative (visual inspection of raw EEG) and quantitative (coherence analysis) EEG analyses will be performed. We will pool data across patients to calculate the sample mean and SD. Given the difference in patient mix and approaches to WLSM between sites, we will stratify our analysis by site. The cessation of each waveform signal will be defined as follows: EEG signal: Defined based on the American Clinical Neurophysiology Society Guidelines for electrocerebral inactivity as identified by no EEG activity over 2 µV, without resumption of amplitude over 2 µV, when recording from electrode pairs 10 or more cm apart. The exact time of electrocerebral inactivity will begin at the onset of <2 µV for at least 60 s and will be determined by visual inspection by two adjudicators who are qualified electroencephalographers. TCD signal: Defined based on previously published definitions as the appearance of Doppler spectra suggesting biphasic oscillating flow or small systolic spikes of <200 ms duration and <50 cm/s pulse systolic velocity spike.
The exact time of cessation of cerebral blood flow will begin at the onset of when these criteria are met for at least 60 s and will be determined by two adjudicators qualified in ultrasonography. Arterial pulse pressure (ie, circulatory arrest): Defined as a pulse pressure of ≤5 mm Hg that persists for at least 60 s. The exact timing of cessation of arterial pulse pressure will be determined by two blinded adjudicators. Discrepancies will be resolved by consensus by a panel of experts in neurocritical care. Evoked potentials and ERPs: Cessation of brainstem function will be defined as timing of the loss of wave V, indicating loss of function within the rostral pons. Cessation of cortical function may be defined as the time of cessation of a 40 Hz auditory steady state response which is a type of ERP that is generated in the primary auditory cortex in the supratemporal plane. The time of loss of wave V will begin at the onset of when these criteria are met for at least 60 s and will be determined by two adjudicators.
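Each of these criteria is, in effect, a threshold-plus-duration rule on an amplitude trace, so the adjudicated onset times can be cross-checked computationally. The study's processing is done in MATLAB with human adjudication; purely as an illustrative sketch, the Python function below (its name, the 1-sample-per-second summary traces and the synthetic values are all assumptions) finds the start of the first epoch in which a signal stays at or below its threshold for at least 60 s, and uses it to compute the time difference between circulatory arrest and cessation of cortical activity.

import numpy as np

def cessation_onset(values, times, threshold, min_duration=60.0):
    """Return the time at which `values` first stays at or below `threshold`
    for at least `min_duration` seconds, or None if that never happens.
    `values`/`times` are aligned 1-D arrays, eg an EEG amplitude envelope
    in µV or a beat-to-beat pulse pressure in mm Hg."""
    below = values <= threshold
    start = None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i                                   # a sub-threshold run begins
        elif not flag:
            start = None                                # run interrupted; reset
        if start is not None and times[i] - times[start] >= min_duration:
            return times[start]                         # sustained for >= min_duration
    return None

# Synthetic 1-sample-per-second summary traces over 20 min:
t = np.arange(0.0, 1200.0)
eeg_envelope = np.where(t < 700, 20.0, 1.0)     # falls below 2 µV at t = 700 s
pulse_pressure = np.where(t < 760, 40.0, 3.0)   # falls to <= 5 mm Hg at t = 760 s

t_eeg = cessation_onset(eeg_envelope, t, threshold=2.0)       # electrocerebral inactivity
t_arrest = cessation_onset(pulse_pressure, t, threshold=5.0)  # circulatory arrest
print(t_eeg - t_arrest)   # -60.0 here: EEG ceased 60 s before circulatory arrest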
For categorical variables, frequencies and percentages will be tabulated. For continuous variables, means, medians, SD, IQRs, maximum and minimum will be tabulated. We will use MATLAB to synchronise and process waveform data, and SPSS 25 to compute summary statistics. We will analyse each outcome as follows: Patient accrual: We will compute the proportion of patients who (a) were eligible for enrolment, (b) were enrolled, (c) completed the full study protocol and (d) were not enrolled due to lack of research coordinators, EEG/TCD/event-related/SSEP/BAEP technicians, or equipment. A minimum of 80% of patients will be required to have a complete dataset. Complete waveform data: We will compute the number of patients who have complete EEG/TCD/ERP/SSEP/BAEP and arterial pulse pressure signals. A complete dataset for each signal will be defined as an adequate waveform signal that (a) spans circulatory arrest, (b) includes data for at least 80% of the planned observation period and (c) has a clearly identifiable time of cessation for each signal (as defined above). Waveform data transfer to LHSC: We will compute the number of patients from non-LHSC sites who have successful transfer of all EEG, TCD and ABP data to LHSC. A minimum of 80% successful data transfers will be required. Time difference between circulatory arrest and cessation of brain activity: In each patient, we will use synchronised waveform data and MATLAB to plot and record the time of first cessation of EEG/TCD/EP/SSEP/BAEP and ABP signals. We will then calculate the time difference between circulatory arrest and cessation of EEG/TCD/EP/SSEP/BAEP signals. We will pool data across patients to assess whether the data fit a normal distribution and to calculate the mean/median and SD/IQR/CI for the sample. Given the difference in patient mix and approaches to WLSM between sites, we will report patient characteristics and cause of death by site and compute the differences between sites. We will test whether the average time differences between sites differ. Data from all sites will be pooled and a meta-analysis will be performed to synthesise the average time differences across sites. We will perform a regression analysis to examine whether factors such as cause of death, approach to WLSM, age, sex and medication exposure influence the time difference. Cessation of each brain activity signal will be defined as outlined in previous sections. Requests for data sharing should be directed to the principal investigator (TG) and will be considered on a case-by-case basis and with approval from Clinical Trials Ontario and the Western Health Sciences Research Ethics Board. No video will be shared at any time.
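For the pooled and site-stratified analysis just described, the basic computations are routine. The sketch below is a minimal Python illustration with entirely invented per-patient values; a Shapiro-Wilk test stands in for the normality check, and a Kruskal-Wallis test is shown as one simple (assumed, not protocol-specified) way to compare sites.

import pandas as pd
from scipy import stats

# Invented per-patient time differences (s) between circulatory arrest and EEG cessation.
df = pd.DataFrame({
    "site": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "time_diff_s": [-45.0, -120.0, -30.0, -75.0, -60.0, -20.0, -15.0, -90.0, -50.0],
})

w, p_norm = stats.shapiro(df["time_diff_s"])          # does the pooled sample look normal?
pooled = df["time_diff_s"].agg(["mean", "median", "std"])
iqr = stats.iqr(df["time_diff_s"])
by_site = df.groupby("site")["time_diff_s"].agg(["count", "mean", "std"])

# One simple way to ask whether sites differ (the choice of test is ours, not the protocol's):
groups = [g.to_numpy() for _, g in df.groupby("site")["time_diff_s"]]
h, p_site = stats.kruskal(*groups)
print(pooled, by_site, "IQR =", iqr, "normality p =", p_norm, "site-difference p =", p_site, sep="\n")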
The study will be conducted in accordance with the ethical requirements outlined in the Tri-Council Policy Statement on the Ethical Conduct for Research Involving Humans and all relevant national and local guidelines on the ethical conduct of research. The protocol for this project has been approved by Clinical Trials Ontario (protocol #3862) and the relevant Health Sciences Research Ethics Boards for each participating site. Full study approval is currently in place at LHSC and other study site applications are under review by the local ethics committees. Informed consent will be obtained from patients with capacity to consent prior to enrolment or from the legally authorised substitute decision maker for patients lacking capacity. Elsewhere we have published a detailed ethical analysis of this study's protocol. The steering committee, which includes the donor family partner, will review ongoing study activities every 3 months, and updates on study progress will be presented to the Canadian Critical Care Trials Group and the Canadian Donation and Transplantation Research Programme. Study newsletters will update stakeholders throughout the conduct of the study. Dissemination of study results will occur through presentation at scientific meetings, communication with relevant organ donation organisations, local hospital staff and relevant patient advocacy organisations and at donor family/patient forums. Current DCDD practice assumes, but does not explicitly confirm, permanent loss of brain activity when death is declared 5 min after circulatory arrest. Establishing when brain activity stops relative to circulatory arrest in patients undergoing planned WLSM will inform DCDD practice, promote stakeholder trust and ensure donor protection from harm. Establishing this evidence will require a larger multicentre observational trial to confirm external validity and inform clinical practice. Given that this is a new area of research associated with logistical, technical and ethical challenges, this multicentre pilot study is essential to establish feasibility, identify potential challenges and collect pilot data to inform the larger study. The results from this study will be able to provide direct objective evidence for the timing of cessation of cortical electrical activity (EEG), loss of brainstem auditory pathway transmission to the cortex (ERP), brainstem auditory pathway electrical activity (BAEP/SSEP) and cessation of forward blood flow in the middle cerebral arteries (TCD). The results will not, however, be able to provide definitive data about the presence or absence of consciousness, whole brain function, interneuronal communication and neuronal function at the cellular level or whole brain perfusion. Consciousness, whole brain function, neuronal function at the cellular level and whole brain perfusion would be very challenging to measure non-invasively and in a manner that respects patient and family privacy at a very difficult time of life. Despite these limitations, this study will provide rich feasibility data in addition to data of interest to neuroscientists, critical care, palliative care and organ donation communities, ethicists, legal scholars and policy experts. Our pilot multicentre feasibility trial will help inform the design and conduct of this larger study, and will provide the first moderately sized prospective multicentre study in humans that will shed light on the neurobiology of the dying process.
|
Use of a Caprine Model for Simulation and Training of Endoscopic Ear surgery
|
07a26a90-31ae-462f-8fe3-4de8b3b4b52f
|
10152076
|
Otolaryngology[mh]
|
Transcanal endoscopic ear surgery is an increasingly common surgical technique. The learning curve is shallow, and trainees have limited opportunities to hone their skills in the operating room. In contrast to traditional, microscope-guided ear surgery, which has a long history of training on human temporal bone models in skills labs, in-house surgical simulation for endoscopic ear surgery has not yet become established in residency training programs. Consequently, skills are learned in the operating room, increasing procedure duration. Further, newly graduated consultant otolaryngologists have identified otology, and in particular ossiculoplasty, as areas where they are less competent upon completion of residency training. As such, the development and use of high-fidelity, economically viable training models is an important requirement for residency training. Currently, the gold standard training model for endoscopic ear surgery is a fresh frozen cadaver head, though this can be prohibitively expensive and is not available in all jurisdictions. Multiple synthetic, virtual reality, and 3D-printed models are currently in existence. Many of these were developed for use in teaching temporal bone drilling, with a focus on the fidelity of drilling the hard bone. These are poorly adapted to endoscopic ear surgery training, which places more emphasis on soft tissue dissection and handling. Synthetic models developed specifically for endoscopic ear surgery can offer accurate anatomy but, like low-fidelity models, they do not provide a realistic simulation of soft tissue handling, which is an important component of transcanal surgery. An ovine (sheep) model has been developed and used in endoscopic ear surgery education as an economically viable way of offering high-fidelity simulation of tissue similar to that of humans. The favorable comparative anatomy has been described in detail. Currently in our center, due to infection control and animal use practices, we are not able to use the ovine model. A caprine (goat) model has been explored for teaching 2-handed endoscopic ear surgery using an endoscope holder. We propose that a fresh frozen caprine head would offer similar benefits to previously studied ovine models and provide an economically viable and useful teaching aid to learners. In assessing any novel surgical simulator, the domains of face and content validity must first be considered. Face validity refers to the degree to which the simulation resembles the real-world situation. Content validity refers to how well the simulation captures all aspects of the content being taught. Previous research has questioned the external validity of animal models, specifically as it relates to knowledge transfer of surgical anatomy, but the single prior report on the feasibility of the caprine model suggests the comparative anatomy may be suitable. The objectives of the study were to evaluate (i) the utility of a caprine model in endoscopic ear surgical education using the index procedures of tympanoplasty and ossiculoplasty and (ii) the face and content validity of the caprine model, including an evaluation of the potential impact of anatomical differences on trainee understanding of human middle ear anatomy.
Model Selection Through literature review and consideration of logistics in our surgical skills lab, a fresh frozen goat head was selected as likely to provide a viable and readily available simulation model. A single goat head was obtained and an anatomic study was carried out through computed tomography (CT) scanning. A surgeon with 10 years of experience in transcanal endoscopic ear surgery performed the index procedures of endoscopic canalplasty, tympanoplasty, and ossiculoplasty on the caprine model to investigate whether it was a reasonable choice for an animal model. Independently, the procedures were repeated by a fellow on the contralateral ear, without supervision, to assess the model from an experienced trainee's perspective. The principal anatomical differences in the model are illustrated in . In comparison with human anatomy, the curvature of the bony meatus obscures more of the pars tensa. Access can be improved by performing canalplasty. The body of the incus and malleus head lie medial to a relatively large pars flaccida and are not covered by a scutum. Ossicular morphology is fairly similar, but the long process of the incus is comparatively short and does not extend medial to the chorda tympani nerve. Access to the ossicles, including the stapes footplate, is very suitable for ossiculoplasty. Both surgeons found these anatomical differences sufficiently minor that further evaluation of the face and content validity for residency training was considered appropriate. Approval to study resident evaluations of their experience with the model was obtained from the Research Ethics Board (REB# 1000076174). Model Preparation The specimens were received fresh from the supplier. The auricle had already been removed, leaving the cartilaginous external auditory canal (EAC) exposed. There was often debris in the EAC that required microdebridement. The anatomy was generally well preserved between specimens and tissue characteristics were quite consistent, though 1 specimen had bilateral pars flaccida cholesteatoma with thickened middle ear mucosa. Participants and Course Structure Twelve otolaryngology surgical trainees were invited to participate in an endoscopic ear surgery simulation course which utilized the caprine model. The course was structured as a 3-hour dissection course. Each trainee was asked to review an educational course pack prior to attending the course in order to maximize hands-on dissection time in the skills laboratory. The course pack consisted of a presentation reviewing human endoscopic ear anatomy; narrated instructional videos of canalplasty, tympanoplasty, and ossiculoplasty on both human subjects and the caprine model; and access to CT images of the goat temporal bone. Educational videos were developed utilizing the IVORY guidelines. The course was run twice with 6 trainees at a time and 2 staff facilitators with extensive experience in endoscopic ear surgery. Each trainee had access to 1 fresh frozen goat head (2 ears) obtained from a distributor at a cost of CAD$45 per head. Surgical equipment included a 0° and a 30°, 3 mm, 14 cm endoscope (Spiggle and Theis, Overath, Germany) with a light source, camera, and monitor, with a set of Panetti endoscopic ear instruments (Spiggle and Theis, Overath, Germany) and a high-speed drill with a 2-mm curved diamond burr (Xomed Medtronic, Minneapolis, Minn, USA). A single piezoelectric bone removal device was also available for use (Piezosurgery, Mectron s.p.a., Carasco, Italy).
Course participants had access to titanium partial and total ossicular replacement prostheses (ALTO, Grace Medical, Memphis, Tenn, USA) for ossiculoplasty and a porcine-derived grafting material (Biodesign, Cook Medical, Bloomington, Ind, USA) for tympanoplasty. Videos of the caprine model dissection were played for reference throughout the course. The residents were led through a standard dissection with the goal of completing a canalplasty, tympanoplasty, and ossiculoplasty. If time was available, residents were then able to proceed with the same steps on the contralateral ear. Model Evaluation Prior to the distribution of the educational course pack described earlier, participants completed pre-course evaluations to evaluate their (i) knowledge of human middle ear anatomy (knowledge assessment) and (ii) self-reported assessment of their skill set in middle ear surgery and perceptions of their educational requirements for endoscopic ear surgery (learner reported needs assessment). These surveys were repeated after the course along with an additional validation survey. All surveys were conducted using Google forms and participants gave consent for use of their responses in this study. To protect resident confidentiality, all surveys were completed anonymously, but participants used a self-generated personal identification number to allow matched comparison of pre- and post-responses. Knowledge Assessment The evaluation of knowledge of middle ear anatomy was conducted to determine whether the use of the goat model disrupted residents' understanding of human anatomy. The assessment consisted of intra-operative endoscopic images from 5 human middle ears. Each image had 3 or 4 arrows identifying structures that the trainee was asked to identify in short-answer format; in total there were 24 questions. The same images and questions were used pre- and post-course. Pre- and post-course scores were compared using the Wilcoxon signed-rank test. Learner Reported Needs Assessment Learner needs were reported on survey questions using a 5-point Likert scale with 1 representing "very weak" and 5 representing "very strong." Residents were asked to report their perception of their ability to perform surgical skills with either microscope or endoscope, including their ability to avoid complications of ossicular chain injury, facial nerve injury, or jugular bulb injury. The needs assessment was completed pre- and post-course to assess if needs were being met. Pre- and post-course evaluations were summarized using descriptive statistics and compared with the Wilcoxon signed-rank test. Validation Survey Face, content, and global validity questions were prepared based on a review of similar studies within the otolaryngology education literature and answered using a 5-point Likert scale. A median score of 4 or greater was considered validation for each specific question. Statistical Analysis Statistical analysis was run using 2-sided Wilcoxon rank tests with a significance level of P < .05 (SAS OnDemand for Academics, NC, USA).
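As a concrete illustration of the paired pre-/post-course comparison described above (the study itself ran the analysis in SAS), the following Python sketch applies a two-sided Wilcoxon signed-rank test to invented Likert ratings for a single survey domain; all values are fabricated for demonstration, not study data.

from scipy.stats import wilcoxon

# Invented paired Likert ratings (1 = very weak ... 5 = very strong) for one
# survey domain, before and after the course, for nine hypothetical residents.
pre  = [2, 3, 2, 3, 2, 3, 2, 4, 3]
post = [3, 4, 3, 5, 3, 4, 3, 4, 4]

# Ties and zero differences are common with Likert data; scipy's default
# "wilcox" zero-handling simply drops the zero differences.
stat, p = wilcoxon(pre, post, alternative="two-sided")
print(stat, p)   # p < .05 would indicate a systematic pre-to-post shift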
Nine residents chose to participate in the course evaluation study. The year of training and experience of the residents who participated in the course are summarized in , showing an increase in endoscopic ear surgery experience later in residency. All domains reported on the learner needs assessment, seen in , showed an average improvement of 1 point on the post-course evaluation. Junior learners tended to show a larger increase than senior learners. Six out of 9 domains improved significantly (P < .05). The greatest need and improvement were found for ossiculoplasty. The average score for the assessment of knowledge of human middle ear anatomy increased slightly after the course, from 15.6/24 (65%, range 25%-92%) to 17.3/24 (72%, range 50%-96%), but this was not statistically significant (P = .23). As the number of trainees in each training-year group was small, subgroup analysis could not be performed, but the most senior trainees scored the highest marks, and the more junior trainees showed a greater improvement in score. Overall, on the 24-question knowledge assessment, 25% of answers changed from incorrect to correct after the course, which is consistent with an improvement in anatomical knowledge; 9% of answers changed from correct to incorrect; and 19% of answers remained incorrect before and after the course. Validation scores, summarized in , were assessed on a 5-point Likert scale with 1 representing "strongly disagree" and 5 representing "strongly agree." A score of 4 or more was considered validation. There was validation for all domains. Importantly, participants did not report that the goat head anatomy confused their understanding of human anatomy.
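The answer-level percentages above come from cross-tabulating each question's pre- and post-course correctness. A minimal Python sketch of that tabulation follows, using invented 0/1 correctness data for a single hypothetical trainee rather than the study's responses (in the real analysis the pairs would be pooled across trainees).

from collections import Counter

# Invented correctness (1 = correct, 0 = incorrect) for the 24 questions.
pre  = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
post = [1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1]

transitions = Counter(zip(pre, post))
n = len(pre)
print("incorrect -> correct:", transitions[(0, 1)] / n)  # improvement
print("correct -> incorrect:", transitions[(1, 0)] / n)  # potential confusion
print("stayed incorrect:", transitions[(0, 0)] / n)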
This is the first study assessing the validity of a caprine model in endoscopic ear surgery education. The caprine model had strong face, content, and global validity. It did not erode learners' knowledge of human anatomy and offered subjective improvement in surgical skills. While the sample size was modest, the course and the use of the caprine model received extremely positive feedback from participants. Following this study, we have introduced simulation on the caprine model to our residency program: all residents currently complete a supervised training session prior to starting their clinical endoscopic ear surgery rotation. Our impression is that this has allowed trainees to learn the requisite skills more safely and quickly. Simulation models commonly have limitations in face and content validity. The advantage of the caprine model is that it excels in the domain of content validity, providing an accurate representation of soft tissue handling and surgical steps. The validation survey and our own observations are comparable with reports of the ovine model in these domains as a suitable, cost-effective alternative to fresh-frozen human cadavers. We consider the caprine model superior to 3D-printed models for soft tissue handling in tympanomeatal flap elevation and tympanoplasty graft positioning. When considering the important topics of animal welfare and sustainability, it is relevant to point out that the goat heads were obtained from surplus at a butcher's abattoir, as they are not widely used as a food source, and that the model is biodegradable. In some jurisdictions, the caprine model may be more readily available than the ovine model. With respect to face validity, the authors acknowledge that the caprine model inevitably has some limitations. However, a previous study has described the anatomical differences between goat and human from the perspective of endoscopic ear surgery and concluded that they were sufficiently small to make training on the caprine model feasible. As such, the goal of this study was to assess whether the face validity was acceptable from a trainee's perspective and to ensure that the model did not confuse understanding of human anatomy. The most significant anatomical difference was that the goat ear canal required a more extensive canalplasty than a normal human ear canal. This did increase the time necessary to enter the middle ear but enabled the learners to gain more experience using a surgical drill alongside the endoscope, which can be of value clinically when encountering narrow ear canals and for access in cholesteatoma surgery. Before introducing the caprine model routinely into our surgical training program, we wanted to check for any sign that anatomical differences might cause confusion in the understanding of human anatomy. Reassuringly, 25% of answers improved on the knowledge assessment whereas only 9% became worse. Errors were inconsistent: most occurred only once. The absence of any pattern of several trainees giving the same incorrect answer suggests that the model does not systematically mislead interpretation of anatomy. Further, learners reported on post-course assessment that the goat head enhanced, and did not confuse, their understanding of human anatomy.
Our overall impression is that the caprine model did not have a deleterious effect on the understanding of human temporal bone surgical anatomy for junior or senior residents. In common with other initial validation studies of simulation models, this study relied primarily on subjective learner-reported data. Further investigation could provide additional information on the utility of the caprine model in otologic surgical training. A current challenge in endoscopic ear surgery education is the lack of validated objective measures to track learner progress. While there are some impressive examples of assessment tools within the literature, they are either unvalidated or were developed for use in microscope-guided ear surgery. Development of expert consensus on key steps, or of a tool such as an objective structured assessment tool, would allow for a more objective assessment of the goat head model and offer a means of tracking learner progression through the course of a surgical skills curriculum. A benefit of endoscopic surgery, in simulation or in the operating room, is the ability to easily record video of the procedure for subsequent analysis. Advances in machine learning and automated video evaluation may soon mean that review of surgical skills sessions can be automated and far less onerous for the instructor. The metrics used by an automated assessment system remain to be delineated. Potential parameters include duration of surgical steps, efficiency of hand and instrument movement (minimization of repetitive movements), and iatrogenic injury. This study did not directly compare the ovine and caprine models and, as such, it is difficult to draw conclusions about which is more suitable for educational purposes. As previously mentioned, we were unable to use the ovine model due to local health regulations. Based upon a review of published literature on the ovine model, both models seem to have quite similar morphology and anatomical differences relative to the human middle ear. Chief among these is a prominent anterior canal, which necessitates a large canalplasty. As well, there is a relatively large pars flaccida and an absent scutum. In both models, the ossicular structure seems suitably similar to that of the human middle ear, with variable ligamentous support and mucosal bands. The frequency of facial nerve dehiscence was not recorded but appeared lower than in the ovine model. This study focused on the index procedures of tympanoplasty and ossiculoplasty. These procedures require an array of skills and basic competencies that can be adapted to other procedures. For example, raising a tympanomeatal flap and drilling a canalplasty is a key step in many procedures requiring access to the middle ear. Manipulating the ossicles and dissection of soft tissue could be adapted to cholesteatoma surgery. Further studies could include additional procedures and the development of a cholesteatoma model.
The caprine model offers an effective, readily available, economically viable simulation for training in endoscopic ear surgery. We currently use it to give otolaryngology residents endoscopic ear surgery experience before operating on patients. Further study would allow quantification of the impact of this model on the trainee’s learning curve.
|
Learning Curves in Directed Self-Regulated Virtual Reality Training of Mastoidectomy and the Role of Repetition and Motivation
|
b8a43fe8-14b5-4f84-9c81-1b69303e58f2
|
10152100
|
Otolaryngology[mh]
|
Surgical education has undergone a paradigm shift in recent decades – from the principle of "see one, do one" toward evidence-based training. This change has been fueled by many factors, including new requirements for training efficacy and patient safety, increasing numbers of trainees, decreasing work hours, and poor availability of donated human materials. Altogether, this necessitates the use of alternative learning platforms. For this to be successful, there is a need to understand how surgical technical skills are learned and to develop modern training methods that support high-quality learning. Learning curves are essential in understanding skills acquisition and are dependent not only on learner characteristics but also on instructional design and learning context. Learning curves can be used to inform best practice implementation and organization of training. Grantcharov and Funch-Jensen identified 4 learning curve patterns in the acquisition of basic laparoscopic skills: a few learners demonstrate proficiency almost from the beginning; the majority of learners achieve a predefined expert criterion after a certain number of repetitions; another small group of learners do improve somewhat with repetition but are unable to achieve proficiency within the time provided; and finally, a few learners consistently underperform with no true improvement. Structured and reliable assessment of performance is key to measuring learning curves in surgical technical skills training: the introduction of Objective Structured Assessment of Technical Skills pioneered such systematic assessment in the early 1990s and inspired the development of specific assessment tools for a number of different surgical skills and procedures. Virtual reality (VR) simulation training in temporal bone surgery is an example of a newer learning modality that allows otorhinolaryngology trainees to acquire basic mastoidectomy skills in a patient-safe environment, independent of service duties and without the presence of supervisors, supporting individual training needs through directed self-regulated learning. Such VR temporal bone simulation training is supported by evidence of the efficacy and validity of training early novices, and several validated tools for structured performance assessment exist. For novices training in a VR temporal bone simulator, performances seemingly plateau after 4-9 procedures, at a level well below that of experienced surgeons, who had a final-product mean score of 19.6 points out of 26 points on the modified Welling Scale. Frequently, we find that novices cause damage to the facial nerve and inner ear structures, which limits their performance and causes a seeming plateau in their learning curve. Despite problems with ceiling effects that might arise from the simulation or the assessment itself, we also sometimes observe novice performances achieving the maximum score or close to it. No current study on temporal bone simulation includes more than 18 repetitions, and the majority of novices might need considerably more practice to achieve a satisfying performance. So far, additional learning supports such as simulator-integrated tutoring or structured self-assessment, intended to improve the quality of the training itself, have failed to increase performance beyond the early plateau. It is therefore possible that previous interventions trying to improve learning have been too short and that some learners might actually improve but at a slower rate, as reported by Grantcharov and Funch-Jensen.
Even for less complex surgical skills such as laparoscopic box training, very few studies include a high number of repetitions, while the repetition of basic procedures in sports, arts, and craftsmanship is generally of a higher order of magnitude. Our research question is therefore: Can substantially more simulation-based training counter the observed learning curve plateau in a directed, self-regulated training program? This pilot study aims to examine the effect of numerous repetitions on the learning of novices in VR simulation training of temporal bone surgery.
Virtual Reality Simulation Platform The Visible Ear Simulator (VES) version 3.5 is a freeware real-time, 3-dimensional (3D), VR temporal bone surgical simulator that can be downloaded from the Internet. The software runs on a standard personal computer with an Nvidia GeForce GTX/RTX™ graphics card. A Geomagic Touch (3D Systems, Rock Hill, SC, USA) haptic device is recommended for intuitive drilling with force feedback. A built-in instructional guide is provided in a side panel on the simulation screen and resembles a traditional temporal bone manual. Each step of the procedure is explained with brief written instructions and a picture from the simulator illustrating the step with key anatomical landmarks indicated. The integrated tutor-function provides optional color coding of the volume of bone to be drilled in each step corresponding to the built-in guide, and this intuitively visualizes the volume to be removed directly on the interactive model in the central workspace. Participants and Setting The first author contributed to the study and further recruited 5 fellow participants ( ). They were all medical students from the Faculty of Health and Medical Sciences, University of Copenhagen, Denmark and represented true novices to the procedure. The study took place at Department of Otorhinolaryngology, Rigshospitalet, Copenhagen, Denmark, from December 2020 to April 2021. All participants volunteered and received no compensation or study credits for participation. Study Design and Intervention In this non-comparative pilot study, participants performed repeated virtual mastoidectomies in a VR temporal bone surgical simulator. In order to facilitate the completion of a large number of procedures, the typical setup for drilling under supervision in the department was replaced by at-home drilling using high-performance laptop PCs equipped with a Geomagic Touch haptic device, which were loaned to the participants. Each participant received a brief, standardized instruction after which they were asked to perform 100 repetitions (i.e., identical anatomical mastoidectomy procedures) in the simulator, distributed over the study period and at their own pace. The participants could use simulator-integrated tutoring by color coding during the first procedure only and were instructed to disable this function for the rest of the procedures. Participants had access to step-by-step on-screen instructions for the procedure at all times and they could contact the study investigators if they experienced any technical problems during the training at home. At the end of each repetition, the virtually drilled temporal bone model was saved manually on the PC by the trainee together with the exercise number. The duration of the procedure was logged by the simulator in the save file. Sample Size The sample size was one of convenience and limited by the extensive time required by study participants. No financial compensation or performance-related incentives were offered due to concerns about bias and it was not possible to recruit further participants within the study period. Outcomes and Statistics The saved final product of every fifth exercise was loaded in the simulator and assessed independently by 2 experienced raters (M.S.S., S.A.W.A.) blinded to participant ID and performance number. They used the 26-item modified Welling Scale, where each item is rated binarily with 0 points for an inadequate/incomplete performance and 1 point for adequate/complete performance. 
Individual learning curves were produced for each participant based on this final-product assessment along with the time recorded by the simulator. Statistical analysis was performed using standard descriptive statistics for the demographic data, and learning curves were plotted in Microsoft Excel version 16.48 (Redmond, WA, USA). Ethics The study was deemed exempt by the ethics committee for the Capital Region of Denmark (H-20078583).
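The study plotted the learning curves in Excel; an equivalent, minimal Python/matplotlib sketch is shown below for illustration. The repetition grid matches the every-fifth-exercise assessment schedule, but the scores and participant labels are invented, not study data.

import matplotlib.pyplot as plt

# Every fifth repetition was assessed, so the x-axis runs 5, 10, ..., 100.
reps = list(range(5, 105, 5))
# Invented modified Welling Scale scores (0-26) for two hypothetical participants.
steady_improver = [14, 14, 15, 15, 16, 17, 18, 19, 21, 23,
                   24, 25, 25, 26, 25, 26, 26, 25, 26, 26]
early_plateau   = [10, 12, 13, 13, 14, 13, 14, 14, 13, 14,
                   14, 13, 14, 14, 14, 13, 14, 14, 14, 13]

plt.plot(reps, steady_improver, marker="o", label="steady improver")
plt.plot(reps, early_plateau, marker="s", label="early plateau")
plt.xlabel("Repetition number")
plt.ylabel("Final-product score (modified Welling Scale, 0-26)")
plt.ylim(0, 26)
plt.legend()
plt.savefig("learning_curves.png")   # or plt.show() for interactive use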
Participant Characteristics
Six participants were recruited: 5 were female; the mean age of the participants was 22.6 years; and the median study semester was the 6th. Two participants (D and F) had a leisure computer gaming background, whereas the remaining 4 participants did not engage in computer gaming activities ( ). Four participants (A, B, C, and D) completed all 100 procedures, whereas the remaining 2 (E and F) completed only 50 procedures within the study period.
Individual Performance Curves
A mean learning curve of average performance as a function of repetition number across all participants proved irrelevant due to a large, unexpected variance among the 6 observed performance patterns. The observed learning curve patterns are classified according to Grantcharov and Funch-Jensen's terminology in . Participant A ( ) achieved a tutored first score of 20.5 points, after which the performance without tutoring (i.e., without color coding) dropped to 14 points for 10 repetitions, then gradually increased to reach a steady high of 25-26 points after 45-50 repetitions. The time consumption of participant A displayed an initial decrease through 10 repetitions, followed by a slight gradual increase toward 20 minutes maintained into the high-performance plateau. Efficiency, measured as Welling Scale points per minute of drilling, was moderate and constant during the study. Participant E ( ) only completed 50 repetitions; the performance score increased from 10 to 16 points with generally low time use, decreasing from 15 to less than 10 minutes. The efficiency increased gradually from 1 to more than 3 points per minute. Participant C ( ) showed a slight initial improvement and then plateaued at a level below 15 points. The corresponding time consumption was very low and decreased early to less than 5 minutes per session. Efficiency showed no significant development. Participant D ( ) obtained a high score right from the start after drilling for 100 minutes. Without tutoring, the time use dropped rapidly to a very low level around 10 minutes, but the performance stayed high, varying around 20 points. Efficiency showed a reciprocal pattern with an initial low followed by a moderate to high constant level. Participants B and F (Supplementary Figure 1) completed 100 and 50 sessions, respectively, and presented a performance pattern similar to participant D, with relatively high performance but very little time used for each procedure.
The present data could not support the extraction of a universal mean learning curve because of large variability between the individual curves. The only general patterns in the results were: 1) the temporary performance drop as tutorial color coding was withdrawn, and 2) the gradually increasing points-per-minute efficiency, which was found even in cases where the final-product performance did not improve sufficiently. Increasing efficiency may to some extent reflect the learning of mastoidectomy skills, but for patient safety reasons, increasing the final-product rating seems to be a more important training goal. The latter was demonstrated in the learning curve of participant A, who reached a stable level of proficiency after 50 repetitions, although this did not come with increased efficiency. The performances of some of the participants fit the classification by Grantcharov and Funch-Jensen ( ). After 50 repetitions, participant A reached a stable performance level at maximum score (corresponding to "group 2"), and the final product was a full resection of the mastoid air space with no harmful exposure of soft tissue limiting structures such as nerves, major vessels, or inner ear spaces. This suggests that the early plateau observed in previous studies , in some cases might be overcome simply by further repetition—a training principle already well proven for other psychomotor skills such as playing a musical instrument. Most likely, this participant exhibited traits associated with deliberate practice, that is, purposefully refining skills with cognitive engagement in the task. This also suggests that the amount of temporal bone training offered to most otorhinolaryngology residents as in-house or temporal bone course training should be increased to this level for candidates training for temporal bone surgery, to ensure adequate competencies before supervised surgery on patients. Participants E and C matched Grantcharov and Funch-Jensen's groups 3 and 4: participant E improved to some extent but without ever reaching proficiency at 50 procedures, and participant C showed no obvious improvement throughout 100 procedures. A shared characteristic of participants E and C was their very low time use on each procedure. The choice to complete a full mastoidectomy as a novice using only 5-10 minutes may reflect lack of motivation, poor self-assessment, or both, which appears to obstruct learning regardless of the number of repetitions. These participants might need further support, such as instant feedback provided by the simulator to motivate, guide, and inform the trainee of their progress during the drilling activity. The use of summative feedback at the end of each session seems to motivate the learner and improve self-assessment. Such integrated learning supports are a feature unique to VR simulation, as they cannot easily be integrated into training on physical models. Further, VR simulation might lend itself to gamification strategies adopted from leisure PC gaming, which may prove useful in future training routines for directed, self-regulated learning in temporal bone surgery. None of the participants demonstrated proficiency from the beginning (Grantcharov and Funch-Jensen's group 1), most likely because of the high complexity of the mastoidectomy task and the extensiveness of the procedure in contrast to the simple laparoscopic training tasks studied by Grantcharov and Funch-Jensen.
The ability of some learners to achieve proficiency faster than others is most likely multifactorial and might be related to, for example, innate ability, motivation, learning ability, cognitive processes, and gaming experience. Interestingly, our participants B, D, and F displayed some characteristics not accounted for in the Grantcharov and Funch-Jensen learner classification. These participants had unexpectedly high and variable performance scores from the start of training and throughout the entire study period despite using extremely little time on each procedure. Their results are unique when compared to novice performances measured previously during our controlled studies, where participants were trained at the simulation center or department. , , , Further, achieving such an excellent performance in so short a time can only occasionally be reproduced in the simulator by expert surgeons. It is therefore possible that the results were enhanced through the unforeseen use of alternative "gaming" strategies at home, such as sustained color tutoring, using the "undo drilling" function to rewind drilling and thereby repair damage made to structures, and/or the "save scenario" and "load scenario" commands to restore a well-drilled previously saved performance as a shortcut. These simulator functions were, except for the color tutoring and save function, not introduced to the participants. However, participants could have accessed them through the menus and help function if they explored these. Indeed, D and F both reported leisure gaming skills on their inclusion questionnaire. This suggests that a proxy goal of achieving a high score may have overridden our intended pursuit of true proficiency through meticulous drilling and sustained repetition. Participants were not aware that the drilling time was recorded in their save files. In other words, control mechanisms are needed, and this should be considered in strategies for directed, self-regulated learning when trainees are asked to achieve a specific objective such as performing a defined number of procedures. Overall, our study has important implications for the organization of self-directed training. Self-regulated VR simulation training at home is convenient, especially when many repetitions are needed. However, it is crucial to consider that learning curves are individual and that some trainees will consistently demonstrate proficiency before others. As motivation plays a key role in learning and will need to be high, especially in the context of many repetitions and training at home, the importance of addressing motivation and providing learning support and relevant feedback cannot be overstated. Further, the present results demonstrate that the sheer number of documented sessions completed is not sufficient proof of proficiency. At the end of the training period, the trainee should perform a few supervised sessions for rating and certification. Moreover, we found a need for training on a scale considerably larger than what is offered at many training institutions. The amount of training offered to most otorhinolaryngology residents as in-house or temporal bone course training should be increased, and for candidates training for temporal bone surgery, >50 mastoidectomy repetitions are needed to ensure that the learning potential of simulation-based training is exhausted.
Providing the opportunity to practice more than 50 mastoidectomy procedures would require the otosurgical curriculum to supplement traditional practice on human cadaveric temporal bone specimens with additional training on inexpensive and convenient models such as VR simulation and 3D-printed temporal bone models. The main strength of this pilot study is the high number of repetitions compared with previous studies. This adds important new information on the quantitative aspects of temporal bone surgical training, helping to define the significance of the number of repetitions when training mastoidectomy. The main limitation is that we included only a few participants, which is explained by the difficulty of recruiting volunteers for such time-consuming participation, and that they were medical student novices rather than otorhinolaryngology residents. Also, we considered only final-product performance and not other important aspects of technical skills performance such as process. Finally, participants were not provided with feedback or scores during their training, which could also have reduced their motivation and ultimately their performance.
In this pilot study, we found that high-repetition practice (>50 repetitions) might be beneficial in overcoming the learning curve plateau for some learners, whereas others show no progress, most likely because of a lack of motivation. We found learning curve patterns that corroborate 3 of the 4 types observed by Grantcharov and Funch-Jensen, as well as a new fifth group. The latter consisted of novice learners who used very little time while achieving high performance scores, most likely through the use of gaming techniques in the VR simulation environment. The importance of deliberate practice cannot be stressed enough, as continued cognitive engagement in the learning task is paramount to refine skills and achieve an excellent performance. Altogether, this study adds that control mechanisms need to be included in prolonged self-directed training programs to support learning and continued progress. This is important in the design of simulation-based training and the certification of surgeons.
|
Supporting US healthcare providers for successful vaccine communication
|
daebc906-da4d-4230-848a-5c7c0f0f163b
|
10152412
|
Health Communication[mh]
|
The importance of healthcare provider (HCP)-patient communication has been thrust into the spotlight during the COVID-19 pandemic. Facing questions and concerns about vaccination in patient interactions is not a new challenge for many HCPs working in United States (US) health care settings. A large body of research suggests strategies that US providers can deploy to improve health communication and increase vaccine acceptance. Some recommendations include more time with patients during appointments , using motivational interviewing techniques , and allowing for honest discussions about vaccine concerns . While evidence has shown that US-based family physicians and pediatricians can play a great role in increasing vaccine acceptance, especially for vaccine-hesitant parents , this places a heavy burden on the provider to change often strongly held beliefs. Communication challenges have been exacerbated by the spread of false information on social media and other media outlets. This relentless misinformation, coupled with an evolving information landscape, submerged the public (including healthcare providers) in an overwhelming infodemic . As a result of this swirling information environment, there was an increasing demand on providers to provide up-to-date information to patients, navigate information voids, and combat resulting questions, concerns, and vaccine hesitancy . Throughout the COVID-19 pandemic, the communication resources available to providers were often insufficient to address patient questions in a highly fluctuating and increasingly polarized environment. The distinct challenges presented by COVID-19 vaccine attitudes and misinformation offer many considerations for continued practice, and new solutions . Unique environmental and policy factors in the US persisted during the COVID-19 pandemic and impacted HCPs' ability to provide patient care and education. In the US, controversies over compulsory vaccination have a complicated history, with COVID-19 as a prime example of the tension between protecting the health of the public and safeguarding the civil liberties of American citizens . Many state laws require vaccinations to reduce the rate of vaccine-preventable disease, such as those mandated for children to enter day care or school and for federal employees to physically work within government buildings and facilities . However, US communities with varying levels of suspicion and mistrust of vaccines, persistent health care access inequities, religious exemptions, and predatory disinformation have threatened herd immunity targets for some vaccinations and fueled resistance to vaccination mandates . Cultural aspects of public responses to COVID-19 in the US and increased exposure to misinformation campaigns throughout the pandemic magnified the need to improve health communication strategies and strengthen the patient-provider relationship . As a result of these complexities in the US context, vaccine communication can be challenging, requiring providers to address a myriad of concerns. Due to high levels of institutional and government distrust, provider communication plays a key role in vaccine acceptance. In a report by WHO (2014), patient education during routine care led to the greatest increase in vaccine uptake . Evidence-based education and training are crucial for clinicians to increase vaccine confidence , and as seen during the COVID-19 pandemic, are complemented by external factors such as vaccine availability and directing patients to trusted sources .
Comprehensive and consistent efforts are especially important for pregnant patients and for Black Americans, who consistently have lower rates of flu vaccination . In an emerging body of literature, HCP experiences and perspectives are explored to assess how COVID-19 vaccine attitudes have impacted patient-provider relationships. Using a series of focus groups with HCPs across the United States, the aim of this study was to capture and analyze the provider experience of patient counseling for COVID-19 vaccinations and how different aspects of the pandemic environment have impacted vaccine trust. We present our findings from these focus groups and offer recommendations for strategies to support provider health communication broadly.
Theoretical framework
Vaccine decision-making is influenced by a variety of psychosocial and environmental factors that form a complex ecosystem of facilitators of, and barriers to, vaccine acceptance . Kincaid et al. (2004) conceptualized a model of communication for social and behavior change across embedded sectors at the individual, social network, community, and societal levels . While there is a breadth of research describing the strategies healthcare providers can employ to encourage vaccine uptake among their patients, there is little in the way of understanding how to best support healthcare providers at the community, organizational or policy level. We borrow from Kincaid et al. (2004) and the Socioecological Model developed by Bronfenbrenner (1977) to interpose communication influences and modes at each level in the larger communication context, demonstrating that support for providers needs to stem from outer levels to encourage individual-level change . We present a suggested Socio-ecological Model of Vaccine Communication in Fig. , including common messengers and messages at each level. Each level of the model illustrates messengers that have an influence within that particular level based on the narrative provider data captured in this study. Additionally, the model provides a selection of various messages observed across each level of vaccine communication.
To screen and recruit focus group participants, we collaborated with Alligator Digital, a third-party panel provider, to field a survey across the United States from October 19 – November 12, 2021. Alligator Digital conducted the survey with 524 complete opt-in computer-assisted web interviews (CAWI) of medical professionals. The panel data were used to support a purposive sampling strategy of eligible healthcare professionals, including doctors, nurses, and other medical professionals. In line with the study's research questions, the HCP sample was designed to capture and segment the perspectives of healthcare providers most likely to counsel patients on a regular basis. Participants who reported they did not discuss vaccination with patients were ineligible and excluded from focus group discussions (FGDs). HCPs interested in participating in an FGD provided their contact information when surveyed. All FGD recruitment was conducted by the research team via email. Based on the results from the national survey and input from an advisory group of experts in health communication, health behavior, and vaccine confidence, the following domains were identified as key areas to include in the focus group discussion guide (Appendix A): (1) best practices and strategies to discuss vaccination with patients; (2) preferred and helpful sources of information; (3) impacts of COVID-19 on the work environment; (4) perspectives on the HCP role in combating vaccine hesitancy; and (5) recommendations for supporting vaccine uptake. Through qualitative data collection and informed by our conceived model, the data analysis aimed to define actionable items and communication strategies to improve vaccine acceptance among residents of the United States. This study was reviewed by the Institutional Review Board at the City University of New York (CUNY) Graduate School of Public Health and Health Policy (SPH) as part of a larger mixed methods project, protocol number 2021-0330-PHHP. Findings from the full study are reported elsewhere . All FGDs were segmented by profession and vaccination status: two groups of vaccinated physicians, two with vaccinated nurse practitioners (NPs), two with vaccinated registered nurses (RNs) and physician assistants (PAs), and one with unvaccinated HCPs of various professions. Focus group size ranged from 4 to 7 participants per session depending on attendance, and sessions lasted 60–75 min. FGDs were conducted until thematic saturation was reached. All participants were compensated with a $250 online gift card for their participation in the study. Focus groups were conducted by two members of the research team in December 2021 and January 2022. Before participating in the FGD, all recruited HCPs provided verbal informed consent and explicitly gave permission to be audio recorded, as approved by the Institutional Review Board at CUNY SPH. All groups were conducted and recorded through Zoom, and audio files were transcribed for qualitative analysis. The research team developed an initial codebook based on the interview guide domains and made iterative revisions through a first round of coding . To ensure intercoder reliability, at least two team members were assigned to each transcript to code and develop analytic memos of the transcripts . The thematic analysis was embedded within our model of vaccine communication for HCPs.
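The study double-codes transcripts to ensure intercoder reliability but does not name an agreement statistic; Cohen's kappa is one conventional way to quantify such agreement. The sketch below is illustrative only under that assumption: the code labels and the two coders' assignments are invented, not drawn from the study's codebook.

# Illustrative-only check of intercoder agreement; Cohen's kappa is an assumption
# here, since the paper does not specify how reliability was quantified.
from sklearn.metrics import cohen_kappa_score

# Invented code labels applied by two coders to the same six transcript excerpts
coder_a = ["misinfo", "trust", "strategy", "policy", "trust", "strategy"]
coder_b = ["misinfo", "trust", "strategy", "trust", "trust", "strategy"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1 = perfect agreement, 0 = chance-level agreement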
The research team met regularly to update current codes and discuss analytic approaches, iteratively finalizing the relevant themes as presented. Preliminary findings were sent to an advisory council for feedback following a roundtable discussion. The report was subsequently distributed to participants of the study to check for accuracy and ensure that the report reflected their experiences . Six participants replied via email to confirm the report adeptly summarized their perspectives. Table describes the demographics and characteristics of the 44 focus group participants. The majority of participants were doctors (34%) and physician assistants or nurse practitioners (34%). The majority (80%) were fully vaccinated at the time of data collection. Twenty-four US states were represented, including all regions of the country. The states with the largest representation were Indiana, North Carolina and Texas. Participants were primarily Democratic (41%), white (77%) and female (75%). Results at the intrapersonal and interpersonal level demonstrate the impact of COVID-19 misinformation on patient-provider communication and the potential messengers and messages that can play a role in either promoting or combating misinformation. Results at the community, organizational and policy levels reveal key sources of information and recommended strategies to create an environment that supports vaccine acceptance. Table summarizes the thematic analysis and provides excerpts from the FGDs for each of the themes identified.
Intrapersonal and interpersonal vaccine communication
COVID-19 misinformation has altered the patient-provider relationship
Overall, the focus group participants largely viewed their role as providing a source of scientific information and patient education during appointments. They saw themselves as trusted messengers for their patients, community, friends and family, but were quick to note that communicating this information became more challenging during the COVID-19 pandemic. On the topic of COVID-19 vaccine hesitancy and refusal, most providers felt they had offered sufficient patient education and intervention in the year since the COVID-19 vaccine became widely available and that most unvaccinated individuals were no longer open to being counseled. Providers expressed that, for the first time, some of their patients had doubts about their clinical guidance, believing that they were influenced by pharmaceutical or other institutional forces. While most providers did not face direct accusations of purposely misleading patients (especially those with long-standing relationships with their patients), providers faced patients who expressed distrust of the accuracy of the information they offered. A group of unvaccinated and/or "late adopting" providers (defined as being vaccinated after November 2021) indicated that they experienced a shift in the perception of their own role in vaccine promotion. They expressed distrust stemming from their belief that vaccine mandates were implemented without comprehensive scientific evidence to support them, such as a lack of consideration for natural immunity in vaccine policy development. Importantly, these providers shared many of their patients' COVID-19 vaccine concerns and reported that information provided by patients led them to question some key aspects of their medical training.
Providers' strategies for vaccine communication during patient interactions
The providers offered several strategies for promoting vaccine acceptance among patients. The most common strategy was to tailor information to each patient's medical history and concerns related to the COVID-19 vaccine and to avoid generic guidance. In their view, this approach facilitated provider trust and mitigated any institutional mistrust. This communication strategy was echoed as effective and meaningful in subsequent in-depth interviews and focus groups with patients in the parent study, reported elsewhere . A few providers touted "scare tactics" that appeal to patient fear, stating that patients have responded to other vaccine recommendations which cautioned of severe disease outcomes. One provider suggested this is an underused patient education tactic for the COVID-19 vaccine, citing the success of anti-smoking campaigns that highlighted severely impacted former smokers with chronic illness and disability. Providers also found that testimonials from recent adopters had an impact on their patients. Providers described the sharing of personal anecdotes, family stories, and introductions to other recently vaccinated patients as helpful for individuals still uncertain of their vaccination decision. Details about the unknown and prolonged effects of long-COVID would be an example of this communication strategy. When discussing successful strategies for patient communication, many providers acknowledged that addressing vaccine hesitancy often takes multiple appointments with the same patient and adequate appointment time - both circumstances that many patients and providers cannot independently facilitate or control. Those who saw patients on a regular basis due to the type of care they provided (i.e., maternal and prenatal health care; care for chronic conditions) noted that the ability to have multiple touchpoints with the same patient facilitated a trusting dialogue around medical recommendations, including vaccination.
Community, organizational and policy messengers
Information sources that support provider vaccine communication
Many providers discussed the pressure to stay up to date in an evolving information environment, especially during the first year of the pandemic. They had mixed opinions on whether they had adequate resources to answer patient questions about COVID-19 vaccination but generally agreed about their main sources of vaccine information during the pandemic. Participants cited the Centers for Disease Control and Prevention (CDC) and professional organizations like the American College of Obstetricians and Gynecologists (ACOG) as helpful. Other helpful resources included local and state health departments, whose regular updates mitigated the pressure on healthcare providers to stay up to date. Several providers also indicated workplace communication digests and regular team meetings led by department heads as the most helpful resources to stay current on COVID-19 information. Some of these communications from employers also included patient-facing resources, which many providers reported as necessary to facilitate conversations with new information.
Providers' recommendations to support vaccine acceptance outside the clinical setting
Providers described the challenge of addressing COVID-19 vaccine questions and concerns in an environment that often left them unsupported in reducing barriers to vaccination.
This discussion led to some clear guidance for policy and institutional practices to address providers' barriers to vaccine counseling. Firstly, many providers recommended that all vaccinations be provided free of charge to the patient. Providers highlighted patient financial concerns surrounding the COVID-19 vaccine as well as previous vaccinations. Despite the national provision that vaccines be available free of charge, there continued to be confusion among patients about the financial cost of vaccination. This has implications for both communication strategies and ready access to vaccines. Some providers suggested continuing to offer vaccinations outside of medical office or hospital environments (i.e., at mobile units or pharmacies) to prevent cold supply chain challenges and other barriers in small doctors' offices. Many providers hoped that the lessons learned during the COVID-19 vaccine rollout will inform future vaccination availability. Providers indicated a strong need for a more centralized, unified vaccine communication response from regional and federal agencies to address the ongoing challenges they face addressing oft-conflicting vaccine messages from health officials and government representatives. While they recommend that policies and messaging come from a centralized effort, they also stressed the importance of engaging local messengers. They underscored the need for local, diverse and neutral messengers from trusted community leaders to combat further politicization and polarization. Some acknowledged this could involve collaboration with other sectors that may not be traditionally involved in public health campaigns (e.g., community leaders, faith leaders).
We discuss our findings in the context of this model, focusing specifically on solutions to mitigate the negative impacts of misinformation and of evolving or confusing information on the patient-provider relationship, and on suggestions to create a stronger communication infrastructure that anchors patient-provider relationships. The results of this study highlight the impact of the confusing and often chaotic information environment surrounding COVID-19 and COVID-19 vaccination on patient-provider communication and demonstrate the strain caused when HCPs are not supported to respond adequately to patient concerns during clinical interactions. The impacts of misinformation specifically on the patient-provider dyad during the pandemic may be indicative of a new chapter of vaccine sentiments influencing how HCPs approach conversations about vaccination. While HCPs can play an important interpersonal role in providing consistent and empathetic messaging for their patients, policies and procedures must strengthen organizational and community communication channels to better provide consistent evidence-based health information. Our findings are situated in the socioecological model to show that HCPs counsel patients within a complex communication environment and can be helped or hindered by community, organizational, and policy factors. Viewing patient-provider communication this way demonstrates, for example, that HCPs cannot be the sole combatants against pervasive and predatory misinformation, as the public is exposed through various means. One of the most salient findings from our analysis was the provider recommendation for tailored interpersonal communication strategies that "meet patients where they are at." This includes the empathetic recognition of patient questions and concerns but also the personalization of advice to each patient's medical needs - key strategies echoed in motivational interviewing and other tailored approaches . Providers generally agreed that vaccine acceptance requires an iterative and multi-phased process for many patients .
The inclusion of anecdotes and personal perspectives has been demonstrated to be particularly effective in combating anti-vaccine misinformation . Our findings echo current COVID-19 communication literature describing HCPs as the most effective messengers to present tailored messaging to their patients . However, to ensure the sustainability of such approaches at the community level and prevent provider burnout, other resources are needed to support broader efforts to address vaccine trust. To provide the best evidence-based communication approaches, providers require support to navigate changing medical advice while fielding patients' questions and concerns. For example, our participants agreed that guidance condensed into digests at a regular cadence (e.g., weekly) was the easiest way to consume new information. Digests were most helpful when they came from local organizational-level messengers like a regional health department or an employer. Having hyper-local data snapshots, news and guidance mitigated the pressure felt by providers to be consistently up to date and helped them tailor their information to their patients, given the regional nature of the pandemic experience. Unlike many web pages or e-newsletters, these channels should flow two ways to include feedback from the providers on the use and usefulness of the resources provided. While we found communication resources are most effective and impactful when tailored at the community level, national policy and advocacy must support the collection and dissemination of up-to-date, evidence-based information that all can access and use. Furthermore, our findings indicate that providers seek resources to combat misinformation and overcome entrenched myths and misconceptions beyond currently available educational materials and resources. Bonnevie et al. (2021) have called for the development of partnerships to monitor and track sources of vaccine misinformation and responses to such campaigns through existing monitoring systems in our health infrastructure . Based on our findings, we recommend that a communication and response infrastructure be set up between private organizations tracking and combating misinformation, clinical facilities providing patient education, and government actors with the resources and capital to ensure the sustainability of the collaborative before the next pandemic. The decline in government subsidies for COVID-19 vaccines and testing may impact the acceptability and uptake of vaccination and other mitigation measures . Our focus group participants believed removing financial barriers to vaccination would increase patient uptake, citing that prohibitive costs to patients would thwart any effective communication efforts. Affordable vaccinations and availability at convenient locations remove logistical barriers for vaccine-willing patients. As a policy-level intervention to encourage vaccine acceptance, free or low-cost vaccination in the US must be sustained, regardless of income, insurance, or legal status. The polarized and politicized information environment during the COVID-19 pandemic had a significant impact on vaccine trust and literacy . Our participants agreed that the lack of a unified response from different US federal, state, and local agencies greatly contributed to community fragmentation over COVID-19 preventative measures, including vaccination.
Future efforts should be made to ensure a coordinated and unified policy-level response to limit regional and community dissension and enhance public trust and the adoption of public health measures in future emergencies. While participants spanned the United States, the limited sample size prevents generalized conclusions or assessments of correlations in our results. Although attempts were made to recruit for racial and ethnic diversity, the final sample of participants was largely white and concentrated in the coastal regions of the US. These focus groups offer a preliminary understanding of the barriers and facilitators HCPs face when promoting vaccine acceptance. This study highlights the need for further research on perspectives of vaccine trust and acceptance from marginalized and rural populations. HCPs faced various unique challenges throughout the COVID-19 pandemic, including unprecedented volumes of mis- and disinformation, often rendering pre-pandemic strategies to tackle vaccine hesitancy ineffective. There is a need to recognize provider perspectives in the creation of vaccine communication programs to mitigate HCP challenges and provide sufficient, up-to-date data to address patient concerns. Most importantly, HCPs require the support of policies and a communication infrastructure that builds patient trust in health care institutions and the science behind vaccination.
|
Tackling the crisis in general practice
|
c3f051c7-f94d-42e3-be71-8515d2e10314
|
10152468
|
Family Medicine[mh]
|
Some answers may lie in the Health Foundation’s report itself. Notably, GPs’ dissatisfaction does not seem, in the main, to relate to income. Instead, it is the increased volume and configuration of GPs’ workload, especially administrative work, that seems problematic. Administrative work is demanding, and the burden is compounded by the multiple operational failures routinely experienced by GPs in their daily work. Many of these relate to GPs’ role in coordinating care across multiple boundaries while often depending on incompatible systems and suboptimal communication. Individually, each operational failure—such as the rejected referral letter, the IT system that won’t load, and the delayed discharge information—may be small, but in aggregate they are time consuming, distracting, take time away from patients, and drain joy from work. Evidence from other areas such as hospitals makes clear that apparently minor workplace dysfunctions contribute disproportionately to stress and dissatisfaction. Reducing the administrative burden on GPs seems like an obvious target to help reduce stress and enhance wellbeing. Automation of routine tasks offers much promise, combined with opportunities to improve processes, workflows, communication, and supply chain management to make GPs’ administrative tasks less frustrating and time consuming. But the prospect of attempting this kind of system improvement at individual practice level is daunting: in the current context of high patient demand and staff shortages, the capacity available for improvement activity is very limited. Improvement efforts at individual practice level could also easily, and paradoxically, increase stress and other challenges. Each organisation painstakingly working out its own solutions without access to essential skills and support—such as technical skills in human factors and system design—is not an effective use of precious resources. Suboptimal design of improvement efforts, which is more likely when specialist expertise is lacking, could adversely affect patient experience and satisfaction with care. This was illustrated by the rapid transition to remote consulting during the pandemic, which, while necessary, caused difficulties for some groups. As well as being wasteful, individual solutions result in a lack of harmonisation across basic processes, introducing inefficiencies and threats to patient safety.
A better solution is to tackle problems collaboratively and at scale through a learning system approach that includes patients and diverse staff groups. A primary care learning system could use routinely collected data to monitor care, understand problems, identify targets for improvement, co-design and develop prototype solutions, and implement and test changes with a view to improving both patients’ and GPs’ satisfaction. Such an approach could take advantage of two key strengths of UK primary care. The first is the excellence of general practice data routinely collected by the NHS and high levels of GP data literacy. Although security concerns, incompatible and outdated IT systems, lack of training in data coding and entry, and insufficient administrative and data analytics support will all need to be resolved, general practice data represent a key asset for improvement efforts. The second is the practice cluster and network infrastructure now in place in all four countries of the UK, which provides a way of coordinating and supporting improvement at scale.

These are challenging times for primary care. The causes are complex, and no single solution to the crisis exists. But supporting practices to achieve operational improvement is among the critical actions required to reverse the downward spiral in both patient and GP satisfaction, and reduce the extreme stress currently being experienced across primary care.
|
Investigation of standardized training of radiation oncology residents for gynaecological tumours in China
|
b935b617-89a8-46d5-8c0c-c48f1d9fa0ef
|
10152731
|
Gynaecology[mh]
|
Radiotherapy is the main treatment for common gynaecological malignant tumours such as cervical cancer. A combination of external beam irradiation and brachytherapy is needed in treatment, which places high demands on the theoretical knowledge of radiation oncology, radiation biology and physics, and on the clinical skills of radiation oncologists. However, most medical schools in China do not offer courses in radiation oncology, which makes it difficult for newly graduated residents to master the specialty in the short term. Currently, the 5 + 3 + X training programme is mostly adopted for radiation oncology residents (RORs) in China; it involves 5 years of undergraduate medical study, 3 years of standardized training (ST), and X years of specialized training (generally 2–4 years). In 2014, radiation oncology (RO) was included in the national ST of residents for a period of 3 years. The main clinical rotation departments currently include the radiotherapy department for 10 months (head and neck tumour, chest tumour, abdominal tumour, gynaecological tumour, and others), the general internal medicine department for 10 months (cardiology department, respiratory department, digestive department, infection department, neurology department, emergency department, and ICU), and tumour-related departments for 13 months (otorhinolaryngology department for 1 month, stomatology department for 1 month, imaging department for 2 months, pathology department for 2 months, internal oncology department for 3 months, and tumour surgery/general surgery for 4 months). At the end of ST, clinical theory and practical ability are assessed in combination with the daily comprehensive score. Follow-up specialized training has not yet entered the formal implementation stage, but most hospitals have a specialized training mode. According to the requirements of ST, the training time for gynaecological tumours (GYN) is approximately 2 months, with a minimum caseload of 10 patients. The requirements indicate the need to master the target delineation of GYN and brachytherapy (BRT) operations. Under the current ST mode, we conducted a questionnaire survey on radiotherapy ST in GYN to clarify the difficulties and needs related to ST and to explore a better training model.

The online anonymous survey was conducted with the help of the “Questionnaire Star” platform, and each participant was limited to completing the survey once. The survey was conducted from January 27, 2022, to February 16, 2022. The respondents were residents specializing in radiotherapy in China. The questionnaire comprised 30 questions covering the basic information of the students, their knowledge of radiotherapy theory, their training in gynaecological tumours, the difficulties and needs they faced, and possible solutions. The collected data were analyzed using SPSS 19.0 software, and the comparison of variable composition ratios was carried out using the chi-square test.

Basic information
A total of 550 questionnaires were distributed, of which 528 were received. Fifty-nine respondents who were not residents were excluded, including 15 chief doctors, 41 deputy chief doctors, 1 chief technician, and 2 technicians. Finally, 469 valid questionnaires (85.3%) were analyzed. The collected questionnaires came from 27 provinces, autonomous regions, and municipalities across China; however, there were no data from Qinghai Province, Jiangxi Province, Hainan, Tibet, Taiwan Province, Hong Kong, and Macau. The specific distribution is shown in Fig. .
Of the respondents, 417 (88.9%) were from tertiary hospitals and 52 (11.1%) were from second-level hospitals. Among them, 392 (83.6%) were from teaching hospitals. Among the hospitals where the RORs worked, 96.4% carried out CT simulation positioning, 25.8% carried out MRI simulation positioning, 95.5% carried out intensity-modulated radiotherapy, and 70.6% carried out BRT.

Current status of ST of RORs in GYN
Among the 469 RORs, 407 (86.8%) received ST on RO and 62 residents received non-RO ST, including 4 (0.9%) who received ST in obstetrics and gynaecology, 43 (9.2%) who received ST in internal medicine, and 15 (3.2%) who received ST in surgery. Among all RORs, 327 (69.7%) were junior physicians who had been engaged in radiotherapy for 1–3 years and 142 (30.3%) were senior physicians who had been engaged in radiotherapy for 4–6 years. In the two groups, only 192 (58.7%) and 84 (59.2%), respectively, underwent GYN rotation in ST (P = 0.506, χ² = 0.046); 79.2% had a rotation period of 1 to 6 months, and half of them had a rotation period of 2 to 3 months. Among the 469 residents, the number of GYN patients they treated is shown in Table . For junior doctors, the number of patients treated with definitive radiotherapy for cervical cancer was mostly 11–20 cases, and the number of patients treated with postoperative radiotherapy for GYN was 1–5 cases. Only 320 (68.2%) residents had performed gynaecological examinations for patients. Among the 430 RORs who had treated GYN patients, 176 (40.9%) had experience in applicator implantation for patients with GYN. Among them, 48 (27.3%) considered themselves skilful at BRT, 81 (46.0%) were familiar with BRT, 34 (19.3%) were generally familiar with BRT, and 13 (7.4%) were not familiar with BRT. Among all participants, 92.1% thought that BRT was very important for patients with locally advanced cervical cancer, 7.7% thought BRT was important, 0.2% thought BRT was generally important, and no participants thought BRT was not important. Only 50.1% of RORs knew the physical characteristics of BRT, and 49.2% thought they could choose the appropriate BRT for patients, as shown in Fig. a and b. At the end of the current ST, only 75.3% of RORs could independently complete the target delineation of GYN patients, and 56% could independently complete the BRT.

Difficulties and needs
Difficulties
The problems faced by junior and senior RORs in treating patients with GYN are different, as shown in Fig. a. With regard to the mastery of basic knowledge and skills, senior physicians were significantly better than junior physicians (P < 0.05), but there was no significant difference in the understanding of surgical procedures between the two groups (P = 0.714, χ² = 0.134). In practical clinical work, when residents encounter problems, junior physicians usually prefer to consult their superiors, textbooks or professional books, periodicals, or literature to solve the problems. The proportion of junior physicians who consulted their superiors was significantly higher than that of senior doctors (94.8% vs. 79.6%, P = 0.000), as shown in Fig. b. At the end of the ST in GYN, some residents do not reach the level of treating GYN patients independently. The survey showed that the main reasons include the scarcity of GYN patients, insufficient teaching awareness of superior physicians, and personal lack of interest, accounting for 65.7%, 50.3% and 19%, respectively.
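As an aside on the statistics: the composition-ratio comparisons reported above are simple contingency-table chi-square tests. The minimal R sketch below re-checks the junior/senior GYN-rotation comparison using the counts reported above; the exact statistic depends on continuity-correction settings, so it may not reproduce the published SPSS output exactly.

    # Chi-square comparison of GYN rotation rates: junior (192/327) vs. senior
    # (84/142) residents; counts taken from the Results above.
    rotation <- matrix(c(192, 327 - 192,
                         84,  142 - 84),
                       nrow = 2, byrow = TRUE,
                       dimnames = list(c("junior", "senior"),
                                       c("rotated", "not_rotated")))
    chisq.test(rotation, correct = FALSE)  # Pearson chi-square without continuity correction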
Requirements
The survey results showed that at the end of the ST, to achieve the ability to treat GYN patients and complete BRT independently, the minimum number of patients to be treated was 11–20 patients with definitive radiotherapy for cervical cancer and 11–20 patients with postoperative radiotherapy. In addition, in the ST, RORs who intend to engage in the specialty of GYN in radiation oncology in the future can increase the time of GYN training, participate in specialist operation training and increase their rotation in the imaging department. See Fig. for other specific requirements. All surveyed residents agreed to set up a formal course of brachytherapy, of whom 86.6% agreed to include an assessment of brachytherapy competence at the end of the training.
The ST of residents plays an important role in postgraduate medical education. Improving the post competency of resident physicians is the core aim of ST. Radiotherapy ST has been carried out for 7 years in China. However, according to the survey results, the proportion of residents who received GYN training in the ST was only maintained at approximately 60%, which means that not all RORs receive GYN training in the ST. Even for those who received GYN training, the median rotation time was only 2–3 months. During the actual training of the GYN subspecialty in radiation oncology, the survey results showed that the number of cases treated by RORs basically met the requirements of ST but could not meet the clinical needs of trainees. In addition, owing to the lack of basic theory and skills training, although most residents had treated patients with GYN and almost all believed that BRT was important for GYN, only three-quarters of them could delineate the target volume independently, and only half could implant applicators independently with knowledge of the physical characteristics and indications of BRT. There are three main reasons for failing to achieve the goal of ST. First, the number of patients was insufficient. Second, the superior physicians’ awareness of teaching was not strong. Third, the current training model cannot stimulate learners’ interest in GYN radiotherapy. Most of the RORs hoped to increase the rotation time in the GYN specialty, increase professional operation training, set up formal BRT courses, and add professional operation skill assessments to improve the effect of ST.

Current radiotherapy techniques for GYN mainly include external irradiation and brachytherapy. Training in external irradiation technology is feasible because it is not limited by equipment. However, brachytherapy training is the opposite. In recent years, several surveys of residents in America, Europe and Australia have shown that approximately 40–70% of residents receive insufficient BRT training during their residency due to the number of patients, limited equipment, and other reasons. In China, according to the latest cancer data, cervical cancer ranks fifth (11.34/100,000) in the incidence of female malignant tumours, uterine cancer ranks eighth (6.64/100,000), and cervical cancer ranks seventh in the mortality rate (3.36/100,000). The incidence of and mortality from cervical cancer are still on the rise. Therefore, gynaecological malignant tumours remain among the main diseases affecting women’s health in China. The distribution of patients may be uneven, but the overall number of patients is large. Increasing the flow of residents in training between hospitals can compensate for the training gaps caused by the lack of patients at a single site. RORs still prefer the help of superior doctors when solving clinical problems, so the teaching awareness of superior doctors needs to be improved. In daily teaching, multiple teaching modes should be combined to increase students’ interest in GYN.
The setting up of specialized courses, such as in imaging, radiation physics, and biology, is a common requirement in the ST of RORs. Brachytherapy is indispensable to training in the GYN subspecialty of radiotherapy. However, because it involves operating on individual patients, improper manipulation during BRT produces uncomfortable experiences for patients and reduces the operator’s confidence. In the event of an error, the teacher may terminate the resident’s operation to avoid irreparable harm to the patient, which in turn disrupts the training process. A simulation course in BRT can help trainees repeat procedures an unlimited number of times on mechanical models or in virtual reality, increasing their proficiency and operative skills, establishing their confidence and improving their post competency. At the same time, rotation in the imaging and gynaecological oncology departments will meet the training needs of most residents.

This survey was aimed at China’s radiation oncology residents, with widely distributed data sources and a valid questionnaire response rate of 85.3%, making the conclusions broadly applicable nationwide. However, there were some limitations to this survey. Since the main objective was to investigate the standardized training mode in the gynaecological tumour subspecialty for radiation oncology residents in China, no detailed investigation was conducted on brachytherapy itself. In the future, we plan to investigate the relevant details of brachytherapy.

The ST of the gynaecological oncology subspecialty of radiation oncology in China has not been fully popularized, and residents’ mastery of specialized skills and theories still cannot meet the training requirements. It is necessary to increase the teaching awareness of specialized training teachers and to continue optimizing the curriculum, especially by improving the curriculum for specialized procedures and applying strict assessment systems.
|
The soil microbiomes of forest ecosystems in Kenya: their diversity and environmental drivers
|
4be4ce79-a8bc-4b7e-b28a-bf09560ae562
|
10154314
|
Microbiology[mh]
|
Forests are highly productive components of terrestrial ecosystems, covering more than 40 million km² and representing 30% of the total global land area. They form part of our most precious natural resources, essential to the continued balance and survival of the world’s ecosystems. Forests act as carbon sinks where soil organic matter is formed from residuals after biomass decomposition. They play a major role in the global cycling of carbon, as well as in organic nitrogen mineralization and the conversion of organic phosphorus into inorganic forms. Moreover, forests are involved in the maintenance of soil structure, organic matter decomposition and degradation of pollutants, and they shape soil microbial communities through symbiotic interactions with primary microbial producers such as mycorrhizal fungi. Some of the bacterial taxa previously shown to dominate forest soil ecosystems and play such key roles include members of the genera Pedobacter and Chitinophaga (Bacteroidetes); Pseudomonas, Variovorax, Ewingella, and Stenotrophomonas (Proteobacteria); Burkholderia, Phenylobacterium, and Methylovirgula (Pseudomonadota); members of the Rhizobiales; and Nitrosopumilus. Unfortunately, these forest ecosystems have suffered serious depletion due to anthropogenic activities associated with over-farming, the pulp and paper industry and population encroachment into peri-urban areas, along with other land-use changes.

Soil microorganisms are an important component of the forest ecosystem, as they play fundamental roles in most nutrient transformations within forest soils, upon which the stability and sustainable development of forest ecosystems rely. The distribution and diversity of soil microbiomes are influenced by numerous aspects such as soil type, physicochemical characteristics, microclimate, vegetation and land-use. Recent microbial ecology studies have shown that different habitats harbor diverse microbial communities whose succession patterns are shaped by substrate availability, including nutrient pools, physicochemistry and vegetation. In addition, factors that modify the microclimate and forest litter chemistry, such as forest type, plant species and plant diversity, have also been identified as drivers of microbial community composition in forest soils.

Kenya’s indigenous forests represent some of the most diverse ecosystems in the world, and provide important economic, environmental, recreational, scientific, cultural and spiritual benefits to the nation (Republic of Kenya, 2009). Forests play a vital role in the stabilization of soils and ground water, support the conduct of reliable agricultural activity and play a crucial role in protecting water catchments in Kenya, besides moderating climate by absorbing greenhouse gases. In addition, forests such as those of the Taita Hills are hotspots of biodiversity, harboring a wide variety of medicinal plants. The Forests Act has previously recognized that forests provide the main locus of Kenya’s biological diversity and a major habitat for wildlife, and acknowledges that forests and trees are the main source of domestic fuelwood. However, these forests have been subjected to land-use changes such as conversion to farmlands, ranches and settlements. Historically, the majority of forest soil microbial diversity studies have been performed in northern hemisphere countries, with very little focus on the forests of the African continent, even in global studies.
To fill this knowledge gap, this study aimed to document the microbial ecology of selected Kenyan forest soil ecosystems and to study their possible abiotic drivers. The selected forest ecosystems are among the Kenyan landscapes endowed with varied climates, water catchments and soil regimes. For instance, the Mt. Kenya, Aberdare and Taita Taveta regions are among Kenya’s water towers. These regions are characterized by bimodal rainfall patterns, which influence the vegetation within each ecoregion. This leads to variation in moisture content within soil ecosystems, further influencing microbial diversity.
Different forest soils in Kenya have unique physicochemical properties
In this study, 31 soil samples were obtained from forest ecosystems within the Taita Taveta, Nairobi, Western, Aberdare and Mt. Kenya ecoregions (Supplementary Table ). Samples from the different ecoregions were shown to be significantly different (p-value ≤ 0.01, R² = 0.45) in terms of soil physicochemical properties, specifically in soil pH, soil texture, macro- and micro-nutrient composition and Enhanced Vegetation Index-2 (EVI2) (Fig. a and b, Supplementary Fig. a and b). Taita Taveta forest soils were highly distinct from those of the Nairobi, Aberdare and Western regions (Fig. b). Conversely, Nairobi and Western region soils exhibited the least variability (Fig. b). Several soil physicochemical properties were found to be correlated, and thus could be considered interdependent (Fig. c). For instance, the measure of plant density, the vegetation index (EVI2), was positively correlated with all the measured soil nutrients apart from phosphorus. This is not unexpected, as nutrient-rich forest soils have been repeatedly shown to support high-density plant growth. The soil samples used in this study were collected from 0 to 5 cm depth, which is within the 0–20 cm soil profile characteristically comprising the organic horizon that results from decomposition of litter-derived organic matter and represents a nutrient-rich mixture of processed, plant-derived organic matter. Low titratable phosphorus concentrations were possibly due to the presence of high contents of Al and Fe, which form oxides that fix phosphorus at the low pH values associated with these soils. In this study, pH was positively correlated with EVI2 but negatively correlated with C and N content. This result contradicts a previous study that concluded that at higher soil pH levels, the mineralizable fractions of C and N increased, possibly due to the direct effect of pH on microbial populations and their activities.

Taxonomic composition of soil microbiomes across Kenyan forest biomes
Analysis of bacterial diversity in forest soil samples indicated the presence of 34 phyla, of which 12 were dominant (i.e. represented by > 1% of ASV reads in at least 87% of the samples). The most abundant of these was Proteobacteria (30.3% mean relative abundance), followed by Acidobacteriota (23.4% mean relative abundance) and Actinobacteria (13.1% mean relative abundance) (Fig. a). Actinobacteriota members such as Frankiales and Streptomycetales are known as nitrogen-fixing bacteria and may produce biologically active secondary metabolites. The dominant bacterial phyla from the current study were consistent with other studies at two forest sites, where bacterial ASVs were assigned to 44 phyla, ten of which (Proteobacteria, Acidobacteria, Verrucomicrobia, Firmicutes, Actinobacteria, Bacteroidetes, Planctomycetes, Chloroflexi, WD272, and Gemmatimonadetes) comprised more than 90% of the relative abundance in each library. Our results on bacterial abundance were also consistent with several previous studies in which Proteobacteria, Acidobacteria, Verrucomicrobia, Firmicutes, Actinobacteria, Bacteroidetes, Planctomycetes and Chloroflexi were the most abundant phyla. In particular, members of the Proteobacteria and Acidobacteriota phyla have been reported as ubiquitous and dominant in soil ecosystems.
Members of these phyla, such as Anaeromyxobacter, Bradyrhizobium, Azospirillum, Ralstonia, Burkholderia, Brevundimonas and Rhodopseudomonas (Proteobacteria); Mycobacterium, Nocardia, Amycolatopsis, Thermobispora, Pseudonocardia, Brachybacterium, Frankia and Conexibacter (Actinobacteria); and Streptococcus, Lactococcus and Enterococcus (Firmicutes), have been reported to carry out various key ecological processes such as regulation of biogeochemical cycles, decomposition of biopolymers, exopolysaccharide secretion and plant growth promotion. The most dominant taxa at Order level, Rhizobiales (12.8% mean relative abundance), Burkholderiales (6.3% mean relative abundance) and Chitinophagales (6.2% mean relative abundance), were represented across all samples (Supplementary Table ). The order Chitinophagales contains members that are known to degrade complex organic matter, such as chitin and cellulose. The orders Rhizobiales, Xanthomonadales and Rhodospirillales found in this study are also well known for nitrogen fixation, mineralization and denitrification activities. Crenarchaeota was the most abundant archaeal phylum, represented across all samples with 91.6% mean relative abundance (Fig. b). This phylum was further grouped into two classes: Nitrososphaeria (77.1% mean relative abundance) and Bathyarchaeia (0.2% mean relative abundance). Nitrososphaeria includes many ammonia-oxidizing taxa that have been identified previously in forest soil microbiomes. Other phyla within the archaeal domain included Thermoplasmatota (6.4% mean relative abundance), represented within about two thirds of the soil samples (Supplementary Table ), while the Nanoarchaeota phylum (1.7% mean relative abundance) was represented within about a quarter of the soil samples. These results agree with previous studies in which archaeal communities in forest biomes were reported to be dominated by Nitrososphaera. Members of Nitrososphaera have been described as major contributors to soil biogeochemical processes such as the carbon, methane, nitrogen and sulfur cycles within many ecosystems.

Alpha- and beta-diversity analysis of soil prokaryotic communities
Analysis of sample alpha-diversity showed Western and Taita Taveta region soils to have significantly different (P ≤ 0.01) levels of archaeal richness, while Western and Aberdare region soils displayed significantly different Shannon diversity indices (P = 0.02) (Fig. d–f). Although there were no significant differences between the bacterial communities displayed within the various forest ecosystems (Fig. a–c), soil samples under bamboo vegetation cover within the Mt. Kenya and Aberdare regions displayed lower diversity than the other ecoregions. Soils collected from the Taita Taveta region (Vuria and Ngangao) were shown to have the highest number of observed prokaryotic taxa. These forests are characterized by montane climate vegetation with thick ground cover. The high number of ASVs could be attributed to a broad range of bacterial micro-habitats associated with high nutrient availability, besides other specific microbial diversity drivers such as plant density and vegetation index that positively influenced bacterial abundance. There was high prokaryotic variability observed within each region, an indication of distinct microhabitats and microclimates in each forest region covered (Fig. g,h).
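For readers wishing to reproduce this type of comparison, the sketch below outlines the alpha-diversity step in R with phyloseq and base statistics; the phyloseq object 'ps' and its sample variable 'region' are illustrative names, and the taxonomy rank label ("Kingdom") depends on how the SILVA annotation was imported.

    library(phyloseq)

    # Split the prokaryotic table into domains before computing diversity
    # indices; the rank name "Kingdom" is an assumption ("Domain" in some imports).
    ps_arch <- subset_taxa(ps, Kingdom == "Archaea")

    # Observed richness, Shannon and inverse Simpson indices per sample
    alpha <- estimate_richness(ps_arch,
                               measures = c("Observed", "Shannon", "InvSimpson"))
    alpha$region <- sample_data(ps_arch)$region

    # Non-parametric test for regional differences, as used for the soil variables
    kruskal.test(Shannon ~ region, data = alpha)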
Beta-diversity analysis of soil samples from these regions showed significant differences (P < 0.01) in bacterial and archaeal community structure (Bacteria R² = 0.22; Archaea R² = 0.24) (Fig. g,h). Notably, the microbial composition of samples from the Taita Taveta region showed a lower degree of overlap with the other regions, which mirrors the soil chemistry differences observed between the regions. The Taita Hills comprise the northernmost part of the Precambrian Eastern-Arc Mountain range, known for its rich biodiversity and recognized as one of the world’s 25 biodiversity hotspots. The highly significant (P < 0.01) richness and Shannon diversity index values for samples from Western region forests could be attributed to the tropical nature of forests within this region, such as sample K21 (Kakamega Forest), which is considered an important biodiversity reservoir and the only remaining Guinea-Congolian tropical rain forest in Kenya. Kakamega Forest is the largest moist lowland forest ecosystem in Kenya and has similar characteristics to Central African equatorial forests.

To explore further the differences in soil microbiome structure between the different forest areas, linear discriminant analysis (LDA) effect size (LEfSe) was used to detect prokaryotic taxa that were differentially abundant within and between soil samples. In a comparison of samples from the five forest regions (Aberdare, Mt. Kenya, Nairobi, Taita Taveta and Western), several taxa were identified as differentially abundant (adj. P < 0.01): 13 genera in Taita Taveta, 21 in Nairobi, 1 in Mt. Kenya, 2 in Western and 5 in the Aberdare region (Fig. a). The LEfSe algorithm also identified differentially abundant archaeal taxa (adj. P < 0.01) within three regions (Aberdare, Nairobi and Taita Taveta), each having one such taxon (Fig. b). The genus Acidibacter was over-represented in Taita forest soils, possibly due to the low soil pH observed in this region. IMCC26256 was over-represented in the Western region. Burkholderia-Caballeronia-Paraburkholderia taxa, which typically have very broad ecological diversity and metabolic versatility, were the most abundant in Aberdare Forest soils, RB41 in Mt. Kenya, while Rhodovastum was the most abundant in Mt. Kenya region soil samples (Fig. a).

Environmental drivers of soil microbiomes in Kenyan forest soils
A stepwise model-building approach for constrained ordination models was used to assess the potential environmental drivers of the prokaryotic communities within forest ecosystems. Canonical correspondence analysis (CCA) ordination plots showed that bacterial and archaeal community structures were significantly affected by several soil physicochemical characteristics (P < 0.01). Soil pH, Ca, K, Fe and %N were shown to be key drivers of bacterial community structure, while Na, pH, Ca, P and %N were important factors in shaping archaeal community structure within forest soils (Fig. a,b). The significant effect of nitrogen on community structure is consistent with the composition of the soil microbiomes described in this study, which were dominated by taxa potentially involved in nitrogen fixation such as Cyanobacteria and Nitrospirota (Supplementary Fig. a). Fe concentration and soil texture are known to be major factors in shaping bacterial community structures in some soils. Soil pH possibly affected the thermodynamics and kinetics of microbial respiration, thus shaping the microbial communities’ composition and function.
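For orientation, the sketch below outlines this stepwise constrained-ordination workflow in R with vegan, following the approach described in the Methods; the object names ('asv', a samples-by-taxa community table, and 'env', a data frame of z-score-standardized soil variables) are illustrative assumptions.

    library(vegan)

    # Forward stepwise selection of soil variables for a constrained ordination
    cca0 <- cca(asv ~ 1, data = env)   # intercept-only model
    cca1 <- cca(asv ~ ., data = env)   # full model with all soil variables
    sel  <- ordistep(cca0, scope = formula(cca1),
                     direction = "forward", permutations = 1000)

    vif.cca(sel)                                  # flag collinear predictors (VIF > 10)
    anova(sel, permutations = 1000)               # significance of the selected model
    anova(sel, by = "terms", permutations = 1000) # significance of each predictor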
The “uniqueness” of Kenyan forest microbiomes In order to address the question of whether Kenyan forest soils harbor unique microbiome compositions, the phylogenetic datasets used in this study were compared with datasets on forest soil microbiomes from other countries across the globe ( Supplementary Table ) . Comparisons of the beta-diversity scores between these datasets, based on Bray–Curtis index (Fig. ), revealed community structures of forest soil microbiomes which were, to some extent, distinguishable by the country of origin ( R 2 = 0.63; p-value < 0.01). The Kenyan dataset formed a distinct group with some degree of overlap with soil microbiomes from China, the Czech Republic, New Zealand and the USA. This overlap could be a result of common plant cover between the sampled areas in the different countries. Some forests in Kenya are known to harbor globally distributed plant species such as bamboo ( A. alpina ), indigenous plant species found within forests with highest floral diversity such as ( Coffea fadenii , Juniperus procera —African pencil cedar, Podocarpus falcatus , latifolius , Tabernaemontana stapfiana , Ocotea usambarensis , Macaranga conglomerata , and Psychotria petit . Forests harboring moderate floral diversity included Podocarpus , Dombeya , Croton megalocarpus , while dryland species included Acacia species such as A. tortilis , A. melifera , A. abyssinica, and A. polyacantha . Plantation species included Eucalyptus grandis , E. saligna , E. camaldulensis and E. urophylla . It is also worth noting that the Kenyan dataset exhibited the highest variability of beta-diversity scores, which reflect the variety of ecoregions sampled in this study. The significant compositional differences between national datasets were reflected in the LDA comparison results, which identified 178 taxa differently distributed across the datasets (Supplementary Table ). Fourteen of these were over-represented in Kenyan forest soils, including the Archeal genus Nitrososphaera . Other over-represented genera of potential ecological relevance to Kenya forest soils included Bradyrhizobium , which is positively associated with soil health and Chitinophaga , members of chitinolytic Myxobacteria known to control fungal populations in soils . It is also worth noting that several of the over-represented taxa in the Kenyan soil dataset belonged to uncultured groups of bacteria, including members of uncultured genera TK-10 and Ellin606, an indication of Kenyan forest soils may harbor a catalogue of novel taxa. During development of bio-conservation strategies in these forest regions, consideration of these distinct microbiomes with unique taxa should be taken into account.
Study site and sample collection
This study was part of an ongoing consortium project focused on a primary-scale survey of soil chemistry and microbiology across a range of regional and climatic zones in sub-Saharan Africa. In Kenya, a microbiome survey of the soils across selected forest ecosystems was carried out based on a census of forest regions (http://kws.go.ke/content/overview-0). Data capture at each sampling site included GPS location, elevation, vegetation at the time of sample collection, slope, general soil description and general site description. To show the sampled forest sites accurately and to scale, a map was constructed from the GPS coordinates captured during fieldwork using ArcGIS 10.8.1 (Environmental Systems Research Institute software application, 2020; https://www.esri.com/en-us/arcgis/products/arcgis-platform/overview), which was used to visualize and display the sample sites. Layers for towns, rivers, lakes and roads were added from the ArcGIS Online database to enrich the thematic map (Fig. ). The distribution and characteristics of the selected forests used in this study are summarized in Supplementary Table . Sampling was done by recovering 4 × 200 g topsoil samples (0–5 cm depth) at approximately 50 m spacing at each site. Each working sample was obtained by scooping a composite of 4 × 50 g pseudo-replicate samples, recovered from the corners of a one-square-metre virtual quadrat. Each sample was collected in a separate labelled Whirl-Pak bag and stored at 4 °C prior to shipment to the University of Pretoria (South Africa) for nucleic acid extraction and soil physicochemical analysis. These samples were grouped into regions depending on their geographical location on the Kenyan map as follows: Aberdare (samples K23, K33, K34, K63 and K77); Mt. Kenya (K35, K36, K37, K38, K39, K40, K42 and K66); Nairobi (K15, K16, K29, K70 and K71); Taita Taveta (K5, K6, K7, K8, K9 and K10); and Western region (K18, K21, K24, K25, K26, K27 and K28).

Soil physicochemical characteristics
Soil physicochemical characteristics (Supplementary Table ) were determined using protocols outlined by AgriLASA (2004). Soil pH was measured using the slurry method at a 1:2.5 soil/water ratio, and the pH of the supernatant was recorded with a calibrated benchtop pH meter (Crison Basic +20, Crison, Barcelona, Spain). The concentrations of soluble and exchangeable sodium (Na), potassium (K), carbon (C), magnesium (Mg) and phosphorus (P) were determined using the Mehlich 3 test. Extractable ion concentrations were quantified using ICP-OES (Inductively Coupled Plasma Optical Emission Spectrometry; Spectro Genesis, SPECTRO Analytical Instruments GmbH & Co. KG, Germany). Soil particle size distribution (sand/silt/clay percent) was measured using the Bouyoucos method. Total nitrogen (TN) and soil organic carbon (TOC) were measured using the catalyzed high-temperature combustion method (Dumas method). The Enhanced Vegetation Index-2 (EVI2) was obtained from the NASA Land Processes Distributed Active Archive Center’s (LP DAAC) VIIRS Vegetation Indices dataset at a 500-m resolution.

Prokaryotic DNA extraction and 16S rRNA gene sequencing
Total DNA was extracted from soil samples using the DNeasy PowerSoil Kit (QIAGEN, USA) following the manufacturer’s instructions, with the following modifications: the elution buffer C6 was pre-heated to 55 °C for 10 min before the final elution step, and the DNA was eluted using 70 μl of the elution buffer.
After extraction, DNA concentration and purity were checked using a Nanodrop 2000 (ThermoFisher, USA) and agarose gel electrophoresis. The DNA samples were sent to MR DNA laboratories (www.mrdnalab.com, Shallowater, TX, USA) for sequencing of the V4/V5 region of the 16S rRNA gene, using the 515F (5'-GTGYCAGCMGCCGCGGTAA-3') and 909R (5'-CCCCGYCAATTCMTTTRAGT-3') primers. Before library preparation, the regions of interest were amplified using the HotStarTaq Plus Master Mix Kit (Qiagen, USA) and subsequently purified using calibrated Ampure XP beads (Beckman Coulter Life Sciences, USA). Sequencing was performed at MR DNA on a MiSeq instrument following the manufacturer’s guidelines.

Sequence analysis and taxonomic classification
The generated raw amplicon sequence reads were filtered, trimmed, and clustered into unique amplicon sequence variants (ASVs) using the QIIME2 pipeline. Briefly, raw sequences were demultiplexed and quality checked, and a feature table was constructed using the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline built into QIIME2. The raw sequences were denoised and chimeras removed. Sequences which were < 200 base pairs after phred20 base-quality trimming, with ambiguous base calls, or with homopolymer runs exceeding 6 bp were removed. The forward and reverse reads were truncated at 324 base pairs. This was followed by the calculation of denoising statistics, picking of representative sequences and creation of the ASV feature table. Sequence processing resulted in a total of 1,944,316 high-quality sequence reads, which were clustered into 41,901 ASVs at 3% genetic distance. Representative sequences were aligned using MAFFT, and highly variable regions were masked to reduce the noise in phylogenetic analysis. Phylogenetic trees were created and rooted at midpoint in QIIME2. Taxonomic classification of ASVs was done using the QIIME feature-classifier against the untrained SILVA 138.1 database (release 2022.2). Demultiplexed high-quality sequence reads were deposited in the National Centre for Biotechnology Information (NCBI) Sequence Read Archive (SRA) under BioProject ID PRJNA851255, with study accession numbers available for download at http://www.ncbi.nlm.nih.gov/bioproject/851255. In addition, the metadata, soil chemistry data, input files and QIIME and R analysis scripts were deposited at https://zenodo.org/ under DOI 10.5281/zenodo.7827433, available via the link https://doi.org/10.5281/zenodo.7827432.

Data processing of amplicon datasets from other countries
Sequence datasets from selected forests around the globe were downloaded from publicly available databases (accession numbers in Supplementary Table ) and processed using the QIIME2 pipeline as described above. Raw reads from the downloaded datasets spanned the 16S rRNA gene hypervariable regions v3-v4, v4, and v4-v5, depending on the study. To keep the sample sizes between countries comparable, a subset of 28 to 30 samples was chosen for each dataset. To accommodate the variable quality scores of the different datasets, the quality threshold was set to 20 and all reads were truncated at 220 bp. After DADA2 processing, the resulting representative sequence file and ASV table were merged with the Kenyan dataset. Read counts for the combined dataset ranged from 10,877 to 346,157 reads (Supplementary Fig. ). The merged representative sequence file was taxonomically annotated using the untrained SILVA database 138.1 (release 2022.2).
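To make the denoising step concrete, here is a minimal sketch of an equivalent DADA2 workflow in the R package of the same name; the study ran DADA2 inside QIIME2, so this is an illustration rather than the authors' exact commands, and the fastq paths and the SILVA reference file name are assumptions.

    library(dada2)

    # fnFs / fnRs are assumed vectors of paths to raw forward and reverse fastq
    # files; filtered reads are written to a "filtered" subdirectory.
    filtFs <- file.path("filtered", basename(fnFs))
    filtRs <- file.path("filtered", basename(fnRs))

    # Quality filtering and truncation at 324 bp, mirroring the parameters above
    filterAndTrim(fnFs, filtFs, fnRs, filtRs,
                  truncLen = c(324, 324), truncQ = 20, maxN = 0)

    # Learn error rates, denoise, merge read pairs and remove chimeras
    errF   <- learnErrors(filtFs)
    errR   <- learnErrors(filtRs)
    dadaFs <- dada(filtFs, err = errF)
    dadaRs <- dada(filtRs, err = errR)
    merged <- mergePairs(dadaFs, filtFs, dadaRs, filtRs)
    seqtab <- removeBimeraDenovo(makeSequenceTable(merged))

    # Taxonomic assignment against a SILVA 138.1 training set (file name assumed)
    taxa <- assignTaxonomy(seqtab, "silva_nr99_v138.1_train_set.fa.gz")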
Statistical analysis
ASVs from QIIME2 were imported for use with the phyloseq package (version 1.36.0) in RStudio . The taxonomy table was merged with the feature table, and relative abundance bar plots were plotted using the ggplot2 package (version 3.3.5) . The normality of the dataset was first tested with the Shapiro–Wilk test . The Kruskal–Wallis rank sum test was subsequently used to assess the significance of mean differences in soil variables between forest soil samples (adj. p value < 0.01). Tukey post hoc tests were used to compare differences between regions where soil environmental variables were normally distributed (adj. p value < 0.01). Significant differences in soil physicochemical characteristics were calculated using the stats package version 3.6.2 in R version 4.0.3 . The distribution of soil physicochemical variables across the different forest sites was examined on log-standardized data (the “decostand” function from the vegan package, version 2.5.7 ), followed by principal component analysis (PCA) . The resulting distance matrix between samples was plotted in a PCA graph, with the projected direction and magnitude of the distribution of each variable plotted in a separate loading plot. The Hmisc package (version 4.5) was subsequently used to calculate strong, significant Pearson correlations between variables (adj. p value < 0.01), which were plotted in a bubble graph using the corrplot package (version 0.9) . Biodiversity metrics (alpha diversity) and community structure dissimilarity (beta diversity) were calculated using the vegan (version 2.5.7) and phyloseq (version 1.16.2) packages in RStudio. Observed richness, the inverse Simpson index and the Shannon index were used as alpha-diversity metrics . The prokaryotic ASV table was split into Archaea and Bacteria using the “subset_taxa” function in phyloseq before calculating the diversity indexes. Differences in alpha diversity between designated regions were assessed as described for the soil physicochemical variables. The beta-diversity index of each soil sample was calculated from the centered log-ratio (CLR)-transformed ASV tables using the “vegdist” function in vegan, based on the Bray–Curtis distance estimation method . Ordination of the beta-diversity scores was plotted on a principal coordinate analysis (PCoA) plot , and the significance of beta-diversity dissimilarity between forest regions was calculated using Permutational Multivariate Analysis of Variance (PERMANOVA) with 999 permutations. Comparison of beta-diversity distributions between the samples of the different country datasets was performed using the same methodology. The environmental drivers of prokaryotic community structure were estimated using redundancy analysis (RDA) . The soil physicochemical dataset was z-score standardized and tested for multicollinearity using the “vif” function from the car package (version 3.0.11) . The best models for explanatory variables were selected using the forward stepwise regression model selection method with the ordistep() function in the vegan package, with 1000 permutations, and variables with VIF values above 10 were removed. The significance of the best-fitted models and of each predictor variable in the model was calculated using the ANOVA permutation test with 1000 permutations .
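A condensed R sketch of the beta-diversity and RDA steps described above, using vegan; the object names (asv_clr for the CLR-transformed ASV table, soil_env for the soil variables, meta$region for the grouping factor) are hypothetical placeholders.

library(vegan)

# Beta diversity: Bray-Curtis dissimilarity and PERMANOVA across regions
bray <- vegdist(asv_clr, method = "bray")
set.seed(1)
adonis2(bray ~ region, data = meta, permutations = 999)

# RDA with forward stepwise selection of soil predictors
env_z <- as.data.frame(scale(soil_env))   # z-score standardization
rda_null <- rda(asv_clr ~ 1, data = env_z)
rda_full <- rda(asv_clr ~ ., data = env_z)
rda_best <- ordistep(rda_null, scope = formula(rda_full),
                     direction = "forward", permutations = 1000)
anova(rda_best, permutations = 1000)                 # overall model significance
anova(rda_best, by = "terms", permutations = 1000)   # per-predictor significance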
The relative taxonomic abundances of prokaryotic taxa were compared between regions using the Linear Discriminant Analysis (LDA) effect size (LEfSe) algorithm for high-dimensional biomarker discovery and explanation of differentially abundant organisms. This analysis was implemented using the microbiomeMarker package in RStudio . Differences were analyzed using the Kruskal–Wallis rank-sum test to detect significantly differentially abundant taxa at the genus level (adj. p value < 0.01). Biological consistency was investigated using a set of pairwise tests among genera using the Wilcoxon rank-sum test , , with an LDA threshold of 2. The same LDA method was used to detect differentially abundant taxa across the country datasets.
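The LEfSe step can be sketched as follows, assuming a phyloseq object ps with a sample variable named region; the thresholds mirror those stated above, though argument names should be checked against the installed microbiomeMarker version.

library(microbiomeMarker)

lefse_res <- run_lefse(
  ps,
  group           = "region",  # grouping factor for the Kruskal-Wallis test
  taxa_rank       = "Genus",   # test differential abundance at genus level
  kw_cutoff       = 0.01,      # Kruskal-Wallis significance cutoff
  wilcoxon_cutoff = 0.01,      # pairwise Wilcoxon consistency check
  lda_cutoff      = 2          # minimum LDA effect size to report
)
plot_ef_bar(lefse_res)         # bar plot of discriminant genera by LDA score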
Relationship Between Patient-Centered Primary Care Provider Communication and Emergency Room Visits in the Medicaid Population in North Carolina, United States
The patient-centered medical home (PCMH) aims to improve health care quality and
control costs. It focuses on patients and families, continuity of care and shared
decision-making with primary care providers (PCPs), and enhanced coordination and
access to care. Under patient-centered care, patients should receive the majority of health
care services in PCP offices rather than in an emergency room (ER), which is
expensive and often inappropriate. The PCMH approach is particularly beneficial for underserved populations
because of easier access to a wide array of primary care services and referrals to
specialists. , Medicaid, a federal-state program, finances health care for eligible low-income
individuals and families. The Medicaid medical home in North Carolina (NC, USA) was
developed in response to the national movement to value-based care to improve
quality and reduce costs. Community Care of North Carolina (CCNC) is a state-wide, community-based
managed care organization that administers and controls health expenditures, cost,
and quality of care for Medicaid beneficiaries; it uses the PCMH approach. In 2019, almost 15% of the North Carolina population was covered by Medicaid. It is well-documented that Medicaid patients use disproportionately more ER care than
other patients of comparable health. - Specifically, Medicaid
patients were 700% more likely to visit ERs for non-urgent health issues than
patients with private insurance. Many “emergencies” are preventable and could be addressed by outpatient
visits or by phone. According to an ER director, approximately 80% of ER calls in 1 NC county
were for non-urgent care. Kim et al reported that patient demographics, community attributes, health status, and
health care use including receiving primary care services at least once a year
explained 44% of ER use differences between Medicaid and privately insured patients.
Thus, unmeasured factors explained almost half of the difference in non-urgent ER visits in their
study. The authors suggested that ineffective communication between PCPs and
Medicaid patients may be an unmeasured factor leading to ER overuse. Patient-centered provider communication is the foundation of high-quality
patient-centered care. Patient-centered communication is
characterized by provider encouragement of patient engagement, good interpersonal
relationships, and shared decision-making between patients and providers. , Empirical
evidence suggests that, in vulnerable populations, patient-centered provider
communication is associated with increased patient satisfaction and trust in
PCPs , ; better patient comprehension and recall of medical
information; treatment adherence; and improved clinical outcomes. , Patient-centered communication may be critical for Medicaid patients because of
their low health literacy and mistrust of the health care system and providers. There is limited empirical evidence on PCP-patient communication and ER utilization
among Medicaid patients. In the general population, patients who assessed their
provider communication highly had fewer ER visits and lower annual health care costs. ER patients with limited health literacy less often reported that their PCPs
gave clear instructions or listened carefully, and they used more ER care than other ER patients. Medicaid patients rated effective provider communication the second most
important characteristic of care quality, and they prefer clear, simple explanations from providers about their
health issues and treatments. Several other factors affect ER utilization. Continuity of care with primary care
providers was critical for improving clinical outcomes and was also associated with less ER use. One study reported that primary care continuity was associated with reduced
hypertension- and diabetes-related ER visits. Patient demographic variables (age, gender, race), health status, patient
residence (rural or urban), and dual Medicare-Medicaid eligibility status
were included in our study. Dual-eligible patients are poor and, being aged ≥65 years
and/or disabled, are also eligible for Medicare, a federally run health
insurance program that targets these 2 latter groups; dual eligibility may also affect ER utilization.
communication with Medicaid patients is related to their ER utilization. Our study
examines how different aspects of PCP patient-centered communication is associated
with ER use by Medicaid managed care patients in North Carolina. Our study further
assesses the magnitude of different aspects of provider communication on the number
of ER visits by Medicaid patients.
Study Design
A cross-sectional statewide telephone survey of the North Carolina ambulatory
adult (≥19 years) Medicaid managed care population provided the study data. The survey used the Consumer Assessment of Health Providers and Systems
methodology (CAHPS ® , v5.0) conducted under contract with the NC Department of Health and Human
Services (NC DHHS). The NC DHHS enrollment file was the source of the
respondent’s county of residence and dual-eligibility status. Institutionalized
enrollees, those eligible for skilled nursing care but
receiving it at home, and pregnant females are not included in the primary care
medical homes and were thus excluded from the study. Interviews, conducted in
both English and Spanish between September 2015 and February 2016, resulted in
4188 responses with an unadjusted response rate of 13.3%. Self-reporting a PCP
relationship of at least 6 months duration and having visited this provider at
least once in the 6 months prior to survey participation reduced the study
responses to 2652. These criteria ensured we were assessing provider
communication quality in a relationship that already existed and wherein the
patients had recently seen their PCP.
Measures
The outcome measure is the number of ER visits in the previous 6 months reported
by the respondents. The following 4 CAHPS questions generated the predictor
variables of interest – patient assessment of patient-centered provider
communication: “How often did your PCP show respect for what you had to say?” “How often did your PCP explain things in a way that was easy to
understand?” “How often did your PCP spend enough time with you?” “How often did your PCP listen carefully to you?” Possible responses were Always, Usually, Sometimes, and Never, which were dichotomized as Always and Not always, based on the observed very high prevalence of Always responses. Following CAHPS guidance, we also created an effective patient-centered provider communication index using the above 4 questions. The effective PCP communication index variable was assigned a value of “Always” in cases where all 4 individual communication variables had a value of “Always,” and “Not always” otherwise. Control variables were selected to account for the patient’s physical condition
as possible moderators of the relationship between communication quality and ER
visits. These included the following variables: Self-assessed general health (Poor, Fair, Good, Very good, and
Excellent), dichotomized as Fair/poor and Excellent/very good/good; Needed help with activities of daily living (ADLs) (Yes,
No); Received health care 3 or more times in the previous 6 months for the
same condition (Yes, No). Other variables which can also moderate the relationship include: Dual-eligibility status: Medicare-eligible patient due to a disabling
illness and/or age ≥65 years as well as Medicaid (Dual and Not dual); Patient’s county of residence (Rural and Urban); The length of time the patient had been with the current PCP
(≥1 and <1 year). Patient age (≥65 years, 45-64, 19-44), sex (Female,
Male), and race (Black, Multi/other, White)
were chosen as general demographic descriptors.
Statistical Analysis
We used univariate analysis to describe both outcome and predictor variables for
the population, reporting frequency and proportions for each variable. Our
unadjusted and fully adjusted multivariable models used negative binomial
regression with a log-link because of the highly skewed distribution of the ER
visit count containing predominantly zero values (no ER visits in previous
6 months), a preferred method for regression analysis in such cases. Cases with missing values on any of the variables were eliminated from
both the univariate and multivariable regression analyses. Data analyses were
conducted using IBM SPSS version 26. All study procedures were reviewed and
approved by the relevant Institutional Review Board.
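To make the modelling step concrete, the following R sketch shows how the dichotomization, the effective-communication index, and the negative binomial model could be coded (the original analysis used SPSS; all data-frame and variable names here are hypothetical placeholders for the survey fields).

library(MASS)

# Dichotomize the four CAHPS communication items
items <- c("respect", "explain", "time", "listen")
for (v in items) {
  dat[[v]] <- factor(ifelse(dat[[v]] == "Always", "Always", "Not always"),
                     levels = c("Not always", "Always"))
}
# Effective communication index: "Always" only if all four items are "Always"
dat$comm_index <- factor(
  ifelse(rowSums(sapply(dat[items], function(x) x == "Always")) == 4,
         "Always", "Not always"),
  levels = c("Not always", "Always"))

# Negative binomial regression with log link; exponentiated coefficients are
# incidence rate ratios (IRRs) for the ER visit count
fit <- glm.nb(er_visits ~ comm_index + age_group + sex + race +
                fair_poor_health + adl_help + repeat_care +
                dual_eligible + rural + pcp_over_1yr, data = dat)
exp(cbind(IRR = coef(fit), confint(fit)))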
Results
displays
descriptive results for the outcome and predictor variables used in subsequent
modelling efforts, showing the zero-inflated nature of the ER visit distribution
(69.6% reported no ER visits in the previous 6 months), with counts falling rapidly to a small
number of participants reporting many visits. also describes distributions of
the responses to the 4 communication questions, with 82.3% to 89.1% of the
respondents indicating PCP communication was always good. Approximately 71% of all
patients reported effective PCP-patient communications on the effective provider
communication index variable (ie, responded Always on all 4
communication variables). Among patient demographic covariates, approximately 30%, 45%, and 25% of respondents
were ≥65, 45 to 64, and 19 to 44 years of age, respectively. Over 2/3 of respondents
were female. Over half (53%) of the respondents were white, while 39% were Black and
8% were of Multi/other race. Patient health covariate analysis revealed that 56%
reported they were in fair or poor health, 26% needed help with at least 1 ADL, and
52% got health care services for the same condition 3 or more times in the previous
6 months. The vast majority of the respondents (89%) had been seeing their current
PCP for longer than 1 year, 52% were both Medicare and Medicaid (dual) eligible, and
36% lived in a rural county. reports negative
binomial regression results where the incidence rate ratio (IRR, an exponentiated
value) is the estimate for each predictor variable’s proportional impact on the ER
visit count. Columns indicate regression analyses conducted on each of the
individual 4 communication variables as well as the effective PCP communication
index variable. The same observations were included in all 5 regression models, thus
making comparisons across the different communication models possible. includes
unadjusted results for each PCP communication variable’s impact on the ER visit
count followed by the fully adjusted models. Unadjusted results indicate a 32% reduction in ER visits
( P < 0.001) associated with patients reporting the PCP always
communicated well on the effective PCP communication index. This was strongly
influenced by the “respect” question, which indicated a 52% reduction in the number
of ER visits ( P < .001) associated with the PCP always showing
respect for patient input. Highly significant results ( P < .001)
on the other 3 communication questions were observed as well, although the effect
sizes were smaller. As expected, adding covariates to the fully adjusted model reduced the PCP
communication impact, but the impact of the PCP always showing respect for patient
input was associated with 37% fewer ER visits ( P < .001). The
effective communication index variable (19% reduction in ER visits) and easy to
understand PCP explanations (18% reduction) had smaller but meaningful effects on
the number of ER visits ( P < .05). The PCP listening carefully
and spending enough time with patient predictor variables were no longer
statistically significant in the adjusted models. Consistent with using the identical population in each model, effect sizes and
significance were very similar for each covariate across all 5 models. Compared to
the referent of 19 to 44 years, being 45 to 64 years was associated with a 26% to 28%
reduction in the number of ER visits, while age ≥65 years was associated with 34% to 37%
fewer ER visits (both at P < .001). Sex was not significant
while Multi/other race was significantly associated ( P < .05)
with 21% to 30% increased ER visits across the 5 communication measures. Not surprisingly, all 3 patient health covariates were significantly associated with
the number of ER visits (all P < .001). Rating one’s overall
health Fair/poor was associated with 46% to 50% increased ER visits while needing
help with ADLs was associated with a 53% to 56% increase in ER visits. Finally,
receiving health care services for the same condition 3 or more times in the
previous 6 months was associated with a 104% to 107% increase in the number of ER
visits. Time seeing the current PCP was highly significant
( P < .001 in all models) as a duration of longer than 1 year was
associated with 36% to 38% fewer ER visits. Neither dual Medicare/Medicaid
eligibility nor rurality of the patient’s residence had a significant impact.
Discussion
The PCMH approach aims to improve health care quality by improving access,
coordination, and continuity of primary care while reducing fragmentation and cost. The PCMH is grounded in ongoing chronic disease management and prevention
that should minimize ER use. Thus, high ER services utilization could undermine the
health system’s commitment to the PCMH. Previous research revealed that Medicaid recipients have much higher ER use than the
general population. , , Inadequate
communication between PCPs and Medicaid patients during primary care visits could
contribute to this well-documented phenomenon. , Insufficient and/or unclear
provider explanations and care instructions may be particularly detrimental for
populations with low health literacy such as many Medicaid patients. , This study
examined how ER utilization by ambulatory NC Medicaid managed care patients was
associated with PCP patient-centered communication. We found that the vast majority
of respondents assessed their personal health care providers as effective
communicators, who always showed respect for patient input, listened carefully,
spent enough time with patients, and whose explanations were easy to understand. We
also found that overall effective patient-centered PCP communication was associated
with 19% fewer ER visits in our sample. Our study found that provider respect for the patient had the biggest impact on the
number of ER visits among NC Medicaid patients: provider respect was associated with
37% fewer ER visits in the 6 months before the survey. In the general population,
provider respect was strongly associated with higher provider and health care
quality evaluations by patients. Provider respect may be even more important for population groups that were
traditionally stigmatized and marginalized. - When treated respectfully,
patients are likely to be more open and present an honest and comprehensive
description of their health issues. This result corroborates research findings that
Medicaid patients are appreciative of providers who show respect, and carefully
listen to and take into consideration patients’ health concerns and suggestions
about their health. Another important characteristic of effective provider communication is the PCPs’
ability to provide easy to understand explanations. Effective provider communication
includes speaking slowly and understandably, explaining test results and exams, and
checking for patient understanding , which are critical for
patients with low health literacy. One study found that easy-to-understand
instructions were the most important communication dimension for older patients. Older patients reported fewer ER visits during 6 months before the survey. It may
sound counterintuitive as older people often have more chronic health conditions and
higher acuity levels. However, this finding corroborates a review finding that the elderly use ER care less often, both overall and in life-threatening
situations. Participants in this study who saw the provider for ≥1 year used ER services less
often, a finding which is consistent with earlier findings on continuity of
care. , Another recent longitudinal study of ER utilization by Medicaid
patients found that patients with fragmented primary care use were more likely to
have a higher number of ER visits. As expected, unhealthy Medicaid patients (eg, those needing help with ADLs or
seeing providers for the same reason 3 or more times) also reported more ER
visits. Our study has the standard temporality concerns in attempting to infer causality from
a cross-sectional survey. Limiting the study participants to those whose PCP-patient
relationship had lasted longer than 6 months means that, with a survey lookback
period of 6 months, the relationship was already in place before the survey data
were collected. Specifically, patient opinions about provider communication skills
are usually formed over a longer period of time and therefore precede the patient
count of ER visits. In future studies, patient-reported ER visits could be
supplemented/enhanced by ER claims. Our study has a number of strengths. It is based on a large statewide sample, using a
validated and widely accepted instrument, with well-defined participation in our
managed care model. Another strength of our study is that it controls for 3 measures
of participant health status, all of which had statistically and substantively
significant impacts; being sicker can still send a patient to the ER more often no
matter how good the PCP’s communication is or how long the relationship has lasted.
Our analysis was strengthened by controlling for several indicators of patient
health while our tested communication variables still significantly reduced the
number of ER visits. Even though the study included only Medicaid managed care
patients in North Carolina, our results should be generalizable to similar Medicaid
populations in other states.
Conclusion
Effective PCP communication is a critical element if the PCMH is to deliver high-quality
care. Thus, health care quality improvement interventions should include
training health care providers in effective communication with patients. King and Hoppe suggest that education to develop good communication skills with patients
must start during undergraduate and graduate studies. A recent study recommends that respective federal and state agencies should focus on
training and accreditation with a specific emphasis on the communication skills of
PCPs delivering care to Medicaid patients. The authors also proposed that state
Medicaid and managed care organizations should conduct regular assessments of
primary care Medicaid providers’ communication skills and include communication
quality metrics in PCP reimbursement. Enhancing provider communication skills to improve provider-patient interactions is
necessary, particularly for patients from vulnerable populations, often with low
health literacy rates. Providers should be trained how to interact with patients in
a respectful way. Showing respect for Medicaid patients (eg, engaging patients in
treatment decisions) and speaking clearly and understandably are critical factors in overcoming
patient “reluctance to use the primary care system because of previous negative
personal experiences. . .” to prevent avoidable ER visits in the future (p. 481). Our study supports the inclusion of the PCP respect measure on
state Medicaid report cards, which usually do not include them. Medical training should increase focus on patient-centered communication
skills to improve the quality of care provided.
Helpfulness of Question Prompt Sheet for Patient-Physician Communication Among Patients With Advanced Cancer
A question prompt sheet (QPS) is a structured list of potential questions that are available for patients to ask physicians during a clinical encounter. It may allow practitioners to meet patients’ desired information needs, assist with decision-making, and improve the overall communication process. This is vital because patients sometimes are unsure about the questions to ask their physicians, forget to ask the relevant questions, or feel uncomfortable asking certain questions. , A QPS may also prevent physicians from conveying unsolicited and potentially distressing information to patients. Studies have demonstrated the value of a QPS in patient-physician interactions in diverse fields of medicine. , , , , , , , However, there are insufficient data regarding the utility of a QPS among patients with advanced cancer. , Moreover, very few methodologically robust evaluations of a QPS in a head-to-head comparison with an attention control group have been conducted. The main objective of this study was to compare patients’ perceptions about the helpfulness, overall global evaluation, and preference for a systematically developed QPS vs a standard general information sheet (GIS) during patient-physician encounters. We also examined the effect of the QPS on participants’ anxiety, participants’ speaking time, the number of questions asked, and the length of the clinical encounter.
Study Design, Participants, Procedures
This randomized clinical trial was approved by the institutional review board of the University of Texas MD Anderson Cancer Center, Houston. All participants provided written informed consent. The trial protocol and statistical analysis plan are available in . The study followed the Consolidated Standards of Reporting Trials (CONSORT) reporting guideline. This trial was conducted among patients seen at the outpatient Palliative and Supportive Care Clinic at the University of Texas MD Anderson Cancer Center from September 1, 2017, to May 31, 2019. This clinic sees patients with advanced cancer who are referred by their primary oncologists for the management of complex physical, psychosocial, and spiritual needs, as well as assistance with medical decision-making and overall goals of care. Eligible patients were aged at least 18 years, had a cancer diagnosis, were undergoing their initial outpatient consultation visit with 1 of 10 palliative care physicians, and could read and communicate in English. After providing written informed consent, patients completed baseline questionnaires and were then randomly assigned in a 1:1 fashion to receive either the QPS or the GIS 30 minutes prior to their physician consultation. Randomization was conducted by the biostatistician via the institution’s clinical trial conduct website using the Pocock-Simon method (see the sketch at the end of this subsection). Patients were stratified by physician to carefully control for physicians’ impact on the primary end point. Both interventions were concealed in identical opaque envelopes. Patients, research staff who enrolled the patients, and physicians were blinded to the study assignments. Patients were encouraged to read the information material before the visit. Physicians were asked to endorse the use of the information material during the encounter by asking the patient if they had any questions, and either explaining why it was important to ask questions or inviting the patient more than once to ask questions. , Conversations were audiotaped and later transcribed. At the end of the consultation, patients completed questionnaires assessing their views about the information material they received, their overall satisfaction with the consultation, and their anxiety level. The participating physicians also completed a physician assessment questionnaire. In an exploratory open-label format, patients who returned for follow-up at 4 weeks (±7 days) openly received both the QPS and the GIS 30 minutes prior to seeing their physician and were encouraged to use the materials in preparation for their visit. After the visit, they indicated which of the materials they preferred.
Data Collection
Patients’ demographic and clinical characteristics were obtained from their medical records. Race and ethnicity were categorized as Asian, Black, Hispanic or Latino, White, and other (including American Indian or Alaskan Native, refused to answer, and unknown). Race and ethnicity were included in analyses because we wanted to explore any potential association between the use of the communication aids and those variables. The deidentified audio recordings were transcribed by a professional medical transcription company. The number and types of questions that patients asked were carefully and independently extracted from the transcribed data by one experienced investigator (V.P.) and then verified by a second investigator (J.A.); any discrepancies were discussed in detail until a mutual agreement was reached.
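The Pocock-Simon allocation above can be illustrated with a deliberately simplified R toy that reduces minimization to the single stratification factor used here (physician) with a biased coin; the trial itself used the institution's clinical-trial website, so this is conceptual only.

# Toy minimization: favor the arm that restores balance within the stratum
assign_arm <- function(counts, physician, p_pref = 0.8) {
  tallies <- counts[physician, ]                 # prior assignments for this MD
  idx <- which(tallies == min(tallies))          # arm(s) that restore balance
  preferred <- if (length(idx) > 1) sample(idx, 1) else idx  # break ties at random
  if (runif(1) < p_pref) preferred else sample(ncol(counts), 1)  # biased coin
}

counts <- matrix(0, nrow = 10, ncol = 2,
                 dimnames = list(paste0("MD", 1:10), c("QPS", "GIS")))
arm <- assign_arm(counts, "MD3")                 # allocate one patient of MD3
counts["MD3", arm] <- counts["MD3", arm] + 1     # update the running tallies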
Study Interventions
The QPS (eAppendix 1 in ) is a single-page list of 25 questions that was developed by an expert panel of clinicians using a Delphi process and later tested for its content validity among a group of patients and caregivers attending an ambulatory palliative medicine clinic. The GIS (eAppendix 2 in ) is a single page of generic informational material that was created by our group and is routinely provided to patients who are seen at the clinic. It contains general patient information about palliative care and other related information felt to be relevant to new patients.
Questionnaires and Outcome Measures
The primary outcome, patients’ perception of helpfulness, and other views about the information materials were assessed immediately after the consultation using the Patient Assessment Questionnaire. This is a 7-item, 0- to 10-point scale that assessed the extent to which patients felt the material helped them to communicate with their physician, was clear or easily understandable, had the right amount of information, would be recommended to other patients, did not make them anxious, helped them to think of questions or concerns they had not previously thought of, and would be used in the future. The mean score across all 7 individual patient ratings was calculated to obtain the global perception score, with a higher score indicating a more positive perception. The questionnaire has been used in several previous studies. , , Patients’ satisfaction with the consultation was assessed using the Patient Satisfaction Questionnaire, , , a 5-item visual analogue scale ranging from 0 to 100, with an internal reliability (Cronbach α) of 0.90 and a higher score indicating more satisfaction. Patient anxiety was measured by the Spielberger State Anxiety Inventory, a 20-item self-report scale with high reliability (r = 0.93), internal consistency, and validity. Scores range from 20 to 80, with a higher score indicating greater anxiety. Baseline patient preferences for information were measured using 2 items from the Cassileth Information Styles Questionnaire, with 1 item consisting of a 5-point Likert scale that assessed the amount of detail a patient preferred (1 indicates very little; 5, as much as possible) and the other item a multiple-choice question asking what kind of information a patient preferred, with options “I want only the information needed to care for myself properly,” “I want additional information only if it is good news,” and “I want as much information as possible, good and bad.” Baseline patient preferences for level of involvement in decision-making were assessed with the validated Control Preferences Scale. , , Overall preference for the QPS or GIS was assessed using a single multiple-choice question: “Now that you have had the opportunity to use the two different information materials, overall, which of them would you prefer to use in communicating with your doctor?” Patients could select whether they preferred either material a little or a lot more, or whether they had no preference. The Physician Assessment Form asked physicians to indicate on a scale of 0 to 10 points their perception about the helpfulness of the information material to the patient, its effect on the visit duration, and their overall satisfaction with the consultation, with a higher score indicating a more positive perception. Other outcome measures included the total number and types of participant questions, speaking times, and overall consultation duration.
Statistical Analysis
The primary outcome was patients’ perception of helpfulness (0-10 scale) of the informational material. A 2-sample t test was applied to examine the outcome difference between the QPS and the GIS groups. With 136 enrolled patients and a 5% attrition rate, we estimated 80% power to detect a difference in means of 2 on a 0- to 10-point scale of the primary outcome, assuming an SD of 4, using the 2-sample t test with a 2-sided significance level of P = .05. Summary statistics, such as means and SDs, were used to describe continuous variables, while frequencies and percentages were used to describe categorical variables. Similar 2-sample t and χ2 tests or Fisher exact tests were used to examine the group difference for selected secondary outcomes. A χ2 goodness-of-fit test was used to assess patients’ overall preference after using both information materials concurrently. Associations of demographic or clinical factors with the primary outcome were assessed using ordinary least-squares regression. The analysis was modified intention-to-treat because 5 randomized patients (3.7%) who did not receive the allocated intervention and had missing data were excluded. P = .05 was used to determine statistical significance for all secondary outcome analyses, given that this portion of the analyses was exploratory and for hypothesis-generating purposes. Data were analyzed using Stata/SE version 16.1 (StataCorp). Data were analyzed from May 18 to June 27, 2022.
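As a quick cross-check of the stated sample-size reasoning (the trial analysis used Stata; the R call below is only a sketch of the same calculation):

# Two-sample t test, two-sided alpha = .05, 80% power, delta = 2, SD = 4
power.t.test(delta = 2, sd = 4, sig.level = 0.05, power = 0.80)
# Gives n of about 64 per group (~128 evaluable); inflating for ~5% attrition:
ceiling(128 / 0.95)   # ~135, in line with the ~136 enrolled patients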
A total of 135 eligible patients were randomly assigned to receive either the QPS or GIS. After excluding the 5 randomized patients (3.7%) who did not receive the allocated intervention, data were available for 130 patients (mean [SD] age, 58.6 [13.3] years; 79 [60.8%] female), including 67 patients (51.5%) randomized to the QPS and 63 patients (48.5%) randomized to the GIS. There were no significant differences in the baseline demographic and clinical characteristics between the 2 groups. Perception of helpfulness was equally high, with no statistically significant difference between the QPS and the GIS groups (mean [SD] helpfulness score, 7.2 [2.3] points vs 7.1 [2.7] points; P = .79). The QPS prompted participants to think of new questions more than the GIS did (mean [SD] score, 7.0 [2.9] vs 5.3 [3.5]; P = .005). Participants had a higher global perception score for the QPS than the GIS (mean [SD] score, 7.1 [1.3] vs 6.5 [1.7]; P = .03). All 47 participants who returned for their 4-week follow-up appointment participated in the open-label phase. The demographic and clinical characteristics of patients who returned and those who did not were not significantly different, including age, race, cancer type, type of intervention received at the initial visit, and Edmonton Symptom Assessment System (ESAS) total Symptom Distress Score. Therefore, the informative missingness of the data was largely ignorable. After using both informational materials concurrently, more participants preferred the QPS to the GIS in communicating with their physicians (24 patients [51.1%] vs 7 patients [14.9%]; no preference: 16 patients [34.0%]; P = .01). In a separate analysis, there were no differences in the effects of the QPS and GIS on physicians’ perceptions of the helpfulness (mean [SD] score, 6.79 [2.74] vs 6.27 [2.96]; P = .32), the consultation length (mean [SD] score, 8.33 [2.53] vs 8.52 [2.14]; P = .67), or overall satisfaction (mean [SD] score, 8.74 [1.38] vs 8.72 [2.06]; P = .95). The mean physician speaking time was not significantly different between the 2 groups (eTable in ). Participants in the QPS group spoke less than those in the GIS group (mean [SD] time, 8.0 [5.3] minutes vs 10.0 [5.3] minutes; P = .06). Both groups asked more treatment-related questions and fewer prognosis- and end-of-life–related questions. No significant difference was observed between the QPS and the GIS groups in the number and types of questions asked. Overall, both groups were equally satisfied with the consultation (mean [SD] score, QPS: 95.01 [10.51] vs GIS: 93.90 [14.18]; P = .63). Patients’ change in anxiety scores from baseline was also similar in both groups (mean [SD] anxiety rating, 2.3 [3.7] vs 1.6 [2.7]; P = .19). shows the factors associated with participants’ perception of the helpfulness of the information material they received. Compared with White patients, Black and Hispanic patients were significantly more likely to perceive either of the informational materials they received as helpful (coefficient, 1.95; 95% CI, 0.72 to 3.18; P = .002). In addition, older age (coefficient, 0.04; 95% CI, 0.01 to 0.07; P = .02) and lower ESAS depression (coefficient, −0.20; 95% CI, −0.38 to −0.01; P = .04) were associated with greater perceived helpfulness of the informational material.
In this randomized clinical trial, patients perceived both the QPS and GIS as helpful when communicating with their physician, with no significant difference between groups. However, patients felt the QPS facilitated generation of new questions. They also had a better overall global view of the QPS, and after using both materials concurrently during a follow-up visit, patients preferred the QPS to the GIS for communicating with their physicians. Previous studies by our group have reported the perceived helpfulness of the QPS during patient-physician communication. In a randomized clinical trial comparing a disease-specific QPS with a GIS among 60 women with breast cancer consulting with their medical oncologists, we found that patients perceived the QPS as more helpful than the GIS. Although participants in this study perceived both materials as helpful, their better global view of and relative preference for the QPS validate its value in routine clinical care and further underscore the need for its integration in clinical guidelines and health policies. The use of a GIS as an attention control group in this study allowed for a more rigorous and robust evaluation of the QPS. Only a few studies have compared the QPS with another communication aid. Moreover, data on the focal evaluation of patients’ perceptions about the QPS’s utility are limited. The QPS did not increase patient anxiety during the clinical encounter. This should reassure health care practitioners who may be concerned that the QPS questions will be emotionally upsetting and negatively impact patients’ psychological outcomes. Several studies have examined the association between the use of a QPS and patient anxiety. Many did not find any significant association with anxiety, while a few studies showed a decrease in patient anxiety levels immediately after, 6 weeks after, and 4 months after the initial consultation. A study by Brown et al randomized 318 patients with cancer consulting with their oncologists to either receive or not receive a QPS and found that QPS patients whose physicians passively responded to questions from the QPS had higher anxiety than did those whose physicians proactively addressed questions from the QPS and controls. We found that the QPS neither prolonged the duration of the visit nor increased the physician or patient speaking time. In fact, participants in the QPS group spoke less than did those in the GIS group, suggesting that the QPS may improve the efficiency of communication without prolonging clinical encounters. Previous studies by our group and others also observed no association of the QPS with consultation length. In a randomized clinical trial of 174 patients with advanced cancer who were assigned to receive either the QPS or standard consultation without QPS, Clayton et al found that QPS consultations were longer than controls, probably because a longer 20-page QPS brochure consisting of 112 items was used in that study. It is conceivable that such an observation was not found in this study because we used a disease-specific, single-page 25-item QPS. Future studies are needed to investigate the effect of QPS length on consultation duration. Although patients felt the QPS facilitated generation of new questions, it did not result in an increase in the number of questions asked. The goal of a QPS is to empower patients to generate and ask essential questions that meet their information needs.
The QPS may effectively improve communication quality without necessarily increasing the number of questions that patients ask. Patients may be able to ask their most meaningful questions rather than simply asking more questions. In that regard, patient self-report of the helpfulness of the material might be a highly reliable indicator of benefit from the information material. Further studies are needed to ascertain the best means of measuring the true utility of the QPS. Compared with previous findings, patients in this study asked more treatment- and symptom-related questions and fewer prognosis- and end-of-life–related questions. This may be because a considerable number of them were still receiving disease-directed therapy and therefore had a particular interest in treatment- and symptom-related questions and concerns. In clinical settings, such as the inpatient palliative care units where patients have more advanced disease, prognosis and end-of-life questions might be more relevant. Moreover, patients might have preferred to first focus on their acute issues and would eventually discuss the more sensitive prognosis and end-of-life issues once their acute physical symptoms were addressed and they had the opportunity to build a closer therapeutic relationship with their physicians. The reason why Black and Hispanic patients were more likely to perceive the information material as helpful is unclear, but it suggests that written material that aids in patient communication might be particularly valued by members of racial and ethnic minority groups, including Black and Hispanic patients. In a different study, the QPS was found to be highly acceptable to Black patients with cancer and effectively increased their active participation in racially discordant interactions. Similarly, our findings also suggest that an informational material may be particularly useful to older patients in guiding them to navigate important conversations with their physicians. Major medical organizations, such as the National Cancer Institute, the National Academy of Medicine, and the American Society for Clinical Oncology, have alluded to the benefits of good communication in quality of care and emphasized the need for improved patient-physician communication among patients with advanced illnesses. The QPS is a simple, inexpensive tool that might help in achieving this goal. Despite increasing evidence regarding the utility of the QPS in physician-patient consultations, it has not been fully adopted and implemented in oncologic settings. Some barriers to its full implementation include a feeling among patients of being overwhelmed by the sheer amount of written information. It is challenging to develop a universal QPS that suits all patients’ needs in view of the vast diversity within the population of patients with cancer and the dynamic nature of patient-physician communications. Wide variations in patient learning styles, communication goals, degrees of knowledge, and emotional capabilities may present real challenges in using a standardized QPS for all. One potential solution is to ensure that the development of a QPS is distinctively tailored to specific patient populations to enhance its efficacy. An electronic health system that integrates an interactive QPS that allows patients to generate their own list of questions based on their individual preferences and information needs would be ideal.
Limitations
This study has some limitations.
One limitation is that it was conducted at a single tertiary academic center. Therefore, the results might not be generalizable to other clinical settings. In addition, we were unable to record the specific QPS questions that participants eventually asked during the visit. A better understanding of how participants used the material in real time and which questions were the most useful should be a focus in future research. Another limitation is that hypothesis testing for the secondary end points is considered exploratory when the primary end point does not show statistical significance, which was the case for this study. Additionally, the study was conducted among ambulatory patients with relatively good functional status. Future studies should include patients in acute inpatient settings, since they might have different symptom severity and therefore different outcomes.
This randomized clinical trial found that patients perceived both the QPS and GIS as equally helpful in communicating with their physician during consultation. However, they had a more positive global evaluation of the QPS and preferred it to the GIS. The QPS reportedly facilitated the generation of new questions without increasing patient anxiety or prolonging the consultation visit. The findings support the adoption, integration, and implementation of the QPS in routine oncologic care.
|
P16 immunohistochemistry is a sensitive and specific surrogate marker for
|
2beb666e-f655-40fb-bbab-8450a57a6304
|
10155323
|
Anatomy[mh]
|
Until recently, grading of solid tumors was based solely on histology, immunohistochemistry (IHC) and ultrastructural findings. In the past decade, high throughput sequencing technology has enabled the discovery of common and unique single nucleotide variants, large gains and losses, and fusions across many tumors, some of which have impacted tumor categorization and prognostication. Genomic and epigenomic molecular analyses have provided new insights into the mechanisms for tumorigenesis, have enabled the stronger correlation of histology with tumor grade, and have helped distinguish morphologically similar tumors with unique molecular signatures. This is especially true for tumors of the central nervous system (CNS), with the latest (5th) edition of the WHO CNS tumor classification incorporating key molecular alterations into the classification and grading of glial and glioneuronal neoplasms . A notable example of a genomic alteration with evolving prognostic value in brain tumors is the loss of the tumor suppressor gene cyclin dependent kinase inhibitor 2A, CDKN2A . Alternative splicing of the CDKN2A locus on chromosome 9p21 results in the translation of two main tumor suppressor proteins: the cyclin dependent kinase inhibitor p16 (aka p16INK4A, p16INK4, CDK4I, MTS1) and, through an alternate open reading frame (ARF), the structurally distinct protein p14 (aka p14ARF) . The p16INK4A protein, referred to as p16 hereafter, inhibits abnormal cell growth and proliferation by binding to complexes of cyclin-dependent kinases (CDK) 4 and 6 and cyclin D, thus inhibiting retinoblastoma protein phosphorylation and causing cell cycle arrest in the G1 phase . In contrast, p14 functions to stabilize the tumor suppressor protein p53 and to sequester MDM2, a protein responsible for the degradation of p53. Together, both CDKN2A tumor suppressor proteins help regulate entry into the S phase of the cell cycle. CDKN2A inactivation provides a survival advantage to cancer cells, with the most common genomic alteration causing this event being the homozygous (biallelic) deletion of CDKN2A . In greater than 90% of cancer tissues harboring CDKN2A deletion, the adjacent CDKN2B gene on chromosome 9p, encoding the p15INK4B cyclin dependent kinase inhibitor, is also deleted . Tumors with the greatest prevalence of CDKN2A loss include malignant gliomas , lung adenocarcinoma, pancreatic adenocarcinoma, melanoma and bladder urothelial carcinoma . Among brain tumors, CDKN2A loss has the greatest clinical implications in histologically low and intermediate grade gliomas and meningiomas . In IDH-mutant astrocytomas, the presence of homozygous deletion of CDKN2A is associated with poor outcome and an expected median overall survival of only 3 years . Hence, CDKN2A homozygous deletion is now considered a CNS WHO grade 4 diagnostic marker in IDH-mutant astrocytomas, even in the absence of necrosis and/or microvascular proliferation on histology . While 1p/19q co-deletion and IDH mutation occur in the early stages of oligodendroglioma formation, progression to a higher grade is associated with homozygous deletion of CDKN2A/B . Less than 10% of CNS WHO grade 3 IDH-mutant and 1p/19q-codeleted oligodendrogliomas show homozygous CDKN2A/B deletion, associated with worse outcome and shorter overall survival . Since CDKN2A loss is not seen in low-grade IDH-mutant oligodendrogliomas, detection of this molecular alteration can be used to distinguish between grade 2 and grade 3 tumors when histology is equivocal.
In pediatric low-grade gliomas, the frequency of CDKN2A/B loss varies from 6 to 20% . It is more prevalent in BRAFV600E mutant tumors. The co-existence of mutant BRAFV600E with CDKN2A loss suggests transformation into a histologically higher-grade brain tumor, with more aggressive behavior and a worse clinical course . In pilocytic astrocytomas, CDKN2A inactivation also heralds more aggressive clinical behavior ; its presence in astrocytomas with piloid features is now suggestive of a distinct high-grade glioma subtype . Similarly, in meningiomas, CDKN2A homozygous loss is associated with anaplastic histology and with increased risk of recurrence or progression . It is now considered a diagnostic marker of grade 3 in meningioma, independent of histology . While the presence of CDKN2A loss is gaining increasing recognition as a key diagnostic and prognostic marker in gliomas and meningiomas, and is an inclusion criterion for some clinical trials, its molecular detection remains expensive, time-consuming, and not widely available. Testing for loss of p16 expression, the protein product of CDKN2A , by immunohistochemistry provides a simpler and low-cost alternative to CDKN2A molecular testing. There are limited studies correlating p16 protein expression by immunohistochemistry with the presence of CDKN2A loss , primarily using PCR- or FISH-based determination of CDKN2A status. None so far have established a cutoff value for the sensitivity or specificity of p16 as a surrogate marker for homozygous loss of CDKN2A detected using highly sensitive next-generation DNA sequencing . This study performs semi-quantitative analysis of p16 expression across 100 IDH-wildtype and IDH-mutant gliomas, using three independent p16 immunoreactivity scores, and correlates the extent of p16 expression with CDKN2A homozygous deletion as determined by next-generation DNA sequencing. It establishes p16 as a reliable, highly sensitive surrogate marker for inference of CDKN2A homozygous deletion in gliomas, with a recommended p16 expression score of ≤ 5% for confirming and > 20% for excluding CDKN2A homozygous loss.
Samples
A cohort of 100 glioma cases diagnosed from 2019 to 2022 at the Mount Sinai Health System (MSHS) was selected for the study and was used under an approved institutional review board protocol. The combined histopathologic/molecular integrated diagnosis was based on guidelines from the 5th edition (2021) of the WHO classification of CNS tumors .
CDKN2A status
The CDKN2A status for all cases was determined by a reference laboratory, FoundationOne®CDx (F1CDx), utilizing highly sensitive hybrid capture-based next-generation DNA sequencing technology and a customized pipeline to detect genomic alterations, including copy number alterations (CNA) such as amplifications and homozygous deletions . Briefly, to detect CNAs, a log-ratio profile of the sample was obtained by normalizing the overall sequence coverage against a process-matched normal control. This profile was then corrected for GC bias, segmented, and used to estimate the copy number at each segment, adjusted for purity and ploidy . The threshold for calling homozygous deletions was a copy number in the tumor equal to zero. All cases had a tumor content of 20% or greater.
P16 scoring
P16 immunohistochemical analysis was performed on 4 µm-thick formalin-fixed and paraffin-embedded sections, using the most widely used E6H4 clone of the anti-p16 mouse monoclonal primary antibody (Roche CINtec Histology, 725–4793) on the automated Ventana Ultra immunohistochemical staining system with the following optimized conditions: heat-induced epitope antigen retrieval for 172 min using CC1 buffer at high pH, pre-diluted primary antibody incubation for 32 min, and the UltraView DAB detection kit (Roche, 760–500). Whole digital slide images, from different tumor areas in most cases, were available for evaluation. Initially, two pathologists scored the percentage of p16-positive tumor cells in a blinded study with no knowledge of the histological diagnosis or molecular CDKN2A status. This was followed by a second, unblinded consensus evaluation for selected “gray-zone” cases with 6–20% p16 expression. Nuclear and/or cytoplasmic staining of tumor cells was considered positive staining. P16 expression within each tumor was calculated as the average of a maximum and a minimum tumor percentage score, obtained at 10X microscopic fields with the highest tumor cellularity. Additionally, QuPath bioimage analysis was used as an unbiased digital quantification method of p16 expression, performed on 99 of the 100 tumors, using the same 10X fields scored by the pathologists. The QuPath setup for image detection was established for brightfield with positive detection by optical density, using the following settings: 0.05 background intensity parameter, 0.1 single threshold, with a compartment score at a mean optical density of nuclear DAB staining.
Statistics
PRISM was used for graph and receiver operating characteristic (ROC) curve generation and for all statistical calculations. Statistical significance for p16 expression in non- vs. CDKN2A homozygous deleted tumors was determined using one-way ANOVA with Brown-Forsythe test correction, as well as using an unpaired two-tailed Student t-test. The area under the ROC curve (AUC or C-index) was calculated to measure the correlation between the p16 score and CDKN2A status, with a perfect correlation considered as area = 1.0 and a random one considered as area = 0.5. A two-tailed p-value was computed using a z ratio of (AUC – 0.5) over the standard error. Statistical significance was considered at a level of p < 0.05.
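To make the copy-number logic concrete, here is a generic log-ratio sketch of the kind of calling described in the CDKN2A status section above. It illustrates the general approach only, not the FoundationOne CDx pipeline: GC-bias correction and segmentation are reduced to placeholder comments, and the purity/ploidy adjustment is simplified to a single purity term.

```python
# Illustrative log-ratio copy-number calling; not a vendor pipeline.
import numpy as np

def call_copy_number(tumor_cov, normal_cov, purity, ploidy=2.0):
    """Estimate per-segment copy number from coverage log-ratios."""
    tumor_cov = np.asarray(tumor_cov, dtype=float)
    normal_cov = np.asarray(normal_cov, dtype=float)
    # Log-ratio of tumor coverage vs. a process-matched normal control.
    log_ratio = np.log2(tumor_cov / normal_cov)
    # (GC-bias correction and segmentation would be applied here.)
    ratio = 2.0 ** log_ratio
    # Remove the contribution of admixed normal cells (copy number 2),
    # then rescale to the tumor's assumed ploidy.
    tumor_ratio = (ratio - (1.0 - purity)) / purity
    return np.maximum(tumor_ratio * ploidy, 0.0)

# A segment whose estimated tumor copy number is ~0 would be reported as a
# homozygous deletion, per the zero-copy threshold described above.
cn = call_copy_number(tumor_cov=[120, 25], normal_cov=[100, 100], purity=0.6)
is_homozygous_deletion = cn < 0.5  # illustrative call cutoff
print(cn, is_homozygous_deletion)
```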
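The pathologist scoring rule likewise reduces to a small calculation. The sketch below applies the averaging rule from the P16 scoring section together with the interpretive thresholds this study proposes (see Results); the helper names are ours, and field percentages are assumed to be entered after the reviewer has excluded confidently recognized non-neoplastic staining.

```python
# Minimal sketch of the p16 score: the average of the minimum and maximum
# percentage of p16-positive tumor cells, each counted in a 10X field of
# highest tumor cellularity.
def p16_score(min_field_pct: float, max_field_pct: float) -> float:
    if not (0 <= min_field_pct <= max_field_pct <= 100):
        raise ValueError("expected 0 <= min <= max <= 100")
    return (min_field_pct + max_field_pct) / 2.0

def interpret(score: float) -> str:
    """Thresholds proposed in this study (see Results)."""
    if score <= 5:
        return "CDKN2A homozygous deletion very likely"
    if score > 20:
        return "CDKN2A homozygous deletion very unlikely"
    return "gray zone (6-20%): confirm by molecular testing"

print(interpret(p16_score(1, 3)))    # -> very likely
print(interpret(p16_score(30, 60)))  # -> very unlikely
```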
We quantified p16 expression by immunohistochemistry in 100 gliomas with diverse histological features and grades, for which CDKN2A status was determined using highly sensitive, targeted DNA-based hybridization capture next-generation sequencing technology. The histologic diagnoses included: Astrocytoma, IDH-mutant, WHO grade 2–4; Oligodendroglioma, IDH-mutant and 1p/19q-codeleted, WHO grade 2–3; Glioblastoma, IDH-wildtype, WHO grade 4; Pilocytic astrocytoma, WHO grade 1; Low-grade glioneuronal tumor/pleomorphic xanthoastrocytoma; Angiocentric glioma, WHO grade 1; Diffuse hemispheric glioma, H3G34-mutant, WHO grade 4; Diffuse low-grade glioma, MAPK pathway-altered; and Diffuse pediatric-type high-grade glioma, H3-wildtype and IDH-wildtype (Table , Additional file : Data 1). The cohort ages ranged from 2 to 85 years, with a median age of 54 years and only a slight male preponderance of 51% (Table ). The majority of cases were primary resections (79%). P16 expression was determined as the average of the minimum and maximum percent tumor cell staining, counted in a 10X microscopic field of an area with the highest tumor cellularity. This was done manually in both blinded and unblinded consensus reviews by two pathologists (Figs. a, b; Additional file : Data 1) and digitally via QuPath analysis (Figs. c; Additional file : Data 1). Immunoreactivity for p16 in non-neoplastic endothelial cells and/or neurons, when recognized with high confidence by the pathologists, was excluded from the final tumor score in the blinded and unblinded pathologist analyses. Overall, all p16 quantification methods showed concordant results (Fig. ). In all three, there was a significant difference ( p < 0.0001, one-way ANOVA and t-test) in p16 expression between tumors with CDKN2A homozygous deletion (HD) and those without (Fig. ). Classifying CDKN2A status based on p16 tumor cell expression (0–100%) demonstrated robust performance over a wide range of thresholds, with receiver operating characteristic (ROC) areas under the curve (AUC) of 0.993 and 0.997 (blinded and unblinded consensus pathologist p16 scores, respectively) and 0.969 (QuPath p16 score) (Fig. ). Expression of p16 as scored by pathologists was also correlated with CDKN2A status by grouping tumors into one of five categories based on the range of p16 percent expression: 0–5%, 6–10%, 11–20%, 21–50% and 51–100% (Table ). Notably, all tumors with 0–5% p16 expression carried CDKN2A HD and were thus considered true positives. Similarly, all tumors with 21–100% p16 expression did not carry CDKN2A HD and were considered true negatives. In contrast, tumors falling within the 6–10% and 11–20% ranges (gray zone) showed an imperfect correlation with CDKN2A status. To evaluate tumors in this range further, we unblinded our results and re-scored tumors based on consensus discussion and additional criteria (re-evaluation of tumor cellularity on H&E with exclusion of mostly normal-appearing areas, additional p16 staining where suboptimal, consideration of weak cytoplasmic staining as positive, recognition and exclusion of background non-neoplastic staining) (Figs. and ). Unblinded consensus rescoring resulted in a slight decrease of gray-zone cases: the number of tumors in the 6–10% range went from 9 to 6, with only two false positive results, and the number in the 11–20% range went from 7 to 3, with only one false negative result (Table ).
Examples of cases initially overscored in the blinded analysis included a tumor with CDKN2A HD, which showed retained p16 staining in scattered entrapped non-neoplastic cells (Fig. a and b). A few tumors without CDKN2A HD remained underscored even after unblinded consensus analysis, due to low tumor cellularity (Fig. c). Consensus re-scoring did not alter the overall trend for tumors within the 0–5% and 21–100% ranges. Next, diagnostic test metrics were assessed by defining false negatives and positives as determined in Table , calculated at different p16 cutoffs. In the blinded pathologist-based p16 scoring with a p16 cutoff value of 10%, overall test sensitivity was 94% and test specificity was 96%, with a positive predictive value (PPV) of 96% and a negative predictive value (NPV) of 94%. With a cutoff value of 5%, overall test sensitivity decreased to 79% and NPV decreased to 84%, while specificity and PPV increased to 100%. Unblinded consensus-score analysis improved the blinded-score test sensitivity to 98% and 90%, at p16 score cutoff values of 10% and 5%, respectively. QuPath-based p16 scoring showed overall similar trends of increased test specificity and PPV with decreasing cutoffs (94% specificity and 92% PPV at the 5% cutoff vs. 90% specificity and 89% PPV at the 10% cutoff), at the expense of test sensitivity (89% at the 10% cutoff vs. 77% at the 5% cutoff) (Additional file : Data 1). Taking into account the pathologists’ blinded and unblinded consensus p16-scored case distribution and a more conservative threshold that optimizes test specificity and PPV, we conclude that a homozygous CDKN2A deletion is very likely when p16 expression is 0–5% and, similarly, very unlikely when p16 expression is above 20%. In contrast, the 6–20% range of p16 expression in gliomas represents a gray zone where molecular testing is still helpful to confirm the true CDKN2A status.
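The cutoff metrics above follow from a 2x2 confusion matrix at each threshold. The sketch below shows the computation on illustrative values, not the study cohort; note the sign flip for the AUC, since low p16 scores predict the positive class (CDKN2A HD).

```python
# Dichotomize p16 at a cutoff (score <= cutoff predicts CDKN2A HD) and
# compare against the sequencing result. Values below are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

p16 = np.array([1, 3, 8, 12, 25, 40, 70, 95])  # % positive tumor cells
hd = np.array([1, 1, 1, 0, 0, 0, 0, 0])        # 1 = CDKN2A HD by sequencing

def metrics_at_cutoff(p16_scores, hd_truth, cutoff):
    pred = p16_scores <= cutoff  # low p16 predicts homozygous deletion
    tp = np.sum(pred & (hd_truth == 1)); fp = np.sum(pred & (hd_truth == 0))
    fn = np.sum(~pred & (hd_truth == 1)); tn = np.sum(~pred & (hd_truth == 0))
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

for cutoff in (5, 10, 20):
    print(cutoff, metrics_at_cutoff(p16, hd, cutoff))

# AUC is computed on -p16 (equivalently 100 - p16) so that higher values
# predict deletion, matching the ROC orientation used in the study.
print(roc_auc_score(hd, -p16))
```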
While CDKN2A homozygous deletion (HD) has been recognized as both a diagnostic and a prognostic marker in gliomas and meningiomas, its detection is not widely accessible or cost-effective. In this study, we examined whether simple quantification of p16 immunoreactivity can serve as a surrogate marker for CDKN2A loss in gliomas. Our results demonstrate a strong correlation between the degree of p16 immunostaining and the presence of CDKN2A HD across IDH-wildtype and IDH-mutant tumors of all grades. In tumors with pathologist-scored p16 greater than 20%, we found 100% specificity for excluding CDKN2A HD, and in tumors with p16 equal to or less than 5%, we found 100% specificity for predicting CDKN2A HD. Our study thereby provides a cost-effective and convenient method for evaluating CDKN2A homozygous loss status in glioma, as an alternative to expensive genomic sequencing. Our results build on several prior studies, which used FISH or PCR to detect CDKN2A gene copy loss and immunohistochemistry to correlate with p16 expression, many of them using the same antibody clone . The earliest studies by Rao et al. and Burns et al. used multiplex PCR to detect CDKN2A deletion in brain tumors and correlated it with p16 expression in astrocytomas, where a strong correlation was found between p16-negative tumors and homozygous loss of CDKN2A ; as well as in glioblastomas, where diffuse p16 immunostaining was found to confidently exclude CDKN2A deletion but p16 immunonegativity did not always correlate with CDKN2A deletion . A subsequent study by Parkait et al. did find a significant association between p16 immunonegativity and CDKN2A deletion detected by FISH in glioblastoma . Subsequently, Park et al. found only moderate correlation between p16 expression (performed on tissue microarrays) and CDKN2A loss as determined by FISH, but demonstrated the strong prognostic value of p16 expression in IDH-mutant astrocytomas . Most recently, Suman et al. and Geyer et al. showed evidence for the strong negative predictive value of p16 in detecting CDKN2A deletion, also using FISH to determine CDKN2A status . Some of the reported limitations in the above studies include false positive FISH results due to partial hybridization failure, artifacts, and sub-optimal p16 cutoff values, hampering the standardized use of p16 as a surrogate marker for CDKN2A homozygous deletion in gliomas. By leveraging the superior sensitivity of next-generation DNA sequencing with semi-quantitative scoring methodologies and digital pathology, our study puts forward specific threshold values for p16 expression as a surrogate marker of CDKN2A HD status, enabling greater standardization of this cost-effective tool in glioma diagnostics. Given the diagnostic and prognostic implications when CDKN2A HD is detected in a lower-grade glioma, we favored a conservative threshold p16 expression value of 5%, which optimizes both test specificity and positive predictive value for CDKN2A HD detection, over a threshold of 10% or higher, which leads to occasional overcalling of CDKN2A HD (i.e., false positives). By introducing a second cutoff of 20% for the exclusion of homozygous loss and continuing to sequence cases within the 6–20% gray zone, we find virtually perfect concordance between pathologist-scored p16 expression and CDKN2A HD status, without any false positives or false negatives. Recently, an analogous analysis in meningiomas by Tang et al.
showed that loss of p16 expression is a sensitive marker of CDKN2A loss determined by next-generation sequencing . As in meningiomas, CDKN2A HD is a molecular signature of the highest grade in IDH-mutant astrocytomas (grade 4) and in IDH-mutant and 1p/19q-codeleted oligodendrogliomas (grade 3), regardless of histology . In our cohort, 3 out of 20 IDH-mutant astrocytomas and none of 8 IDH-mutant oligodendrogliomas contained CDKN2A HD, overall consistent with previously reported frequencies (Additional file : Data 2). Importantly, the presence of CDKN2A HD (with a pathologists’ p16 score of 1%) upgraded one IDH-mutant astrocytoma without microvascular proliferation or palisading necrosis to grade 4 (Additional file : Data 1). Moreover, CDKN2A HD was detected in 1 out of 5 pilocytic astrocytomas (with a pathologists’ p16 score of 1–2%). This pilocytic astrocytoma displayed atypical features, including elevated mitotic activity and an increased MIB1 proliferation index, as well as aggressive clinical behavior with recurrence only 10 months after the initial resection. Of note, the tumor classified as a posterior fossa pilocytic astrocytoma rather than a high-grade astrocytoma with piloid features by orthogonal DNA methylation analysis. This confirms the diagnostic and prognostic value of CDKN2A HD as previously established . As p16 in both cases was less than 5%, it further demonstrates the utility of p16 as a surrogate marker of CDKN2A HD in clinical neuropathology, enabling a quicker final diagnosis and circumventing expensive molecular testing. Our study is not without limitations. While we found perfect correlation between CDKN2A HD status and pathologist-scored p16 expression in the 0–5% and 21–100% p16 score ranges, sensitivity and specificity were lower in the 6–20% range (the so-called gray zone), with several false positive and false negative cases present in this range. A few of the cases in this gray zone were moved to the 0–5% and 21–100% ranges after unblinded consensus re-scoring. For example, two CDKN2A HD cases in the blinded study were overscored, but consensus discussion deemed the positive p16 staining to be mostly limited to neurons and/or glia (Fig. a) or endothelial cells (Fig. b). These examples highlight the potential confounding factor of background non-neoplastic brain tissue, which has been previously reported to show nuclear and cytoplasmic reactivity for p16 in scattered astrocytes, OPCs, and/or neurons, related to cellular senescence . In our own experience with p16, we have observed occasional and inconsistent immunoreactivity in only scattered glia, neurons, and endothelium. To minimize the non-neoplastic background in our scores, we evaluated the most densely cellular tumor area, correlated it to its H&E, and subtracted p16 reactivity when confidently recognized as endothelial or neuronal. We cannot exclude the possibility of rare p16 reactivity contributed by entrapped non-neoplastic glia within the tumor bulk, as reactive and neoplastic glia are extremely challenging to discriminate. A pattern of p16 staining in which positive cells are scarce and evenly spaced, rather than overlapping and clustering, was suggestive of non-neoplastic background (Fig. a). Importantly, QuPath analysis was unable to perform a similar background subtraction. Conversely, a few cases without CDKN2A HD were found to be underscored after unblinding our analyses.
This was most often due to the tumor representing a small biopsy composed of mostly normal brain with only a few tumor cells at the infiltrative edge of an otherwise low-grade glioma (Fig. c). Even after unblinding ourselves to the CDKN2A status, such cases remained in the gray zone, as we could not confidently distinguish normal from neoplastic cells. QuPath analysis also underscored p16 expression in such tumors (Fig. c). Thus, areas of high tumor cellularity may be necessary for the interpretation of p16 immunoreactivity, as it is hard to discriminate scattered infiltrating tumor cells amidst mostly non-neoplastic glia, especially in small biopsy specimens and when using digital software for scoring. Another caveat in correlating p16 expression to CDKN2A inactivation is posed by the occasional tumors in which p16 expression is lost due to epigenetic silencing of the CDKN2A locus, rather than homozygous deletion . We cannot exclude that some of the false positive cases in the 6–20% gray zone may indeed have had inactivated CDKN2A transcription through an epigenetic mechanism, leading to the loss of p16 expression in the absence of genomic loss at the 9p21 locus. This caveat is especially important to consider in tumors with global epigenetic alterations. Thus, our study establishes a strong correlation between p16 expression and CDKN2A homozygous deletion, rather than between p16 expression and CDKN2A inactivation. Finally, while the next-generation sequencing technology used has high sensitivity for capturing homozygous CDKN2A loss, with fewer false positives compared to FISH, it did not include calls for tumors with single-allele (hemizygous) CDKN2A loss. Indeed, we cannot exclude that some of the cases without CDKN2A homozygous deletion may have had loss of one of the CDKN2A alleles. Given that CDKN2A encodes tumor suppressors and the current literature correlates only homozygous CDKN2A loss with prognosis and grade in gliomas and meningiomas, determining hemizygous loss in our cohort was deemed irrelevant. In all, this study supports other recent findings on the role of p16 as a surrogate marker of CDKN2A loss, and establishes a cutoff p16 value of 5% for detecting homozygous CDKN2A deletion with robust sensitivity and specificity, and a cutoff p16 value of 20% for excluding homozygous CDKN2A deletion, in both low- and high-grade gliomas.
Additional file 1: Data 1 . Metadata file including de-identified patients’ demographics, tumor characteristics, average p16 scores, and molecular CDKN2A status. Data 2 . Frequency of CDKN2A homozygous deletion in tumors of different histology and grade.
|
Uptake pattern of training programs over two decades at an International Ophthalmic Training Institute in India
|
cef19ad9-1f4c-4373-b303-cc196f586e43
|
10155513
|
Ophthalmology[mh]
|
We performed a retrospective analysis of data between the years 2000 and 2019 extracted from the institute’s training database. The study protocol was approved by the ethical committee of the institute. This institute, through its eye hospital network spread across the states of Tamil Nadu and Pondicherry along with its management training institute, offers around 40 courses, including long- and short-term, clinical and non-clinical courses for eye-care personnel. These structured courses are supplemented by custom-designed courses, recognizing that competence needs on the ground are quite varied. These refresher training programs are well structured and application-oriented for ophthalmologists, allied ophthalmic personnel (AOP), and other supporting cadres.
Design and delivery of the training programs
The structure of various training programs is described in . All courses offered at the institute place a greater emphasis on the practical application of knowledge and skills. The clinical courses are designed with a combination of didactic lectures, hands-on training, and observational training to prepare the trainees for effective diagnosis and management of clinical conditions. The long-term fellowships for ophthalmologists provide advanced training in ophthalmic subspecialties, with durations varying from 12 to 24 months. The short-term clinical courses for practicing ophthalmologists are 2 weeks to 6 months long and provide specialized training. The duration of the short-term courses designed to upgrade the clinical and technical competency of the AOP ranges from 1 week to 6 months. The eye-care management training programs are designed to provide the managerial staff in eye hospitals and non-governmental organizations (NGOs) with an overview of the management principles in eye-care delivery and exposure to the operational challenges in eye hospitals and eye-care programs. The curricula for the management courses were designed through a formal workshop with participation from international eye-care NGOs, academic experts, and experienced practitioners. Selection of trainees for all courses is carried out through structured scrutiny of applications based on set eligibility criteria. Considering the participation of trainees from across the world, English is the medium of instruction for all programs. There is a standard fee structure, and the fee is set to be very nominal to maximize participation, especially from the LMICs. Participants are given a formal certificate upon successful completion of the training. To ensure an enabling environment for effective learning, the necessary facilities, including well-equipped classrooms, wet lab facilities for surgical training, a library, internet connectivity, accommodation, and food services, are provided within the campus. For participants of clinical courses, the lectures, practical training sessions, and case presentations are documented in a logbook that is reviewed periodically by the training supervisors. Various knowledge and skill assessment formats are used to evaluate the learning outcomes of the AOP attending the clinical and technical courses. In the management courses, the trainees are encouraged to develop strategies and actionable plans to be implemented at their respective institutions. For operational-level courses, the reporting authorities of the trainees are engaged throughout the training period so that each trainee gets the necessary support to implement their action plans following the training.
There is a formal mechanism to capture the trainees’ feedback with respect to the learning, design, and delivery of the programs.
Data collection and management
We extracted deidentified data from the training management database, “Aurovikas,” related to the trainees enrolled in various calendared and structured training programs conducted at the institute between 2000 and 2019. We analyzed the overall growth over the 20 years across the WHO regions. For the purpose of further analysis, the entire study period was categorized into four segments of 5-year duration: 1 (2000–2004), 2 (2005–2009), 3 (2010–2014), and 4 (2015–2019). The uptake patterns and growth were analyzed in each 5-year period by the type of training. Additionally, the uptake patterns were compared between training programs for “cataract” and “subspecialties.” The training programs were broadly categorized into five groups: 1: long-term fellowships for ophthalmologists, 2: short-term clinical courses for ophthalmologists, 3: short-term courses for AOP, 4: short-term technical skill training, and 5: eye-care management training. The association of the trainees’ home location with training uptake was compared across three geographic categories—India, the rest of the Southeast Asian region, and other countries. A two-sample proportion test was used to compare the percentages of trainees attending cataract and other subspecialty courses. P values < 0.05 were considered statistically significant. The statistical analyses were performed using STATA ver. 14 (Texas, USA) software.
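For illustration, the same two-sample proportion test can be run outside STATA; the sketch below uses Python's statsmodels with placeholder counts, since the paper reports percentages rather than raw group counts.

```python
# Two-sample proportion (z) test, analogous to the comparison described
# above. The counts are placeholders, not the study's actual data.
from statsmodels.stats.proportion import proportions_ztest

count = [60, 30]    # e.g., cataract trainees in two 5-year periods
nobs = [100, 100]   # trainees per period
z_stat, p_value = proportions_ztest(count, nobs)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```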
Between 2000 and 2019, a total of 9,091 eye-care professionals from 118 countries underwent various courses, and 61.7% (5,608) were male participants. The distribution of trainees across the five categories of courses was as follows: long-term fellowships for ophthalmologists: 1,243 (13.7%), short-term clinical courses for ophthalmologists: 3,683 (40.5%), short-term courses for AOP: 556 (6.1%), short-term technical skill training: 930 (10.2%), and eye-care management training: 2,679 (29.5%).
Geographic distribution of trainees
Among the 6 WHO regions, we found that the South East Asia Region (SEAR) had the most representation (81.3%). The proportions of trainees from the African, Western Pacific, European, Eastern Mediterranean, and American regions were 8.3%, 5.5%, 1.8%, 1.7%, and 1.4%, respectively . Within the SEAR, most trainees (87.4%) belonged to India and the rest came from Bangladesh (6.1%), Nepal (3.9%), and other countries (2.6%). Overall, most (98.3%) of the trainees were from the LMICs.
Growth in the uptake of training programs
We found an average growth rate of 4.8% in the uptake of the training programs across the four 5-year segments over the 20 years. There was steady growth from 7.4% to 47.5% between the first and the fourth 5-year periods in the uptake of long-term fellowships for ophthalmologists . The growth trend for the short-term clinical courses for ophthalmologists was high (35.79%) between the first and second 5-year periods and steadily declined to 21.3% by 2019. Enrollment in the short-term allied ophthalmic courses grew constantly from 12.5% to 33.3% between the first and third 5-year periods and saw a slight fall to 30.8% during 2015–2019. The uptake trend for technical skill training increased up to 31.7% during the second 5-year period and steadily declined to 28.1% toward the end of the study period. The eye-care management training had a fluctuating uptake pattern across the 5-year periods, and the growth ranged from 18.5% to 33.4% over the two decades. Whereas the uptake pattern of cataract-related training programs showed a downward trend across the 5-year periods, the subspecialty courses showed an upward trend . Cataract training, which predominantly included short-term training on surgical techniques (ECCE, SICS, and phacoemulsification), was attended by over 91.25% of the trainees. The average growth in the uptake of overall subspecialty training across the four 5-year segments was 8.2%. In the first 5-year period (2000–2004), the proportions of cataract and subspecialty trainees were 60% and 40%, respectively, and these proportions were reversed to 30% and 70%, respectively, in the fourth 5-year period. Among the long-term subspecialty fellowships, the retina-vitreous, cornea, and glaucoma fellowships were attended by more participants compared to the others. Among the short-term subspecialty courses, lasers in diabetic retinopathy and the clinical observership in glaucoma had higher participation. However, the upward growth trend in overall subspecialty training was consistent for every specialty. The difference between the proportions of trainees in cataract and subspecialty training was found to be statistically significant, both overall and within each 5-year period ( P < 0.001). describes the association of trainees’ home location and gender with the uptake of training. Overall, 71% of the trainees were from India, 10% from other SEAR countries, and 19% from other countries.
Although the proportions varied across countries, Indians significantly dominated in all categories. For the long-term clinical fellowships and short-term clinical courses, the proportions of Indian participants were 95% and 75%, respectively. For the other three categories, the proportion of Indians was around 60%. The proportion of participants from the rest of the SEAR was around 17% for the short-term allied ophthalmic courses, technical skill training, and eye-care management training. The proportions of participants in the above three programs from other countries were 23%, 25%, and 22%, respectively. More men (61.7%) than women attended the training programs overall. There was a statistically significant male dominance ( P < 0.01) in all the course categories except technical skill training, for which the proportions were comparable for men (52.6%) and women (47.3%). Eye-care management training was the category with the highest male dominance, at 75.3%.
A significant gap in the availability and distribution of trained manpower has already been established in all regions of the world. Need-based, well-structured training programs have been proposed by the WHO and the VISION 2020: The Right to Sight initiative as a solution to this challenge. In spite of considerable progress in the availability of trained ophthalmic human resources in all WHO regions, more are still needed to meet future challenges, especially in developing countries. The high number of participants, from 118 countries, attending various training programs in the study setting reflects the high demand for regular, structured capacity building of eye-care personnel across regions. That over 80% of the trainees came from the SEAR, and close to 90% of those from India, indicates the influence of geographical proximity to a training institute on the uptake of its programs. Visa formalities and other travel requirements, which vary across countries and over time, and the increasing cost of travel could have been barriers to the uptake of the courses by international participants. The overall upward growth in training uptake could indicate an increasing global demand for additional skill development among eye-care personnel. A greater emphasis on practical application and the quality of the training might also have enhanced participation over the years. However, a downward trend in participation was observed in three categories: short-term clinical courses, short-term allied ophthalmic courses, and technical skill training. The decline in short-term clinical courses is mainly due to decreasing participation in cataract surgical training over the years. Unlike doctors, who mostly organize funding to support their training themselves, AOP and technicians depend on their institutions for funding support for refresher training, and in most settings it might not be feasible for these cadres of personnel to be away for training for 1–2 months; these factors could explain the decline in the number of trainees from these cadres. We found a downward trend in the uptake of cataract surgery training and an upward trend in subspecialty fellowships across the study period. This could indicate that institutions in most countries have started their own cataract surgery training programs, given that cataract is the predominant cause of blindness worldwide. Hands-on training for cataract surgery has increasingly been included in ophthalmology residency programs in many countries, thereby minimizing the necessity for additional training. The upward trend in the uptake of subspecialty fellowship training indicates that eye-care institutions have been focusing more on building subspecialty services in recent years. Great efforts have been made over the past decade to strengthen the capacity of eye-care training centers in Africa through various programs. The “VISION 2020 Links program”, established in 2004, was a similar initiative that focused on improving the capacity of training institutes in developing countries, mainly in Africa; under it, several eye hospitals in developing countries are linked to a training institute in the United Kingdom to facilitate knowledge and skill transfer for a defined period.
Similarly, the Queen Elizabeth Diamond Jubilee Trust, through the Commonwealth Eye Health Consortium, sponsored around 140 ophthalmologists and other eye-care personnel from sub-Saharan Africa to undertake clinical fellowships at centers of excellence, predominantly in Asia, including 42 trainees in the study setting. These initiatives have enabled a large number of eye-care personnel to build their competence within a short period, and more such initiatives are necessary to achieve the required skill enhancement in the developing world. It will be challenging for smaller countries with very few ophthalmologists and eye-care professionals to offer such training, especially in subspecialty areas. There is therefore a need for more regional institutes, similar to the study setting, that offer skill development training to personnel from countries with inadequate training capabilities. Improved teaching methods and the use of appropriate technology, such as online education platforms, can further extend the reach of such training. We found the trainees' home location and gender to be associated with the uptake pattern of the training programs. The long-term clinical fellowships were taken predominantly by Indian participants, whereas comparatively more foreign participants took the short-term clinical training programs. This trend also reflects the organizational policy of admitting only domestic candidates to long-term clinical training, for several reasons. First, it has been found impractical for foreign candidates to be away from their clinical practice for long durations of 18 to 24 months. Second, long-term trainees must be involved in regular clinical activities requiring close interaction with patients and attendants. These challenges are minimized to some extent in short-term training, which could explain the higher uptake of short-term programs by foreign trainees. The overall male predominance among the trainees probably reflects the higher proportion of men in the eye-care workforce in general. Studies have reported varying reasons for the lower proportion of women in the eye-care workforce; challenges such as family commitments, which make it difficult for women to travel long distances and stay away from home, might also contribute to their lower uptake. The comparatively higher gender disparity among participants in eye-care management training might be due to the general trend of more men than women taking up leadership and managerial roles in eye-care institutions. Even though academic programs intended to train ophthalmologists and optometrists are available in almost all regions, the lack of structured curricula and appropriate training infrastructure tends to undermine the quality and productivity of these programs. Our findings suggest that the uptake of training programs is maximized when they are offered locally. Setting up localized institutes that can offer structured training programs is therefore essential to ensure the continuous professional development of eye-care human resources, especially in LMICs. Recognizing this urgent need, the study institute, in collaboration with the Seva Foundation USA, has initiated “Eyexcel”, a program that supports organizations in setting up their own training programs; it has trained over 100 teams from across the world over the past 14 years.
The strengths of our study include a large data set covering a variety of training programs conducted over two decades, with participants from all 6 WHO regions representing a wide range of cadres. Because our study was based on retrospective data available in the training database, the scope of our analysis was restricted; a prospective study could address this limitation.
The good representation of participants from developing countries in the training programs is encouraging, as it corresponds to the higher eye-care needs in those countries. The higher growth in clinical subspecialty training could indicate a welcome trend toward comprehensive eye care. Given the strong influence of distance on access to training, the development of institutes similar to the study setting in other regions would, hopefully, enhance global efforts to eliminate needless blindness. It is also important that governments and NGOs become proactive in promoting and supporting such skill development training programs, thereby enhancing the quantum and quality of eye care.
WHO: World Health Organization; SEAR: South East Asia Region; AECS: Aravind Eye Care System; AOP: Allied Ophthalmic Personnel
The study used retrospective data. However, as mandated by our institutional policy, the study with the project code RET200000319 was presented to the “Institutional Ethics Committee—Aravind Eye Hospital” and was approved (ECR/182/INST/TN/2013/RR-19) on November 27, 2020.
The datasets generated and analyzed during the current study are available in the Harvard Dataverse repository [10.7910/DVN/PTYQOQ].
KG conceptualized the idea, KG and SJ designed the study, managed the collection and analysis of the data, and prepared the manuscript. TR provided significant inputs to the study design and did a substantial revision of the manuscript. All authors read and approved the final manuscript.
Nil.
There are no conflicts of interest.
|
Artificial intelligence and machine learning in ophthalmology: A review
|
c49cc4f0-a7ca-4b79-afad-afa9f33e7bf7
|
10155540
|
Ophthalmology[mh]
|
Diabetic retinopathy

Screening for diabetic retinopathy (DR) is essential as it facilitates early detection and treatment, thereby preventing vision loss. This is relevant in Canada, where 3.7 million people have diabetic retinopathy; the incidence of DR is reported to be as high as 40% in at-risk populations, and a significant proportion of patients are not screened. DR is an area optimally suited to AI, which can help overcome screening barriers, improving access and preventing vision loss. Early studies of AI and DR focused on lesion detection and have since evolved toward classifying DR, with a predominant focus on standard color fundus photography. In 2016, both Abràmoff et al. and Gulshan et al. reported algorithms using convolutional neural networks (CNN) that were able to detect referrable diabetic retinopathy (area under the curve [AUC] of 0.980 and 0.991, respectively). Subsequent studies using larger data sets demonstrated good detection of referrable diabetic retinopathy, with AUCs of 0.97 and 0.94, respectively. Further studies have prospectively evaluated the performance of AI in detecting referrable DR: Heydon et al. reported that EyeArt v2.1 had a 95.7% sensitivity for referrable DR. In addition to standard fundus photography, AI detection of DR has been studied using optical coherence tomography (OCT) images, ultra-widefield (UWF) imaging, and even smartphone-captured retinal images. Intraretinal fluid on OCT can be identified accurately by CNNs; for instance, Lee et al. used manually segmented macular OCT images to develop a CNN capable of detecting macular edema (with a cross-validation Dice coefficient of 0.911). UWF imaging allows visualization of up to 200° of the fundus, potentially catching additional diabetes-related peripheral disease. Nagasawa et al. found high sensitivity (94.7%) and specificity (97.2%) of a CNN in detecting proliferative DR on UWF images. Similarly, Wang et al. found high sensitivity (91.7%), though limited specificity (50.0%), for referrable DR using UWF images. Access to and availability of imaging tools are challenges in effective DR screening; Natarajan et al. reported a smartphone-based, offline AI system that had a high sensitivity for detecting referrable diabetic retinopathy. A number of commercially available AI-developed DR screening platforms exist, including IDx-DR (Iowa), which holds FDA approval, and EyeArt (California), which is designated as a European Union Class IIa medical device.

Age-related macular degeneration

Age-related macular degeneration (AMD) is a common cause of vision loss, with an estimated 196 million patients impacted globally. Early detection and treatment of wet AMD can minimize vision loss. Given the burden of disease, AI could assist in mass screening of OCT and retinal photographs without in-person evaluations. Research in this field has progressed from ML with databases of under 1,000 images to databases of over 490,000 images, with high sensitivity and specificity rates. Burlina et al. used a database of over 130,000 images from 4,613 patients to develop a DL algorithm for automated detection of AMD; their DL system reported a 92% accuracy in identifying individuals with moderate and advanced AMD. Similarly, a study by Vaghefi et al. demonstrated that combining DL modalities in AMD—specifically fundus photographs, OCT, and OCT angiography scans—increased accuracy in detecting AMD from 91% to 96% compared to OCT alone. Keenan et al. recently published an AI algorithm that could accurately quantify the volume of fluid in neovascular AMD patients, which has potential for monitoring response to treatment. Deep learning has also been used to quantify other key features associated with AMD, including intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), ellipsoid zone loss, drusen, fibrosis, and subretinal hyperreflective material. Similarly, Moraes et al. published a paper on automated quantification of key features in AMD, while Fu et al. demonstrated that automatically captured quantitative parameters could predict visual change following treatment.

Additional applications in the retina

Moving beyond diagnosis of individual disease entities, De Fauw et al. reported a deep learning architecture that identified referrable retinal disease via OCT images, achieving a performance comparable to retina subspecialists (AUC = 99.21). This system was able to identify neovascular AMD, geographic atrophy, drusen, macular edema, macular holes, central serous retinopathy, vitreomacular traction, and epiretinal membrane. Deep learning is able to predict retinal function on microperimetry based on structural assessment of OCT in patients with Stargardt disease; this may assist in assessing patients with inherited retinal disease and in monitoring progression or treatment effect in clinical trials. Other AI systems are able to identify central serous retinopathy, pachychoroid vasculopathy, sickle cell disease, and macular telangiectasia. Aside from ocular diagnosis, DL can also predict demographics, including age and gender, and cardiovascular risk factors such as systolic blood pressure, smoking status, and major adverse cardiac events.
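The performance figures quoted in this section (sensitivity, specificity, AUC, and the Dice coefficient for segmentation) are standard and straightforward to compute. The sketch below, using scikit-learn on synthetic labels and scores, is purely illustrative and is not the output of any model discussed above.

```python
# Illustration of the screening and segmentation metrics cited above, on synthetic data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                              # 1 = referrable disease (synthetic)
scores = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, 200), 0, 1)   # synthetic model scores
y_pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auc = roc_auc_score(y_true, scores)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())
```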
While AI has been heavily researched in the posterior segment, the application of AI to anterior segment disease and diagnostics is now coming to the forefront of the ophthalmology literature.

Conjunctivitis

Using the Japan Ocular Allergy Society (JOAS) classification, Hiroki Masumoto trained a neural network to grade conjunctival hyperemia; the system graded the severity of the hyperemia with a high degree of accuracy. Trachoma is a blinding disease secondary to infection by ocular strains of Chlamydia trachomatis. Using eyelid images from a database of two clinical trials—the Niger arm of the Partnership for Rapid Elimination of Trachoma (PRET) trial and the Trachoma Amelioration in Northern Amhara (TANA) trial—machine learning was used to accurately classify trachomatous changes.

Lacrimal apparatus

Lacrimal scintigraphy (LS) is an objective and reliable method of studying the lacrimal drainage system and tear flow. Park et al. developed machine and deep learning algorithms using LS images to classify lacrimal duct pathology in patients with epiphora; the system showed accuracy comparable to that of a trained oculoplastic specialist.

Dry eye

Meibomian glands (MGs) are believed to play a critical role in ocular surface health, and their dysfunction is the most frequent cause of dry eye. Meibography, or photo documentation of the MGs of the eyelids with transillumination or infrared light, is a common test for the diagnosis, treatment, and management of MG dysfunction (MGD). Wang et al. developed a DL approach to digitally segment MG atrophy and compute percent atrophy in meibography images, providing quantitative information on gland atrophy. The algorithm achieved a 95.6% meiboscore grading accuracy, outperforming the lead clinical investigator by 16.0% and the clinical team by 40.6%; it also achieved 97.6% and 95.4% accuracy for eyelid and atrophy segmentations, respectively. Stegman et al. developed an ML segmentation algorithm to measure tear meniscus thickness via OCT as an index of tear film quantity; the system showed reproducible results, although the sample size was small.

Keratoconus

Keratoconus is a non-inflammatory corneal disorder characterized by stromal thinning and astigmatism. Kuo et al. retrospectively collected corneal topographic results over time to develop a DL algorithm to detect keratoconus. The model had fair accuracy for keratoconus screening and, furthermore, predicted subclinical keratoconus; the sensitivity and specificity of all CNN models were over 0.90, and the AUC reached 0.995 in one of the three tested models. Dos Santos et al. designed and trained a neural network (CorneaNet) to segment corneal OCT images. The algorithm measured the thickness of the three main layers, namely the epithelium, Bowman's layer, and the stroma, in patients with keratoconus and those with healthy eyes. All models showed very similar performance in identifying keratoconus, with validation accuracy ranging from 99.45% to 99.57%. Lavric et al. devised KeratoDetect, a neural network that achieved a high level of performance in detecting keratoconus from corneal topographies; with an accuracy of 99.33%, the authors claimed that it could assist ophthalmologists in rapid screening of patients. Similarly, Kamiya et al. evaluated the diagnostic accuracy of six colored anterior segment OCT maps: anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power, and pachymetry. DL was able to identify keratoconic eyes and classify the stage of disease. Shi et al. developed an automated classification system using ML, combining Scheimpflug imaging and ultra-high-resolution OCT; the system showed excellent performance (AUC = 0.93) in discriminating subclinical keratoconus from normal corneas, and the authors found that epithelial features assessed by OCT were the most important features for identifying keratoconus. Abdelmotaal et al. were able to identify keratoconus and subclinical keratoconus via ML using color-coded corneal maps obtained with a Scheimpflug camera.
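As a generic illustration of the tabular-feature ML classifiers described above, the sketch below trains a random forest to separate synthetic "keratoconus" from "normal" feature vectors. The three feature columns (anterior elevation, posterior elevation, thinnest pachymetry) and all values are invented and do not reproduce any cited system.

```python
# Generic sketch of an ML keratoconus classifier on synthetic tabular features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
half = 200
# Hypothetical features: anterior elevation (um), posterior elevation (um), thinnest pachymetry (um).
X_normal = rng.normal([8, 12, 540], [3, 4, 25], size=(half, 3))
X_kc = rng.normal([25, 40, 470], [8, 12, 35], size=(half, 3))
X = np.vstack([X_normal, X_kc])
y = np.array([0] * half + [1] * half)  # 1 = keratoconus (synthetic label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC on synthetic data: {auc:.3f}")
```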
Glaucoma is the second most common cause of irreversible blindness worldwide, and early detection of glaucoma has been shown to reduce vision loss. Digital photography of the optic nerve is a common method of screening for glaucoma and is used effectively as part of many teleglaucoma programs. Comprehensive evaluation of a glaucoma suspect might include spectral-domain optical coherence tomography (SD-OCT), perimetry, tonometry, pachymetry, and gonioscopy. AI algorithms have been developed to identify optic nerve changes on optic disc photographs and SD-OCT and thereby predict glaucomatous field changes; very little work has been done to date with DL for tonometry, pachymetry, and gonioscopy.

Glaucoma diagnosis and screening

Color fundus photography of the optic nerve is an inexpensive and widely available method of screening for glaucoma, and ML has been used to improve identification of early glaucomatous changes to the optic nerve captured by photography. Computer segmentation of the optic nerve into disc, cup, and vessels to create a glaucoma score showed an AUC of 98.2%, compared with 91.4% for the standard cup-to-disc ratio score, on three glaucoma-related public datasets. Once incorporated into teleglaucoma screening programs, such algorithms will automate the detection of early glaucoma. Simultaneous or near-simultaneous capture of optic nerve OCT scans at the time of color fundus photography has enabled the rapid development of highly accurate DL algorithms for identifying glaucomatous nerve damage. This method of training DL systems—first published by Medeiros et al. in 2018 using high-resolution digital images captured in the research setting—has removed the errors and biases associated with human grading of nerve damage. They showed that, once trained with OCT data as ground truth, the DL system could discriminate glaucomatous optic nerves from healthy eyes, with areas under the ROC curve of 0.944 (95% confidence interval [CI], 0.912–0.966) and 0.940 (95% CI, 0.902–0.966), respectively (P = 0.724). This seminal work has been replicated and confirmed in publications by various groups around the world using other fundus image repositories. Machine-to-machine (M2M) deep learning algorithms trained with SD-OCT to assess monoscopic optic nerve photographs are able to identify glaucomatous optic nerve damage more accurately than glaucoma specialists; this increase in accuracy suggests that DL systems will replace human review of disc photographs in the glaucoma screening programs of the future. It is also possible to detect progression of glaucomatous nerve damage on fundus photography using DL algorithms confirmed by OCT. Medeiros et al. assessed temporally disparate disc photographs of 5,529 patients over time using a DL CNN trained with OCT data from the same patients; the area under the ROC curve was 0.86 (95% CI, 0.83–0.88) for differentiating progressors from non-progressors. Agitha et al. showed a similar benefit of a DL model applied to 1,113 fundus images, achieving an accuracy of 94%, sensitivity of 85%, and specificity of 100% in the automatic diagnosis of glaucoma. In glaucoma detection, OCT has been used primarily to provide an objective truth reference for training glaucoma-related DL CNNs. Recent work has shown that OCT DL algorithms can reliably identify glaucomatous damage across various datasets and artificial intelligence algorithms, and DL algorithms are also able to predict glaucomatous visual fields from OCT nerve topography. The sensitivity and specificity of ML classifiers in diagnosing glaucoma can be improved by combining standard automated perimetry and OCT data, compared to OCT alone. Standard automated perimetry (SAP) is perhaps the most exciting area to assess with ML: the availability of long-term visual field test results, often spanning decades, in patients with and without glaucoma has provided extensive datasets for artificial intelligence researchers. Artificial intelligence is able to identify glaucoma four years in advance of diagnosis using original visual field data with good reliability. Asaoka et al. retrospectively assessed visual field data over 15 years in 51 patients with open-angle glaucoma and 87 healthy participants; their deep feedforward neural network (FNN) showed an AUC of 92.6% (95% CI, 89.8%–95.4%) in identifying pre-perimetric glaucoma. Unsupervised ML classifiers showed a sensitivity of 82.8% and specificity of 93.1% in identifying glaucomatous patterns on frequency doubling technology (FDT), suggesting that machine learning could become an important adjunct wherever visual field testing is performed as part of a glaucoma screening program.

Glaucoma progression

Machine classifier algorithms are able to identify glaucoma progression on visual fields. Progression of patterns (POP), a variational machine learning classifier, identified more eyes with progression of glaucomatous optic neuropathy in glaucoma suspects and glaucoma patients than guided progression analysis (GPA). Deep learning can also forecast future Humphrey visual fields in patients with glaucoma: Wen et al. were able to predict the development of future visual field changes up to five years ahead using deep learning networks.
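The AUC confidence intervals quoted above can be obtained in several ways; one common, assumption-light approach is the percentile bootstrap sketched below on synthetic data (the cited studies may well have used other methods, such as DeLong's test).

```python
# Percentile-bootstrap 95% CI for a ROC AUC, on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=300)              # 1 = glaucomatous, 0 = healthy (synthetic)
s = 0.8 * y + rng.normal(0.0, 0.5, size=300)  # synthetic classifier scores

boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), size=len(y))   # resample cases with replacement
    if len(np.unique(y[idx])) < 2:               # AUC needs both classes present
        continue
    boot_aucs.append(roc_auc_score(y[idx], s[idx]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y, s):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```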
Retinopathy of prematurity

Retinopathy of prematurity (ROP) is a vasoproliferative retinal disease that is a leading cause of childhood blindness. The Early Treatment for Retinopathy of Prematurity (ETROP) study showed that screening and early intervention are critical for improving visual outcomes. Improved survival of extremely premature infants has increased the prevalence of ROP, particularly in developing nations. AI can play a vital role in assisting with ROP diagnosis, thereby improving treatment outcomes. In a study by Brown et al., a DL algorithm trained with wide-field retinal photographs outperformed 6 out of 8 ROP experts in diagnosing ROP on an independent data set of 100 images. The algorithm was trained with a database of 5,511 fundus images and demonstrated 93% sensitivity and 94% specificity in determining ROP severity. Tong et al. developed a neural network trained for ROP identification with 36,000 fundus images; this system achieved an accuracy of 0.903 for ROP severity classification and demonstrated diagnostic ability comparable to or better than that of retina subspecialists. Other studies have shown similar success with ROP severity classification and deep learning. Integration of AI into ROP screening programs will likely occur in the near future.

Congenital cataracts

Pediatric cataract is one of the leading causes of juvenile blindness, with an estimated prevalence of 4.24 per 10,000 live births. Congenital cataract guardian (CC-Guardian) is an AI agent that incorporates individualized prediction and scheduling, and intelligent telehealth follow-up computing, for congenital cataracts. The system exhibits high sensitivity and specificity and has been integrated into a web-based smartphone app. The intelligent agent consists of three functional modules: (i) a prediction module that identifies potential high-risk congenital cataract patients who are likely to suffer complications, (ii) a dispatching module that schedules individual follow-up based on the prediction results, and (iii) a telehealth module that makes intervention decisions at each follow-up examination. All records were derived from routine examinations in the Childhood Cataract Program of the Chinese Ministry of Health.

Amblyopia

In Korea, Chun et al. assessed a DL system to predict the range of refractive error in children from smartphone photorefraction images as a screen for amblyopia, comparing it against cycloplegic refraction; the DL tool showed an accuracy of 81.6%.
The development and modern usage of AI in research has become a breakthrough for optimization and efficiency. With the growth of electronic medical records, healthcare providers and hospitals are able to accumulate a wealth of patient information. A common barrier to sifting through this information is the time required to appropriately review each individual item. With the advent of AI, once a suitable algorithm has been developed or an automated system has been trained to batch patient information, data collection can be completed in a fraction of the time it would take manually. Ophthalmology is a medical specialty conducive to retrieving these large amounts of data because of its rapid access to ophthalmic imaging and objective markers (e.g., visual acuity, intraocular pressure [IOP], retinal thickness, etc.). The Intelligent Research in Sight (IRIS) Registry is one of the largest clinical datasets and includes data on demographics, disease conditions, and visit rates in ophthalmology. The Smart Eye Database stores electronic medical records of ophthalmology patients, stratified by eye condition. Datasets such as IRIS and the Smart Eye Database allow us to appreciate subtle correlations, conduct multicenter studies, incorporate multimodal analyses, identify novel imaging patterns, and increase the power of studies, all of which may not be possible with smaller sets of data. As described by Joshi et al., this large collection of medical information, or "big data," serves as a perfect substrate for AI, ML, and DL to develop and run algorithms at a scale that would never have been possible before.
Ophthalmology is a specialty well-suited to AI integration. The extensive use of multi-modal digital imaging and diagnostic tests captured over time across all ophthalmology subspecialties provides a treasure trove of opportunities for machine learning that are now being realized. Artificial intelligence and machine learning solutions have begun the evolution from the research setting to clinical tools that will be invaluable for ophthalmologists in all clinical settings.
Nil.
There are no conflicts of interest.
|
Indian Journal of Ophthalmology – A catalyst to change
|
f4c20dd7-5aa0-4fdf-9faf-9c71b6396068
|
10155555
|
Ophthalmology[mh]
|
“The most effective way to do it is to do it.” – Amelia Earhart

A 400% growth in the number of manuscripts submitted to IJO has been commensurate with the rising academic aspirations of Indian Ophthalmologists. Editorial support in developing manuscripts, and optimized timelines in manuscript processing and publication, have further encouraged potential authors to consider IJO as their preferred journal. Noteworthy is the rise in submissions by Indian authors – from 575 in 2016 to over 3000 currently – an impressive five-fold jump! Despite the daunting increase in numbers, the timelines have been optimized. The average time to peer review and initial decision has been reduced to less than 6 weeks, as promised. The final manuscript acceptance time has been reduced from an average of 202 days for review, revision, and acceptance in 2016 to under 70 days in 2022, and the average acceptance-to-publication time has been reduced to 60 days. IJO is on time, every time, with the electronic Table of Contents (eToC) and PDF/flipbook circulated consistently at the beginning of each month.
“Whatever course you decide upon, there is always someone to tell you that you are wrong. There are always difficulties arising which tempt you to believe that your critics are right. To map out a course of action and follow it to an end requires courage.” – Ralph Waldo Emerson

Apart from its fresh and catchy cover pages, IJO has courageously invested in reinventing itself and making its content attractive, readable, practical, and relevant to its readers. IJO special issues focus on a contemporary theme. Special issues on Cataract Surgery in December 2017, Retina in December 2018, Retinopathy of Prematurity in June 2019, Ocular Oncology in December 2019, Community Ophthalmology (with a special supplement) in February 2020, COVID-19 in May 2020, Uvea in September 2020, Refractive Surgery in December 2020, Pediatric Retina in August 2021, Diabetic Retinopathy in November 2021, Rare Eye Diseases in July 2022, and Manual Small Incision Cataract Surgery in November 2022, each with well-curated articles, have become very popular. A special issue on Dry Eye is due in April 2023. While some of the new academic sections – One Minute Ophthalmology, Perspective, Education and Training, Innovations, and Preferred Practice – have already built a committed readership base, personality-based features such as Living Legends, Tales of Yore, and Women in Ophthalmology have positively enthused the readership. The IJO Living Legends Series continues to elucidate the life and times and inspiring stories of those who have ushered in a paradigm change in the understanding and practice of Ophthalmology. Dr. Bruce Spivey, Dr. Bradley Straatsma, Dr. Sohan Singh Hayreh, Dr. Jerry A. Shields, Dr. Carol L. Shields, Dr. Narsing A. Rao, Dr. Ioannis Pallikaris, and Dr. Michael T. Trese have been featured as part of this unique series. In the platinum jubilee year, the Living Legends Series began featuring Indian Ophthalmologists who have made a world of difference – Dr. Pran Nath Nagpal, Dr. Sengamedu Badrinath, Dr. P. Namperumalsamy, and Dr. Gullapalli N. Rao to date, with many more in the pipeline. Our monthly series Tales of Yore and Women in Ophthalmology, meant to pay tribute to the legends and inspire the young, have been featuring heartwarming stories of Indian idols – Dr. Govindappa Venkataswamy, Dr. Mahesh Prasad Mehray, Dr. Inder Sen Jain, Dr. Lalit Prakash Agarwal, Dr. Lakshman Chandra Dutta, Dr. Vasundhara Kalevar, and Dr. Sudha Sutaria. The companion journal IJO Case Reports, now in its second year of publication, has published over 800 well-illustrated brief reports, each with a succinct teaching point, and is following due timelines for possible indexation in the coming years. IJO Videos, a multimedia journal published as part of IJO and PubMed-indexed right from the first issue, is dedicated to publishing educational videos. Despite all the new developments, most of which required starter financial support, and the financially stressful COVID-19 times, IJO is currently in excellent financial health, with a corpus fund of over INR 20 million and an INR 10 million surplus in its expense account. The Journal has adapted ably to the financially challenging circumstances by going green – online access, flipbook, and PDF for everyone, with paper copies for those who opted in – and by augmenting print and digital advertisement support, library subscriptions, and royalties.
The success of IJO can be ascribed to the hard work and intellectual inputs of a highly committed, responsible, and cohesive team: a wise Editorial Board, a supportive managing editor, a workaholic associate editor, expert section editors, young and restless assistant editors, and an army of over 3000 active reviewers.

“They, while their companions slept, were toiling upward in the night” – Henry Wadsworth Longfellow
“Setting an example is not the main means of influencing others; it is the only means.” – Albert Einstein

IJO is an open-access journal, and everyone can download full-text articles. On average, about two-thirds of AIOS members access the monthly eToC within a week of its release. Online hits have touched an unprecedented high of 5,00,000 from across the world – that is one hit every 5 seconds! The latest initiative to provide the complete PDF and flipbook through WhatsApp broadcasts has seen a very encouraging uptake, with over 50% of Indian Ophthalmologists having subscribed to this service already. One objective assessment of a journal's academic relevance is the Impact Factor, which has scaled up from 0.835 at the 2016 baseline to 2.969 in 2021. The CiteScore has risen from 1.8 in 2016 to 3.1 in the last quarter of 2022, with the annual number of citations increasing by 480% from 1424 to 6841 in the same period. The overall impact of Indian ophthalmic research was globally ranked 6 (795 publications in Ophthalmology) in 2016; there has been a huge leap to country rank 3 in 2021, with 2256 publications, a significant portion of which have been in IJO. Publications in IJO have helped add over 2000 unique first/corresponding authors and co-authors to the pool of Indian researchers in the last six years. The ranking of IJO itself has improved to 12 on the Google Scholar journal ranking, and it now stands tall among its international peers. We continue to challenge ourselves to make the Journal bigger, brighter, and better with each issue. However, the binding principles of IJO remain the same as emphasized in the past: “While concentrating on raising the impact of the Journal and initiating innovative manuscript formats to make it interesting and appealing to the entire spectrum of Ophthalmologists, the priority is to support the soaring academic aspirations of Indian Ophthalmologists and be an integral part of the robust growth of Indian Ophthalmology. A collective effort and synergy between the authors, Journal, and readers – authors submitting the best of their works to IJO by choice, the Journal continuing to optimize the peer review and publication timelines, and the discerning readers accessing the publications to update their knowledge and adapt contemporary practice patterns – is bound to take IJO to newer heights in the years to come.”

“Trust is earned when actions meet words.” – Chris Butler

We remain grateful for your trust.
|
Commentary: Ophthalmology training programs: Optimization of human resource to supplement clinical expertise and strengthen eye care delivery systems
|
f5caed95-b59f-4dee-bd4d-2a7ff35c85e2
|
10155585
|
Ophthalmology[mh]
| |
Does workplace telepressure get under the skin? Protocol for an ambulatory assessment study on wellbeing and health-related physiological, experiential, and behavioral concomitants of workplace telepressure
|
c3b48442-b626-4732-9781-38779cf9e7b2
|
10155671
|
Physiology[mh]
|
Modern information and communication technologies at work

Modern information and communication technology (ICT) devices such as computers, laptops, tablets, and smartphones are important tools in the daily working life of many employees. ICT has been transforming the way many people work by creating the conditions for work to take place anywhere at any time. In particular, smartphones serve as small computers that include numerous functions such as digital calendars, phone calls, internet and social media access, and especially sending and receiving emails and text messages. ICT-mediated communication, and especially email communication, is essential in many organizations. Message-based ICTs such as email and text messages may increase flexibility and convenience in responding to work-related requests and facilitate team collaboration across geographic and other accessibility barriers. With the Covid-19 outbreak, a sudden shift from office-centric workplaces to remote work took place, a phenomenon labeled “forced flexibility” by Franken and colleagues. With this shift to working from home, the number of work emails sent during both work and non-work hours has dramatically increased. Emails and voice mails, as asynchronous forms of communication, allow the receiver flexibility and control in choosing when and where to handle received messages. However, as shown by several surveys and studies, many employees have limited or no response flexibility and feel the need to be continuously connected to the workplace and to respond promptly to work-related communication through ICT during work hours and off-job time, a phenomenon called the autonomy paradox. For example, a qualitative study reported that insurance company employees were required to respond to customer chat messages within 15 s, and a survey found that 38% of Australian workers checked their emails during non-work hours and kept their mobile phones switched on. To better characterize the ambivalence of employees' relationship to ICTs and mobile technology in general, Vanden Abeele recently introduced the concept of digital wellbeing. According to her model, digital wellbeing refers to “a subjective individual experience of optimal balance between the benefits and drawbacks obtained from mobile connectivity” (p. 938). One possible challenge in achieving and maintaining digital wellbeing is workplace telepressure (WTP).

The concept of workplace telepressure

Barber and Santuzzi introduced the concept of WTP to describe the preoccupation with, and urge for, responding quickly to work-related ICT messages. As such, WTP is a psychological state experienced by the employee. Both personal factors such as neuroticism and workaholism and organizational factors such as prescriptive norms appear to contribute to the experience of WTP. Workplace telepressure is supposed to emerge when workers begin to view the use of asynchronous communication technologies as similar to synchronous communication forms (e.g., face-to-face communication), which generally require immediate responses. As employees prioritize ICT-assisted communications during work time and off-job time, the response flexibility and control over response times that asynchronous communication would normally allow are canceled out. Uninterrupted work periods, which are required to accomplish work tasks, as well as the uninterrupted time necessary for recovery, become less frequent and shorter.
Workplace telepressure can ultimately lead employees to perceive the use of message-based technology for work purposes as inescapable work instead of flexible work access. Since the initial work by Barber and Santuzzi, researchers have shown a growing interest in studying WTP.

Workplace telepressure, connection to work, and wellbeing and health

There is growing evidence that high levels of WTP might represent a significant risk factor for employees’ wellbeing and health. For instance, employees reporting higher levels of WTP also reported higher levels of burnout, absenteeism due to physical or mental health issues, and worse satisfaction with work-life balance. In the proposed project, we aim to further examine the potential effects of WTP on wellbeing and health by investigating how WTP relates to important indices of wellbeing and health that have been only partially considered, or not yet considered, in research on WTP.

Drawing from the Effort-Recovery Model, we suggest that WTP can deteriorate employees’ wellbeing and health by prolonging employees’ work-related psychophysiological effort expenditure and by impairing psychophysiological recovery. The Effort-Recovery Model posits that work-related demands require effort, which strains employees’ psychophysiological systems. During non-work time, the psychophysiological systems can revert to pre-demand states as long as employees refrain from putting additional strain on them. If employees’ psychophysiological systems recover sufficiently, there should be no long-term negative consequences for employees’ wellbeing and health. In contrast, if exposure to work-related demands is prolonged, recovery is likely to be insufficient. This imbalance is expected to result in an accumulation of psychophysiological alterations − also known as "wear and tear" within the concept of allostatic load − that can deteriorate employees’ wellbeing and health. We hypothesize that, compared to lower levels of WTP, higher levels of WTP are associated with a more unfavorable wellbeing and health profile.

Moreover, we aim to uncover potential underlying mechanisms of the hypothesized relationships between WTP and the wellbeing and health-related measures. Barber and Santuzzi suggested that WTP might be a critical factor for employees’ wellbeing and health because it has the potential to extend employees’ work stress both during designated work times and during non-work times by encouraging continued connection to work activities. We hypothesize that the preoccupation with and urge to respond to message-based ICTs for work purposes that define WTP prolong employees’ work-related psychophysiological effort expenditure and impair psychophysiological recovery by increasing connection to work. We operationalize connection to work in terms of work-related workload and work-related perseverative cognition. Our conceptual model is depicted in Fig. .

To address these questions, we plan to use ambulatory assessment methods, which, in contrast to the more common cross-sectional survey studies, allow for the measurement of human behavior where and when it happens and for the analysis of day-level within-person associations. Below, we introduce the wellbeing and health-related outcomes and connection to work, with its two components "work-related workload" and "work-related perseverative cognition", as potential mediating factors.
In reviewing research on WTP, we use "workplace telepressure" when referring to this concept in a broad sense, "general workplace telepressure" to describe the WTP that employees report experiencing in general in their lives, and "daily workplace telepressure" to describe the WTP that employees report experiencing during a specific day. Most researchers have considered only general WTP. Cambier and colleagues showed that there is substantial within-person variability in daily WTP (around 50%); this finding points to the importance of assessing daily WTP and thus the need for ambulatory assessment studies.

Wellbeing and health-related outcomes

Biological parameters

The hypothalamic–pituitary–adrenal (HPA) axis is a central regulatory system implicated in the organism’s reaction to stressors. Cortisol and dehydroepiandrosterone (DHEA) are the main products of the HPA axis. Cortisol and DHEA exhibit their highest levels after awakening, followed by a decline throughout the afternoon and evening. In most healthy people, psychosocial stressors elicit an activation of the HPA axis resulting in increased salivary cortisol (sC) and salivary DHEA (sDHEA) secretion. Dysregulation of the HPA axis in the form of abnormal cortisol and/or DHEA responses to stressors has been linked to several health problems. Anabolic balance is the ratio of DHEA to cortisol and has been suggested to be a more sensitive indicator of wellbeing and health than cortisol or DHEA alone. Lower anabolic balance is associated with more unfavorable wellbeing and health outcomes.

A second main regulatory system involved in the response to stressors is the sympathoadrenal-medullary (SAM) axis. Salivary alpha-amylase (sAA) is an enzyme secreted from the salivary glands that has gained interest over the last fifteen years as a marker of SAM axis activity. sAA activity is low in the morning and steadily increases over the course of the day, typically reaching its peak in the late afternoon. In healthy individuals, psychosocial stressors induce increased sAA activity.

Heart rate variability (HRV) represents the change in the time interval between successive heartbeats. Its assessment is of particular interest because HRV can provide an index of the activity of the parasympathetic nervous system, which is associated with many psychophysiological processes. Low cardiac parasympathetic activity is an important predictor of disease and mortality. We refer to the parasympathetic activity of the heart as cardiac vagal tone.

The four parameters sC, sDHEA, sAA, and HRV index the activity of three intertwined yet distinct biological stress-related systems. Together, these systems provide a comprehensive and complementary in-depth picture of the biological response to short-term changes in psychosocial stress factors and are thus well suited for investigating the effects of day-to-day variations in WTP. Although there is reasonable theoretical justification for an association between WTP and HPA axis, SAM axis, and cardiac parasympathetic activity, there is no empirical evidence yet (two of these derived markers are illustrated in the computational sketch below).

Psychosomatic complaints

Psychosomatic complaints refer to self-reported health problems such as musculoskeletal pain and headache. Psychosomatic complaints are very common in the general population and are frequently reported reasons for health care utilization and for sick leave. A few findings indirectly suggest that WTP might be significantly associated with psychosomatic complaints.
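Two of the derived markers described above, anabolic balance (the DHEA-to-cortisol ratio) and cardiac vagal tone indexed via HRV, can be made concrete with a short computational sketch. The sketch is illustrative only: the protocol does not prescribe these functions or units, and the choice of RMSSD (root mean square of successive differences) as the HRV index is an assumption, not the study's stated metric.

```python
import math

def anabolic_balance(dhea: float, cortisol: float) -> float:
    """Anabolic balance = DHEA / cortisol (both in the same unit, e.g., nmol/L).

    Higher values indicate a more favorable (anabolic) balance; requires
    a positive cortisol concentration.
    """
    if cortisol <= 0:
        raise ValueError("Cortisol concentration must be positive.")
    return dhea / cortisol

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences of RR intervals (ms).

    A standard time-domain HRV index of cardiac vagal tone; the protocol
    itself does not state which HRV index will be used.
    """
    if len(rr_intervals_ms) < 2:
        raise ValueError("Need at least two RR intervals.")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: a saliva sample with DHEA 0.8 nmol/L and cortisol 10.0 nmol/L,
# plus a short ECG-derived RR series in milliseconds.
print(anabolic_balance(0.8, 10.0))          # 0.08
print(rmssd([812.0, 830.0, 801.0, 845.0]))  # ~32.1
```

In the planned study, the inputs for such computations would come from the saliva samples and the continuous ECG recordings described in the Methods.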
In this study, we integrate cognitive weariness, a core component of burnout assessment, into the concept of psychosomatic complaints. Cognitive weariness is defined as the difficulty of maintaining and optimizing cognitive and intellectual abilities over time under sustained cognitive demands. General WTP has been associated with increased cognitive weariness. The planned study is the first to investigate the relationship between WTP and psychosomatic complaints using an ambulatory assessment approach.

Sleep quality

Researchers in occupational health psychology have increasingly acknowledged the importance of studying the associations among the three major areas of life: work, non-work, and sleep. Sleep disturbances adversely affect physical and mental health. Compared to the gold standard of polysomnography, actigraphy is considered a good low-cost, non-invasive, objective approach to continuously monitoring sleep behavior. Actigraphy-derived sleep fragmentation at night has been shown to be sensitive to work stressors. Subjective sleep measures and actigraphy-based sleep parameters are not highly correlated. Three cross-sectional survey studies found that higher levels of general WTP were significantly correlated with poorer self-reported sleep quality. No ambulatory data exist on the association between WTP and either self-reported sleep quality or actigraphy-based sleep parameters.

Mood

Moods are important components of subjective wellbeing. Park and colleagues reported that general WTP predicted higher levels of negative affect across a five-week period but did not report statistical analyses. No ambulatory data exist on the association between WTP and mood.

The mediating factor: connection to work

Drawing from Barber and Santuzzi, we hypothesize that WTP might impair employees’ wellbeing and health by encouraging continued connection to work during designated work times and during non-work times. Some support for this contention comes from the psychological detachment literature. Psychological detachment from work refers to “the individual’s sense of being away from the work situation” (p. 579). Increased connection to work means that psychological detachment from work is impaired. Higher levels of general WTP are associated with less general psychological detachment from work. Moreover, Santuzzi and Barber found that general WTP was indirectly related to burnout and poorer sleep quality through psychological detachment at the between-person level. In a five-day diary study, Cambier and colleagues found that the negative association between WTP during off-job hours and psychological detachment during off-job hours was significant at the between-subject level but not at the within-subject level. Lack of psychological detachment results from performing work activities, from not disconnecting mentally from work during breaks and before and after work, or from a combination of the two. In this project, we aim to extend the existing literature on WTP by considering the possible association between WTP and connection to work, operationalizing connection to work in terms of work-related workload and work-related perseverative cognition.

Work-related workload

Urges are difficult to resist. Consequently, employees might be expected to give in to their urges and thus engage more frequently in behaviors such as checking, reading, and writing emails and text messages when experiencing high levels of WTP than when experiencing low levels of WTP.
Work-related electronic communication may often entail requests that generate additional work in the form of calls or other tasks, such as web-browsing for work-related purposes and using computer software for tasks such as text processing. Thus, we would predict that higher levels of WTP are associated with more time spent on work activities. In line with these ideas, survey studies have shown that employees who reported higher levels of general WTP also reported responding more frequently to work emails during both work and non-work hours, vacation days, and even sick days than employees with lower levels of general WTP. Furthermore, employees with higher levels of general WTP exhibited shorter response latencies to work emails during work hours. In a diary study, Van Laethem and colleagues found that employees displaying higher levels of general WTP reported significantly more work-related smartphone use both during and after work than employees with lower levels of general WTP. In another diary study, Cambier and colleagues reported that daily WTP during off-job time was significantly related to daily work-related smartphone use during off-job time. Cross-sectional analyses revealed that general WTP was positively related to the frequency of ICT use at work and to the use of ICT to perform work tasks and arrange work schedules at home and during non-work hours. Taken together, these studies suggest that higher levels of WTP may be associated with higher work-related workload throughout a workday. Moreover, we hypothesize that work-related workload partially mediates the relationship between WTP and the studied measures of wellbeing and health.

As suggested by the Effort-Recovery Model, increased demand exposure via increased working hours could exhaust employees’ resources to the point of poor wellbeing and health. Several studies have shown that long working hours adversely affect health. A significant linear relationship has been reported between the number of working hours and sleep disturbances. Associations between longer working hours and physiological changes relevant to the planned project have also been reported. Compared to employees working regular hours, employees working long hours exhibited decreased vagal activity and increased sympathetic activity as indexed by measures of HRV. Moreover, some evidence exists for incomplete or insufficient physiological recovery as a mechanism that may explain the relationship between long working hours and impaired wellbeing and health. Incomplete or insufficient recovery from work, which is positively related to the number of working hours, is associated with higher cortisol levels and an elevated risk of cardiovascular death.

With regard to ICT-assisted work tasks more specifically, ICT use for work purposes during off-job time has been associated with worse psychological wellbeing in most investigations, worse affect, and poorer self-rated sleep quality. Employees reporting being contacted often or sometimes outside of regular working hours (e.g., by email) in the past 12 months exhibited a higher risk of health impairments (e.g., musculoskeletal and gastrointestinal complaints) and a higher risk of sickness absence during the same period. Using a cross-sectional design, Hu and colleagues found that general WTP had an indirect effect on physical exhaustion and sleep quality via the use of technology devices to perform work tasks and arrange work schedules at home and during non-work hours.
The results by Hu and colleagues refer to between-person associations.

Work-related perseverative cognition

Perseverative cognition has been defined as “repetitive or sustained activation of cognitive representations of past stressful events or feared events in the future” (p. 407). Prototypical forms of perseverative cognition are future-oriented worry and past-oriented rumination. Higher levels of general WTP predicted higher levels of negative rumination after work over a five-week period. More recently, Cambier and Vlerick embedded boundary-crossing contexts in Barber and Santuzzi’s telepressure measure, such that the new item pool assesses both WTP during leisure time and private life telepressure (PTP, i.e., the preoccupation with and urge for responding quickly to personal ICT messages) at work. Their findings revealed a positive association between WTP during leisure time and work-related rumination during leisure time; that is, employees tended to ruminate more about work-related issues during leisure time on days when they experienced more WTP during leisure time. These authors found a similar association between PTP at work and private life rumination at work. Perseverative cognition has been identified as a common cognitive process likely to play a significant role in both psychological and somatic health. There is evidence that perseverative cognition is associated with higher cortisol levels, higher sAA activity, and lower cardiac vagal tone. The link between perseverative cognition and DHEA remains to be investigated. Perseverative cognition is positively and prospectively associated with the number of subjective health complaints, worse sleep quality, and worse mood.

Aims and hypotheses

This project aims to further our knowledge on the timely topic of WTP by testing a model that proposes that WTP is significantly associated with a more unfavorable profile of wellbeing and health-related measures, and that these associations are mediated by connection to work. We have the following four hypotheses (Fig. ).

Hypothesis 1: 1.1 Higher levels of WTP are associated with lower anabolic balance, higher sAA activity, and lower HRV, and 1.2 work-related workload and work-related perseverative cognition are significant mediators of these relationships (Fig. a).

Hypothesis 2: 2.1 Higher levels of WTP are associated with more psychosomatic complaints, and 2.2 work-related workload and work-related perseverative cognition are significant mediators of this relationship (Fig. b).

Hypothesis 3: 3.1 Higher levels of WTP are associated with both worse self-rated and worse actigraphy-derived sleep quality, and 3.2 work-related workload and work-related perseverative cognition are significant mediators of these relationships (Fig. c).

Hypothesis 4: 4.1 Higher levels of WTP are associated with worse mood, and 4.2 work-related workload and work-related perseverative cognition are significant mediators of this relationship (Fig. d).

All hypotheses are at the within-person level.
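Because all hypotheses are framed at the within-person level, day-level analyses must separate within-person from between-person variance. The sketch below shows one standard approach, person-mean centering of daily WTP followed by a random-intercept multilevel model, using pandas and statsmodels on synthetic data. The variable names and model specification are illustrative assumptions, not the protocol's analysis plan.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_participants, n_days = 20, 7

# Hypothetical long-format data: one row per participant-day.
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_days),
    "wtp_daily": rng.uniform(1, 5, n_participants * n_days),  # daily WTP (1-5)
})
# Toy outcome: worse mood on high-WTP days, plus person-level variation.
person_effect = np.repeat(rng.normal(0, 0.5, n_participants), n_days)
df["mood"] = 4.0 - 0.3 * df["wtp_daily"] + person_effect + rng.normal(0, 0.3, len(df))

# Split daily WTP into a between-person component (person mean) and a
# within-person component (daily deviation from the person mean).
df["wtp_between"] = df.groupby("participant")["wtp_daily"].transform("mean")
df["wtp_within"] = df["wtp_daily"] - df["wtp_between"]

# Random-intercept model: the wtp_within coefficient estimates the
# day-level, within-person association targeted by the hypotheses.
fit = smf.mixedlm("mood ~ wtp_within + wtp_between", df,
                  groups=df["participant"]).fit()
print(fit.summary())
```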
Participants

Participants will be employees (50% female) recruited from a variety of organizations and occupations.
We will recruit participants using flyers, the local press, and social media platforms. Based on sample size calculations (see Sect. ), we will need 120 participants with complete data to test our hypotheses. To allow for potential device malfunctioning, dropouts, non-compliance, or unusable data, we will schedule 10 additional participants, i.e., 130 participants in total. Participants will receive compensation of up to 352 Swiss Francs and will be reimbursed for study-related travel expenses.

To be included in the study, prospective participants must fulfill the following criteria: (1) being healthy, (2) being over 18 years old, (3) having good French skills, (4) working a paid daytime job with a regular weekly schedule of at least four consecutive workdays within the same organization in Switzerland, and (5) using ICT daily to communicate for work-related purposes with supervisors, coworkers, subordinates, clients, or patients. The psychophysiological variables of our study have been shown to be affected by (1) shift work, (2) a Body Mass Index > 30 kg/m², (3) cardiovascular, neurological, metabolic, endocrine, respiratory, autoimmune, or psychiatric disorders, or the sleep disorders severe insomnia and sleep apnea, (4) pregnancy and breastfeeding, and (5) alcohol abuse. We will consider these factors as exclusion criteria. We will also exclude participants who (6) use psychotropic drugs or any medication known to affect our variables, except hormonal contraceptives for women of childbearing age and hormonal replacement therapy drugs for postmenopausal women. We will finally exclude employees who (7) wear a pacemaker. (These criteria are illustrated in the screening sketch after the Procedure subsection below.)

Procedure

The study procedure for each participant will consist of three main phases: an online entry questionnaire, a laboratory visit, and an ambulatory assessment. The entire study will be conducted in French.

Online entry questionnaire

Participants who contact the research team will receive an email including the study information sheet and the internet link to the entry questionnaire. They will be invited to read the information sheet carefully before proceeding with the entry questionnaire. In accordance with article 9 of the Swiss Human Research Ordinance, participants will have to tick a box to accept that their data will be saved and processed in order to establish their eligibility for this study.

Laboratory visit

Eligible participants will be invited to our laboratory for an initial meeting. After receiving an explanation of all the procedures to be undertaken and signing the consent form, they will be asked to fill out baseline questionnaires. These questionnaires will allow us to better characterize the sample and to control statistically for potential confounding variables. Afterwards, participants will be familiarized with the questionnaires, instruments, and procedure of the ambulatory assessment. At the end of the meeting, they will leave with a briefcase containing the material for the ambulatory assessment.

Ambulatory assessment

Participants will be monitored for seven days. The assessment will be scheduled during a workweek that is expected to be typical for each employee. The participation week will be scheduled so that the weekend days are assessed consecutively. Furthermore, the participation week will not be preceded or followed by vacations, given the effects that these can have on employees. For women of childbearing age, the ambulatory assessment phase will start following the end of their period.
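To illustrate how the eligibility rules above might be operationalized at screening, the sketch below encodes a subset of them as a boolean check. Field names, and the reduction of clinical judgments (e.g., "being healthy") to simple flags, are simplifying assumptions; actual eligibility decisions would rest with the study staff.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    # Simplified screening fields; real screening also covers the full list
    # of disorders, medication use, pregnancy/lactation, and overall health.
    age: int
    good_french: bool
    daytime_job_4_consecutive_days: bool   # inclusion criterion 4
    daily_work_ict_use: bool               # inclusion criterion 5
    shift_work: bool                       # exclusion 1
    bmi: float                             # exclusion 2: BMI > 30 kg/m^2
    relevant_disorder: bool                # exclusion 3 (any listed disorder)
    pregnant_or_breastfeeding: bool        # exclusion 4
    alcohol_abuse: bool                    # exclusion 5
    interfering_medication: bool           # exclusion 6
    pacemaker: bool                        # exclusion 7

def is_eligible(a: Applicant) -> bool:
    included = (a.age > 18 and a.good_french
                and a.daytime_job_4_consecutive_days and a.daily_work_ict_use)
    excluded = (a.shift_work or a.bmi > 30 or a.relevant_disorder
                or a.pregnant_or_breastfeeding or a.alcohol_abuse
                or a.interfering_medication or a.pacemaker)
    return included and not excluded

# Example: an office worker who meets all inclusion and no exclusion criteria.
print(is_eligible(Applicant(age=34, good_french=True,
                            daytime_job_4_consecutive_days=True,
                            daily_work_ict_use=True, shift_work=False,
                            bmi=24.5, relevant_disorder=False,
                            pregnant_or_breastfeeding=False,
                            alcohol_abuse=False, interfering_medication=False,
                            pacemaker=False)))  # True
```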
Data collection will end with the morning questionnaires of the eighth day to assess sleep-related variables of the night before. Participants will be able to contact the research staff throughout the assessment period should the need arise. At the end of the assessment period, an investigator will meet the participants at a mutually agreed upon location to pick up the material.

Measures

The lists of all measures with detailed information are given in the Supplementary Material (Additional file ).

Online entry questionnaire measures

Sociodemographic data. Ad-hoc questions will be used to assess the following sociodemographic data: age, sex, body height, body weight, mother tongue, French skills, workweek schedule, average number of actual work hours per week, frequency of ICT use for work-related communication during the workweek and the weekend, work on weekends according to contract, on-call hours according to contract, and shift work.

Health-related data. We will ask participants to report any known current disease or medical condition. Additionally, we will ask specifically about sleep apnea with one question and assess insomnia using the 7-item Insomnia Severity Index. One sample item reads as follows, “To what extent do you consider your sleep problem to interfere with your daily functioning (e.g., daytime fatigue, ability to function at work/daily chores, concentration, memory, and mood)?” (Cronbach’s α = 0.74); items are scored on a 5-point Likert scale (0 = “Not at all interfering” to 4 = “Very much interfering”). The total score ranges from 0 to 28, with higher scores indicating more severe insomnia: 0–7 = no clinically significant insomnia, 8–14 = subthreshold insomnia, 15–21 = clinical insomnia, and 22–28 = severe insomnia. We will also ask participants to list any medication intake and to indicate whether they smoke (e.g., cigarettes, e-cigarettes, pipes, smokeless tobacco), take recreational/psychotropic drugs (e.g., stimulant drugs, opioids, anabolic steroids), wear a pacemaker, are pregnant, or are lactating. Finally, we will assess alcohol abuse/dependence during the past six months using the 6-item Alcohol Abuse/Dependence Module of the Patient Health Questionnaire.

General workplace telepressure and private life telepressure measures. Although the concept of telepressure was first developed in the context of message-based technology use for work purposes, people can also experience telepressure when using message-based technology for non-work purposes. We will measure general WTP and general PTP using adapted versions of the 6-item WTP measure. The adaptation consists of adding “work-related” and “personal”, respectively, to five of the six items. One sample item reads as follows, “I can’t stop thinking about a [work-related] / [personal] message until I’ve responded”. All items are scored on a 5-point Likert scale (1 = “Strongly disagree” to 5 = “Strongly agree”). The mean score ranges from 1 to 5, with higher scores indicating higher levels of general WTP and PTP. Cronbach’s alphas of our adapted general WTP and PTP measures were 0.89 and 0.90, respectively, in a sample of 75 employees.
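The instruments above use two scoring rules: summed totals with clinical cut-offs (the Insomnia Severity Index) and simple item means (the 6-item WTP/PTP measures, and most scales described below). A minimal scoring sketch, with illustrative function names and data layout, follows.

```python
def isi_total_and_category(items: list[int]) -> tuple[int, str]:
    """Score the 7-item Insomnia Severity Index (items rated 0-4)."""
    assert len(items) == 7 and all(0 <= i <= 4 for i in items)
    total = sum(items)  # total score ranges from 0 to 28
    if total <= 7:
        category = "No clinically significant insomnia"
    elif total <= 14:
        category = "Subthreshold insomnia"
    elif total <= 21:
        category = "Clinical insomnia"
    else:
        category = "Severe insomnia"
    return total, category

def mean_score(items: list[int], low: int = 1, high: int = 5) -> float:
    """Mean-based score for Likert scales such as the 6-item WTP/PTP measures."""
    assert all(low <= i <= high for i in items)
    return sum(items) / len(items)

print(isi_total_and_category([2, 3, 1, 2, 2, 3, 2]))  # (15, 'Clinical insomnia')
print(mean_score([4, 5, 3, 4, 4, 5]))                 # 4.166...
```

The same mean_score helper, with the bounds adjusted, would apply to the other mean-scored scales described in the laboratory visit measures (e.g., workplace fear of missing out, psychological detachment, ICT demands).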
Laboratory visit measures

Complementary sociodemographic and health-related data. Additional sociodemographic data will include marital status, total number of individuals living in the household, total number of adults living in the household, total number of children under 18 years old currently living in the household, educational level, current occupation/job, managerial/supervisory role, place of work, and job tenure. We will ask women of childbearing age to indicate the average length of their menstrual cycle, their average period length, and the first day of their last period. This information will be used to ascertain that their ambulatory assessment phase is scheduled outside of their period.

Workplace fear of missing out. Workplace fear of missing out will be assessed using the 10-item workplace Fear of Missing Out scale. One sample item reads as follows, “I worry that I will not know what is happening at work” (α = 0.90–0.94); items are preceded by the stem “When I am absent or disconnected from work…”. All items are scored on a 5-point Likert scale (1 = “Strongly disagree” to 5 = “Strongly agree”). The mean score ranges from 1 to 5, with higher scores indicating stronger workplace fear of missing out.

Psychological detachment from work. We will use the 4-item psychological detachment subscale of the Recovery Experience Questionnaire to assess psychological detachment from work. One sample item reads as follows, “I don’t think about work at all” (α = 0.84–0.89); items are preceded by the stem “During time after work:”. All items are scored on a 5-point Likert scale (1 = “I do not agree at all” to 5 = “I fully agree”). The mean score ranges from 1 to 5, with higher scores indicating better psychological detachment.

ICT-related response expectations and availability. Response expectations: ICT-related response expectations of employees will be assessed with the 2-item response expectations subscale of the ICT Demand Scale. One sample item reads as follows, “I am expected to respond to e-mail messages immediately” (α = 0.78–0.86). Items are scored on a 5-point Likert scale (0 = “Never” to 4 = “Almost always”). Mean scores range from 0 to 4, with higher scores indicating higher response expectations. Availability: ICT-related availability of employees will be assessed with the 4-item availability subscale of the ICT Demand Scale. One sample item reads as follows, “I’m contacted about work-related issues outside of regular work hours” (α = 0.71–0.83). Items are scored on a 5-point Likert scale (0 = “Never” to 4 = “Almost always”). Mean scores range from 0 to 4, with higher scores indicating higher availability.

Technostress creators. Technostress creators are factors that induce stress due to the use of ICTs. According to Ragu-Nathan and colleagues, they can be grouped into five technostress dimensions: techno-overload, techno-invasion, techno-complexity, techno-insecurity, and techno-uncertainty. We will assess these five dimensions with the 21-item technostress creators scale. One sample item of techno-overload reads as follows, “I am forced by this technology to work much faster” (α = 0.90). One sample item of techno-invasion reads as follows, “I feel my personal life is being invaded by this technology” (α = 0.88). One sample item of techno-complexity reads as follows, “I do not know enough about this technology to handle my job satisfactorily” (α = 0.88).
One sample item of techno-insecurity reads as follows, “I have to constantly update my skills to avoid being replaced” (α = 0.84). One sample item of techno-uncertainty reads as follows, “There are constant changes in computer software in our organization” (α = 0.91). All items are scored on a 7-point Likert scale (1 = “Strongly disagree” to 7 = “Strongly agree”). For all dimensions, mean scores range from 1 to 7, with higher scores indicating more technostress creators.

Workaholism
We will assess workaholism using the working excessively and working compulsively subscales of the 10-item short version of the Dutch Work Addiction Scale. One sample item of the working excessively subscale reads as follows, “I find myself continuing work after my co-workers have called it quits” (α = 0.65–0.81). One sample item of the working compulsively subscale reads as follows, “I often feel that there’s something inside me that drives me to work hard” (α = 0.69–0.81). All items are scored on a 4-point scale (1 = “(Almost) Never” to 4 = “(Almost) Always”). The mean score ranges from 1 to 4, with higher scores indicating higher levels of workaholism.

Segmentation preferences and supplies
Segmentation preferences refer to the degree to which employees prefer to keep aspects of work and home separated from one another. We will measure this construct using the 4-item segmentation preferences scale. One sample item reads as follows, “I like to be able to leave work behind when I go home” (α = 0.91). Segmentation supplies refer to employees’ perception of the degree to which their organization/workplace provides freedom of work-home segmentation. We will assess this construct using the 4-item workplace segmentation supplies scale. One sample item reads as follows, “At my workplace, people are able to prevent work issues from creeping into their home life” (α = 0.94). The items of both scales are scored on a 7-point Likert scale (1 = “Strongly disagree” to 7 = “Strongly agree”). The mean score of each scale ranges from 1 to 7, with higher scores indicating stronger segmentation preferences and stronger perceived segmentation supplies, respectively.

Personality traits
The two personality traits neuroticism and conscientiousness will be assessed using eight and nine items of the Big Five Inventory, respectively. One sample item assessing neuroticism reads as follows, “Gets nervous easily” (α = 0.82). One sample item assessing conscientiousness reads as follows, “Perseveres until the task is finished” (α = 0.80). The items are preceded by the stem, “I see myself as someone who…”. All items are scored on a 5-point Likert scale (1 = “Disagree strongly” to 5 = “Agree strongly”). The mean score for each scale ranges from 1 to 5, with higher scores indicating more neuroticism and conscientiousness.

Depression, anxiety, and stress
We will assess depressive symptoms, anxiety, and stress with the 21-item Depression, Anxiety, and Stress Scale. One sample item assessing depression reads as follows, “I felt I wasn’t worth much as a person” (α = 0.91). One sample item assessing anxiety reads as follows, “I felt I was close to panic” (α = 0.84). One sample item assessing stress reads as follows, “I found it difficult to relax” (α = 0.90). All items are scored on a scale from 0 = “did not apply to me at all” to 3 = “applied to me very much or most of the time”. The total score of each subscale ranges from 0 to 21, with higher scores indicating worse depression, anxiety, and stress outcomes over the past week.
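The internal-consistency values quoted for these scales (Cronbach’s α) can be reproduced from item-level data with the standard formula α = k/(k−1) · (1 − Σ s²ᵢ / s²_total). A minimal sketch follows; the respondent data are invented purely for illustration.

```python
def cronbach_alpha(rows):
    """rows: list of per-respondent item-score lists, all of length k.
    Returns k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented responses of five participants to a 4-item scale:
data = [[3, 4, 3, 4], [2, 2, 3, 2], [5, 4, 4, 5], [1, 2, 1, 2], [4, 4, 5, 4]]
print(round(cronbach_alpha(data), 2))
```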
Trait mindfulness
Attention and awareness in daily life will be assessed using the 15-item Mindful Attention Awareness Scale. This questionnaire requires the respondent to evaluate the frequency of different everyday life experiences using a 6-point Likert scale (1 = “Almost never” to 6 = “Almost always”). One sample item reads as follows, “I rush through activities without being really attentive to them” (α = 0.84). The mean score ranges from 1 to 6, with higher scores indicating higher levels of dispositional mindfulness.

Finally, we will ask participants to fill out the general WTP measure and the general PTP measure again in order to evaluate the test–retest reliability of these two instruments.

Ambulatory assessment measures
The daily schedule of the ambulatory assessment measures is shown in Table . It consists of five sampling occasions during wake time: (1) immediately after awakening while still lying in bed, (2) 30 min after awakening, (3) 12:30 p.m. (± 30 min), (4) 5:30 p.m. (± 30 min), and (5) bedtime. Participants will be asked to complete questionnaires four times per day (30 min after awakening, 12:30 p.m., 5:30 p.m., and bedtime) with an iPad Mini 2 (Apple Inc.) using the software iDialogPad developed by G. Mutz at the University of Cologne. The full iDialogPad script is given in the Supplementary Material. The iPad’s screen will have a blue light filter to ensure that the evening surveys do not expose participants to the artificial light that can affect their sleep. Sampling times will be automatically registered on the iPad. Participants will collect their saliva on each sampling occasion for sC, sDHEA, and sAA assessment. They will also continuously wear the MotionWatch 8 (CamNtech Ltd., Cambridgeshire, England) and the Bittium Faros 180L ECG monitor (Bittium Corporation, Oulu, Finland) for actigraphic and electrocardiographic recordings, respectively.

Daily workplace telepressure and private life telepressure measures
We will assess daily WTP and daily PTP with the 6-item WTP and the 6-item PTP measures adjusted for repeated measurement during a day, respectively. The verb tense of the items is changed from present tense to past tense and a time reference is given (“Since the last assessment…”). One sample item reads as follows, “It was difficult for me to resist responding to a work-related message right away”. The scoring of these measures is the same as the scoring of the general WTP and general PTP measures.

Work-related workload
We will assess work-related workload using the 3-item workload measure of Derks and colleagues. One sample item reads as follows, “I had to work extra hard to finish things” (α = 0.91). We adapted the instruction to ensure that only work-related workload is assessed. The instruction reads as follows, “Please rate how much you agree or disagree with the following statements. Refer only to work-related activities that you have performed since the last assessment.” All items are scored on a 5-point Likert scale (1 = “Totally disagree” to 5 = “Totally agree”). The mean score ranges from 1 to 5, with higher scores indicating higher work-related workload. Additionally, we will ask participants to report (1) the total time spent performing work activities and (2) the time (in percentage) spent working at the workplace, at home, or other places since the last assessment (e.g., 50% at the workplace, 40% at home, and 10% in other places).
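As an illustration of the five-occasion schedule described above, the sketch below derives a participant’s concrete sampling windows from a reported wake time. The ±30-min tolerance for the two fixed daytime occasions follows the protocol; the function name and the representation of bedtime as an open, self-selected occasion are illustrative assumptions.

```python
from datetime import datetime, timedelta

def sampling_windows(wake: datetime):
    """Return the five daily sampling occasions as (label, earliest, latest).
    Occasions 1-2 are anchored to wake time; 3-4 are fixed clock times
    with a +/- 30 min tolerance; bedtime (5) has no fixed clock time."""
    day = wake.replace(hour=0, minute=0, second=0, microsecond=0)
    fixed = lambda h, m: day + timedelta(hours=h, minutes=m)
    tol = timedelta(minutes=30)
    plus30 = wake + timedelta(minutes=30)
    return [
        ("1: awakening (in bed)", wake, wake),
        ("2: wake + 30 min", plus30, plus30),
        ("3: 12:30 p.m.", fixed(12, 30) - tol, fixed(12, 30) + tol),
        ("4: 5:30 p.m.", fixed(17, 30) - tol, fixed(17, 30) + tol),
        ("5: bedtime", None, None),  # self-selected, no window
    ]

for label, lo, hi in sampling_windows(datetime(2024, 3, 4, 6, 45)):
    print(label, lo, hi)
```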
Private life workload
We adapted the 3-item work-related workload measure of Derks and colleagues to assess private life workload. The three items remain the same as in Derks and colleagues, but the instruction is altered to ensure that only private life workload is assessed. The instruction reads as follows, “Please rate how much you agree or disagree with the following statements. Refer only to activities at home/in your private life that you have performed since the last assessment.” The scoring of the scale is the same as the scoring of the work-related workload measure. Higher scores indicate higher private life workload.

Work-related perseverative cognition
We will use the 3-item work-related worry/rumination measure to assess work-related perseverative cognition. One sample item reads as follows, “My thoughts kept returning to a stressful situation at work” (α = 0.74). Participants will be asked to rate the extent to which they experienced such thoughts since the last assessment. All items are scored on a 5-point Likert scale (1 = “Not at all” to 5 = “A great deal”). The mean score ranges from 1 to 5, with higher scores indicating more work-related worry and rumination. Additionally, participants will estimate the total duration of their work-related worry/rumination since the last assessment.

Private life perseverative cognition
We adapted the 3-item work-related worry/rumination measure of Flaxman and colleagues to assess private life perseverative cognition. One adapted item reads as follows, “My thoughts kept returning to a stressful situation in my private life”. The scoring remains the same as in the original scale. Additionally, participants will estimate the total duration of their private life perseverative cognition since the last assessment.

Number of stressful events
Participants will be asked to report the number of stressful events they experienced since the last assessment. They will be provided with the following definition of stressful events: “Stressful events are minor and major events that have made you feel tense, irritated, angry, sad, disappointed, or negative in any other way”.

Sleep
We will use an 11-item sleep questionnaire to assess different daily aspects of sleep. Qualitative aspects of sleep will be measured with five items inspired by the Karolinska Sleep Index, the Spiegel Sleep Questionnaire, and the St. Mary’s Hospital Sleep Questionnaire. The five items cover central qualitative aspects of sleep (overall sleep quality, restless sleep, difficulty falling asleep, difficulty maintaining sleep, and premature awakening). All items are scored on a 5-point Likert scale. The total score ranges from 5 to 25, with higher scores indicating better subjective sleep quality. Quantitative aspects of sleep will be measured with five items from the St. Mary’s Hospital Sleep Questionnaire. Quantitative aspects of sleep include bedtime, sleep onset latency, waking time, getting-out-of-bed time, and sleep duration. Additionally, the mode of awakening will be assessed using the following question, “This morning, did you wake up spontaneously/naturally?”. Response options are “Yes” and “No”.

Biobehavioral measures
Following previous methods, we will ask the participants to report the number of caffeinated beverages, alcoholic beverages, tobacco products, and e-cigarettes, as well as any drugs and medication consumed since the last assessment.
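Because the quantitative sleep items above (bedtime, sleep onset latency, waking time) are reported as clock times that can cross midnight, deriving sleep duration takes a little care. A minimal sketch with invented example values; the protocol itself only collects the raw items, so the derivation shown here is one possible convention.

```python
from datetime import datetime, timedelta

def sleep_duration(bedtime: str, sol_min: int, waketime: str) -> timedelta:
    """Estimate sleep duration from reported bedtime (HH:MM), sleep onset
    latency in minutes, and waking time (HH:MM), handling midnight crossing."""
    fmt = "%H:%M"
    bed = datetime.strptime(bedtime, fmt)
    wake = datetime.strptime(waketime, fmt)
    if wake <= bed:                # participant woke up the next day
        wake += timedelta(days=1)
    onset = bed + timedelta(minutes=sol_min)
    return wake - onset

print(sleep_duration("23:15", 20, "06:45"))  # 7:10:00
```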
Psychosomatic complaints
The questionnaire assessing psychosomatic complaints consists of two subscales: one assessing somatic complaints and the other cognitive weariness. Somatic complaints will be measured with seven items from the Somatic Symptom Scale-8. One sample item reads as follows, “Back pain”. Cognitive weariness will be assessed using the 5-item cognitive weariness subscale of the Shirom-Melamed Burnout Measure. One sample item reads as follows, “I have difficulty concentrating” (α = 0.93). All items are preceded by the following question: “At this moment, how much are you bothered by any of the following problems?” and scored on a 5-point Likert scale (0 = “Not at all” to 4 = “Very much”). The total score ranges from 0 to 48, with higher scores indicating more severe psychosomatic complaints.

Mood
Following the conceptualization of Matthews and colleagues and Schimmack and Grob, the three basic dimensions of mood (valence, calmness, and energetic arousal) will be measured using an 8-item mood scale, which is a modified version of the 6-item mood scale developed by Wilhelm and Schoebi. Two items have been added to the original 6-item scale following recommendations of P. Wilhelm (personal communication, 17.10.2022). The initial instruction reads as follows, “At this moment I am/feel:”. One sample item of the 3-item valence subscale is “1. Unwell - 8. Well”, one sample item of the 3-item calmness subscale is “1. Agitated - 8. Calm”, and one sample item of the 2-item energetic arousal subscale is “1. Full of energy - 8. Without energy”. All items are scored on an eight-point bipolar scale (1 = “Extremely” to 4 = “Rather”; 5 = “Rather” to 8 = “Extremely”). The mean score of each dimension ranges from 1 to 8. Scores of four items are reversed in order to ensure that higher scores indicate better mood (i.e., higher positive valence, higher calmness, and higher energetic arousal).

Saliva sampling
In order for participants to collect their saliva five times per day (i.e., immediately after awakening while still lying in bed, 30 min after awakening, 12:30 p.m. (± 30 min), 5:30 p.m. (± 30 min), and bedtime) in a hygienic and convenient way, we will ask them to use the set of SaliCaps (IBL International, Hamburg, Germany). SaliCaps are low-bind polypropylene 2 mL cryovials that allow the collection of saliva using a polypropylene straw. Participants will be asked to follow the saliva sampling instruction displayed on the iPads. The instruction reads as follows, “First, swallow the saliva currently in your mouth. Now, hold your saliva in your mouth for two minutes. You can no longer swallow and must then transfer the accumulated saliva into the tube. Press OK to start the timer. [2 min later] Now transfer the saliva into the tube using the straw. Make sure you have sealed the tube on all sides.” Then, they will be asked whether they were able to follow these instructions: no drinking (other than water, at the latest 10 min before saliva sampling), eating, smoking, or engaging in vigorous physical activity in the last 30 min, and no tooth brushing in the last 60 min. The obtained saliva samples will be stored during the assessment period in a provided plastic freezer bag in the participants’ refrigerators and then kept in a freezer at −30 °C in our laboratory before being shipped to the Biochemical Laboratory of the Department of Clinical Psychology at the University of Vienna, headed by U.M. Nater.
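To illustrate the reverse-scoring rule for the mood scale above: on the 8-point bipolar response format, a reversed item is recoded as 9 minus the raw score before averaging. Which four items are reversed depends on item polarity; the item indices and subscale assignment below are purely illustrative.

```python
def score_mood(items, reversed_idx):
    """items: dict mapping item index (0-7) to raw score (1-8).
    Reversed items are recoded as 9 - raw so that higher always means
    better mood. Subscale assignment here is illustrative only:
    valence = items 0-2, calmness = items 3-5, energetic arousal = 6-7."""
    rec = {i: (9 - s if i in reversed_idx else s) for i, s in items.items()}
    mean = lambda idx: sum(rec[i] for i in idx) / len(idx)
    return {
        "valence": mean([0, 1, 2]),
        "calmness": mean([3, 4, 5]),
        "energetic_arousal": mean([6, 7]),
    }

raw = {0: 7, 1: 2, 2: 6, 3: 5, 4: 3, 5: 6, 6: 2, 7: 7}
print(score_mood(raw, reversed_idx={1, 4, 6, 7}))
```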
Free sC concentrations will be measured using a Cortisol Saliva Luminescence Immunoassay (IBL-Tecan, Hamburg, Germany). DHEA concentrations will be measured using a DHEA Saliva Enzyme-Linked Immunosorbent Assay (IBL-Tecan, Hamburg, Germany). sAA activity will be measured using reagents provided by DiaSys Diagnostic Systems (Holzheim, Germany).

Electrocardiographic measures
The Bittium Faros 180L (Bittium Corporation, Oulu, Finland) is a lightweight (18 g), small, unobtrusive, and waterproof ECG device, equipped with a long-lasting battery allowing continuous recording for up to eight days. It is attached to the chest using three adhesive electrodes, a single ECG patch electrode, or a chest belt. The ECG will be recorded at a sampling rate of 250 Hz together with an accelerometer sampled at 25 Hz. Data will be analyzed with the Bittium Cardiac Navigator software (Bittium Corporation, Oulu, Finland) to obtain indices of HRV. The root mean square of successive differences (RMSSD) will be the main HRV index. RMSSD reflects cardiac vagal tone and is relatively free of respiratory influences.

Actigraphic measures
We will use the MotionWatch 8 (CamNtech Ltd., Cambridgeshire, England) to record participants’ wake and sleep periods (rest/activity cycles). The MotionWatch 8 is a lightweight (~10 g), small, and waterproof wristwatch-like activity-monitoring device. The watch is equipped with a light sensor and a very long-lasting battery allowing continuous recording for up to 91 days. Participants will wear the MotionWatch 8 on the wrist of the non-dominant arm. We will also ask them to event-mark two time points that are essential for the computation of sleep quality and quantity indices: when they (i) get out of bed in the morning and (ii) are ready to sleep at night. Compliance with actigraphy event markers is generally moderate to high. Movement of the wrist will be recorded at a sampling rate of 50 Hz using 30-s epochs. The actigraphic recordings will be analyzed with the MotionWare software (CamNtech Ltd., Cambridgeshire, England) to obtain indices of sleep quality (e.g., fragmentation index) and sleep quantity (e.g., total sleep time).

Data-analytic plan
The collected data have a multilevel structure (i.e., repeated measurements nested within individuals). We will test our hypotheses with multilevel mixed-effects mediation analyses following established principles and methods. The statistical software package used is Mplus (see https://www.statmodel.com/). We expect a total of 840 usable data points (120 participants × 7 days) for measures assessed once a day (sleep measures), 2520 data points for measures assessed three times a day (e.g., WTP), 3360 data points for measures assessed four times a day (e.g., mood), and 4200 data points for measures assessed five times a day (salivary parameters). Where appropriate, skewed variables will be transformed. We will use an alpha level of 0.05 for all tests. We will conduct sensitivity analyses by adding control variables to the models.

Sample size calculation
The sample size calculation was performed with the support of a statistician. The power computations are based on a model in which the effect of WTP on each wellbeing and health-related outcome (Y) follows two paths, direct and indirect. In the indirect path, WTP acts on work-related workload (WL) and on work-related perseverative cognition (PC), which both act on Y. The computations rely on repeated simulations of this model. The model and its assumptions are given in the Supplementary Material.
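For reference, the main HRV index named above, RMSSD, is computed from successive interbeat (RR) intervals as the square root of the mean squared successive difference. A minimal sketch on invented RR intervals in milliseconds; the actual indices will come from the Cardiac Navigator output, so this is purely illustrative.

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([812, 790, 845, 830, 801, 818]), 1))  # ~31.2
```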
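The protocol specifies Mplus for the multilevel mediation models. As a rough illustration of the 1-1-1 mediation logic (predictor, mediator, and outcome all measured at the day level, nested within persons), the sketch below fits the two constituent mixed models in Python with statsmodels and combines the fixed-effect paths into an indirect effect. All variable and column names are invented, and this simplified two-equation approach ignores random-slope and covariance terms that the planned Mplus models would handle; it is not the authors' analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    """1-1-1 mediation sketch: WTP -> workload (a path) and
    workload -> outcome controlling for WTP (b path), each with a
    random intercept per participant. Indirect effect approximated
    as a * b from the fixed effects."""
    m1 = smf.mixedlm("workload ~ wtp", df, groups=df["pid"]).fit()
    m2 = smf.mixedlm("outcome ~ workload + wtp", df, groups=df["pid"]).fit()
    a = m1.params["wtp"]
    b = m2.params["workload"]
    return a * b

# df would hold one row per person-day with columns: pid, wtp, workload, outcome.
```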
For women of childbearing age, the ambulatory assessment phase will start following the end of their period. Data collection will end with the morning questionnaires of the eighth day to assess sleep-related variables of the night before. Participants will be able to contact the research staff throughout the assessment period should the need arise. At the end of the assessment period, an investigator will meet the participants at a mutually agreed upon location to pick up the material. Participants who contact the research team will receive an email including the study information sheet and the internet link to the entry questionnaire. They will be invited to carefully read the information sheet before proceeding with the entry questionnaire. In accordance with article 9 of the Swiss Human Research Ordinance , participants will have to tick a box to accept that their data will be saved and treated in order to establish their eligibility for this study. Eligible participants will be invited to our laboratory for an initial meeting. After being explained about all the procedures to be undertaken and signing the consent form, they will be asked to fill out baseline questionnaires. These questionnaires will allow us to better characterize the sample and to control statistically for potential confounding variables. Afterwards, participants will be familiarized with the questionnaires, instruments, and procedure of the ambulatory assessment. At the end of the meeting, they will leave with a briefcase containing the material for the ambulatory assessment. Participants will be monitored during seven days. The assessment will be scheduled during a workweek that is expected to be typical for each employee. The participation week will be scheduled so that the weekend days are assessed consecutively. Furthermore, the participation week will not be preceded or followed by vacations given the effects that these can have on employees . For women of childbearing age, the ambulatory assessment phase will start following the end of their period. Data collection will end with the morning questionnaires of the eighth day to assess sleep-related variables of the night before. Participants will be able to contact the research staff throughout the assessment period should the need arise. At the end of the assessment period, an investigator will meet the participants at a mutually agreed upon location to pick up the material. The lists of all measures with detailed information are given in the Supplementary Material (Additional file ). Online entry questionnaire measures Sociodemographic data Ad-hoc questions will be used to assess the following sociodemographic data: age, sex, body height, body weight, mother tongue, French skills, workweek schedule, average number of actual work hours per week, frequency of ICT use for work-related communication during the workweek and the weekend, work on weekends according to contract, on-call hours according to contract, and shift work. Health-related data We will ask participants to report any known current disease or medical condition. Additionally, we will ask specifically for sleep apnea with one question and assess insomnia using the 7-item Insomnia Severity Index . 
One sample item reads as follows, “To what extent do you consider your sleep problem to interfere with your daily functioning (e.g., daytime fatigue, ability to function at work/daily chores, concentration, memory, and mood)?” (Cronbach’s α = 0.74; ) and is scored on a 5-point Likert scale (0 = “Not at all interfering” to 4 = “Very much interfering”). The total score ranges from 0 to 28, with higher scores indicating more severe insomnia: 0–7 = No clinically significant insomnia, 8–14 = Subthreshold insomnia, 15–21 = Clinical insomnia, and 22–28 = Severe insomnia). We will also ask participants to list any medication intake and to indicate if they smoke (e.g., cigarettes, e-cigarettes, pipes, smokeless tobacco), take recreational/psychotropic drugs (e.g., stimulant drugs, opioids, anabolic steroids), wear a pacemaker, are pregnant, or are lactating. Finally, we will assess alcohol abuse/dependence during the past six months using the 6-item Alcohol Abuse/Dependence Module of the Patient Health Questionnaire . General workplace telepressure and private life telepressure measures Although the concept of telepressure has been first developed in the context of message-based technology use for work purposes, people can experience telepressure also when using message-based technology for non-work purposes. We will measure general WTP and general PTP using adapted versions of the 6-item WTP measure . The adaptation consists in adding “work-related” and “personal”, respectively, in five of the six items. One sample item reads as follows, “I can’t stop thinking about a [work-related] / [personal] message until I’ve responded”. All items are scored on a 5-point Likert scale (1 = “Strongly disagree” to 5 = “Strongly agree”). The mean score ranges from 1 to 5, with higher scores indicating higher levels of general WTP and PTP. Cronbach’s alphas of our adapted general WTP and PTP measures were 0.89 and 0.90, respectively, in a sample of 75 employees. Laboratory visit measures Complementary sociodemographic and health-related data Additional sociodemographic data will include marital status, total number of individuals living in the household, total number of adults living in the household, total number of children under 18 years old currently living in the household, educational level, current occupation/job, managerial/supervisory role, place of work, and job tenure. We will ask women of childbearing age to indicate the average length of their menstrual cycle, their average period length, and the first day of their last period. This information will be used to ascertain that their ambulatory assessment phase is scheduled outside of their period. Workplace fear of missing out Workplace fear of missing out will be assessed using the 10-item workplace Fear of Missing Out scale . One sample item reads as follows, “I worry that I will not know what is happening at work” (α = 0.90–0.94; ) and is preceded by the stem “When I am absent or disconnected from work…”. All items are scored on a 5-point Likert scale (1 = “Strongly disagree” to 5 = “Strongly agree”). The mean score ranges from 1 to 5, with higher scores indicating stronger workplace fear of missing out. Psychological detachment from work We will use the 4-item psychological detachment subscale of the recovery experience questionnaire to assess psychological detachment from work. One sample item reads as follows, “I don’t think about work at all” (α = 0.84–0.89; ) and is preceded by the stem “During time after work:”. 
All items are scored on a 5-point Likert scale (1 = “I do not agree at all” to 5 = “I fully agree”). The mean score ranges from 1 to 5, with higher scores indicating better psychological detachment. ICT-related response expectations and availability Response expectations. ICT-related response expectations of employees will be assessed with the 2-item response expectations subscale of the ICT Demand Scale . One sample item reads as follows, “I am expected to respond to e-mail messages immediately” (α = 0.78–0.86; ). Items are scored on a 5-point Likert scale (0 = “Never” to 4 = “Almost always”). Mean scores range from 0 to 4, with higher scores indicating higher response expectations. Availability. ICT-related availability of employees will be assessed with the 4-item availability subscale of the ICT Demand Scale . One sample item reads as follows, “I’m contacted about work-related issues outside of regular work hours” (α = 0.71–0.83; ). Items are scored on a 5-point Likert scale (0 = “Never” to 4 = “Almost always”). Mean scores range from 0 to 4, with higher scores indicating higher availability. Technostress creators Technostress creators are factors that induce stress due to the use of ICTs. According to Ragu-Nathan and colleagues , they can be grouped into five technostress dimensions, i.e., techno-overload, techno-invasion, techno-complexity, techno-insecurity, and techno-uncertainty. We will assess these five dimensions with the 21-item technostress creators scale . One sample item of techno-overload reads as follows, “I am forced by this technology to work much faster” (α = 0.90). One sample item of techno-invasion reads as follows, “I feel my personal life is being invaded by this technology” (α = 0.88). One sample item of techno-complexity reads as follows, “I do not know enough about this technology to handle my job satisfactorily” (α = 0.88). One sample item of techno-insecurity reads as follows, “I have to constantly update my skills to avoid being replaced” (α = 0.84). One sample item of techno-uncertainty reads as follows, “There are constant changes in computer software in our organization” (α = 0.91). All items are scored on a 7-point Likert scale (1 = “Strongly disagree” to 7 = “Strongly agree”). For all dimensions, mean scores range from 1 to 7, with higher scores indicating more technostress creators. Workaholism We will assess workaholism using the two working excessively and working compulsively subscales of the 10-item short version of the Dutch Work Addiction Scale . One sample item of the working excessively subscale reads as follows, “I find myself continuing work after my co-workers have called it quits” (α = 0.65–0.81; ). One sample item of the working compulsively subscale reads as follows, “I often feel that there’s something inside me that drives me to work hard” (α = 0.69–0.81; ). All items are scored on a 4-point scale (1 = “(Almost) Never” to 4 = “(Almost) Always”). The mean score ranges from 1 to 4, with higher scores indicating higher levels of workaholism. Segmentation preferences and supplies Segmentation preferences refer to the degree to which employees prefer to keep aspects of work and home separated from one another . We will measure this construct using the 4-item segmentation preferences scale . One sample item reads as follows, “I like to be able to leave work behind when I go home” (α = 0.91, ). 
Segmentation supplies refer to the employees’ perception of the degree to which their organization/workplace provides freedom of work-home segmentation . We will assess this construct using the 4-item workplace segmentation supplies scale . One sample item reads as follows, “At my workplace, people are able to prevent work issues from creeping into their home life” (α = 0.94, ). The items of both scales are scored on a 7-point Likert scale (1 = “Strongly disagree” to 7 = “Strongly agree”). The mean score ranges from 1 to 7, with higher scores indicating stronger segmentation preferences and norms in the organization. Personality traits The two personality traits neuroticism and conscientiousness will be assessed using respectively eight and nine items of the Big Five Inventory . One sample item assessing neuroticism reads as follows, “Gets nervous easily” (α = 0.82, ). One sample item assessing conscientiousness reads as follows, “Perseveres until the task is finished” (α = 0.80, ). The items are preceded by the stem, “I see myself as someone who…”. All items are scored on a 5-point Likert scale (1 = “Disagree strongly” to 5 = “Agree strongly”). The mean score for each scale ranges from 1 to 5, with higher scores indicating more neuroticism and conscientiousness. Depression, anxiety, and stress We will assess depressive symptoms, anxiety, and stress with the 21-item Depression, Anxiety, and Stress Scale . One sample item assessing depression reads as follows, “I felt I wasn’t worth much as a person” (α = 0.91, ). One sample item assessing anxiety reads as follows, “I felt I was close to panic” (α = 0.84, ). One sample item assessing stress reads as follows, “I found it difficult to relax” (α = 0.90, ). All items are scored on a scale from 0 = “did not apply to me at all” to 3 = “applied to me very much or most of the time”. The total score of each subscale ranges from 0 to 21, with higher scores indicating worse depression, anxiety, and stress outcomes over the past week. Trait mindfulness Attention and awareness in daily life will be assessed using the 15-item Mindful Attention Awareness Scale . This questionnaire requires the respondent to evaluate the frequency of different everyday life experiences using a 6-point Likert scale (1 = “Almost never” to 6 = “Almost always”). One sample item reads as follows, “I rush through activities without being really attentive to them” (α = 0.84, ). The mean score ranges from 1 to 6, with higher scores indicating higher levels of dispositional mindfulness. Finally, we will ask participants to fill out the general WTP measure and the general PTP measure again in order to evaluate the test–retest reliability of these two instruments. Ambulatory assessment measures The daily schedule of the ambulatory assessment measures is shown in Table . It consists of five sampling occasions during wake time: (1) immediately after awakening while still lying in bed, (2) 30 min after awakening, (3) 12:30 p.m. (± 30 min), (4) 5:30 p.m. (± 30 min), and (5) bedtime. Participants will be asked to complete questionnaires four times per day (30 min after awakening, 12:30 p.m., 5:30 p.m., and bedtime) with an iPad Mini 2 (Apple Inc.) using the software iDialogPad developed by G. Mutz at the University of Cologne. The full iDialogPad script is given in the Supplementary Material. The iPad’s screen will have a blue light filter to ensure that the evening surveys do not expose participants to the artificial light that can affect their sleep. 
Sampling times will be automatically registered on the iPad. Participants will collect their saliva on each sampling occasion for sC, sDHEA, and sAA assessment. They will also continuously wear the MotionWatch 8 (CamNtech Ltd., Cambridgeshire, England) and Bittium Faros 180L ECG monitor (Bittium Corporation, Oulu, Finland) for actigraphic and electrocardiographic recordings, respectively. Daily workplace telepressure and private life telepressure measures We will assess daily WTP and daily PTP with the 6-item WTP and the 6-item PTP measures adjusted for repeated measurement during a day, respectively. The verb tense of the items is changed from present tense to past tense and a time reference is given (“Since the last assessment…”). One sample item reads as follows, “It was difficult for me to resist responding to a work-related message right away”. The scoring of these measures is the same as the scoring of the general WTP and general PTP measures. Work-related workload We will assess work-related workload using the 3-item workload measure of Derks and colleagues . One sample item reads as follows, “I had to work extra hard to finish things” (α = 0.91, ). We adapted the instruction to ensure that only work-related workload is assessed. The instruction reads as follows, “Please rate how much you agree or disagree with the following statements. Refer only to work-related activities that you have performed since the last assessment.” All items are scored on a 5-point Likert scale (1 = “Totally disagree” to 5 = “Totally agree”). The mean score ranges from 1 to 5, with higher scores indicating higher work-related workload. Additionally, we will ask participants to report (1) the total time spent performing work activities and (2) the time (in percentage) spent working at the workplace, at home, or other places since the last assessment (e.g., 50% at the workplace, 40% at home, and 10% in other places). Private life workload We adapted the 3-item work-related workload measure of Derks and colleagues to assess private life workload. The three items remain the same as in Derks and colleagues but the instruction is altered to ensure that only private life workload is assessed. The instruction reads as follows, “Please rate how much you agree or disagree with the following statements. Refer only to activities at home/in your private life that you have performed since the last assessment.” The scoring of the scale is the same as the scoring of the work-related workload measure. Higher scores indicate higher private life workload. Work-related perseverative cognition We will use the 3-item work-related worry/rumination measure to assess work-related perseverative cognition. One sample item reads as follows, “My thoughts kept returning to a stressful situation at work” (α = 0.74, ). Participants will be asked to rate the extent to which they experienced such thoughts since the last assessment. All items are scored on a 5-point Likert scale (1 = “Not at all” to 5 = “A great deal”). The mean score ranges from 1 to 5, with higher scores indicating more work-related worry and rumination. Additionally, participants will estimate the total duration of their work-related worry/rumination since the last assessment. Private life perseverative cognition We adapted the 3-item work-related worry/rumination measure of Flaxman and colleagues to assess private life perseverative cognition. One adapted item reads as follows, “My thoughts kept returning to a stressful situation in my private life ” . 
The scoring remains the same as in the original scale. Additionally, participants will estimate the total duration of their private life perseverative cognition since the last assessment. Number of stressful events Participants will be asked to report the number of stressful events they experienced since the last assessment. They will be provided with the following definition of stressful events: “Stressful events are minor and major events that have made you feel tense, irritated, angry, sad, disappointed, or negative in any other way” . Sleep We will use an 11-item sleep questionnaire to assess different daily aspects of sleep. Qualitative aspects of sleep will be measured with five items inspired by the Karolinska Sleep Index , the Spiegel Sleep Questionnaire , and the St. Mary’s Hospital Sleep Questionnaire . The five items cover central qualitative aspects of sleep (overall sleep quality, restless sleep, difficulty falling asleep, difficulty maintaining sleep, and premature awakening). All items are scored on a 5-point Likert scale. The total score ranges from 5 to 25, with higher scores indicating better subjective sleep quality. Quantitative aspects of sleep will be measured with five items from the St. Mary’s Hospital Sleep Questionnaire. Quantitative aspects of sleep include bedtime, sleep onset latency, waking time, getting-out-of-bed time, and sleep duration. Additionally, the mode of awakening will be assessed using the following question, “This morning, did you wake up spontaneously/naturally?”. Response options are “Yes” and “No”. Biobehavioral measures Following previous methods , we will ask the participants to report the number of caffeinated beverages, alcoholic beverages, tobacco products, e-cigarettes as well as any drugs and medication consumed since the last assessment. Psychosomatic complaints The questionnaire assessing psychosomatic complaints consists of two subscales: one assessing somatic complaints and the other cognitive weariness. Somatic complaints will be measured with seven items from the Somatic Symptom Scale-8 . One sample item reads as follows, “Back pain”. Cognitive weariness will be assessed using the 5-item cognitive weariness subscale of the Shirom-Melamed Burnout Measure . One sample item reads as follows, “I have difficulty concentrating” (α = 0.93, ). All items are preceded by the following question: “At this moment, how much are you bothered by any of the following problems?” and scored on a 5-point Likert scale (0 = “Not at all” to 4 = “Very much”). The total score ranges from 0 to 48, with higher scores indicating more severe psychosomatic complaints. Mood Following the conceptualization of Matthews and colleagues and Schimmack and Grob , the three basic dimensions of mood, valence, calmness, and energetic arousal will be measured using an 8-item mood scale, which is a modified version of the 6-item mood scale developed by Wilhelm and Schoebi . Two items have been added to the original 6-item scale following recommendations of P. Wilhelm (personal communication, 17.10.2022). The initial instruction reads as follows, “At this moment I am/feel:”. One sample item of the 3-item valence subscale is “1. Unwell - 8. Well”, one sample item of the 3-item calmness subscale is “1. Agitated - 8. Calm”, and one sample item of the 2-item energetic arousal subscale is “1. Full of energy - 8. Without energy”. All items are scored on an eight-point bipolar scale (1 = “Extremely” to 4 = “Rather”; 5 = “Rather” to 8 = “Extremely”). 
The mean score of each dimension ranges from 1 to 8. Scores of four items are reversed in order to ensure that higher scores indicate better mood (i.e., higher positive valence, higher calmness, and higher energetic arousal). Saliva sampling In order for participants to collect their saliva five times per day (i.e., immediately after awakening while still lying in bed, 30 min after awakening, 12:30 p.m. (± 30 min), 5:30 p.m. (± 30 min), and bedtime) in a hygienic and convenient way, we will ask them to use the set of SaliCaps (IBL International, Hamburg, Germany). SaliCaps are low-bind polypropylene 2 mL cryovials that allow the collection of saliva using a polypropylene straw. Participants will be asked to follow the saliva sampling instruction displayed on the iPads. The instruction reads as follows, “First, swallow the saliva currently in your mouth. Now, hold your saliva in your mouth for two minutes. You can no longer swallow and must then transfer the accumulated saliva into the tube. Press OK to start the timer. [2 min later] Now transfer the saliva into the tube using the straw. Make sure you have sealed the tube on all sides.”. Then, they will be asked whether they were able to follow the following instructions: no drinking (other than water − at the latest 10 min before saliva sampling), eating, smoking, or engaging in vigorous physical activity in the last 30 min, and no tooth brushing in the last 60 min. The obtained saliva samples will be stored during the assessment period in a provided plastic freezer bag in the participants’ refrigerators and then kept in a freezer at − 30 °C in our laboratory before being shipped to the Biochemical Laboratory of the Department of Clinical Psychology, at the University of Vienna headed by U.M. Nater. Free sC concentrations will be measured using a Cortisol Saliva Luminescence Immunoassay (IBL-Tecan, Hamburg, Germany). DHEA concentrations will be measured using a DHEA Saliva Enzyme-Linked Immunosorbent Assay (IBL-Tecan, Hamburg, Germany). SAA activity will be measured using reagents provided by DiaSys Diagnostic Systems (Holzheim, Germany). Electrocardiographic measures The Bittium Faros 180L (Bittium Corporation, Oulu, Finland) is a lightweight (18 g), small, unobtrusive, and waterproof ECG device, equipped with a long-lasting battery allowing continuous recording for up to eight days. It is attached to the chest using three adhesive electrodes, a single ECG patch electrode, or a chest belt. The ECG will be recorded at a sampling rate of 250 Hz together with an accelerometer sampled at 25 Hz. Data will be analyzed with the Bittium Cardiac Navigator software (Bittium Corporation, Oulu, Finland) to obtain indices of HRV. The root mean square of successive differences (RMSSD) will be the main HRV index. RMSSD reflects cardiac vagal tone and is relatively free of respiratory influences . Actigraphic measures We will use the MotionWatch 8 (CamNtech Ltd., Cambridgeshire, England) to record participants’ wake and sleep periods (rest/activity cycles). The MotionWatch 8 is a lightweight (~ 10 g), small, and waterproof wristwatch-like activity-monitoring device. The watch is equipped with a light sensor and a very long-lasting battery allowing a continuous recording for up to 91 days. Participants will wear the MotionWatch 8 on the wrist of the non-dominant arm. 
We will also ask them to event-mark two time points that are essential for the computation of sleep quality and quantity indices: when they (i) get out of bed in the morning and (ii) are ready to sleep at night. Compliance to actigraphy event markers is generally moderate to high . Movement of the wrist will be recorded at a sampling rate of 50 Hz using 30-s epochs. The actigraphic recordings will be analyzed with the MotionWare software (CamNtech Ltd., Cambridgeshire, England) to obtain indices of sleep quality (e.g., fragmentation index) and sleep quantity (e.g., total sleep time). Sociodemographic data Ad-hoc questions will be used to assess the following sociodemographic data: age, sex, body height, body weight, mother tongue, French skills, workweek schedule, average number of actual work hours per week, frequency of ICT use for work-related communication during the workweek and the weekend, work on weekends according to contract, on-call hours according to contract, and shift work. Health-related data We will ask participants to report any known current disease or medical condition. Additionally, we will ask specifically for sleep apnea with one question and assess insomnia using the 7-item Insomnia Severity Index . One sample item reads as follows, “To what extent do you consider your sleep problem to interfere with your daily functioning (e.g., daytime fatigue, ability to function at work/daily chores, concentration, memory, and mood)?” (Cronbach’s α = 0.74; ) and is scored on a 5-point Likert scale (0 = “Not at all interfering” to 4 = “Very much interfering”). The total score ranges from 0 to 28, with higher scores indicating more severe insomnia: 0–7 = No clinically significant insomnia, 8–14 = Subthreshold insomnia, 15–21 = Clinical insomnia, and 22–28 = Severe insomnia). We will also ask participants to list any medication intake and to indicate if they smoke (e.g., cigarettes, e-cigarettes, pipes, smokeless tobacco), take recreational/psychotropic drugs (e.g., stimulant drugs, opioids, anabolic steroids), wear a pacemaker, are pregnant, or are lactating. Finally, we will assess alcohol abuse/dependence during the past six months using the 6-item Alcohol Abuse/Dependence Module of the Patient Health Questionnaire . General workplace telepressure and private life telepressure measures Although the concept of telepressure has been first developed in the context of message-based technology use for work purposes, people can experience telepressure also when using message-based technology for non-work purposes. We will measure general WTP and general PTP using adapted versions of the 6-item WTP measure . The adaptation consists in adding “work-related” and “personal”, respectively, in five of the six items. One sample item reads as follows, “I can’t stop thinking about a [work-related] / [personal] message until I’ve responded”. All items are scored on a 5-point Likert scale (1 = “Strongly disagree” to 5 = “Strongly agree”). The mean score ranges from 1 to 5, with higher scores indicating higher levels of general WTP and PTP. Cronbach’s alphas of our adapted general WTP and PTP measures were 0.89 and 0.90, respectively, in a sample of 75 employees. 
Ad-hoc questions will be used to assess the following sociodemographic data: age, sex, body height, body weight, mother tongue, French skills, workweek schedule, average number of actual work hours per week, frequency of ICT use for work-related communication during the workweek and the weekend, work on weekends according to contract, on-call hours according to contract, and shift work. We will ask participants to report any known current disease or medical condition. Additionally, we will ask specifically for sleep apnea with one question and assess insomnia using the 7-item Insomnia Severity Index . One sample item reads as follows, “To what extent do you consider your sleep problem to interfere with your daily functioning (e.g., daytime fatigue, ability to function at work/daily chores, concentration, memory, and mood)?” (Cronbach’s α = 0.74; ) and is scored on a 5-point Likert scale (0 = “Not at all interfering” to 4 = “Very much interfering”). The total score ranges from 0 to 28, with higher scores indicating more severe insomnia: 0–7 = No clinically significant insomnia, 8–14 = Subthreshold insomnia, 15–21 = Clinical insomnia, and 22–28 = Severe insomnia). We will also ask participants to list any medication intake and to indicate if they smoke (e.g., cigarettes, e-cigarettes, pipes, smokeless tobacco), take recreational/psychotropic drugs (e.g., stimulant drugs, opioids, anabolic steroids), wear a pacemaker, are pregnant, or are lactating. Finally, we will assess alcohol abuse/dependence during the past six months using the 6-item Alcohol Abuse/Dependence Module of the Patient Health Questionnaire . Although the concept of telepressure has been first developed in the context of message-based technology use for work purposes, people can experience telepressure also when using message-based technology for non-work purposes. We will measure general WTP and general PTP using adapted versions of the 6-item WTP measure . The adaptation consists in adding “work-related” and “personal”, respectively, in five of the six items. One sample item reads as follows, “I can’t stop thinking about a [work-related] / [personal] message until I’ve responded”. All items are scored on a 5-point Likert scale (1 = “Strongly disagree” to 5 = “Strongly agree”). The mean score ranges from 1 to 5, with higher scores indicating higher levels of general WTP and PTP. Cronbach’s alphas of our adapted general WTP and PTP measures were 0.89 and 0.90, respectively, in a sample of 75 employees. Complementary sociodemographic and health-related data Additional sociodemographic data will include marital status, total number of individuals living in the household, total number of adults living in the household, total number of children under 18 years old currently living in the household, educational level, current occupation/job, managerial/supervisory role, place of work, and job tenure. We will ask women of childbearing age to indicate the average length of their menstrual cycle, their average period length, and the first day of their last period. This information will be used to ascertain that their ambulatory assessment phase is scheduled outside of their period. Workplace fear of missing out Workplace fear of missing out will be assessed using the 10-item workplace Fear of Missing Out scale . One sample item reads as follows, “I worry that I will not know what is happening at work” (α = 0.90–0.94; ) and is preceded by the stem “When I am absent or disconnected from work…”. 
All items are scored on a 5-point Likert scale (1 = “Strongly disagree” to 5 = “Strongly agree”). The mean score ranges from 1 to 5, with higher scores indicating stronger workplace fear of missing out. Psychological detachment from work We will use the 4-item psychological detachment subscale of the recovery experience questionnaire to assess psychological detachment from work. One sample item reads as follows, “I don’t think about work at all” (α = 0.84–0.89; ) and is preceded by the stem “During time after work:”. All items are scored on a 5-point Likert scale (1 = “I do not agree at all” to 5 = “I fully agree”). The mean score ranges from 1 to 5, with higher scores indicating better psychological detachment. ICT-related response expectations and availability Response expectations. ICT-related response expectations of employees will be assessed with the 2-item response expectations subscale of the ICT Demand Scale . One sample item reads as follows, “I am expected to respond to e-mail messages immediately” (α = 0.78–0.86; ). Items are scored on a 5-point Likert scale (0 = “Never” to 4 = “Almost always”). Mean scores range from 0 to 4, with higher scores indicating higher response expectations. Availability. ICT-related availability of employees will be assessed with the 4-item availability subscale of the ICT Demand Scale . One sample item reads as follows, “I’m contacted about work-related issues outside of regular work hours” (α = 0.71–0.83; ). Items are scored on a 5-point Likert scale (0 = “Never” to 4 = “Almost always”). Mean scores range from 0 to 4, with higher scores indicating higher availability. Technostress creators Technostress creators are factors that induce stress due to the use of ICTs. According to Ragu-Nathan and colleagues , they can be grouped into five technostress dimensions, i.e., techno-overload, techno-invasion, techno-complexity, techno-insecurity, and techno-uncertainty. We will assess these five dimensions with the 21-item technostress creators scale . One sample item of techno-overload reads as follows, “I am forced by this technology to work much faster” (α = 0.90). One sample item of techno-invasion reads as follows, “I feel my personal life is being invaded by this technology” (α = 0.88). One sample item of techno-complexity reads as follows, “I do not know enough about this technology to handle my job satisfactorily” (α = 0.88). One sample item of techno-insecurity reads as follows, “I have to constantly update my skills to avoid being replaced” (α = 0.84). One sample item of techno-uncertainty reads as follows, “There are constant changes in computer software in our organization” (α = 0.91). All items are scored on a 7-point Likert scale (1 = “Strongly disagree” to 7 = “Strongly agree”). For all dimensions, mean scores range from 1 to 7, with higher scores indicating more technostress creators. Workaholism We will assess workaholism using the two working excessively and working compulsively subscales of the 10-item short version of the Dutch Work Addiction Scale . One sample item of the working excessively subscale reads as follows, “I find myself continuing work after my co-workers have called it quits” (α = 0.65–0.81; ). One sample item of the working compulsively subscale reads as follows, “I often feel that there’s something inside me that drives me to work hard” (α = 0.69–0.81; ). All items are scored on a 4-point scale (1 = “(Almost) Never” to 4 = “(Almost) Always”). 
The mean score ranges from 1 to 4, with higher scores indicating higher levels of workaholism. Segmentation preferences and supplies Segmentation preferences refer to the degree to which employees prefer to keep aspects of work and home separated from one another . We will measure this construct using the 4-item segmentation preferences scale . One sample item reads as follows, “I like to be able to leave work behind when I go home” (α = 0.91, ). Segmentation supplies refer to the employees’ perception of the degree to which their organization/workplace provides freedom of work-home segmentation . We will assess this construct using the 4-item workplace segmentation supplies scale . One sample item reads as follows, “At my workplace, people are able to prevent work issues from creeping into their home life” (α = 0.94, ). The items of both scales are scored on a 7-point Likert scale (1 = “Strongly disagree” to 7 = “Strongly agree”). The mean score ranges from 1 to 7, with higher scores indicating stronger segmentation preferences and norms in the organization. Personality traits The two personality traits neuroticism and conscientiousness will be assessed using respectively eight and nine items of the Big Five Inventory . One sample item assessing neuroticism reads as follows, “Gets nervous easily” (α = 0.82, ). One sample item assessing conscientiousness reads as follows, “Perseveres until the task is finished” (α = 0.80, ). The items are preceded by the stem, “I see myself as someone who…”. All items are scored on a 5-point Likert scale (1 = “Disagree strongly” to 5 = “Agree strongly”). The mean score for each scale ranges from 1 to 5, with higher scores indicating more neuroticism and conscientiousness. Depression, anxiety, and stress We will assess depressive symptoms, anxiety, and stress with the 21-item Depression, Anxiety, and Stress Scale . One sample item assessing depression reads as follows, “I felt I wasn’t worth much as a person” (α = 0.91, ). One sample item assessing anxiety reads as follows, “I felt I was close to panic” (α = 0.84, ). One sample item assessing stress reads as follows, “I found it difficult to relax” (α = 0.90, ). All items are scored on a scale from 0 = “did not apply to me at all” to 3 = “applied to me very much or most of the time”. The total score of each subscale ranges from 0 to 21, with higher scores indicating worse depression, anxiety, and stress outcomes over the past week. Trait mindfulness Attention and awareness in daily life will be assessed using the 15-item Mindful Attention Awareness Scale . This questionnaire requires the respondent to evaluate the frequency of different everyday life experiences using a 6-point Likert scale (1 = “Almost never” to 6 = “Almost always”). One sample item reads as follows, “I rush through activities without being really attentive to them” (α = 0.84, ). The mean score ranges from 1 to 6, with higher scores indicating higher levels of dispositional mindfulness. Finally, we will ask participants to fill out the general WTP measure and the general PTP measure again in order to evaluate the test–retest reliability of these two instruments. Additional sociodemographic data will include marital status, total number of individuals living in the household, total number of adults living in the household, total number of children under 18 years old currently living in the household, educational level, current occupation/job, managerial/supervisory role, place of work, and job tenure. 
We will ask women of childbearing age to indicate the average length of their menstrual cycle, their average period length, and the first day of their last period. This information will be used to ascertain that their ambulatory assessment phase is scheduled outside of their period.
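A small sketch of how this scheduling check could be implemented is given below. The prediction rule (a fixed cycle length projected forward from the last reported period) and all function names are our own assumptions, not part of the protocol.

```python
from datetime import date, timedelta

def predicted_period_days(last_period_start, cycle_len_days, period_len_days, horizon_days=120):
    """Days expected to fall within a menstrual period over the given horizon."""
    days = set()
    start = last_period_start
    while (start - last_period_start).days < horizon_days:
        for offset in range(period_len_days):
            days.add(start + timedelta(days=offset))
        start += timedelta(days=cycle_len_days)
    return days

def window_is_clear(window_start, last_period_start, cycle_len, period_len, window_len=7):
    """True if a 7-day ambulatory window avoids all predicted period days."""
    window = {window_start + timedelta(days=i) for i in range(window_len)}
    return window.isdisjoint(predicted_period_days(last_period_start, cycle_len, period_len))

# Example: 28-day cycle, 5-day period, last period started 2023-03-01.
print(window_is_clear(date(2023, 3, 10), date(2023, 3, 1), 28, 5))  # -> True
```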
The daily schedule of the ambulatory assessment measures is shown in Table . It consists of five sampling occasions during wake time: (1) immediately after awakening while still lying in bed, (2) 30 min after awakening, (3) 12:30 p.m. (± 30 min), (4) 5:30 p.m. (± 30 min), and (5) bedtime. Participants will be asked to complete questionnaires four times per day (30 min after awakening, 12:30 p.m., 5:30 p.m., and bedtime) with an iPad Mini 2 (Apple Inc.) using the software iDialogPad developed by G. Mutz at the University of Cologne. The full iDialogPad script is given in the Supplementary Material. The iPad’s screen will have a blue light filter to ensure that the evening surveys do not expose participants to the artificial light that can affect their sleep. Sampling times will be automatically registered on the iPad. Participants will collect their saliva on each sampling occasion for sC, sDHEA, and sAA assessment. They will also continuously wear the MotionWatch 8 (CamNtech Ltd., Cambridgeshire, England) and Bittium Faros 180L ECG monitor (Bittium Corporation, Oulu, Finland) for actigraphic and electrocardiographic recordings, respectively.
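For reference, the sampling scheme just described can be written down as a simple structure, e.g., for driving the diary prompts. The occasions and per-day counts come from the protocol text; the layout and labels are our own.

```python
# Five daily sampling occasions (label, nominal timing, saliva sample, diary).
DAILY_SCHEDULE = [
    ("awakening",      "immediately after awakening, in bed", True,  False),
    ("awakening_30",   "30 min after awakening",              True,  True),
    ("midday",         "12:30 p.m. (+/- 30 min)",             True,  True),
    ("late_afternoon", "5:30 p.m. (+/- 30 min)",              True,  True),
    ("bedtime",        "at bedtime",                          True,  True),
]

saliva_per_day = sum(1 for occ in DAILY_SCHEDULE if occ[2])
diaries_per_day = sum(1 for occ in DAILY_SCHEDULE if occ[3])
assert (saliva_per_day, diaries_per_day) == (5, 4)

DAYS = 7
print(f"per participant: {DAYS * saliva_per_day} saliva samples, "
      f"{DAYS * diaries_per_day} diary entries")  # 35 saliva samples, 28 diaries
```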
Daily workplace telepressure and private life telepressure measures

We will assess daily WTP and daily PTP with the 6-item WTP and the 6-item PTP measures adjusted for repeated measurement during a day, respectively. The verb tense of the items is changed from present tense to past tense and a time reference is given (“Since the last assessment…”). One sample item reads as follows, “It was difficult for me to resist responding to a work-related message right away”. The scoring of these measures is the same as the scoring of the general WTP and general PTP measures.

Work-related workload

We will assess work-related workload using the 3-item workload measure of Derks and colleagues. One sample item reads as follows, “I had to work extra hard to finish things” (α = 0.91). We adapted the instruction to ensure that only work-related workload is assessed. The instruction reads as follows, “Please rate how much you agree or disagree with the following statements. Refer only to work-related activities that you have performed since the last assessment.” All items are scored on a 5-point Likert scale (1 = “Totally disagree” to 5 = “Totally agree”). The mean score ranges from 1 to 5, with higher scores indicating higher work-related workload. Additionally, we will ask participants to report (1) the total time spent performing work activities and (2) the time (in percentage) spent working at the workplace, at home, or other places since the last assessment (e.g., 50% at the workplace, 40% at home, and 10% in other places).

Private life workload

We adapted the 3-item work-related workload measure of Derks and colleagues to assess private life workload. The three items remain the same as in Derks and colleagues, but the instruction is altered to ensure that only private life workload is assessed. The instruction reads as follows, “Please rate how much you agree or disagree with the following statements. Refer only to activities at home/in your private life that you have performed since the last assessment.” The scoring of the scale is the same as the scoring of the work-related workload measure. Higher scores indicate higher private life workload.

Work-related perseverative cognition

We will use the 3-item work-related worry/rumination measure to assess work-related perseverative cognition. One sample item reads as follows, “My thoughts kept returning to a stressful situation at work” (α = 0.74). Participants will be asked to rate the extent to which they experienced such thoughts since the last assessment. All items are scored on a 5-point Likert scale (1 = “Not at all” to 5 = “A great deal”). The mean score ranges from 1 to 5, with higher scores indicating more work-related worry and rumination. Additionally, participants will estimate the total duration of their work-related worry/rumination since the last assessment.

Private life perseverative cognition

We adapted the 3-item work-related worry/rumination measure of Flaxman and colleagues to assess private life perseverative cognition. One adapted item reads as follows, “My thoughts kept returning to a stressful situation in my private life”. The scoring remains the same as in the original scale. Additionally, participants will estimate the total duration of their private life perseverative cognition since the last assessment.

Number of stressful events

Participants will be asked to report the number of stressful events they experienced since the last assessment. They will be provided with the following definition of stressful events: “Stressful events are minor and major events that have made you feel tense, irritated, angry, sad, disappointed, or negative in any other way”.

Sleep

We will use an 11-item sleep questionnaire to assess different daily aspects of sleep. Qualitative aspects of sleep will be measured with five items inspired by the Karolinska Sleep Index, the Spiegel Sleep Questionnaire, and the St. Mary’s Hospital Sleep Questionnaire. The five items cover central qualitative aspects of sleep (overall sleep quality, restless sleep, difficulty falling asleep, difficulty maintaining sleep, and premature awakening). All items are scored on a 5-point Likert scale. The total score ranges from 5 to 25, with higher scores indicating better subjective sleep quality. Quantitative aspects of sleep will be measured with five items from the St. Mary’s Hospital Sleep Questionnaire. Quantitative aspects of sleep include bedtime, sleep onset latency, waking time, getting-out-of-bed time, and sleep duration. Additionally, the mode of awakening will be assessed using the following question, “This morning, did you wake up spontaneously/naturally?”. Response options are “Yes” and “No”.

Biobehavioral measures

Following previous methods, we will ask the participants to report the number of caffeinated beverages, alcoholic beverages, tobacco products, and e-cigarettes, as well as any drugs and medication consumed since the last assessment.

Psychosomatic complaints

The questionnaire assessing psychosomatic complaints consists of two subscales: one assessing somatic complaints and the other cognitive weariness. Somatic complaints will be measured with seven items from the Somatic Symptom Scale-8. One sample item reads as follows, “Back pain”. Cognitive weariness will be assessed using the 5-item cognitive weariness subscale of the Shirom-Melamed Burnout Measure. One sample item reads as follows, “I have difficulty concentrating” (α = 0.93). All items are preceded by the following question: “At this moment, how much are you bothered by any of the following problems?” and scored on a 5-point Likert scale (0 = “Not at all” to 4 = “Very much”). The total score ranges from 0 to 48, with higher scores indicating more severe psychosomatic complaints.
Mood

Following the conceptualization of Matthews and colleagues and Schimmack and Grob, the three basic dimensions of mood (valence, calmness, and energetic arousal) will be measured using an 8-item mood scale, which is a modified version of the 6-item mood scale developed by Wilhelm and Schoebi. Two items have been added to the original 6-item scale following recommendations of P. Wilhelm (personal communication, 17.10.2022). The initial instruction reads as follows, “At this moment I am/feel:”. One sample item of the 3-item valence subscale is “1. Unwell - 8. Well”, one sample item of the 3-item calmness subscale is “1. Agitated - 8. Calm”, and one sample item of the 2-item energetic arousal subscale is “1. Full of energy - 8. Without energy”. All items are scored on an eight-point bipolar scale (1 = “Extremely” to 4 = “Rather”; 5 = “Rather” to 8 = “Extremely”). The mean score of each dimension ranges from 1 to 8. Scores of four items are reversed in order to ensure that higher scores indicate better mood (i.e., higher positive valence, higher calmness, and higher energetic arousal).
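To make the reverse scoring concrete: on the 1–8 bipolar response format, flipping an item amounts to replacing a raw score x with 9 - x. The sketch below uses hypothetical item names; which four items are actually reverse-keyed is defined by the original scale, not by this example.

```python
def score_mood_dimension(item_scores, reverse_keyed):
    """Mean of one mood dimension on the 1-8 bipolar scale.

    Reverse-keyed items are flipped as 9 - x so that higher scores always
    mean better mood (more positive valence, calmness, or energy).
    """
    values = [9 - v if name in reverse_keyed else v
              for name, v in item_scores.items()]
    return sum(values) / len(values)

# Hypothetical 2-item energetic arousal subscale; "energy_b" is keyed like
# "1. Full of energy - 8. Without energy", so a raw 2 (energetic) becomes 7.
raw = {"energy_a": 7, "energy_b": 2}
print(score_mood_dimension(raw, reverse_keyed={"energy_b"}))  # -> 7.0
```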
Saliva sampling

In order for participants to collect their saliva five times per day (i.e., immediately after awakening while still lying in bed, 30 min after awakening, 12:30 p.m. (± 30 min), 5:30 p.m. (± 30 min), and bedtime) in a hygienic and convenient way, we will ask them to use the set of SaliCaps (IBL International, Hamburg, Germany). SaliCaps are low-bind polypropylene 2 mL cryovials that allow the collection of saliva using a polypropylene straw. Participants will be asked to follow the saliva sampling instruction displayed on the iPads. The instruction reads as follows, “First, swallow the saliva currently in your mouth. Now, hold your saliva in your mouth for two minutes. You can no longer swallow and must then transfer the accumulated saliva into the tube. Press OK to start the timer. [2 min later] Now transfer the saliva into the tube using the straw. Make sure you have sealed the tube on all sides.” Then, they will be asked whether they were able to follow the following instructions: no drinking (other than water, at the latest 10 min before saliva sampling), eating, smoking, or engaging in vigorous physical activity in the last 30 min, and no tooth brushing in the last 60 min. The obtained saliva samples will be stored during the assessment period in a provided plastic freezer bag in the participants’ refrigerators and then kept in a freezer at −30 °C in our laboratory before being shipped to the Biochemical Laboratory of the Department of Clinical Psychology at the University of Vienna, headed by U.M. Nater. Free sC concentrations will be measured using a Cortisol Saliva Luminescence Immunoassay (IBL-Tecan, Hamburg, Germany). DHEA concentrations will be measured using a DHEA Saliva Enzyme-Linked Immunosorbent Assay (IBL-Tecan, Hamburg, Germany). SAA activity will be measured using reagents provided by DiaSys Diagnostic Systems (Holzheim, Germany).

Electrocardiographic measures

The Bittium Faros 180L (Bittium Corporation, Oulu, Finland) is a lightweight (18 g), small, unobtrusive, and waterproof ECG device, equipped with a long-lasting battery allowing continuous recording for up to eight days. It is attached to the chest using three adhesive electrodes, a single ECG patch electrode, or a chest belt. The ECG will be recorded at a sampling rate of 250 Hz together with an accelerometer sampled at 25 Hz. Data will be analyzed with the Bittium Cardiac Navigator software (Bittium Corporation, Oulu, Finland) to obtain indices of HRV. The root mean square of successive differences (RMSSD) will be the main HRV index. RMSSD reflects cardiac vagal tone and is relatively free of respiratory influences.
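To make the RMSSD index concrete, a minimal standalone computation is sketched below; in the study itself the index is derived with the Cardiac Navigator software, and the RR intervals here are invented for illustration.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between adjacent RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Five illustrative RR intervals in milliseconds.
print(round(rmssd([812, 790, 835, 801, 820]), 1))  # -> 31.7
```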
Actigraphic measures

We will use the MotionWatch 8 (CamNtech Ltd., Cambridgeshire, England) to record participants’ wake and sleep periods (rest/activity cycles). The MotionWatch 8 is a lightweight (~10 g), small, and waterproof wristwatch-like activity-monitoring device. The watch is equipped with a light sensor and a very long-lasting battery allowing a continuous recording for up to 91 days. Participants will wear the MotionWatch 8 on the wrist of the non-dominant arm. We will also ask them to event-mark two time points that are essential for the computation of sleep quality and quantity indices: when they (i) get out of bed in the morning and (ii) are ready to sleep at night. Compliance to actigraphy event markers is generally moderate to high. Movement of the wrist will be recorded at a sampling rate of 50 Hz using 30-s epochs. The actigraphic recordings will be analyzed with the MotionWare software (CamNtech Ltd., Cambridgeshire, England) to obtain indices of sleep quality (e.g., fragmentation index) and sleep quantity (e.g., total sleep time).
The collected data have a multilevel structure (i.e., repeated measurements nested within individuals). We will test our hypotheses with multilevel mixed-effects mediation analyses, following established principles and methods. The statistical software package used is Mplus (see https://www.statmodel.com/). We expect a total of 840 usable data points for measures assessed once a day (sleep measures), 2520 data points for measures assessed three times a day (e.g., WTP), 3360 data points for measures assessed four times a day (e.g., mood), and 4200 data points for measures assessed five times a day (salivary parameters). Where appropriate, skewed variables will be transformed. We will use an alpha level of 0.05 for all tests. We will conduct sensitivity analyses by adding control variables to the models.

The sample size calculation was performed with the support of a statistician. The power computations are based on a model in which the effect of WTP on each wellbeing and health-related outcome (Y) follows two paths, direct and indirect. In the indirect path, WTP acts on work-related workload (WL) and on work-related perseverative cognition (PC), which both act on Y. The computations rely on repeated simulations of this model. The model and its assumptions are given in the Supplementary Material.
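To show the general shape of such a simulation-based power computation, the sketch below simulates the two-mediator model (WTP acting on Y directly and through WL and PC) and counts how often the total WTP–Y association is detected. It deliberately ignores the multilevel nesting and uses placeholder path coefficients, so it illustrates the approach only; the protocol's actual model and parameters are given in its Supplementary Material.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

def simulate_dataset(n, a1, a2, b1, b2, c):
    """One dataset from the mediation model: WTP -> WL, PC -> Y plus a direct path."""
    wtp = [random.gauss(0, 1) for _ in range(n)]
    wl = [a1 * x + random.gauss(0, 1) for x in wtp]   # work-related workload
    pc = [a2 * x + random.gauss(0, 1) for x in wtp]   # perseverative cognition
    y = [c * x + b1 * m1 + b2 * m2 + random.gauss(0, 1)
         for x, m1, m2 in zip(wtp, wl, pc)]
    return wtp, y

def estimated_power(n=840, n_sims=500):
    """Share of simulated datasets in which the WTP-Y correlation is detected.

    n = 840 day-level records (120 participants x 7 days), nesting ignored.
    1.96 / sqrt(n) approximates the .05 significance threshold for r.
    """
    crit = 1.96 / n ** 0.5
    hits = sum(
        abs(statistics.correlation(*simulate_dataset(n, 0.3, 0.3, 0.2, 0.2, 0.1))) > crit
        for _ in range(n_sims)
    )
    return hits / n_sims

print(f"estimated power: {estimated_power():.2f}")
```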
Relevance and impact

Workplace telepressure is a recent concept in a quickly changing working world. It is essential to develop theory on how WTP may affect employees’ behavior, wellbeing, and health. The planned project will be the most comprehensive study to date on short-term associations between within-person variations in WTP and important wellbeing and health-related experiential, physiological, and behavioral measures. The anticipated findings will greatly enhance our knowledge and understanding of WTP and thus help establish its relevance within the research domains of work, stress, and health. The carefully selected outcome measures are indicators of potential early-stage dysregulation of allostatic processes. Investigating whether WTP is significantly associated with these indicators at the day level through the mediating role of work-related workload and work-related perseverative cognition is an important step towards understanding how high levels of WTP may lead in the long run to secondary alterations (e.g., hypertension, chronic inflammation, burnout) and disease (e.g., heart disease, clinical depression). The ambulatory assessment approach of the planned project, using state-of-the-art methodological strategies, is highly relevant as it allows for the investigation of employees’ subjective experience, physiology, and behavior in their everyday lives, resulting in high ecological validity of the findings. High-quality ambulatory assessment studies are important to determine whether findings from laboratory settings also hold in more real-life environments. As several measures of the proposed project represent potential intervention targets, we also anticipate the findings of the project to contribute significantly to guiding the development and implementation of theory-led interventions, programs, and policies. Such interventions aim at managing work-related demands and behaviors and at favoring employees’ wellbeing and health (e.g., sleep hygiene interventions, technology use education interventions at the organizational and personal level, perseverative cognition reduction interventions). Understanding the role of these potential intervention targets is necessary for the accurate design and evaluation of effective interventions, programs, and policies. Moreover, the biobehavioral monitoring of the planned project, consisting of salivary biomarkers, electrocardiographic measurement, and sleep actigraphy, can add an important dimension to the evaluation of the effectiveness of such interventions, programs, and policies, which often relies on self-report measures only.

Possible challenges

The planned project is the first to investigate relationships between WTP and several psychophysiological parameters in an ambulatory approach. Therefore, we are expecting possible challenges throughout the data collection period. Firstly, some of our exclusion criteria drastically affect the number of participants eligible for the ambulatory phase. For instance, sleep apnea is one of our exclusion criteria. Sleep apnea is a highly prevalent sleep disorder in the general adult population, with an overall population prevalence ranging from 9 to 38%. Approximately 120,000 persons suffer from this sleep-disordered breathing in the Swiss adult population; in the population of Lausanne, where the present study is conducted, 2% of middle-aged women and 4% of middle-aged men present at least five events per hour, and 23% of adult women and 50% of adult men present at least 15 events per hour. Additionally, we will exclude participants who take any medication that can affect the psychophysiological parameters of interest, in particular cardiac, salivary, and sleep parameters. For example, according to a Swiss health survey conducted in 2018, 18% of the Swiss adult population take hypertension medication. As such, we might have to exclude a considerable number of participants from the ambulatory phase. Secondly, we are aware that the demands the study places on participants are substantial.
Given (1) the length of the participation period (i.e., seven consecutive days), (2) the number of daily sampling points (filling out questionnaires and collecting saliva four and five times per day, respectively) as well as the time spent on performing them, and (3) the burden of continuously wearing an ECG device throughout the seven-day participation period, we anticipate dropouts and non-compliance. Lastly, we expect to deal with technical issues such as malfunctioning devices and software or hardware bugs. However, given that most participants will be recruited from nearby cities and surrounding areas, we are confident that participants will be able to promptly come to our laboratory in case we need to urgently fix a technical issue (e.g., unresponsive iPad, ECG monitor stops recording). It is noteworthy that our research team managed to recruit 72 music students for a more demanding study, even though the pool of music students available in Switzerland is much smaller than the pool of employees using ICTs regularly. That study was an ambulatory assessment in which Gomez and colleagues measured physiological, experiential, and behavioral parameters over seven days. The participants were asked to fill in questionnaires and collect saliva samples six times throughout the day. The participants filled in the questionnaires with an iPod Touch using the software iDialogPad. The participants also wore the MotionWatch 8 and an electrocardiogram device. U.M. Nater performed the biochemical analyses of the saliva samples to determine sC and sAA. R. Heinzer and J. Haba-Rubio were consultants for the sleep-related part of the project. The questionnaires included the daily assessment of perseverative cognition, self-reported sleep duration and quality, mood, subjective health complaints, biobehavioral variables, and self-reported stressful events. Despite the challenges of this kind of research protocol, the team was able to achieve excellent success rates in terms of data acquisition. Over the seven days, participants had to answer 1533 items. In total, 95% of all answers were available for analysis. With regard to the salivary measures, participants had to collect 42 samples; for comparison, in the present study, participants will have to collect a total of 35 saliva samples (17% fewer). For sC and sAA, 95% and 92% of all samples were available for analysis, respectively. As to the actigraphic data, 85% of all possible data were available for analysis. This relatively low rate was due to a bug in the firmware of the MotionWatch 8 at the start of the study. We are therefore very confident that data collection of the present study will be highly successful. Additionally, we have implemented several methods to face the challenges we might encounter. Firstly, we have set up different recruitment strategies that we will progressively deploy, depending on the success rate of each strategy, until reaching 120 participants with usable data. We will start by hanging our flyers in nearby universities and institutions and posting online versions on their respective websites to target the local and academic population. We will then publish a recruitment announcement in a local newspaper to target a much larger and more diverse population. This newspaper is delivered to over 100,000 readers in the Lausanne area. A final strategy will consist of sharing a recruitment announcement with the Human Resources departments of companies.
Secondly, during the laboratory visit, we will familiarize the participants with all the requirements of the study with the support of a PowerPoint presentation and information and instruction sheets. We will invite the participants to wear the Bittium Faros 180L and the MotionWatch 8 and perform saliva sampling to prepare them for the ambulatory phase. We will also go through all daily questionnaires using the same iPad they will bring home with them. All in all, the laboratory visit will allow the participants to know what the exact requirements of the ambulatory assessment are and, thus, decide whether they can comply satisfactorily with them. At the end of the laboratory visit, the participants should feel confident and ready for the ambulatory phase. Finally, to increase participants’ compliance, the investigator will explain to them that the financial remuneration will be proportional to their degree of compliance with the requirements. Full (100%) compliance (i.e., filling in all diaries and collecting all saliva samples at the defined times) will be rewarded with a 20% bonus per day. Lastly, in order to minimize data loss, we will provide participants with a troubleshooting sheet including all the information they would need to fix unexpected technical issues with the devices. We will also inform them that they can contact us throughout the ambulatory phase and that, should the need arise, they can either come to our laboratory or meet us at a mutually agreed upon location to come up with adequate solutions.
Additional file 1. Questionnaires.
Essential equipment and services for otolaryngology care: a proposal by the Global Otolaryngology-Head and Neck Surgery Initiative
Safe surgical and procedural care is a critical component of ensuring high-quality healthcare delivery in global settings. Effective surgical care requires costly infrastructure for the acquisition, sterilization, and maintenance of essential equipment and robust support services such as imaging, laboratory testing, histopathology, and blood banking. The field of otolaryngology–head and neck surgery (OHNS) encompasses a breadth of conditions and operative techniques necessitating a wide variety of equipment to provide essential care. Given variations in resource access and health system infrastructures, availability of equipment varies regionally and by practice setting. Surgical subspecialties, such as pediatric surgery, have created essential equipment and health service frameworks to promote the quality of surgical infrastructure, advocate for resources, and enable surgeons to deliver standard surgical care in diverse settings. To date, no inventory of essential equipment and services has been developed for OHNS on a global scale. Our review aims to highlight guidelines for prioritization of resources and to provide a list of essential equipment for OHNS surgical care, based on international input from experienced OHNS providers.
Surgical, anesthetic, and ancillary medical services are essential for the treatment of operable conditions, which are estimated to comprise 28–32% of the total global burden of disability and mortality. Surgery has been under-prioritized within global health efforts for reasons including the perceived high cost of surgical infrastructure and the complexity of surgical care delivery. Although the moral imperative to provide high-quality healthcare is reason enough to expand access to surgical care, surgery has also been proven to be a cost-effective intervention. For example, a 2014 systematic review of cost-effectiveness studies in low- and middle-income countries (LMICs) determined that surgical intervention can be cost-effective or very cost-effective based on World Health Organization (WHO) criteria and compares favorably to currently accepted public health interventions. Unless equipment needs in subspecialties such as OHNS are prioritized, satisfactory surgical care cannot be delivered. A core component of the delivery of surgical care is specialized equipment and infrastructure. In 2005, the WHO launched its Global Initiative for Emergency and Essential Surgical Care, which published standards for public district hospitals to promote adequately equipped operating theaters, basic intensive care units, and the ability to treat several life-threatening and highly disabling surgical conditions. To this end, the WHO drew on government, clinical, biomedical engineering, and medical device stakeholders to increase the availability of essential surgical equipment in LMICs. Several inventories have since been developed to appraise surgical capacity, including the WHO Tool for Situational Analysis to Assess Emergency and Essential Surgical Care (SAT); the Personnel, Infrastructure, Procedures, Equipment, and Supplies (PIPES) tool; and the International Assessment of Capacity for Trauma (INTACT) index. These inventories have demonstrated stark surgical equipment shortages in various countries in sub-Saharan Africa, Asia, and Central/South America, highlighting the need for government involvement in surgical capacity building for both infrastructure and personnel. The next generation of surgical equipment appraisal has been marked by the delineation of surgical equipment lists beyond the context of general and trauma surgery. The WHO and the World Federation of Societies of Anesthesiologists produced the International Standards for a Safe Practice of Anesthesia, which introduced concrete recommendations for anesthetic equipment and support personnel at various care levels. The Global Initiative for Children's Surgery, an independent consortium of pediatric surgical providers, created consensus guidelines on optimal supplies and equipment for the care of pediatric surgical conditions in LMICs. Surgical subspecialty groups have also adapted the existing PIPES tool to enable the evaluation of neurosurgical and pediatric surgical capacity, thereby broadening the scope of existing assessment tools. Such surgical equipment lists are used not only for infrastructure assessment but also for internal quality improvement, surgical policy development by health ministries, and investment priority-setting for advocacy and charitable efforts.
OHNS conditions remain relatively understudied with respect to global surgical care delivery, despite representing a significant burden of disease. The Institute for Health Metrics and Evaluation's Global Burden of Disease study identified that hearing loss (with a ≥20-dB threshold for mild hearing loss) affects 1.57 billion people globally and is the third largest cause of disability in the global burden of disease. Otitis media, one of the most common and preventable causes of hearing loss in children, has an estimated incidence of 471–709 million cases per year. Head and neck cancers account for 5.7% of global cancer-related mortality, with a significantly higher mortality burden and subsequent economic loss in LMICs compared to high-income countries (HICs). Other OHNS conditions with a high global burden of disease include upper respiratory infections, cleft lip/palate, head and neck trauma, pediatric foreign body, and deep neck space infections. The cost of acquiring and maintaining subspecialty-specific equipment has been identified as a pronounced barrier to care for OHNS conditions. Recent work conducted during the COVID-19 pandemic further highlighted the challenges faced by otolaryngologists working in LMICs, such as insufficient personal protective and surgical equipment to maintain surgical output. Despite the high global burden of OHNS conditions, there is not yet a description of the essential surgical equipment necessary for the delivery of high-quality OHNS care worldwide. To fill this gap, the Global OHNS Initiative developed an expert-driven list of essential equipment and services for the delivery of high-quality OHNS surgical care.
The Global OHNS Initiative is a global consortium of OHNS clinical providers, trainees, and researchers with a vision for “universal access to high-quality, safe, timely, and affordable care for those with OHNS conditions”. To begin defining the role of OHNS care within comprehensive health systems, the group previously used the Delphi methodology to identify a consensus of priority OHNS conditions and procedures which all national health systems should be capable of managing. The initiative then used these findings to develop an expert-driven list of the minimal equipment necessary for the medical and surgical care of the priority conditions. This list was created under the assumption that a facility providing OHNS care would already have the resources required for general surgery care; as such, equipment was excluded if it was included in most general surgery equipment checklists. OHNS providers across a variety of practice settings were consulted to add additional equipment or services regularly employed in their clinical practice. Once a preliminary list was compiled, an internal survey was disseminated to OHNS providers and advanced-level trainees within the initiative. Survey respondents practiced in eleven countries: the United States, Uganda, Israel, Pakistan, Kenya, Lebanon, Chile, Myanmar (Burma), the United Kingdom, India, and Austria. The equipment included in the survey spanned the following OHNS subspecialties: general otolaryngology, otology, head and neck surgery, rhinology, skull base surgery, and pediatric otolaryngology. Providers were asked to rate the utility of each type of equipment and service at the primary and tertiary care levels, which were defined as follows:

(1) Primary = ear, nose and throat (ENT) care provided at a community-level hospital or clinic

(2) Tertiary = a referral-based center for specialist or sub-specialist ENT care not regularly managed at the community level

Equipment and ancillary service utility was categorized under three designations (see the illustrative tallying sketch below):

(1) “Essential” – This equipment/service must be accessible in-house and is critical to the care of the ENT conditions encountered at the respective care level.

(2) “Aspirational” – This equipment may not be necessary to provide care at this respective healthcare level but could be useful for ENT needs. If it were available, it would be regularly used.

(3) “Nonessential” – This equipment/service is not necessary to manage the ENT conditions managed at the respective care level. There may be sufficient substitutes that perform the same function as this equipment or service.

The internal survey results were compiled and reviewed through multiple group consensus meetings. A final list of essential equipment and ancillary services for baseline OHNS care was generated (Tables and ). This list of essential OHNS equipment and services may serve as a resource to support the development of high-quality OHNS care in various healthcare settings and to permit a high standard of care for all patients with OHNS conditions. Stratification of equipment and services by primary and tertiary facility levels permits a more nuanced understanding of the resources needed for appropriate OHNS care. We also categorized equipment as “essential” or “aspirational” to indicate relative prioritization.
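As a purely illustrative sketch of how ratings under the three designations above could be tallied per item and care level: the modal-response rule below, and the tie-break toward the earlier-listed category, are our own assumptions; the text states only that results were compiled and reviewed through group consensus meetings.

```python
from collections import Counter

CATEGORIES = ("Essential", "Aspirational", "Nonessential")

def modal_designation(ratings):
    """Most frequent designation; ties resolve toward the earlier-listed category."""
    counts = Counter(ratings)
    return max(CATEGORIES, key=lambda c: counts[c])

# Hypothetical ratings for two items at the primary care level.
responses = {
    ("rigid bronchoscope", "primary"): ["Essential", "Essential", "Aspirational"],
    ("flexible bronchoscope", "primary"): ["Aspirational", "Aspirational", "Nonessential"],
}
for (item, level), ratings in responses.items():
    print(f"{item} ({level}): {modal_designation(ratings)}")
```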
Aspirational equipment often included items that are not absolutely necessary for the provision of OHNS care but have grown increasingly popular within high-resource settings to improve patient safety and overall quality of care. For the “essential” categorization, survey respondents prioritized global standards of care over newer technologies to create a more equitable benchmark that could be reached by a greater proportion of OHNS providers, facilities, or hospital systems. These categorizations are subject to change with the evolution of disease burden, training standards, and equipment availability. Survey responses highlighted variations in equipment use as a result of resource constraints and training standards. At the primary level, in the general otolaryngology care section, laryngeal mirrors for indirect laryngoscopy were deemed essential (Table ). However, clinicians in HICs have trended away from using laryngeal mirrors, instead utilizing fiberoptic laryngoscopy (FOL) or rigid laryngeal endoscopy for visualization of the supraglottic and glottic regions, owing to patient comfort and completeness of laryngeal examination. Thus, both FOL and laryngeal mirrors were considered essential to encompass the spectrum of infrastructure availability and evolving training standards across economic strata. For endoscopy at the primary level, respondents categorized a rigid bronchoscope as essential and a flexible bronchoscope as aspirational (Table ). In subsequent discussion, respondents indicated that almost any tracheal foreign body, lesion, or tumor can be treated using a rigid bronchoscope. However, current literature demonstrates that flexible bronchoscopy may help to definitively exclude foreign body aspiration when rigid bronchoscopic examination is equivocal or unable to reach more distal locations in the airway. Although typically a tool in the arsenal of pulmonologists and thoracic surgeons, the use of this equipment by OHNS providers continues to expand, indicating the potential for recategorization of equipment as essential in future iterations of these lists. In open-ended responses, a few survey respondents reported various applications of equipment to provide care beyond the original intended use. For example, two respondents commented that nasal endoscopes were frequently repurposed for otologic procedures and pediatric airway foreign body removal. Another respondent remarked that their facility used otologic instruments for pediatric anterior skull base surgery. Born out of equipment shortages during the COVID-19 pandemic, there has been growing interest in developing cost-effective strategies for surgical capacity, including the reuse and repurposing of equipment. A recent study on the benefits of equipment repurposing reported that endoscopic approaches to the middle ear show improved anatomic visualization, with audiometric and surgical outcomes similar to those seen with binocular approaches. What is more, the endoscopic surgical setup has far fewer logistical and cost-related barriers compared to the otologic microscopic surgical setup, making the endoscope a feasible option for otologic surgical teaching in LMICs. Thus, the range of applications for certain equipment items was taken into consideration when categorizing equipment priority for the lists. At the primary level, survey respondents categorized loupes as aspirational for microsurgical work in head and neck operations, whereas an operating microscope was deemed essential (Table ).
Loupes-only magnification has been utilized for microsurgical anastomosis in a variety of applications; however, the “aspirational” categorization may reflect the fact that loupes must be fitted to an individual surgeon, whereas a microscope is accessible to any operating surgeon who can adjust the magnification. Operating microscopes may also be shared with other surgical services that require an operating microscope. Otoendoscopes were also deemed aspirational at the primary level (Table ), despite evidence demonstrating ergonomic benefits and outcomes similar to those of traditional microscopic ear surgery. This is perhaps due to their relatively recent arrival to the otology armamentarium and the steep learning curve faced by those trained only with operative microscopes before they can reliably benefit from this equipment. It should be noted that at the tertiary level, only computer-assisted navigation for skull base surgery was deemed aspirational (Table ). This system, which provides real-time computed tomography-based guidance in surgery, may have been considered aspirational due to its prohibitive cost, the lack of definitive evidence supporting improved outcomes, and the need for trained personnel for its use. This survey included perspectives of OHNS providers from both HICs and LMICs to describe the need for OHNS equipment and services across economic strata. There are broad uses for this set of essential equipment and services. First, this list might be deployed to measure resource availability, expanding the potential for current surgical capacity assessments to include OHNS care. Accurate capacity assessments are critical for internal appraisals of health systems and broader goals in academic global surgery. Second, this list can be used to guide investment in OHNS equipment by ministries of health, health systems, and facilities. OHNS conditions have been underemphasized in national surgical plans; however, the list of essential equipment and services may inform policy development to improve OHNS care. Third, this list may be used to advocate for the charitable provision of essential equipment in countries that lack access to equipment needed for high-quality OHNS care. Similar lists have been used to leverage HIC academic centers, medical equipment companies, and nongovernmental entities to donate “essential equipment kits” to resource-limited clinical centers. Together, these lists can be used to optimize resource allocation and support a higher standard of OHNS care for patients around the world.
The lack of equipment and ancillary support services continues to be a significant barrier to OHNS care in health systems around the world. Surgical providers have developed essential resource checklists to fulfill the need for infrastructure capacity assessment and targeted resource investment. This expert-driven list of essential OHNS equipment and services functions as an initial framework to be adapted for internal quality assessment, implementation research, health policy development, and economic priority-setting. Ultimately, we hope that these lists of essential equipment and services for care delivery will contribute to improved health outcomes globally and shape benchmarks of quality for OHNS care delivery.
The authors would like to express their gratitude to the fellow members of the Global Otolaryngology-Head and Neck Surgery Initiative for contributing to this internal survey and providing valuable feedback throughout the project development. Special thanks to Dr Mahmood Bhutta and Dr Johannes J. Fagan for their guidance during the survey dissemination, and to Dr Estephania Candelo, Keshav Shah, and Sarah Nuss for their valuable feedback during manuscript preparation.
Financial support and sponsorship
None.
Conflicts of interest
There are no conflicts of interest.
|
Psychophysiological and behavioral responses to descriptive labels in modern art museums
|
1e6bb838-5d78-45a0-8b05-c635887c3f50
|
10155981
|
Physiology[mh]
|
Over the last few years, museums have paid increasing attention to the quality of the visitor experience. Understanding the behavior of the public, their needs, expectations, and learning processes, is now a prerequisite for the development of any project addressing the enhancement and communication of heritage. In this context, the pandemic crisis has made even more evident the need to pursue research and experimentation initiatives aimed at identifying tools and conditions useful for improving the quality of the cultural and aesthetic experience. The beneficial and soothing effect of contact with artworks has been recognized by the World Health Organisation (WHO) and recently reaffirmed by the Organisation for Economic Co-operation and Development, as an important factor in the prevention of diseases and in increasing the state of well-being of the population. For these reasons, museums should focus their attention not only on “what” is exhibited but also on “how” works are exhibited and explained, and should try to adopt policies for reaching the large public of non-expert visitors. In this framework, basing strategic choices only on qualitative data rather than scientific evidence may not ensure reliable results. In the last few years, studies have focused specifically on the quality of the visitor experience in terms of psychological and cognitive satisfaction. Most studies on empirical aesthetics have been conducted in laboratories, assessing the experience with questionnaires. For example, Nadal and colleagues (2010) explored the influence of complexity, degree of abstraction, and artistry on the beauty appreciation of artistic stimuli using multiple subjective rating scales. Other studies also measured psychophysiological parameters in response to pieces of art, such as skin conductance, heart rate, eye movements, and pupillary response. These parameters are known to reflect emotional and cognitive processes, and they can be considered measures of individual reactions to artworks. For instance, skin conductance is a sensitive marker of individually meaningful events related to emotion, novelty, or attention; it can therefore be considered a “particularly pertinent window on the mind, when subjectively reported experience is not possible”. In laboratory settings, it has also been found that gazing behavior and pupillary responses reflect the internal state of the observer in terms of attention, pleasure, understanding, familiarity, imagination, cognitive effort, and subjective interpretation of complex visual stimuli. Some laboratory studies, focused on the effect of artworks' titles and labels on the aesthetic experience, found that elaborative titles congruent with the content of the paintings, as well as descriptive information, facilitate the comprehension of the artworks and increase aesthetic appreciation. Although the importance of these studies is widely recognized, recent research on art perception showed that, when moving from the lab to the museum, looking at art becomes far more engaging and satisfying. For example, original artworks in museums were liked more, viewed longer, and found more arousing compared to their digital reproductions in the laboratory. Also, according to Mastandrea and colleagues (2009), one aspect that characterizes visitor experience and expectations for museums of ancient and modern art is seeing the work in person.
Therefore, research should be conducted in the real context where art is exhibited, because the originality of the artworks, together with the exhibition display, contributes to the complexity of the aesthetic experience. For these reasons, some studies have more recently been conducted inside museums, mainly by observing visitors' behavior in free-choice setting conditions and administering questionnaires after the visit, but also by recording psychophysiological parameters thanks to advanced portable psychophysiological devices. Recent studies have also analyzed visitors' pathways and experiences in relation to the arrangement of the exhibition. For example, Reitstätter and colleagues (2020) investigated how the rearrangement of a museum influences the way people see and experience art, combining mobile eye tracking, subjective mapping, and a questionnaire. In particular, regarding the introduction of interpretive labels in the museum setting, they asked how visitors combine looking at art and reading labels, finding that the introduction of new labels increases artworks' viewing time and deepens visitors' engagement with the artworks, as assessed by post-visit exhibition verbal reflections. A more recent study investigated how the presence and consistency of titles influence the visual exploration of artworks, finding that consistent titles produce longer saccade durations and amplitudes than untitled artworks. Although the evidence suggests that educational tools in museums may be crucial to improving understanding and appreciation and to promoting individual well-being, their role has been challenged, and some museums have chosen to reduce or even eliminate explanations and labels in an attempt to make the experience more emotional and less culture-driven. Scientifically evaluating the impact of labels on the perception and understanding of artworks can thus contribute to enhancing the engagement of museums in developing the quality of the visitor experience and the efficacy of their educational offer. This is particularly relevant for modern/contemporary art museums and for visitors with little art training. Non-expert people usually prefer figurative paintings to abstract ones, since the content of abstract works is very often ambiguous and indefinite, whereas in figurative art the objects represented are clearly recognizable. Indeed, appreciation is correlated with the understanding of artworks, and incomplete comprehension may lead to visitors' disappointment and potentially discourage further museum visits. The exploratory studies described above have delivered remarkable results and present the advantage of large sample sizes due to working with regular museum visitors. However, they do not allow measuring, with accuracy and reproducibility, the very specific cognitive and emotional processes that occur in the observer while looking at artworks as a function of specific variables, such as the labels provided by museums. Also, none of the studies investigating the impact of different labels have recorded multiple physiological and behavioral parameters in the context of a real art exhibition. Therefore, here we aim to conduct a comprehensive study to test whether descriptive labels improve the aesthetic experience, by combining multiple objective and subjective measurements in a structured experimental protocol in the very context of a modern art museum.
To this end, we specifically tested the impact of essential and more descriptive written labels on the fruition of XX–XXI century paintings, whose content the lay public finds difficult and perplexing to understand and appreciate. We measured psychophysiological (skin conductance, heart rate, pupillary response, eye movements) and behavioral (viewing time, questionnaires) parameters in a group of art-naïve participants while they looked at the artworks with different types of labels. Participants assigned to the experimental condition experienced the artworks with essential labels during a first visit and with descriptive labels during a second visit (intra-subject design). To verify that the effects can actually be attributed to descriptive labels and not to the double exposure to paintings and essential labels, which could lead to familiarity effects, we introduced a control condition in which essential labels were shown to an additional sample of participants during both sessions. We hypothesize that descriptive labels can influence both aesthetic emotional reactions and cognitive judgments. Indeed, we expect increased skin conductance, heart rate, and pupillary dilation, due to changes in physiological arousal and emotional response. Furthermore, we expect that descriptive labels yield a more detailed visual inspection and prolonged viewing of paintings, leading to a better understanding of the artworks revealed by higher questionnaire scores. The outcome of this study could be of interest to museum operators, who can receive useful insight to offer more educational, descriptive, and interesting visits to a wider public.
Participants
Thirty healthy volunteers participated in the present study (aged 21–30 years, M = 23.60, SD = 0.44) and were randomly assigned to the experimental (twenty observers) or the control condition (ten observers). Prior to the experiment, we collected information about participants' personal data, art-historical background, and art expertise. All selected participants had normal or corrected-to-normal visual acuity, did not take any type of medication, did not present any brain damage, and were free of cognitive disorders. All participants were university students (not art students) naive to the purpose of the experiment, with a high-school-level art history background. None of them were painters. On average, they had visited museums or art exhibitions only 1 or 2 times in the last year, and they did not read art-related blogs, magazines, or books. To measure participants' artistic preferences for different art types, items (e.g., “how much do you like abstract art?”) were rated on a 5-point Likert scale. On average, they liked “figurative” art significantly more than “abstract” art (mean score = 3.5 ± 0.2 vs. 2.7 ± 0.1; t(29) = 1.8, p < 0.05). Participants were mostly unfamiliar with the paintings and their authors: they only knew the author Miró (14 out of 30), and only two of them were familiar with the painting used. All participants were COVID-free.
Ethics
Experimental procedures were approved by the local ethics committee (Comitato Etico Pediatrico Regionale–Azienda Ospedaliero-Universitaria Meyer–Firenze FI) and are in line with the Declaration of Helsinki. Written informed consent was obtained from each participant prior to their inclusion in the study.
Setup
Pupil and gaze data were recorded by means of a wearable eye-tracking headset (Pupil Core from Pupil Labs, Berlin, Germany), composed of two eye cameras (200 Hz) and a world camera (60 Hz). The device was USB-connected to a MacBook Pro running dedicated software (Pupil Capture, version 3.5.7) that enabled real-time data capture, camera recording, and calibration routines for natural conditions. A wearable wireless device equipped with high-quality data sensors (E4 wristband from Empatica Inc, Boston, USA) was used to acquire electrodermal activity (EDA, 4 Hz) and heart rate (HR, computed in spans of 10 seconds) measures. The internal memory of the E4 allowed us to record data continuously during the daily session (about 30 minutes per participant). The E4 device also allowed participants to press a central button during the session to mark the times of our events of interest (“tags”).
Stimuli
The present study was conducted in the “Roberto Casamonti Collection” ( https://collezionerobertocasamonti.com ), a modern and contemporary art (XX–XXI century) private museum hosted in Palazzo Bartolini Salimbeni in Florence. During each session, participants were required to follow the visit path indicated by the experimenter and stop in front of eight selected paintings. The presentation order was the same for all participants, following the position of the artworks in the exhibition. Experimental paintings were selected before data collection, excluding those representing human figures, those totally black or white, and those too small or too big to be framed by the Pupil Core world camera. For each selected painting, we set an adequate distance at which observers had to stop for observation, such that each one subtended a visual angle of about 21°x15°. Each painting's physical luminance was measured at five different points of the canvas (top-left, top-right, lower-left, lower-right, and center), which were averaged into a single value. Five paintings fell in the range of 11–31 cd/m² (16 cd/m² on average), two were darker (5 and 10 cd/m², 7.5 cd/m² on average), and one was lighter (116 cd/m²). We then created three luminance-control stimuli (30x42 cm uniform-colour canvases) for the three different levels of luminance, in order to measure the baseline individual pupil diameter at those luminances. To see all selected paintings, see S1 Table.
Conditions
Participants were divided into two groups: twenty of them participated in the experimental condition and ten in the control condition. Both conditions consisted of two sessions at the museum on two different days, at least one month apart (on average, the second session was carried out five to six weeks after the first). In the first session, all participants were presented with essential labels (i.e., author, title, year, and technique) before seeing the paintings. In the second session, experimental participants were provided with descriptive labels (i.e., author, title, year, technique, and a description of the painting's content and technique), whereas control participants were shown the same essential labels as in the first session. See S1 Table for all essential and descriptive labels.
Procedure
At the beginning of each session, participants put on the instruments and familiarized themselves with them in a dedicated room. Before starting the visit, they were positioned in front of the three luminance-control stimuli and asked to look at each for ten seconds.
They were then instructed to follow the experimenter from one painting to the next and to press the timestamp button on the Empatica wristband (“tag”) every time they started and stopped reading a label and looking at the painting. A two-second red light was displayed on the wristband after each button press; the experimenter could therefore check that participants correctly pressed the tag when needed (and promptly reminded them to press it in case of occasional forgetfulness). After the pre-session measurements, observers reached the first painting indicated by the experimenter and stood in front of it at the preset distance. Once the eye tracker was calibrated (we used an 8-point natural-features calibration routine), participants could read the label. The label, written on a sheet of paper, was shown by one experimenter standing in front of the participant. Then participants looked at the painting for as long as they wanted, pressing a “tag” when they started and stopped reading and observing. After they finished observing the painting, the experimenter asked them some questions about the artwork and reported the answers on a notepad. The questionnaire required the participants to score the following items on a 5-point Likert scale: complexity, comprehensibility, title informativeness, positive emotions, negative emotions, appreciation, interest, and curiosity for other works of the same author. Participants were also asked to report whether the paintings and the authors were familiar or unfamiliar. Then participants continued the visit to the next selected painting. For a schematic representation of the experimental procedure see .
Data processing and statistical analysis
Physiological parameters were recorded from the start to the end of each museum session, so that, for each participant, we obtained a continuous recording of about 25–30 minutes per session. Raw data from the wristband and the eye tracker were extracted in .csv format and synchronized through an ad-hoc procedure in Matlab (R2020b version; Natick, Massachusetts: The MathWorks Inc.). The timestamps (“tags”) were converted to real times and used to delimit our events of interest. The participants' artistic preferences for different art types, rated on a 5-point Likert scale, were calculated, and paired-sample t-tests across subjects were performed to assess significant differences between art types. The reading time of each label was calculated as the difference in seconds between the two tags indicating the start and the end of the observer's reading. The viewing time of each painting was calculated as the difference in seconds between the two tags indicating the start and the end of the observer's visualization; thus, the artworks' viewing time does not include the time spent reading the labels. For each participant, the viewing times of all the paintings were averaged together; then, the times of all participants were averaged. To compare the average viewing time between the essential and the descriptive label sessions, and between the two control sessions with essential labels, a two-way ANOVA was run with the within-subjects factor session (two levels: first vs. second session) and the between-subjects factor condition (two levels: experimental vs. control condition). P-values obtained from post-hoc analyses were adjusted using the Bonferroni correction. Effect sizes of the differences were estimated by eta-squared statistics (η²) with 95% confidence intervals.
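As an illustration of this step, the sketch below reconstructs the tag-based timing computation and the session x condition analysis in Python. The original pipeline was an ad-hoc Matlab procedure, so everything here (the `durations_from_tags` helper, the column names, the `pingouin.mixed_anova` call, the toy data) is an assumption made for illustration, not the authors' code.

```python
import pandas as pd
import pingouin as pg  # assumed available; provides mixed_anova

def durations_from_tags(tag_times):
    """Turn an even-length list of tag timestamps (seconds) into durations:
    tags mark the start and end of each reading/viewing episode,
    so consecutive pairs delimit one episode."""
    starts, ends = tag_times[0::2], tag_times[1::2]
    return [end - start for start, end in zip(starts, ends)]

# Toy example: one participant's tags around a single painting
# (label reading, then painting viewing).
reading_time, viewing_time = durations_from_tags([12.0, 47.5, 50.1, 141.3])
print(f"reading: {reading_time:.1f} s, viewing: {viewing_time:.1f} s")

# Hypothetical long-format table: one mean viewing time per participant
# and session, plus the group (condition) assignment.
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "session": ["first", "second"] * 4,
    "condition": ["experimental"] * 4 + ["control"] * 4,
    "viewing_time": [38.2, 61.5, 41.0, 66.3, 40.1, 33.7, 44.9, 36.2],
})

# Two-way mixed ANOVA: within-subjects factor `session`,
# between-subjects factor `condition`.
aov = pg.mixed_anova(data=df, dv="viewing_time", within="session",
                     subject="participant", between="condition")
print(aov[["Source", "F", "p-unc", "np2"]])
```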
The average viewing time of each participant was also correlated (Pearson linear-correlation coefficient) with the information collected prior to the experiment about their art expertise and artistic preferences. For each questionnaire item administered during the experiment after viewing each painting, the scores assigned by each participant to each painting were averaged together. To compare average scores between the essential and the descriptive label conditions, and between the two control sessions with essential labels, paired-sample Wilcoxon signed-rank tests across paintings were performed. The effect size of differences between conditions was estimated by the rank-biserial correlation (r_rb) with 95% confidence intervals. Also, individual scores for each painting were related to the corresponding EDA, HR, and pupil responses to calculate the Pearson linear-correlation coefficient. For measuring the changes in EDA and HR responses induced by the painting, we normalized each trace, taking as baseline the average EDA/HR value in the last three seconds before looking at the artwork. Pupil diameter was converted from pixels to millimeters by measuring the eye-tracker recording of a 4 mm artificial pupil positioned at the location of the observer's eyes. For measuring pupil-size variations induced by paintings, each trace was normalized, taking as baseline the average pupil diameter in response to the luminance-matched control stimulus presented before each session. To produce plots as a function of time, for each painting, normalized traces were averaged at each recorded time across participants. Since viewing time changes across participants, only means including at least five participants were considered. This process led to average recordings where the initial values include all participants, whereas the last values include only participants with long viewing times. Finally, the average traces for each painting were averaged together. To perform statistical analysis, for each normalized trace of each participant for each painting, the average value and the root mean square error (RMSE) over the whole viewing time were calculated. The means and RMSE of all parameters were compared with two-way ANOVAs with the within-subjects factor session (two levels: first vs. second session) and the between-subjects factor condition (two levels: experimental vs. control condition). P-values obtained from post-hoc analyses were adjusted using the Bonferroni correction. Effect sizes of the differences were estimated by eta-squared statistics (η²) with 95% confidence intervals. Since the gaze is recorded through a head-centred camera, and is thus subject to head movements, we adopted a manual procedure to analyze the gaze pattern. We subdivided each painting into 25 equally sized areas, so that each area subtended a visual angle of about 4°x3° in each painting. All video recordings were extracted using the Pupil Player software, and each gaze position shown in the videos was manually converted to a position in one of the 25 areas. We then counted how many times each area had been watched by each participant. To compare the difference in fixations between the descriptive and essential label sessions, or between the two control sessions, for each of the 25 areas, a heat map for each painting was calculated as follows.
Since the number of fixations in the two sessions differs, the proportion of fixations in each area (with respect to the total number of fixations in that condition) was calculated for each session, and then their difference was computed. For representational purposes, the distribution of differences between the second and first session was binned into five density levels: one (the middle) corresponding to the median of the distribution (equal density of fixations), the others corresponding to quartiles of the distribution. To study the distribution of fixations as a function of eccentricity, three eccentricities were considered over the whole canvas area: central area 0°–7°, nearby periphery 7°–14°, and periphery 14°–21°. Then the number of fixations at each eccentricity was calculated for all observers. Finally, for each eccentricity, the difference between fixations in the descriptive-label and essential-label sessions, and between the two control sessions, was calculated. Comparisons of these values between different eccentricities were done with paired-sample two-tailed t-tests. The effect size of differences between conditions was estimated by Cohen's d statistics with 95% confidence intervals.
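The following sketch illustrates this fixation analysis in Python. It is again an assumption-laden reconstruction rather than the authors' Matlab code: the grid geometry, the toy counts, and the choice of quantile edges are illustrative, and since the paper's description of the five density levels (median plus quartiles) is ambiguous, quintile edges are used here as one plausible reading. The paired comparison uses `scipy.stats.ttest_rel`.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Hypothetical fixation counts per painting area: a 5x5 grid (25 areas),
# one matrix per session for a single painting, pooled over observers.
counts_first = rng.integers(1, 30, size=(5, 5))
counts_second = rng.integers(1, 40, size=(5, 5))

# Proportion of fixations per area in each session, then their difference.
prop_first = counts_first / counts_first.sum()
prop_second = counts_second / counts_second.sum()
diff = prop_second - prop_first

# Bin the differences into five density levels centred on the median;
# quintile edges give levels 0..4, with level 2 straddling the median.
edges = np.quantile(diff, [0.2, 0.4, 0.6, 0.8])
levels = np.digitize(diff, edges)
print("density levels:\n", levels)

# Eccentricity analysis: assign each of the 25 areas to one of three
# rings (centre 0-7 deg, near periphery 7-14 deg, periphery 14-21 deg)
# by its distance from the canvas centre; one area spans about 4x3 deg.
yy, xx = np.mgrid[0:5, 0:5]
ecc_deg = np.hypot((xx - 2) * 4.0, (yy - 2) * 3.0)  # rough area centres
ring = np.digitize(ecc_deg, [7.0, 14.0])  # 0=centre, 1=near, 2=far

def ring_counts(counts):
    """Total fixations falling in each of the three eccentricity rings."""
    return np.array([counts[ring == r].sum() for r in range(3)])

print("rings, second session:", ring_counts(counts_second))

# Per-observer between-session difference in ring fixations
# (toy data for eight observers), compared across rings.
per_obs_diff = rng.normal(loc=[2.0, 5.0, 1.0], scale=2.0, size=(8, 3))
t, p = ttest_rel(per_obs_diff[:, 1], per_obs_diff[:, 0])
print(f"near periphery vs centre: t = {t:.2f}, p = {p:.3f}")
```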
Considering the average viewing time, the ANOVA reveals no significant difference between sessions (F(1,24) = 1.04, p > 0.05, η² = 0.003). On the other hand, there is a significant effect of condition (F(1,28) = 5.2, p < 0.01, η² = 0.1) and of the interaction between factors (F(1,28) = 40.9, p < 0.001, η² = 0.1). Indeed, in the experimental condition, the average time spent viewing the paintings is significantly lower in the first than in the second session; thus, observers' viewing time is significantly longer after reading a descriptive label than after an essential label (post-hoc comparisons; t = -4.6, p < 0.001). On the contrary, in the control condition, the average viewing time is significantly longer in the first session than in the second session with essential labels (post-hoc comparisons; t = 4.5, p < 0.001). Viewing times in the first sessions of the experimental and control conditions are comparable, since both groups read essential labels during the first visit (post-hoc comparisons; t = -0.013, p > 0.05). Average viewing times for each painting in the experimental and control conditions are shown in S1 Fig. Average reading times of the labels of each painting in the experimental and control conditions are reported in S2 Table. A positive correlation emerges between participants' preference for abstract art (rated on a 5-point Likert scale before the experiment; see the Participants section) and the average viewing time of paintings during the first visit (Pearson linear correlation; r(28) = 0.56, p < 0.01). For the experimental condition, questionnaire scores reveal that descriptive labels, compared to essential labels, influence several dimensions with very strong effect sizes: the perceived complexity of the artwork decreases (paired-sample Wilcoxon signed-rank test; W(7) = 36.01, p < 0.05, r_rb = 1.00, 95% CI [1.00, 1.00]), the comprehensibility of the contents increases (W(7) = 0.01, p < 0.01, r_rb = 1.00, CI [-1.00, -1.00]), the title looks more informative (W(7) = 2.01, p < 0.05, r_rb = 0.89, CI [-0.97, -0.56]), positive emotions increase (W(7) = 0.01, p < 0.05, r_rb = 1.00), while negative emotions decrease (W(7) = 35.01, p < 0.05, r_rb = 0.94, CI [0.76, 0.99]). Aesthetic appreciation, interest, and curiosity for other artworks by the same authors do not change significantly between different labels. No significant questionnaire differences were found between sessions in the control condition. Regarding the average EDA response, the ANOVA reveals a significant difference between conditions (F(1,14) = 12.3, p < 0.01, η² = 0.2) and sessions (F(1,14) = 27.7, p < 0.001, η² = 0.3). Indeed, EDA in the second session increases during the first seconds of painting viewing more than with essential labels and remains higher during the whole viewing time (line graph), as it does in the control condition (line graph). The average EDA response is significantly higher for both the experimental (bar graph, left panel) and the control condition (bar graph, left panel).
No significant interaction emerges between factors (F(1,14) = 0.3, p > 0.05, η² = 0.004). For the RMSE, the ANOVA shows a significant effect of session (F(1,14) = 26.7, p < 0.001, η² = 0.3), no effect of condition (F(1,14) = 0.07, p > 0.05, η² = 0.002), and a significant interaction between the two factors (F(1,14) = 10.9, p < 0.01, η² = 0.1). In particular, post-hoc comparisons reveal a statistical difference between the first and the second session in the experimental condition (t = -6.1, p < 0.001; right panel). However, no statistical differences in the average RMSE are found in the control condition (right panel). Average EDA responses to each painting in the experimental and control conditions are shown in S2 Fig. Neither session (F(1,14) = 0.0, p > 0.05, η² = 0.0), nor condition (F(1,14) = 0.07, p > 0.05, η² = 0.002), nor their interaction (F(1,14) = 1.3, p > 0.05, η² = 0.05) affects the heart rate response. Considering pupillary responses, the ANOVA reveals no statistical difference between conditions (F(1,14) = 0.003, p > 0.05, η² = 0.0). On the contrary, there is a significant effect of session (F(1,14) = 5.9, p < 0.05, η² = 0.02) and of the interaction between factors (F(1,14) = 8.4, p < 0.05, η² = 0.02). Indeed, in the experimental condition, pupillary responses differ between the two sessions: the pupil is always more dilated with descriptive than with essential labels (line graph). The average pupil variation is positive and statistically higher with descriptive labels than with essential labels (post-hoc comparisons; t = -6.1, p < 0.001; bar graph), but no differences are found in the control condition. Average pupillary responses to each painting in the experimental and control conditions are shown in S3 Fig. There are no correlations (Pearson linear correlation) between individual psychophysiological responses to paintings and the corresponding questionnaire scores (all p > 0.05). Also, the responses are not affected by familiarity with the paintings (i.e., the responses to the most familiar painting, Femme, Miró, 1977–1978, are the same as those found for all the other unknown stimuli). The analysis of eye movements during painting viewing highlights some differences across label conditions. First, gaze patterns turn out to be related to the painting's description. For example, eye movements in the Miró painting (Femme; 1977–1978) are directed toward the elements depicting feminine body parts, as described in the descriptive label of the experimental condition (see the heatmap). Instead, in the control condition without a description, the eyes are less directed towards the paintings' salient elements (see the heat map). Since participants in the experimental condition spent more time looking at the paintings with descriptive labels, the number of fixations in this condition is higher (275±30 vs 220±37, on average). On average, after reading the description, participants tend to fixate more in the closer periphery (7°–14°) than at the painting's centre (0°–7°; paired-sample t-test; t(7) = -4.05, p < 0.01, d = -1.43, 95% CI [-2.42, -0.41]; see the bar graph). On the other hand, in the control condition, the number of fixations is lower in the second session, as expected from the shorter time spent observing the painting.
Observers mainly fixate the centre of the paintings: the number of fixations in the near (7–14°) and far periphery (14–21°) is much lower than in the centre (0–7°) in both sessions (t(7) = 5.78, p < 0.001, d = 2.05, CI [0.77, 3.28] and t(7) = 5.84, p < 0.001, d = 2.07, CI [0.78, 3.31], respectively; see the bar graph). In the present study, we compared the impact of essential and descriptive labels on the cognitive and emotional experience of naïve visitors, through multiple objective and subjective measurements, focusing on controversial modern paintings. Indeed, socio-cultural biases and stereotypes tend to twist people's answers, as they are often worried about being judged for their lack of expertise in art history; this process becomes particularly relevant in the context of modern art museums, where visitors' sense of self-efficacy and ease tends to decrease and, at the same time, the need for educational support becomes unavoidable for the majority of the public. People can feel a sense of frustration in front of artists and movements which they merely know and, most of all, hardly understand. Our decision to work with a collection of modern art derives from these considerations and from the widely diffused commonplace that ancient art is “easier” than modern art. Coherently with these premises, we selected a group of people who lacked any particular artistic experience or art history background: a characteristic which made our sample quite homogeneous from the art-expertise point of view. Our results show that objective and subjective responses while inspecting modern artworks change depending on the information received before experiencing the paintings. A detailed description encourages participants to spend more time observing the artworks, following the information provided. It is difficult for non-experts to catch the meaning of modern artworks. For example, the Miró painting may appear as a series of wide black brushstrokes with small coloured spots. But when participants come to know that the spots outline the shape of a female body, their eyes perform a greater number of fixations on the parts depicting the figure. This suggests that the explanation provides a key to cognitive and emotional comprehension, confirmed by the subjective perception of increased positive feelings and comprehension. The increase and variability of electrodermal activity and pupil dilation suggest an increase in physiological arousal. This could be due to a deeper understanding of the artworks and a higher cognitive load, as well as to an increased emotional load. Subjective judgments after viewing the paintings cannot further shed light on the relative contributions of the two dimensions. The increased EDA in the control condition suggests that this psychophysiological response might be modulated by familiarity with the stimuli, which has been linked to faster processing and higher preference for familiar stimuli compared to novel ones. However, the phasic EDA activity changes when subjects have read the painting's description, perhaps because they focus on different elements, while it does not change in the control condition. Our results also show that the participants who appreciate abstract art more are the ones spending more time in front of the paintings. However, aesthetic appreciation for the specific paintings presented during the experiment does not change upon explanation.
This suggests that, although labels can facilitate comprehension, this is not enough to increase appreciation. We can speculate that specific art training is needed to appreciate modern artworks. Indeed, expertise in art facilitates the so-called aesthetic fluency, a process that could lead people to better grasp the meaning of an artwork and foster its aesthetic appreciation. This result may also be explained by the fact that modern/contemporary art does not have as its main objective to be “beautiful”, but rather to be interesting, activating, provocative, ambiguous, and meaningful. Overall, our findings show that visitors do receive important benefits from reading detailed information about artworks. On more general grounds, art descriptions lead to changes in the aesthetic-judgment and aesthetic-emotion outputs described in Leder and colleagues' model of aesthetic experience. Since descriptive labels were used when the paintings were seen for the second time, label-based effects may be conditional on paintings with which participants are already familiar. On the other hand, participants who visited the museum twice receiving the same essential information do not show increased satisfaction. They spent less time observing the paintings and assigned the same questionnaire scores for understanding and appreciation of the artworks. Also, observers fixate and explore the artworks less, as expected. Except for a slight increase in electrodermal activity, the variability of EDA and pupil dilation do not change in the second session. Overall, these results suggest that familiarity with the stimuli, without additional information, does not improve the museum experience in terms of aesthetic judgments and emotional reactions in naïve visitors, but rather causes them to pay less attention to the artworks. Some of our findings can be compared to those of previous studies. The average viewing times found here with essential and descriptive labels are in line with those found in previous works using unfamiliar artworks. Viewing times are generally longer for well-known paintings (e.g., “The Kiss” by Klimt); however, here the only painting familiar to our participants (Femme, Miró, 1977–1978) received the same amount of time. Mastandrea and colleagues (2019) measured blood pressure and heart rate before and after museum visits, finding that visits to art museums decreased the level of systolic blood pressure but did not influence the heart rate. This is in line with our finding of no effect of labels on HR. It has been found that the display influences the way people experience art, causing different viewing times, levels of engagement, and patterns of fixations, as we found with different types of labels. In contrast, interest in specific artworks and art-style preferences proved to be robust and independent of presentation modes. This is consistent with our results on aesthetic appreciation, which does not increase with the introduction of descriptive labels. They also found that when labels are more complex (with more text), visitors' interpretations differ according to the information received, as we also observed. Tschacher and colleagues (2012) found that artworks' understanding was correlated with higher skin-conductance variability. We also found higher EDA variability with descriptive labels, even if we did not find any correlation with specific cognitive or emotional domains of the questionnaire.
Increased pupil dilation, which we found after presenting a description of the artwork, has also been reported in studies conducted outside the museum context, where it was associated with aspects of aesthetic emotions. Note, however, that none of this research has measured psychophysiological and behavioral responses during the museum visit as a function of descriptive material that could influence the aesthetic experience. We cannot rule out that our findings depend on the particular type of artworks involved. Further research might be undertaken to compare “ancient” art and “modern” art, exploring possible differences in visitors' reactions to visual languages which are perceived as more familiar or simply closer to a general “common taste”. Indeed, robust findings show that non-expert observers usually prefer figurative paintings to abstract or conceptual artworks, and that art appreciation correlates with educational level. Ancient art is more easily recognizable and can turn out to be less anxiety-triggering, especially for non-expert visitors; nevertheless, we should not underestimate the emotional effect (in terms of involvement and gratification) that the “ancient masters” can have on the general public. During real museum visits, visitors can go back to view some paintings they particularly like while ignoring others; they usually go back and forth between reading labels and viewing artworks; they may be in the company of other people; and they generally experience many artworks, facing the problem of museum fatigue (for a review, see ). In our study, we gave up these aspects of an optimal ecological condition in favor of reproducible and accurate measurements, with the aim of avoiding confounding variables. Also, we cannot exclude that the outcome might have been different with a sample of regular museum-goers as participants. In the future, it would be interesting to test art-expert participants; the knowledge they already possess should be enough to understand the meaning of the artworks and to have a satisfying experience without the need for informative materials. Expertise is known to lead to higher aesthetic appreciation of artworks and to differences in viewing strategies, gaze patterns, fixation distributions, and even electrophysiological correlates. Fixations should be more focused on the salient parts of artworks, because the meaning could be grasped even without a descriptive label. More complex is the question of aesthetic appreciation, which experts tend to underestimate in comparison with the complexity of meaning (while for naïve visitors it may be the leading value). We expect that a naïve group would receive more beneficial effects from the explanation of artworks than experts would, as the latter might be influenced by their personal evaluation of the quality and amount of information received. Overall, our work suggests that elaborating effective labels, based on scientific evidence rather than on qualitative observations, should be a primary goal for museums. Indeed, if museums aim to attract a wider public, they need to focus their attention on the didactic tools provided by panels and captions, with the hope of filling the gap generated by the lack of art knowledge. This is particularly relevant for modern art, which is less known and harder to understand and appreciate for non-art experts, who can otherwise perceive the museum visit as a frustrating experience due to their limited art education.
Aesthetic experience is a psychophysiological process that arises during the fruition of artworks and involves a variety of emotional and cognitive responses. Here we used multiple psychophysiological and behavioral tools to rigorously measure, in the very context of a modern art museum, the effects of explanatory texts and labels on the experience of modern art. Our findings show that people receive important benefits, in terms of cognitive and emotional involvement, from reading detailed descriptions of modern artworks. The outcome of our studies could be of interest to museum operators and may become instrumental for improving exhibitions, website information content, and advertising material, and for achieving optimal fruition and satisfaction, thus contributing to the well-being of naïve visitors as well as experts. On more general grounds, our results indicate that psychophysiological changes can be an effective probe into the processing and interpretation of artworks, making them useful tools for the study of the museum aesthetic experience. S1 Table. Paintings and their essential and descriptive labels. First column: web link to the paintings used as stimuli (in the same order as presented to the visitors at the Casamonti collection). Second column: essential label read by the participants in the first experimental session and by control participants in the two control sessions (regular style: Italian version; italics style: English version). Third column: descriptive label read by the participants in the second experimental session (regular style: Italian version; italics style: English version). (DOCX) S2 Table. Reading time of labels of each painting in the experimental and control condition. Data in the table are reading times in seconds averaged across participants. (DOCX) S1 Fig. Viewing time of each painting. (A) Experimental condition. (B) Control condition. The bars show the viewing time of each painting (from painting 1 to painting 8, in the same order as presented to the visitors at the Casamonti collection), averaged across participants. Errors are SE across participants. (TIF) S2 Fig. Electrodermal response to each painting. (A) Experimental condition. (B) Control condition. The bars show the EDA response to each painting (from painting 1 to painting 8, in the same order as presented to the visitors at the Casamonti collection), averaged across participants. Errors are SE across participants. (TIF) S3 Fig. Pupillary response to each painting. (A) Experimental condition. (B) Control condition. The lines show the pupil response over time to each painting (from painting 1 to painting 8, in the same order as presented to the visitors at the Casamonti collection), averaged across participants. Errors are SE across participants. (TIF)
|
Why is ophthalmology so brilliant?
|
750d52e5-e243-4f49-b257-b911c8ffb756
|
10156072
|
Ophthalmology[mh]
|
Ophthalmology is a field dedicated solely to treating disorders of the eye and visual pathway. The question arises: how can a specialty focused on an organ measuring just an inch and weighing only several grams be considered “ brilliant ”? The answer becomes clearer as we consider the significance of vision and the management of ophthalmic disease. Our dependence on vision makes any threat to our eyes disturbing and transformative. Consequently, the ability to restore vision is a gratifying privilege shared amongst ophthalmologists. Whilst limited in organ size, ophthalmology encompasses both medicine and surgery, has roughly nine distinctive (yet interlinked) sub-specialities and offers prospects within public health and research. This breadth of opportunity presents clinicians with a balanced profession that is stimulating and ever evolving. Furthermore, we must also consider what constitutes brilliance . From its French origins, brilliance refers to an object that shines brightly. Fittingly, ophthalmology is a field built upon illumination, lasers and imaging. However, to be brilliant also means to be special, skilled and clever. Therefore, we must consider brilliance holistically, through multiple lenses: firstly, the brilliance of the eye and the impact of pathology; secondly, the brilliance of the ophthalmologist; and finally, the brilliance of ophthalmic innovation. We must start by exploring sight and the impact of visual impairment. Across aeons of evolution, the eye appeared in a blink, several hundred million years ago, yet this profound instant transformed nature entirely. Its stunning intricacies became a fundamental argument for creationism, epitomised by William Paley’s famous watchmaker analogy, and even Charles Darwin conceded that it seemed “absurd” to consider the eye a product of evolution (although he expertly deconstructed this objection in On the Origin of Species ). Whilst initially a selective advantage in the prey–predator contest, eyes later held pervasive significance across history. In ancient Egypt, the Eye of Horus was an omnipotent symbol of well-being, healing, and protection. In the sacred tale, Horus offered his healed eye to his father, Osiris, to sustain him in the afterlife. In Hindu mythology, Shiva’s third eye, if opened, is considered apocalyptic, and in Genesis, after Adam and Eve eat the forbidden fruit, “the eyes of both of them were opened”. Today, despite reduced symbolism, vision remains integral to social functioning. A recent study found that 88% of participants judged sight to be the most valuable sense, and yet its true value may only be appreciated during disease. Those with major visual impairments have their quality of life reduced and their independence endangered. As William Shakespeare’s Romeo perfectly described: “He that is strucken blind cannot forget. The precious treasure of his eyesight lost.” Patients become unable to enjoy quotidian details whilst becoming reliant on societal adjustments, with a recent meta-analysis finding a depression prevalence of roughly 25% among ophthalmology patients . Fortunately, however, many causes of visual impairment can be treated, and the return of vision can be just as transformative as its loss. This capability and privilege fall to the field of ophthalmology. Cataract surgery typifies this notion: an operation lasting minutes can eliminate visual obscuration and re-establish 6/6 vision. The direct gratification felt in restoring vision is difficult to rival in other walks of life.
Ophthalmology is brilliant because it dedicates itself to restoring vision, an ability of intrinsic value. Alongside vision and pathology, the perspective of the doctor is equally important. Is ophthalmology brilliant because its doctors are? To be brilliant is to be skilled and intelligent. Whilst some may consider this a prerequisite to being a doctor, if brilliance is extrapolated to mean reaching one’s potential, ophthalmology provides vast and varied opportunities to accomplish it. Retinopathy of prematurity affects those taking their first breaths, whilst age-related macular degeneration manifests in our last decades. Acute angle-closure glaucoma is a medical emergency, whilst diabetic retinopathy runs an insidious course. Ptosis repair requires the precision of a plastic surgeon, whilst Charles Bonnet syndrome necessitates the insight of a neurologist. Ophthalmology allows each clinician to discover their niche, maximise their talent and achieve brilliance. Furthermore, the ophthalmic burden is substantial. In 2020, ophthalmology accounted for 7.9 million NHS appointments, comprising 40% of outpatient service and major day-case operating . This colossal demand is even starker in resource-scarce regions, such as Sub-Saharan Africa, which has an estimated 2.5 ophthalmologists per million people . Such demand creates opportunities to make a profound impact through initiatives such as Unite for Sight , a charity that has cared for nearly three million patients living in poverty. Moreover, this pressure has made ophthalmology a pioneer in areas such as telemedicine. We have seen the development of apps, such as Alleye , that permit home monitoring of macular diseases, and the success of consultation platforms, such as AttendAnywhere , which helped sustain services through the Covid-19 pandemic . Ophthalmology is a richly diverse profession that allows clinicians to become well-rounded yet specialist and ultimately achieve brilliance . The eye, unique in its function, is appropriately unique in its structure. This novelty has permitted multiple innovations, exemplified by the fields of gene therapy and artificial intelligence (AI). In 2017, the first gene therapy was approved for Leber congenital amaurosis, a retinal dystrophy . This success is partly attributed to the immunological privilege of the eye: the eye limits the reach of the immune system in order to prioritise vision, so transplanted cells are able to avoid rejection and cell function can be rescued. Furthermore, the transparent window created by the pupil, lens, aqueous and vitreous permits light to reach the neurosensory retina. This also allows direct visualisation of internal tissue, and AI, specifically machine learning, has capitalised on this. A prerequisite to building machine-learning algorithms is a substantial dataset, which materialised as 14,884 OCT scans during the Google DeepMind and Moorfields Eye Hospital collaboration. These provided high-volume, non-invasive, and high-resolution data for the software to learn from. The resultant AI system enabled instantaneous detection and prediction that rivalled expert performance in identifying retinal pathology . The hope is that such clinical support systems can help ease the significant burden described previously. These brilliant innovations offer promise for efficient management across medicine.
Ophthalmology, through the eyes of a foundation doctor, appears to truly be a brilliant speciality that offers an abundance of challenges, opportunities and satisfaction for both its service-users and service-providers. The importance of sight cannot be overstated, and this provides doctors with an intensely gratifying opportunity to alleviate disease and improve quality of life. The immense and intricate disease spectrum renders ophthalmology a field with ample opportunity to develop specialist aptitude.
|
Validation of MICA
|
f197c95d-3c50-442c-b812-a84e37d46919
|
10156412
|
Microbiology[mh]
|
Legionnaires’ disease is a potentially fatal lung infection caused by pathogenic bacteria that develop in hot water systems and cooling tower systems. According to the Centers for Disease Control and Prevention (CDC) ( ), about one out of every ten people who get sick with Legionnaires’ disease will die due to complications from their illness ( ); of those who get Legionnaires’ disease during a stay in a healthcare facility, about one out of every four will die ( ). The infection is contracted via inhalation of small aerosols of contaminated water. The vast majority of cases are due to Legionella pneumophila ( , ), mostly from serogroup 1 but also from other serogroups (8 to 15% of L. pneumophila infections are due to serogroups other than serogroup 1; ). Importantly, Legionnaires’ disease is commonly diagnosed by a urinary antigen test specific to L. pneumophila serogroup 1, leading to underdiagnosis of infections by other serogroups ( , ). The number of infections is increasing every year due to climate change and increased population density in urban areas; it has, for example, increased by 220% in Europe and by 550% in the United States since 2005 ( , ). Moreover, the number of cases is potentially greatly underestimated ( ). Regular monitoring of the presence of L. pneumophila in hot water and cooling tower systems is the major strategy used to limit the occurrence of outbreaks. It is required (or at least highly recommended) by most health risk monitoring organizations worldwide ( ). However, the standard culture-based methods, such as ISO 11731:2017, require up to 10 days to determine the presence of this bacterium in a water system ( ). This delay considerably limits the frequency of testing. It also means that the effectiveness of any treatment can only be known 10 days after the treatment, leading to shutdowns of water systems for longer than needed. Additionally, this standard method is time-consuming and needs expertly trained technicians for the identification of Legionella , leading to interpretation differences between technicians depending on their experience ( ). Another drawback of ISO 11731:2017 is the number of different pretreatments and culture plates it requires. As the Legionella culture plates are not highly selective, pretreatment of the sample with acid and heat shock is often necessary to reduce the number of interfering organisms. Plating the different combinations of these pretreatments at different dilutions of the sample requires several plates per sample, which adds to the time and cost of the analysis. Methods allowing fast and reliable detection and quantification of L. pneumophila would greatly improve risk management and have a major impact on the incidence of legionellosis ( ).
MICA Legionella is a detection method that identifies culturable microcolonies of L. pneumophila after 48 h of growth instead of the 10 days required by standard procedures such as ISO 11731:2017. The original water sample is concentrated by membrane filtration, as in the standard procedure. The membrane is then laid over a drop of culture supplement on a standard selective Legionella agar plate [Glycine, Vancomycin, Polymyxin B, Cycloheximide (GVPC)]. This culture supplement contains Diamidex’s patented molecule, a precursor of legionaminic acid coupled with the bio-orthogonal azido group (pLeg-N3; ). Legionaminic acid is a specific component of the O-antigen of L. pneumophila , so the molecule is specifically internalized by growing L. pneumophila and integrated into the O-antigen on the cell surface. After 48 h of bacterial growth, the membrane is transferred onto a drop of tagging solution containing a fluorescent molecule that binds by click chemistry specifically onto the bio-orthogonal azido group, i.e., onto the labelled L. pneumophila . This specific fluorescent tagging allows CFU to be detected automatically at the microcolony stage by solid-phase cytometry using the MICA microcolony counter, which performs a high-resolution scan of the membrane. The MICA Legionella AI (artificial intelligence) analyzer then uses multiple parameters to specifically identify L. pneumophila microcolonies (down to 2 CFU per test portion) and reports the result as a concentration of L. pneumophila in the original sample. For better reproducibility, the MICA software supplied with the microcolony counter provides a step-by-step protocol guide, including control of incubation times and reagent traceability. The guidance and analysis provided by the software allow the MICA Legionella method to be used by anyone. This validation study was conducted under the AOAC Research Institute Performance Tested Methods SM (PTM) program and follows the AOAC INTERNATIONAL Methods Committee Guidelines for Validation of Microbiological Methods for Food and Environmental Surfaces ( ). Method developer studies were conducted in the laboratories of Diamidex and included the inclusivity/exclusivity study, matrix studies for all claimed matrixes, product consistency and stability studies, and robustness testing. The independent laboratory study was conducted by Q Laboratories (a MicroVal expert laboratory, accredited to ISO 17025) and included a matrix study for cooling tower water.
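For intuition, the reported concentration is essentially the membrane microcolony count scaled by the filtered volume. A minimal sketch of that arithmetic in Python (the function name is ours, and the actual MICA software may apply further corrections):

```python
def cfu_per_litre(microcolonies, filtered_volume_ml=20.0):
    """Scale a membrane microcolony count to a concentration in the
    original water sample (CFU/L)."""
    return microcolonies / (filtered_volume_ml / 1000.0)

# The stated detection floor of 2 CFU per 20 mL test portion corresponds to:
print(cfu_per_litre(2))  # 100.0 CFU/L
```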
Target organisms .—L. pneumophila, all serogroups.
Matrixes .—Hot domestic/tap water and cooling tower water.
Summary of validated performance claims .—The MICA Legionella kit for L. pneumophila is a simple and fast kit that detects and counts only L. pneumophila bacteria capable of being cultivated (as in regulatory procedure NF T90-431 or ISO 11731:2017) in samples of selected environmental and domestic waters. The sensitivity (inclusivity) of MICA Legionella was found to be 94% and the specificity (exclusivity) 97%. Performance of the kit is equivalent to ISO 11731:2017 for the enumeration of L. pneumophila in hot domestic water and can be better than ISO 11731:2017 on cooling tower water.
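As a quick arithmetic check of the claimed rates, using the strain counts reported under Results below (33 of 35 target strains detected, 28 of 29 non-target organisms negative):

```python
# Inclusivity: 33 of 35 L. pneumophila strains correctly detected
print(f"inclusivity: {33 / 35:.1%}")  # 94.3%, reported as 94%
# Exclusivity: 28 of 29 non-target organisms correctly negative
print(f"exclusivity: {28 / 29:.1%}")  # 96.6%, reported as 97%
```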
Repeatability standard deviation (s_r) .—Standard deviation of replicates for each strain at each concentration of each matrix for each method.
Bias .—The difference between the candidate method mean result and the true value or reference method value: [mean_candidate − known spike] or [mean_candidate − mean_reference].
Selectivity .—Ability of the method to detect the analyte without interference from the matrix or other components with similar behavior.
Sensitivity .—Probability of the method giving a positive response when the sample truly contains the analyte.
Specificity .—Probability of the method giving a negative response when the sample is truly without analyte.
Repeatability .—Precision where independent test results are obtained with the same method on equivalent test items in the same laboratory by the same operator using the same equipment within a short interval of time.
RSD .—The ratio of the standard deviation to the mean, often reported as a percentage.
Confidence interval (CI) .—A confidence interval displays the probability that a parameter will fall between a pair of values around the mean. Confidence intervals are calculated at the 90 and 95% levels.
Statistical equivalence .—The acceptance criterion for statistical equivalence is that the 90% CI of the bias between the methods falls within (−0.5, 0.5).
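These definitions translate directly into code. A minimal Python sketch of the key quantities, assuming paired replicate results on the log scale; the normal-approximation interval is our simplification (a t-based interval would be more exact for five replicates):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def repeatability_sd(replicates):
    """s_r: standard deviation of replicate results from one method."""
    return stdev(replicates)

def rsd_percent(replicates):
    """Relative standard deviation, as a percentage of the mean."""
    return 100.0 * stdev(replicates) / mean(replicates)

def bias_ci(candidate, reference, level=0.90):
    """Bias (mean paired difference) between candidate and reference
    results, with a normal-approximation confidence interval."""
    diffs = [c - r for c, r in zip(candidate, reference)]
    b = mean(diffs)
    se = stdev(diffs) / sqrt(len(diffs))
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    return b, (b - z * se, b + z * se)

def is_equivalent(ci, limits=(-0.5, 0.5)):
    """Acceptance criterion: the CI of the bias lies within (-0.5, 0.5)."""
    low, high = ci
    return limits[0] < low and high < limits[1]
```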
MICA Legionella Test Kit Information
Kit name .—MICA Legionella Detection Kit.
Cat. No .—00 917 (for laboratories) or 00 916 (includes sterile distilled water, pipet tips, etc. for non-laboratory customers).
Ordering information .—DIAMIDEX, Grand Luminy Technopole, Zone Luminy Entreprise Biotech, Case 922, 163 Avenue de Luminy, 13288 Marseille Cedex 09, France, [email protected] , tel +33 (0)6 61 93 49 29.
MICA Legionella Test Kit Components ( )
Filtration membranes .—48 mm diameter, with a tab for orientation. PVDF (polyvinylidene fluoride), white, 0.45 µm pore size.
Reagent A .—Freeze-dried culture supplement containing Diamidex’s patented molecule for specific labelling of L. pneumophila. Storage between +4°C and +8°C.
Reagent B .—Freeze-dried tagging solution. Storage between +4°C and +8°C.
Reagent C .—pH 2 solution for acid treatment of the sample, as in ISO 11731:2017.
Reagent D .—Sterile water for washing away Reagent C.
Fiberglass pads .—Provide a proper surface for the tagging step.
Concentrated wash buffer .—11X concentrated buffer to wash away excess tagging solution. Storage between +4°C and +8°C.
Sterile distilled water vials, 10 mL .—For hydration of the membrane during the scan.
Labels .—For printing the sample QR codes.
Additional Supplies and Reagents for the MICA Legionella Test Kit
Sterile distilled water .—Available as part of the all-in-one version of the MICA Legionella detection kit, Cat. No. 00 916.
Agar plates .—GVPC agar plates, Thermo Scientific Oxoid, Cat. No. PO5074A for the internal studies; Hardy Diagnostics buffered charcoal yeast extract (BCYE) selective agar plates with GVPC, ref. W169, for the external study. Other agars can also be used, such as KANTO CHROMagar (ref. 717592-1), Bio-Rad (Cat. No. 3563717) or Liofilchem (Cat. No. 10128).
Disinfection solution .—Effective against Legionella and non-corrosive for the equipment, for example, hydrogen peroxide at 6% or ethanol at 70%. An adequate disinfectant can be supplied as part of the all-in-one version of the MICA Legionella detection kit.
Apparatus for MICA Legionella ( )
MICA microcolony counter with reading cassettes .—Diamidex, Cat. No. 00 877.
MICA Legionella software .—Diamidex, Cat. No. 01 037.
Microcolony counter accessory set .—Barcode scanner, label printer, USB hub and touch pen. Diamidex, Cat. No. 01 108.
MICA washing bench .—Diamidex, Cat. No. 00 755.
MICA tagging tray .—Diamidex, Cat. No. 00 721.
MICA Petri dish holders .—Diamidex, Cat. No. 01 002.
Filtration manifold with filtration units and pump .—Up to six filtering positions, filtration units holding at least 20 mL. Diamidex, Cat. No. 00 878 (or equivalent).
Incubator .—Suitable for cultures, capable of maintaining 37 ± 1°C. Diamidex, Cat. No. 00 903 (or equivalent).
Incubator .—Suitable for heat treatment, capable of maintaining 52 ± 1°C. Diamidex, Cat. No. 01 183 (or equivalent).
Incubator .—Suitable for tagging incubation, capable of maintaining 30 ± 1°C. Diamidex, Cat. No. 00 913 (or equivalent).
Precision pipets .—Capable of dispensing 500 and 700 µL. Diamidex, Cat. No. 00 904 and 00 905 (or equivalent).
Filtered micropipette tips .—For use with the precision pipets.
Tweezers .—Suitable for handling membranes. Diamidex, Cat. No. 00 821 (or equivalent).
Dispensers .—Capable of dispensing 5 and 10 mL volumes, resistant to low pH. Diamidex, Cat. No. 01 030 and 01 031 (or equivalent).
Refrigerator .—Capable of maintaining 2–8°C. Diamidex, Cat. No. 00 883 (or equivalent).
Reference Materials
Bacterial strains for this study were obtained from ATCC ( ), DSMZ ( ), the Pasteur Institute ( ), or were characterized by the CNR-L ( ). All strains used in this study are listed in and .
General Preparation for MICA Legionella
Apart from the apparatus setup, the MICA software guides the user through the entire protocol, from the preparation of the reagents to the final results. Step-by-step instructions ( , panel D) with videos are available to prevent mistakes and to provide traceability of the process. This also includes countdowns for all incubations, scanning of the QR codes or barcodes of all reagents and samples, as well as alerts when reagents are used up or out of date.
(1) Assemble the filtration manifold according to the manufacturer’s instructions.
(2) Assemble the MICA washing bench according to the manufacturer’s instructions.
(3) Set the temperature of the incubators at least 30 min ahead of time to allow for equilibration at the desired temperature (culture incubator at 37°C, heat treatment incubator at 52°C if necessary, tagging incubator at 30°C).
(4) Rehydrate Reagents A and B from the MICA Legionella test kit according to the manufacturer’s instructions.
Sample Preparation for a MICA Legionella Analysis
(1) Using the MICA software, enter the requested information for each water sample to be analyzed.
(2) Scan or enter the batch numbers requested by the MICA software for the following items: GVPC agar plates, vial A, vial B, membrane filters, and bottle C (pH 2).
(3) Print a label for each sample to be analyzed and attach it to the bottle. Print a second label for each sample and attach it to the corresponding GVPC agar plate.
(4) Filter 20 mL of the test sample using the MICA filtration membrane.
(5) Apply a pH 2 treatment: add 5 mL Reagent C (pH 2 solution) over the filtration membrane, incubate for 5 min at room temperature (countdown in the software), eliminate Reagent C by filtration, then add 10 mL sterile distilled water to wash away the pH 2 solution and eliminate it by filtration.
Analysis of the Sample with MICA Legionella
Labelling and culture step:
(1) Put a 500 µL drop of Reagent A onto a GVPC plate.
(2) Lay the filtration membrane (filtered bacteria facing up) over the drop of Reagent A.
(3) Arrange the GVPC plate inverted in the numbered Petri dish holder indicated by the software. For cooling tower water samples only: incubate the plate upside down at 52°C for 45 min.
(4) Incubate the plate upside down for 48 h. This step allows for the formation of microcolonies of L. pneumophila and their labelling by Diamidex’s patented molecule.
(5) Follow the prompts in the software to decontaminate the equipment.
Tagging step:
(1) After incubation is complete, remove the indicated numbered Petri dish holder from the incubator and scan the label on each plate.
(2) Lay a fiberglass pad in the tagging tray and soak it with 700 µL Reagent B.
(3) Lay the filtration membrane (microcolonies facing up) over the soaked fiberglass pad. Incubate for 15 min at 30°C, following the countdown in the software. This step tags the microcolonies with a fluorescent molecule, via a click-chemistry reaction that binds the fluorescent molecule to the Diamidex-patented molecule incorporated by the bacteria.
(4) During the 15 min incubation, prepare the washing bench: pour into the trough a vial (50 mL) of concentrated wash buffer and 500 mL sterile distilled water, then start the pump of the washing bench at 50 rpm (rotations per min).
(5) At the end of the 15 min incubation, transfer the filtration membrane onto the washing bench (microcolonies still facing up) and allow 15 min of washing to eliminate excess fluorescent molecules (countdown in the software).
(6) Lay three drops of sterile distilled water onto a reading cassette and transfer the filtration membrane onto the cassette, microcolonies facing up, taking care to avoid air bubbles under the membrane.
(7) Read the membrane on the cassette with the MICA microcolony counter.
(8) Transfer the membrane to its respective GVPC plate and dispose of it according to laboratory procedures for decontaminating biohazardous waste, or disinfect it with bleach before disposal.
(9) Follow the prompts in the software to decontaminate the equipment.
The result is displayed on screen in the MICA software as CFU/L. Confirmation is unnecessary.
Calculations, Interpretation, and Test Result Report of MICA Analyses
The AI analyzer integrated in the MICA software automatically identifies microcolonies of L. pneumophila on the membrane based on a multi-parametric analysis and directly gives a concentration of L. pneumophila in the water sample (in CFU/L). No human interpretation or calculation is needed, reducing inter-user variability and giving more reproducible results. The results are stored in the MICA software, where they can be accessed at any time or exported as a CSV file or as PDF analysis reports. Traceability sheets are also available for each analysis.
Enumeration of L. pneumophila following ISO 11731:2017
Briefly, for ISO 11731 analyses of hot domestic water, each test portion is split into three parts. One part, 0.2 mL, is plated without treatment on a GVPC plate. The other two parts, respectively 10 and 100 mL, are concentrated by filtration; the filtration membrane is then covered with 5 mL pH 2 solution and incubated for 5 min; after filtering out the pH 2 solution, the membrane is rinsed with 10 mL sterile distilled water; finally, the membrane is transferred onto a GVPC agar plate. For analyses of cooling tower water, each test portion is split into three parts. Two parts, respectively 0.02 and 0.2 mL, are plated directly by spreading onto GVPC agar plates. The other part, 50 mL, is concentrated by filtration and resuspended in sterile phosphate buffer. The concentrate obtained is then split into three parts. One part (0.1 mL) is plated untreated onto a GVPC plate, another part is incubated at 50°C for 30 min before plating 0.1 mL onto a GVPC plate, and the last part is diluted by half with pH 2 solution and incubated for 5 min at room temperature before plating onto a GVPC plate. For both sample types, all GVPC plates are then incubated at 37°C and read at 3 and 7 days. Suspected Legionella colonies are confirmed by a latex agglutination test (Oxoid). According to ISO 11731, for each sample test portion, the plate giving the highest density per liter is used for the final result. For the independent laboratory study (on cooling tower water only), the sample is divided into three portions: one portion is plated directly (0.1 mL) onto GVPC agar; the second portion is diluted 1:10 in pH 2 acid treatment solution prior to plating; and the third portion is heat treated at 50 ± 1°C for 30 ± 0.5 min prior to plating. Additionally, 50 mL of the bulk inoculated sample is filter-concentrated and the filter washed in 5 mL phosphate-buffered saline (pH 7.5) and plated untreated, acid treated, and heat treated, as previously described.
All GVPC plates are incubated at 36 ± 2°C for 7 to 10 days. The plates are observed for suspect L. pneumophila colonies at day 4 and on the final day of incubation. Typical colonies are enumerated and the results recorded.
Sampling of Matrixes for the Matrix Study
The cooling tower water and hot domestic water matrix samples were collected following ISO 19458 ( ). Individual cooling tower water samples were too small, so samples from different locations were pooled to obtain mixed samples of the required volume. All samples were screened using ISO 11731 and no natural Legionella contamination was found.
Artificial Sample Contamination
Due to the low frequency of water samples contaminated by L. pneumophila, all samples in this study were artificially contaminated. A stock culture stored at −70°C is streaked onto a GVPC agar plate. The plate is then incubated at 37°C for 3 to 4 days. A Legionella liquid medium is freshly prepared (10 g/L yeast extract, supplemented with BCYE supplement SR0110A from Oxoid) and inoculated from the agar plate. This liquid culture is grown overnight at 37°C with shaking at 160 rpm before being diluted to the appropriate concentration in the water matrix. Serial dilutions of the liquid culture are plated onto GVPC agar plates to determine the theoretical inoculation concentration in the water matrix. For the independent laboratory matrix study, the matrix is artificially contaminated with L. pneumophila serogroup 1, ATCC 33152. The culture is propagated on BCYE agar from a stock culture stored at −70°C. The BCYE agar plate is incubated at 37 ± 1°C for 72–96 h before a single colony is transferred to Legionella enrichment broth (Sigma Aldrich) and incubated statically at 37 ± 1°C for 96 ± 4 h. Serial dilutions of the culture are prepared to achieve the target concentrations.
Statistical Analysis for This Study
To allow reliable statistical analysis as well as a clear graphical presentation of the results, all results, initially in CFU/L, are converted to log10, with an offset of +1 to accommodate the zeros in the data set. When comparing results obtained from two different methods, they are considered significantly different if the 95% confidence interval of the bias extends outside of the −0.5 to +0.5 range, according to the recommendations of AOAC.
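A short sketch of this transformation, with made-up replicate values for illustration (the real data are in the tables):

```python
from math import log10

def to_log10_offset(cfu_per_l):
    """log10(x + 1): the +1 offset keeps zero counts (uncontaminated
    samples) defined on the log scale."""
    return [log10(x + 1.0) for x in cfu_per_l]

# Illustrative (made-up) replicate results at one contamination level:
mica = to_log10_offset([900.0, 1100.0, 1000.0, 950.0, 1050.0])
iso = to_log10_offset([800.0, 1200.0, 950.0, 1000.0, 900.0])
# These transformed values would then feed a bias confidence interval,
# such as the bias_ci() sketch above, checked against (-0.5, +0.5).
```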
Inclusivity
To determine the sensitivity of MICA Legionella, 35 different strains of L. pneumophila were tested with MICA Legionella ( ). Approximately 10^3 to 5 × 10^3 cells were used to artificially contaminate 20 mL test portions of sterile phosphate buffer. The artificially contaminated test portions were processed with MICA Legionella and the results were compared with the theoretical inoculation density ( ). Out of the 35 tested L. pneumophila strains, 33 (94%) were correctly detected, covering all serogroups. The only two exceptions were from L. pneumophila serogroup 7 (strains Nos. 19 and 23), for which four other strains were properly detected. Serogroup 7 is very poorly represented both in infection cases and in the environment ( , , ). The lower identification of this serogroup can be explained by the atypical composition of its O-antigen ( ).
Exclusivity
To determine the specificity of MICA Legionella, 16 non-pneumophila Legionella strains and 13 non-Legionella strains, chosen among possible water background flora, were tested with MICA Legionella ( ). Approximately 10^4 to 5 × 10^4 cells were used to artificially contaminate 20 mL test portions of sterile phosphate buffer. The artificially contaminated test portions were processed with MICA Legionella ( ). Results are summarized in . Out of the 29 tested non-pneumophila Legionella and background water-borne organisms, 28 (97%) correctly produced negative results, while only one produced a positive result. This strain, Legionella norrlandica (strain No. 48), was isolated in 2015 from the biopurification systems of wood processing plants in Sweden. It is closer to L. pneumophila than the other known non-pneumophila Legionella species and shares most of the virulence genes of L. pneumophila as well as, in particular, its cell wall structure ( ), which explains its detection by MICA Legionella. It is classified as a class-2 pathogen, as is L. pneumophila, and its presence in water systems should be treated in the same way. Thus, a positive result for this strain is more of an advantage than a problem, as its presence should lead to the same treatment as that of L. pneumophila.
Method Developer Matrix Study
The results of MICA Legionella were compared with those of the standard reference method ISO 11731:2017 on two different matrixes: domestic hot water and cooling tower water. The hot domestic water matrix did not contain background flora growing on GVPC at 37°C, while the cooling tower water contained 7 × 10^6 CFU/L of background flora growing on GVPC at 37°C. Artificial contamination of the matrixes was performed using liquid cultures of L. pneumophila serogroup 1 (strain No. 5) and L. pneumophila serogroup 6 (strain No. 15), respectively, for the cooling tower water and the hot domestic water, at low (≈10^3 CFU/L), medium (≈10^4 CFU/L), and high (≈10^5 CFU/L) levels. The theoretical inoculation density was estimated by plating serial dilutions of each culture. MICA Legionella and ISO 11731:2017 analyses were both started on the day of the inoculation. Five test portions of each contamination level and of the uncontaminated matrixes were tested with both methods. Results in CFU/L were converted to log10 before statistical analysis and comparison. They are summarized in and and detailed in .
For domestic hot water, both methods showed very low standard deviations on positive samples, ranging from 0.01 to 0.1 log units, indicating very good reproducibility. Importantly, the correlation between the results of the two methods is very high (correlation coefficient R^2 = 0.99; , panel A), indicating that MICA Legionella gives results similar to ISO 11731:2017 on domestic hot water. For cooling tower water, ISO 11731:2017 showed very high variability: the standard deviation on positive samples ranged from 0.18 to 1.6 log units, with two false-negative results at the low contamination level, due to background flora growing over the entire agar plates. In contrast, MICA Legionella results showed low standard deviations, ranging only from 0.16 to 0.25 log units, without any false negatives. Comparison of each method with the theoretical inoculation level of the cooling tower water ( , panels C and D) shows that MICA Legionella provides results closer to the theoretical inoculation level than ISO 11731:2017 (R^2 = 0.99 vs R^2 = 0.80). It is striking that the new MICA Legionella method performs better than the gold standard ISO 11731:2017 on this more complex matrix, but this is easily explained. With a matrix containing a large amount of background flora, the plates read for ISO 11731:2017 after 3 to 10 days of incubation are often largely covered by background flora, hiding an unknown number of Legionella colonies. In contrast, when the plates are read for MICA Legionella after only 48 h of incubation, the background flora has not yet grown as much and hides only small parts of the plates. Thus, unlike ISO 11731:2017, MICA Legionella is not affected by the abundant background flora often found in cooling tower waters and gives more reliable results than ISO 11731:2017 on this type of matrix.
Independent Laboratory Matrix Study
An independent laboratory study was conducted on the more complex of the two matrix types: cooling tower water. The matrix was artificially contaminated with Legionella pneumophila serogroup 1 ATCC 33152, originally isolated from a human, at the following target concentrations: 5 × 10^2, 10^3, 10^4, 10^5, and 10^6 CFU/L. Prior to inoculation, the cooling tower water was dosed with liquid chlorine and thoroughly homogenized to achieve a level of 0.1 ppm (parts per million, mg/L). For the MICA Legionella test portions, 500 mL was prepared for each contamination level and for the uninoculated level; a 20 mL volume was taken from the 500 mL bulk sample for each of the five replicates. For the ISO 11731:2017 test portions, 500 mL was prepared for each contamination level and for the uninoculated level; a 50 mL volume was taken from the 500 mL bulk sample for each of the five replicates, in addition to the aliquots required for direct plating and pretreatment. Results are summarized in and and detailed in . The 90% confidence interval of the bias between the two methods fell between −0.5 and 0.5 log10 for each concentration, indicating equivalence between the two methods. The repeatability (s_r), calculated as SD, of the Diamidex MICA Legionella pneumophila kit and of the reference method was determined for the cooling tower matrix. The MICA Legionella pneumophila kit proved to be a more rapid, reliable, and sensitive culture method when compared to the ISO 11731:2017 reference standard for enumeration of L. pneumophila in cooling tower water.
The results of the statistical analysis using the difference of means with calculated 90/95% confidence intervals indicated equivalence between the MICA Legionella pneumophila kit and the reference standard at three of the five artificial contamination levels analyzed: low, medium, and high. At the very low and very high concentration levels, the statistical analysis demonstrated a statistically significant increase in sensitivity of the MICA Legionella method over the ISO 11731 culture method.
Robustness
To assess the robustness of the MICA Legionella method, variations of three key parameters were tested ( ) and the analysis results were compared with those obtained under the recommended conditions (see for details). The results proved that MICA Legionella is resilient to most tested variations of the protocol. Nonetheless, to prevent the risk of deviation from the recommended parameters, the MICA software does not allow shorter culture or labelling times (the most impactful variations) and gives a warning for any incubation exceeding the tolerance margin. Thus, the combination of the protocol’s resilience with the guidance provided by the software ensures that MICA Legionella performance is highly robust.
Test Kit Consistency and Stability
The product consistency and stability studies were conducted together. Three lots were tested at time point 0 for the consistency study. Kits from each lot were then stored at 25°C for the accelerated stability study and at 4°C for the real-time stability study (see details in ). At time point 0, all three tested lots gave similar results, with no significant difference from the inoculation density, indicating excellent lot-to-lot reproducibility of the test kit. Both the accelerated and real-time stability studies demonstrate that the test kit is stable for up to 18 months at 4°C. Further time points (24 months, possibly more) of the real-time study will be assessed in due course to check whether stability extends beyond the initially expected period.
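As an illustration of the tolerance guard described under Robustness above, a hypothetical sketch (the 2 h margin and the function itself are our assumptions, not documented values in the MICA software):

```python
def check_culture_time(actual_h, nominal_h=48.0, tolerance_h=2.0):
    """Hypothetical guard mirroring the software behavior described above:
    shorter-than-nominal incubations are blocked outright, while overruns
    beyond the tolerance margin only trigger a warning."""
    if actual_h < nominal_h:
        raise ValueError("incubation shorter than the validated minimum")
    if actual_h > nominal_h + tolerance_h:
        print("warning: incubation exceeded the tolerance margin")
```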
To determine the sensitivity of MICA Legionella , 35 different strains of L. pneumophila were tested with MICA Legionella ( ). Approximately 10 3 to 5 × 10 3 cells were used to artificially contaminate 20 mL test portions of sterile phosphate buffer. The artificially contaminated test portions were processed with MICA Legionella and the results were compared with the theoretical inoculation density ( ). Out of the 35 tested L. pneumophila strains, 33 (94%) were correctly detected, covering all serogroups. The only two exceptions were from L. pneumophila serogroup 7 (strains Nos.19 and 23) for which four other strains were properly detected. Serogroup 7 is a very poorly represented serogroup both in infection cases and in the environment ( , , ). Lower identification of this serogroup can be explained by an atypical composition of the O-antigen of this serogroup ( ).
To determine the specificity of MICA Legionella , 16 non- pneumophila Legionella strains and 13 non- Legionella strains, chosen among possible water background flora, were tested with MICA Legionella ( ). Approximately 10 4 to 5 × 10 4 cells were used to artificially contaminate 20 mL test portions of sterile phosphate buffer. The artificially contaminated test portions were processed with MICA Legionella ( ). Results are summarized in . Out of 29 tested Legionella non- pneumophila and background water-borne organisms, 28 (97%) correctly produced negative results, while only one produced a positive result. This strain, Legionella norrlandica (strain No. 48), was isolated in 2015 from the biopurification system of wood processing plants in Sweden. It is closer to L . pneumophila than the other known Legionella non- pneumophila species and contains most of the virulence genes of L . pneumophila , in particular its cell wall structure ( ), which explains its detection by MICA Legionella . It is classified as a class-2 pathogen, as is L. pneumophila , and its presence in the water systems should be treated as is the presence of L. pneumophila . Thus, getting a positive result for this strain is more of an advantage than a trouble as its presence should lead to the same treatment as L. pneumophila .
The results of MICA Legionella were compared with that of the standard reference method ISO 11731:2017 on two different matrixes: domestic hot water and cooling tower water. The hot domestic water matrix did not contain background flora growing on GVPC at 37°C, while the cooling tower water contained 7 × 10 6 CFU/L of background flora growing on GVPC at 37°C. Artificial contamination of the matrixes was performed using liquid cultures of L. pneumophila serogroup1 (strain No. 5) and L. pneumophila serogroup 6 (strain No.15), respectively, for the cooling tower water and the hot domestic water, at low level (≈10 3 CFU/L), medium level (≈10 4 CFU/L) and high level (≈10 5 CFU/L). The theoretical inoculation density was estimated by plating serial dilutions of each culture. MICA Legionella and ISO 11731:2017 analyses were both started on the day of the inoculation. Five test portions of each contamination level and of the uncontaminated matrixes were tested with both methods. Results in CFU/L are converted to log 10 before statistical analysis and comparison. They are summarized in and and detailed in . From domestic hot water, both methods showed very low standard deviation on positive samples, ranging from 0.01 to 0.1 log unit for both methods, indicating a very good reproducibility of the methods. Importantly, the correlation of the results of the two methods is very high (correlation coefficient R 2 = 0.99, , panel A), indicating that MICA Legionella gives similar results to ISO 11731:2017 on domestic hot water. From cooling tower water, ISO 11731:2017 shows a very high variability: the standard deviation on positive samples ranges from 0.18 to 1.6 log unit, with two false-negative results on the low-level contamination, due to background flora growth over the entire agar plates. On the other hand, MICA Legionella results show low standard deviations ranging only from 0.16 to 0.25 log units, without any false negatives. Comparison of each method with the theoretical inoculation level of the cooling tower water ( , panels C and D) shows that MICA Legionella provides results closer to the theoretical inoculation level than ISO 11731:2017 ( R 2 = 0.99 vs R 2 = 0.80). It is striking that the new MICA Legionella method performs better than the gold standard ISO 11731:2017 on this more complex matrix, but it is easily explained. Indeed, with such a matrix containing a high amount of background flora, when the plates are read for ISO 11731:2017 after 3 to 10 days of incubation they are often covered up on large parts by the background flora, hiding an unknown number of Legionella colonies. In contrast, when the plates are read for MICA Legionella after only 48 h of incubation, the background flora has not yet grown as much and they hide only a few parts of the plates. Thus, unlike ISO 11731:2017, MICA Legionella is not affected by the abundant background flora often found in cooling tower waters and gives more reliable results than ISO 11731:2017 on this type of matrix.
An independent laboratory study was conducted on the more complex of the two matrix types: cooling tower water. The matrix was artificially contaminated with Legionella pneumophila serogroup 1 ATCC 33152, originally isolated from a human, at the following target concentrations: 5 × 10², 10³, 10⁴, 10⁵, and 10⁶ CFU/L. Prior to inoculation, the cooling tower water was dosed with liquid chlorine and thoroughly homogenized to achieve a level of 0.1 ppm (parts per million, mg/L). For the MICA Legionella test portions, 500 mL was prepared for each contamination level and the uninoculated level; a 20 mL volume was taken from the 500 mL bulk sample for each of the five replicates. For the ISO 11731:2017 test portions, 500 mL was likewise prepared for each contamination level and the uninoculated level; a 50 mL volume was taken from the 500 mL bulk sample for each of the five replicates, in addition to the aliquots required for direct plating and pre-treatment. Results are summarized in and and detailed in . The 90% confidence interval of the bias between the two methods fell between −0.5 and 0.5 log10 for each concentration, indicating equivalence between the two methods. The repeatability (s_r), calculated as the standard deviation, of the Diamidex MICA Legionella pneumophila kit and of the reference method was determined for the cooling tower matrix. The MICA Legionella pneumophila kit proved to be a more rapid, reliable, and sensitive culture method when compared with the ISO 11731:2017 reference standard for enumeration of L. pneumophila in cooling tower water. The statistical analysis, using the difference of means with calculated 90/95% confidence intervals, indicated equivalence between the MICA Legionella pneumophila kit and the reference standard at three of the five artificial contamination levels analyzed: low, medium, and high. At the very low and very high concentration levels, the statistical analysis demonstrated a statistically significant increase in the sensitivity of the MICA Legionella method over the ISO 11731 culture method.
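The equivalence criterion used here, the 90% confidence interval of the between-method bias falling within ±0.5 log10, can be computed as follows. This is a generic sketch with made-up replicate values, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired log10(CFU/L) results for five replicates at one
# contamination level; the study's actual values are in its tables.
mica = np.array([4.02, 3.98, 4.05, 4.00, 3.97])
iso = np.array([3.95, 4.01, 3.93, 4.04, 3.99])

diff = mica - iso                        # per-replicate bias (log10 units)
mean_bias = diff.mean()
sem = diff.std(ddof=1) / np.sqrt(len(diff))

# Two-sided 90% confidence interval of the mean bias (Student's t)
lo, hi = stats.t.interval(0.90, df=len(diff) - 1, loc=mean_bias, scale=sem)

# Equivalence is declared when the whole interval sits inside the band
equivalent = (lo > -0.5) and (hi < 0.5)
print(f"bias = {mean_bias:+.3f} log10, 90% CI = [{lo:.3f}, {hi:.3f}], "
      f"equivalent: {equivalent}")
```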
To assess the robustness of the MICA Legionella method, variations of three key parameters were tested ( ) and the analysis results compared with the recommended conditions ( see for details). The results proved that MICA Legionella is resilient to most tested variations of the protocol. Nonetheless, to prevent the risk of deviation from the recommended parameters, the MICA software does not allow shorter culture or labelling times (the most impactful variations) and gives a warning for any incubation exceeding the tolerance margin. Thus, the combination of the protocol resilience with the guidance provided by the software ensures that the MICA Legionella performance is highly robust.
The product consistency and stability studies were conducted together. Three lots were tested at time point 0 for the consistency study. Kits from each lot were then stored at 25°C for the accelerated stability study and at 4°C for the real-time stability study (see details in ). At time point 0, all three tested lots gave similar results, with no significant difference from the inoculation density, indicating excellent lot-to-lot reproducibility of the test kit. Both the accelerated and real-time stability studies demonstrate that the test kit is stable for up to 18 months at 4°C. Further time points (24 months, possibly more) of the real-time study will be assessed in due course to check for a longer stability than initially expected.
Since its discovery in 1976, Legionella pneumophila has been considered an important pathogen that should be monitored in domestic hot water and cooling tower water. Several detection methods have been developed, but the gold standard remains a culture method, as in ISO 11731:2017. However, this method has important drawbacks, such as the long time-to-result, the large amount of human time and number of culture plates required, as well as the high level of training needed for the technicians. These issues can be addressed by developing new, culture-based detection methods that achieve the same performance level as the standard method while allowing a shorter result delay (ideally 24 to 48 h) and relying as little as possible on human skills ( ). Indeed, a short result delay allows better reactivity both in the case of a contamination and in the case of a successful disinfection of the water system, leading to lower sanitary risks, lower use of sanitizers, and shorter shutdown events; a low requirement for human skills allows better reproducibility and reliability of the results and makes it easier to implement the method directly on site instead of relying on expert laboratories. Diamidex developed MICA Legionella to address these needs ( ). As shown in this study, MICA Legionella can detect all serogroups of L. pneumophila and does not misidentify other species. The protocol proved robust to variations and, additionally, the MICA Legionella software reduces the risk of deviations from the protocol by providing a step-by-step protocol and control of incubation times. Furthermore, the final result does not rely on human interpretation, but instead on automatic identification of L. pneumophila microcolonies by the AI analyzer and automatic calculation of the contamination density in the original water sample, thus reducing both the required human time and skills and the risk of human error. Another advantage is the use of a single culture plate without extra confirmation steps, instead of up to nine initial plates plus extra confirmation plates for the standard method, which not only reduces waste but also further reduces the human time and skills needed for the analysis ( ). When compared to ISO 11731:2017, MICA Legionella delivers in 48 h results equivalent to those the standard method provides in 10 days for a simple matrix (hot sanitary water). On complex matrixes (cooling tower water), MICA Legionella performs better than the standard method, thanks to the shorter culture incubation time that makes it less sensitive to background flora interference at reading time. Another advantage of this low sensitivity to background flora is that the volume of analyzed sample can be higher for MICA Legionella than for ISO 11731:2017, which leads to a lower LOD. In the present study, the LOD of the ISO method on the complex matrix was 1000 CFU/L, while the LOD of MICA Legionella was 100 CFU/L. Moreover, the LOD of MICA Legionella could be further lowered by increasing the filtered volume. Importantly for a routine analysis method, the MICA Legionella test kit is reproducible from lot to lot and is stable at the recommended storage temperature (4°C) for a long time, up to 18 months according to the stability study. Altogether, MICA Legionella can be considered a reliable and fast alternative to the standard methods for enumeration of L. pneumophila in domestic hot water and cooling tower water, and it has been granted PTM certification.
|
Isolation and Characterization of
|
d75a896b-6862-47eb-a05e-7f39c59f6b78
|
10156456
|
Microbiology[mh]
|
Probiotics have a long history of human consumption; for example, cultured dairy products (curds and yogurts) are traditionally consumed in several parts of the world. The word probiotics is of Greek origin, meaning "for life", the antonym of antibiotics . Living microorganisms that are administered in an adequate amount and have a beneficial effect on human health are known as probiotics . According to Ranadheera et al. and Oleskin & Shenderov , the human gastrointestinal tract contains more bacteria than eukaryotic cells. This gut flora also contains probiotics, which benefit the host's health . Microorganisms used as probiotics, specifically lactic acid bacteria (LAB), include Lactobacillus species and some species of Bifidobacterium , Enterococcus , and Streptococcus . Most of these bacterial species reside in the human intestine . The only known probiotic yeast is the nonpathogenic Saccharomyces boulardii [ – ]. LAB is widely exploited in fermented food manufacturing and is a generally recognized as safe (GRAS) organism that can be securely used for medical or veterinary purposes . In the gastrointestinal tract, LAB is exposed to stressful conditions such as acidic gastric juice, bile salts, and/or an altered microbial balance of the intestinal tract, and the administration of antibiotics may further suppress the probiotic population. Therefore, to obtain maximum benefit from the probiotic bacterial population, probiotics are selected after various in vitro and in vivo tests . A variety of traditional fermented food products are produced by probiotic fermentation of fruits and vegetables, including olives, beetroot, cabbage, and other leafy vegetables. Fermentation is done by keeping the vegetables in a 2% brine solution and allowing them to be fermented by LAB . In the food industry, especially for ready-to-eat foods, microbiological quality is a persistent concern. Lactic acid bacteria (LAB) could be used to prevent the growth of spoilage and pathogenic bacteria . Some LAB exhibit active microbial antagonism through competition mechanisms, the production of bacteriocins, or the production of organic acids [ – ]. The olive is a small tree of the family Oleaceae , with the binomial name Olea europaea . Olives are bitter when raw or fresh, so they must be treated and fermented to make them edible. Both green olives, which are full-size olives plucked before ripening, and black olives, which are completely matured, ripened olives, can be fermented. The main reasons for olive processing are the removal of bitterness by hydrolysis of some phenolic compounds (such as oleuropein), preservation of the fruit, and enhancement of the organoleptic characteristics of the final product . Because of their high concentration of dietary fiber, vitamins, antioxidants, and anticancer compounds, table olives have been considered a functional food . Table olives are pickled vegetables whose preparation and preservation are achieved by a combination of salting, fermentation, and acidification. Some studies have been conducted to widen the range of functional food types by utilizing the microarchitecture of the olive surface and the dietary characteristics of olive pulp to produce a flavorsome, vegetable-based functional food consisting of table olives carrying probiotic strains .
In earlier research, the capacity of seven strains from the probiotic species Lactobacillus rhamnosus , Lactobacillus paracasei , Bifidobacterium bifidum , and Bifidobacterium longum to survive on the olive surface, and the suitability of table olives as a biological carrier for probiotic microorganisms, were studied . The resultant table olives can be stored with or without refrigeration, and probiotic-dominated fermentations are generally considered the most suitable method of curing olives . The present study aimed at the identification and functional characterization of probiotics isolated from commercially available green and black olives.
The research work was done in the Microbiology & Biotechnology Laboratory at Fatima Jinnah Women University, Rawalpindi. The work was divided into two phases: isolation and identification of bacteria, and physiological characterization of probiotic isolates.

2.1. Sample Collection
Fermented pitted Spanish green and black olive samples of the same brand (Figaro Company) were purchased from local superstores located in the Pakistan Aeronautical Complex (PAC), Kamra, and Rawalpindi. The sealed bottles of fermented green and black olives were opened under sterile conditions near a flame to avoid contamination. Using sterile forceps, a few olives were taken out of the bottles and then minced in a mortar and pestle after disinfecting it with an ethanol (70%) swab. Before mincing, big chunks of both black and green olives were placed inside sterile universal bottles with the help of sterile forceps; the olive mince from the mortar was then also collected in universal bottles and suspended in autoclaved distilled water using a sterile spatula. The brine from the black and green olive samples was also stored in sterile universal bottles, following the method of Doulgeraki et al. with slight workable modifications. All the universal bottles containing olive and brine samples were stored in the refrigerator until isolation.

2.2. Isolation of Probiotics
Strains from olives were isolated on selective de Man, Rogosa, and Sharpe (MRS) medium (Merck Millipore, Germany, catalog # 110660) . Isolations from chunks, brine, and minced fermented black and green olives were done on MRS medium by the spread plate method . Isolates were characterized by morphological, biochemical, and physiological characteristics.

2.3. Morphological and Biochemical Characterizations
Morphological characterization was studied through the Gram staining technique of Hans Christian Gram . The catalase test , Simmons citrate test , methyl red and Voges–Proskauer tests , indole production test , and oxidation-fermentation test were performed for biochemical characterization of the isolates. Identification of the isolated bacteria was done with the standard API 50-CHL system .

2.4. Physiological Characterization

2.4.1. Determination of Optimal Temperature
Isolated bacteria were incubated at 25°C, 37°C, and 45°C for 24 hours to determine their optimal growth temperature, and the results were quantified by the spectrophotometric method at 600 nm .

2.4.2. Determination of Optimal pH
Fresh bacterial cells were grown for 24 hours over a pH range of 4–9 to determine whether the isolates were acidophilic, neutrophilic, or alkaliphilic. Optical density was measured by spectroscopic readings at 600 nm .

2.4.3. Antibiotic Sensitivity Assay
An antibiotic susceptibility test was conducted using the disc diffusion method . Fresh overnight cultures of the bacterial isolates were spread onto Mueller-Hinton (MH) agar plates (recommended by the National Committee for Clinical Laboratory Standards (NCCLS, CLSI (2018))), and 10 antibiotic discs were placed on the plates, which were incubated at 37°C for 24 h. The antibiotics used were amoxicillin (10 μg), gentamicin (10 μg), streptomycin (10 μg), tetracycline (30 μg), kanamycin (30 μg), imipenem (10 μg), chloramphenicol (30 μg), bacitracin (10 μg), erythromycin (15 μg) and neomycin (30 μg).

2.4.4. Antimicrobial Activity
The antimicrobial activity of all isolates against indicator bacteria was determined by the agar-well diffusion method .
Pseudomonas geniculata , Microbacterium oxydans , Bacillus subtilis , Streptomyces laurentii , Klebsiella pneumoniae , Bacillus pumilus , Bacillus cereus , Alcaligenes faecalis , Enterococcus faecium , and Enterococcus faecalis were used as indicator bacteria (obtained from the Microbiology & Biotechnology Lab, FJWU) against which the antimicrobial activity of the isolated strains was assayed. Each test strain was inoculated into 5 ml of nutrient broth and incubated at 37°C for 24 hours in a shaking incubator at 150 rpm. After incubation, each culture was centrifuged at 10,000 rpm for 5 minutes to obtain a cell-free supernatant. The cell-free supernatants of the isolates were tested for antibacterial activity against the indicator bacteria. For this, Mueller-Hinton agar medium (for antimicrobial testing) was prepared, autoclaved, and poured separately into sterile Petri dishes. The plates were inoculated with indicator bacterial suspension by the spread plate method. Five wells, each 8 mm in diameter, were made in every agar plate, and the base of each well was sealed with soft agar (0.7%) plugs. To test the antibacterial activity of the probiotic isolates, 100 μl of cell-free supernatant of each probiotic strain was added to a well. The plates were incubated for 24 h at 37°C, and the diameter of the zone of inhibition was measured in millimeters on both nutrient agar and MH agar media.

2.4.5. Salt Tolerance Assay
Isolate tolerance to NaCl was determined by supplementing nutrient broth with salt concentrations ranging from 1 to 11% .

2.4.6. Organic Acid Production Assay
Lactic acid bacteria are known for the production of organic acids, specifically lactic acid. The isolates were expected to share this property, and to determine this ability, an acid production assay was conducted . The brine in which the fermented green and black olives were stored was also assayed for organic acid produced by the isolated strains inhabiting the brine, at the beginning, in the middle, and at the end of the research work. Powdered skim milk was purchased from the local market. Autoclaved distilled water was mixed with 10% powdered skim milk to make sterile skim milk with a pH of 6.68. Five ml of skimmed milk was inoculated with a 24 h fresh culture of each isolated bacterium and incubated at 37°C for 24, 48, and 72 h. After incubation, the coagulated skim milk was filtered, and the pH of each filtrate was measured with a digital electrode pH meter as an indicator of lactic acid production. The filtrate was also titrated with 0.1 N NaOH, and organic acid production was quantified in terms of percentage strength.

2.4.7. Bile Salt Tolerance Assay
The ability of the isolates to survive and grow in the presence of bile salts was investigated by growing the isolated strains at different bile salt concentrations (0.1, 0.3, 0.5, and 1%) in nutrient broth, as described by Dunne et al. . The broth was incubated for 4 h at 37°C, and the optical density (OD) of the cultures was then measured at 600 nm. Viability and growth of the isolates at these concentrations indicated their potential tolerance within the human gastrointestinal tract .

2.5. Tolerance to Simulated Gastric Juice
Simulated gastric juice tolerance was determined by the method described by Graciela and Maria . Simulated gastric juice was freshly prepared and sterilized by filter sterilization.
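The titration in the organic acid assay (Section 2.4.6) is conventionally converted to percentage strength expressed as lactic acid. A minimal sketch of that standard calculation follows; the volumes used are illustrative, since the study does not report its exact titration figures.

```python
# Titratable acidity expressed as % lactic acid (w/v), using the
# milliequivalent weight of lactic acid (0.090 g/meq).
MEQ_LACTIC_ACID = 0.090  # g per milliequivalent of lactic acid

def percent_lactic_acid(v_naoh_ml: float, normality: float, v_sample_ml: float) -> float:
    """% lactic acid = (mL NaOH x N x 0.090 x 100) / mL sample."""
    return (v_naoh_ml * normality * MEQ_LACTIC_ACID * 100) / v_sample_ml

# Example: 9 mL of 0.1 N NaOH needed to neutralize a 5 mL filtrate
print(percent_lactic_acid(9.0, 0.1, 5.0))  # -> 1.62 (% lactic acid)
```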
2.6. Numerical Taxonomy of Probiotic Isolates
Similarity among the isolates was assessed by taking data from fifty (50) different biochemical sugar fermentation tests of the API 50 CH kit and converting them into binary data (0 for negative, 1 for positive test results) using PAST (Paleontological Statistics Software Package for Education and Data Analysis). Similarities amongst the strains were estimated using the Jaccard coefficient, and clusters were obtained by unweighted average linkage (UPGMA).
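As a minimal sketch of this numerical taxonomy step, SciPy offers both the Jaccard distance and the unweighted average (UPGMA) linkage used by PAST. The binary profiles and strain labels below are invented for illustration; the study scored 50 tests per isolate.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical binary API 50 CH profiles (1 = sugar fermented) for four
# isolates; only 10 of the 50 tests are shown to keep the sketch short.
profiles = np.array([
    [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],   # MB417
    [1, 0, 1, 0, 0, 1, 0, 1, 1, 1],   # MB418
    [0, 1, 1, 1, 0, 0, 1, 0, 1, 0],   # MB421
    [0, 1, 0, 1, 1, 0, 1, 0, 0, 0],   # MB422
], dtype=bool)
labels = ["MB417", "MB418", "MB421", "MB422"]

# Pairwise Jaccard distance (1 - Jaccard similarity) between isolates
dist = pdist(profiles, metric="jaccard")

# Unweighted average linkage (UPGMA) clustering, as produced by PAST
tree = linkage(dist, method="average")
dendrogram(tree, labels=labels, no_plot=True)  # set no_plot=False to draw
print(tree)  # linkage matrix: merged clusters and their distances
```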
3.1. Isolation and Identification of Potential Probiotic Bacteria
In the present study, probiotics were isolated from commercially available fermented black and green olives of the Figaro Company. Fermentative probiotic bacteria were isolated on MRS medium, and a total of 12 isolates were obtained from brine, chunks, and the suspension from minced olives ( ). As shown in , potential probiotic bacteria were isolated from both black and green olives. Identification was done by cell and colony morphology, which showed diversity ranging from 0.5 to 11.5 mm in size, circular to irregular shape, and white to pale color. All the isolated probiotic bacteria stained Gram-positive, and most of them were rod-shaped. For Lactobacillus crispatus , Lactococcus lactis , and Carnobacterium divergens , the results of the biochemical tests showed characteristics similar to those of reported probiotics, for example, being catalase-negative (anaerobes or facultative anaerobes that only ferment and do not respire using oxygen as a terminal electron acceptor), being non-spore formers, and having a thick peptidoglycan cell wall structure (Gram-positive). The isolates fermented glucose during anaerobic as well as aerobic incubation, but no gas was produced during the 24 h of incubation under either condition ( ). The isolates were identified by their carbohydrate fermentation patterns on the API 50 CH panel; the carbohydrate fermentation ability of the isolated strains was analyzed against forty-nine different sugars ( ). The heat map shown in summarizes the sugar fermentation patterns of the probiotic bacteria from fermented olives and groups the strains based on similarities and differences in their sugar fermentation profiles. Species-level identification was done using API web50 CHL v5.1 ( ).

3.2. Characterization of Optimal Temperature and pH
Carnobacterium divergens MB421 from green olives and Lactococcus lactis MB418 from black olives grew optimally at 25°C, while Lactobacillus crispatus MB417 showed optimal growth at 37°C ( ). Optimal growth of Carnobacterium divergens MB421, Lactobacillus crispatus MB417, and Lactococcus lactis MB418 was observed at pH 7, showing the neutrophilic behavior of the isolates ( ).

3.3. Evaluation of Antibiotic Susceptibility Profile
The antibiotic susceptibility of the isolates to various antibiotic discs was determined by disc diffusion assay in terms of standard inhibitory zones, and the results are shown in . All the isolated strains were sensitive to imipenem. Lactobacillus crispatus MB417 and Carnobacterium divergens MB421 showed resistance to amoxicillin.

3.4. Antimicrobial Activity
The isolates were assessed for antimicrobial properties and showed no activity on nutrient agar medium against Bacillus cereus MB401, Streptomyces laurentii MB319, Klebsiella pneumoniae MB081, and Alcaligenes faecalis MB090. On Mueller-Hinton (MH) agar medium, the isolates behaved differently compared with nutrient agar medium; this might be because MH is the standard medium for the performance of such tests in microbiology laboratories. Lactobacillus crispatus MB417 and Lactococcus lactis MB418, isolated from black olives, showed activity against Streptomyces laurentii MB319. The isolates showed no inhibitory zone against Microbacterium oxydans MB325, Klebsiella pneumoniae MB081, and Alcaligenes faecalis MB090, but showed antimicrobial activity against Bacillus cereus MB401, Bacillus subtilis MB405 and Streptomyces laurentii MB319.
The growth of Enterococcus faecium JH22 and Enterococcus faecalis OGRE1 was tested on Brain Heart Infusion (BHI) agar medium. Carnobacterium divergens MB421 produced an inhibitory zone against Enterococcus faecalis OGRE1.

3.5. Bile Salt Tolerance Assay
The isolated probiotics were able to tolerate 0.1–1% bile salt and flourished at the 0.3% bile salt concentration typical of the gastrointestinal tract ( ). The isolates Lactobacillus crispatus MB417 and Lactococcus lactis MB418 grew optimally at 0.3% bile salt, with a gradual decrease at the higher concentrations of 0.5% and 1.0%, whereas Carnobacterium divergens MB421 showed a linearly increasing growth trend from 0.1 to 1% bile salt.

3.6. Tolerance to Simulated Gastric Juice
The potential probiotic bacterial isolates were able to tolerate the acidic pH (2) of simulated gastric juice over the course of incubation, although some isolates were unable to withstand such harsh gut-like conditions. Colony-forming units per milliliter (CFU/ml) were calculated for the black and green olive isolates ( ). Carnobacterium divergens MB421 showed a gradually decreasing trend during incubation in the simulated gastric juice, while Lactococcus lactis MB418 showed the same decreasing viability trend until 90 min of incubation (possibly the acclimatization time); its viability count increased at 120 min of incubation and decreased again when measured after 24 h. Lactobacillus crispatus MB417 showed a parabolic endurance pattern, peaking at 90 min of incubation in simulated gastric juice.

3.7. Salt Tolerance
Following the assessment of salt tolerance, the isolates were shown to be resistant to high NaCl concentrations ( ), even at a 9% salt concentration, especially the black olive isolate Lactobacillus crispatus MB417. Isolates Lactococcus lactis MB418 and Carnobacterium divergens MB421 endured up to 8% and 7% NaCl, respectively.

3.8. Quantification of Organic Acid Production
All three isolates exhibited the capability to coagulate skim milk and produce organic acid with a gradually decreasing pH ( ). Titrated against 0.1 N NaOH, Lactobacillus crispatus MB417 produced elevated organic acids over 24 to 72 h. Similar results were shown by Carnobacterium divergens MB421, while organic acid production by Lactococcus lactis MB418 decreased with time, measured at 24, 48, and 72 h of incubation. The organic acid production ability of potential probiotic bacteria residing in the brine (in which the fermented olives were stored) was assayed during the research work: at the beginning (1st stage), in the middle (2nd stage), and at the end of the practical work (3rd stage). During this period, the pH and acidity of the brine remained unchanged, which could be because of the refrigerated storage of the fermented olives ( ).
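The viability trends reported in Section 3.6 are conveniently expressed as log10 reductions or as survival percentages relative to time zero. A small sketch of that arithmetic is given below, with invented counts rather than the study's data.

```python
import numpy as np

# Illustrative viability series (CFU/ml) during incubation in simulated
# gastric juice; times are in minutes. Values are not the study's data.
times = np.array([0, 30, 60, 90, 120])
cfu = np.array([1.0e7, 5.0e6, 1.2e6, 4.0e5, 9.0e4])

log_cfu = np.log10(cfu)
log_reduction = log_cfu[0] - log_cfu           # drop relative to t = 0

# Survival rate as commonly reported: (log N_t / log N_0) x 100 (%)
survival_pct = log_cfu / log_cfu[0] * 100
for t, lr, s in zip(times, log_reduction, survival_pct):
    print(f"t={t:>3} min: log reduction {lr:.2f}, survival {s:.1f}%")
```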
The present study was conducted to isolate and characterize potential probiotic isolates from fermented black and green olives. Table olives are a good source of probiotic bacteria . Table olives are considered functional foods because of their nutritional value, related to the presence of phenolic compounds and monounsaturated fatty acids . The oxidation-fermentation test in this study identified the isolated bacteria as facultative anaerobes, based on their ability to perform fermentation under aerobic as well as anaerobic conditions. The results indicated that all the isolates were able to conduct fermentation under aerobic conditions. All the isolates fermented lactose to produce acid, which in turn changed the color of the medium from green to yellow, showing that the isolates were fermenters. Positive fermentation ability was also reported by other scientists for lactic acid bacteria from table olives [ – ] and lactic acid bacterial isolates from yogurt . The study showed that most of the isolates were mesophiles, as they preferred 37°C, but some of the isolates were slightly thermophilic. Isolates from another study on green Algerian olives demonstrated tolerance to temperatures ranging from 15 to 45°C . In the case of probiotics isolated from yogurt, isolates could tolerate a pH as low as 2.5 with good growth, but in the current study, growth was very low below pH 4. This is most likely because isolates of yogurt or dairy origin are more adapted to low pH than isolates from vegetables and meat (due to the source's lactose content, which is converted to lactic acid after fermentation) . Lactobacillus crispatus MB417, Lactococcus lactis MB418, and Carnobacterium divergens MB421 showed good growth at pH 7, depicting their neutrophilic nature. Similarly, neutrophilic (pH 6.5–7.5) strains of lactic acid bacteria have been reported in various studies [ , – ]. The genomes of probiotic bacterial isolates may contain antimicrobial resistance genes, such as the van(X) , van(E) , gyr(A) , and tet(M) genes, which code for resistance to vancomycin, ciprofloxacin, and tetracycline, respectively [ – ]. As a key feature, a good probiotic candidate should not possess or acquire any antibiotic resistance genes. This research found that most of the isolates were resistant to amoxicillin. This might be because the wide and nonspecific use of antibiotics could contribute to the propagation of resistance in the bacterial populations used for the fermentation of olives . In addition, in Pakistan the use of unprescribed antibiotics is common practice, which has resulted in the development of resistance to many antibiotics, including amoxicillin, among pathogenic bacteria. It was also found that all of the probiotic isolates were sensitive to streptomycin, imipenem, and chloramphenicol. Out of a total of 60 combinations of tests for bacteriocin production activity of six isolates against 10 different indicator bacteria, only two gave positive results on nutrient agar medium. This finding is comparable with some other reported probiotic strains, L. casei Shirota, L. paracasei subsp. tolerans , L. plantarum , and L. fermentum , which also did not produce bacteriocin against other bacteria. These results suggest the absence of bacteriocin-like action , and that inhibition of surrounding microbes was due to the acidic environment produced by the probiotic LAB strains .
The present study showed that all isolates were able to tolerate salt concentrations up to 7%, and a few were able to tolerate higher concentrations. All isolates showed maximum growth at a 1% NaCl concentration, which is consistent with the findings of other studies . In the present study, bile salt concentrations of 0.1, 0.3, 0.5, and 1% were used for the growth of the potential probiotic isolates. In a healthy human, 0.3% bile salt is present in the GIT, and any bacterium to be used in probiotic production must tolerate this bile concentration [ , , , ]. All of the black olive isolates and a few of the green olive isolates gave maximum growth at 0.3% bile salt, and almost all of the isolates tolerated bile concentrations up to 1%. This reflects a vital characteristic of lactobacilli: the action of bile salt hydrolase enables them to survive, grow, and exert their action in the GIT by reducing the toxic side effects of bile salts. In addition, some food components protect strains against bile salts and promote their resistance . To achieve health benefits, probiotic foods must contain an adequate amount of live bacteria, at least 10⁶–10⁷ CFU/g . The ability to tolerate harsh conditions and the viable cell counts of the isolated potential probiotic bacteria were checked during incubation in artificial gastric juice at pH 2.2 for 30, 60, 90, and 120 minutes and 24 h. The experiments yielded bacterial counts of ≤10⁵ CFU/ml, and recent studies have provided significant data on the valuable immunological effects of dead probiotic cells . The viability of the isolates from black olives ( Lactococcus lactis MB418) and green olives (MB422) gradually decreased with incubation time, while the black olive isolates MB416 and Lactobacillus crispatus MB417 and the green olive isolates MB424 and MB422 showed stability up to 24 h of incubation in acidic gastric juice. The results suggest that gastric juice in the GIT is less hostile to most of the probiotic isolates than might be expected . Viability counts (3.4–5.6 log CFU/ml) similar to those of the fermented olive isolates in the present study were previously observed for L. plantarum , L. pentosus , and L. paraplantarum , which showed 3–6 log CFU/ml viable bacterial cells . In natural olives, organic acids can be added to create an optimal initial pH for the proliferation of LAB . The olive samples used in this study also contained lactic, ascorbic, and citric acids added before packaging. The organic acid production ability of the probiotic bacteria present in the brine of the black and green olives was determined to gain an idea of the shelf life of the fermented food, which can spoil in terms of taste, texture, and smell due to altered lactic acid production. The acidity of the brine of black and green olives remained the same throughout, but the brine of the green olives was more acidic, with a lower pH, than that of the black olives. Because green olives are unripened, the bacteria employed for their fermentation must work harder to reduce their bitterness than in fully ripened black olives; for example, the maturation of olives conditions phenolic compounds, sugar content, and cell wall permeability . In our study, the data analysis showed that one of the isolates, MB422 from green olives, is the least similar to the rest of the isolates. The isolates MB416 from black olives and Carnobacterium divergens MB421 from green olives showed 84% similarity, and this pair is 72% similar to Lactococcus lactis MB418.
This cluster is 60% similar to the other clusters, such as the one formed by MB424 and Lactobacillus crispatus MB417 (which are 75% similar to each other).
It is concluded that the isolated potential probiotic bacteria are well adapted to the olive surface and brine. Thus, olives, whether green or black, contain a good microbial flora of probiotics and could be consumed as an effective probiotic food. The isolates from our study, specifically MB417, MB418, MB421, and MB424, demonstrated relatively good antipathogenic activity and survival under harsh conditions, suggesting that they could be considered viable "next-generation" probiotic candidates, which could be beneficial to the pharmaceutical industry.

5.1. Future Applications
It would be advantageous to conduct a detailed study on the identification of the isolates using molecular methods. The use of bacteriocin producers as starters is of considerable interest, because bacteriocin production is a significant factor that helps to promote the safety and quality of fermented table olives; bacteriocins can also be used as natural antibiotics.
|
Clinicians’ experiences with cancer patients living longer with incurable cancer: a focus group study in the Netherlands
|
cd2de592-1c36-4644-a065-e2d17909528e
|
10156464
|
Internal Medicine[mh]
|
Advances in oncology have resulted in an increased number of cancer survivors (Harley et al. ; Heins et al. ; IKNL, ). As a consequence, many patients with incurable cancer now live longer. Some forms of cancer (especially breast, prostate and colon cancer, as well as haematological cancers) seem to be slowly developing into 'chronic' diseases (Harley et al. ; Buiting et al. ). To date, it is largely unknown how these patients should be approached and defined in order to serve them best (Schildmann et al. ). It could be argued that two distinct care approaches could be applied to patients living longer with incurable cancer (e.g. > 1 year): a palliative care approach and a survivorship/psychosocial care approach. A palliative care approach is aimed at improving the quality of life of patients with a life-threatening illness and their families, without the aim of life-prolongation (Fadul et al. ; Thoonsen et al. ). During the provision of anti-cancer treatment (e.g. 'standard oncology care'), this is usually integrated with elements of palliative care, from the diagnosis of incurable cancer until death (Murray et al. ; Greer et al. ; Frick et al. ). Studies about early palliative care often encompass care for approximately one year; longer periods have not been studied. A survivorship care approach is different. According to Frick et al. (among others), this approach also appeals to many patients living longer with incurable cancer (Frick et al. ). It focuses on quality of life as well as on survival and includes interventions aimed at optimal living (Starreveld et al. ). Both approaches could apply to the care for patients living longer with incurable cancer, but the philosophies (and medical specialties) of the two approaches differ. At present, oncologists as well as primary care physicians (PCPs) are exploring how to provide care that is better tailored to patients living longer with incurable cancer. Our previous study at the in-patient oncology unit showed that patients living longer with incurable cancer experience problems in dealing with an insecure prognosis (Buiting et al. ). In the Netherlands, almost everyone has a PCP, and patients can consult a PCP free of charge (Dutch Patient Federation). The Dutch Cancer Society (KWF) and the Dutch Health Council already advocated 9 years ago that PCPs have an important task in the care for patients living longer with incurable cancer (KWF ). It could be argued that these patients could be (partially) followed by PCPs as well. At present, there is no established framework for the right care approach for patients living longer with incurable cancer, as the care approach for incurable patients mainly focuses on end-stage/terminal care. The need to carefully follow this group of (ex-)cancer patients is now acknowledged. However, it is still unclear what the role of PCPs can be, and to what extent this may influence the organisation of care (Hoopes et al. ). The fact that patients live in relatively good physical condition, whereby the setting partly shifts from 'clinical' towards 'daily life', automatically results in different responsibilities towards health care. In this focus group study, we explored (1) the experiences of PCPs and oncological medical specialists in providing care to patients living longer with incurable cancer and (2) their preferences concerning different care approaches (palliative support, psychological/survivorship support).
Design and setting
This study is part of a larger project that examines the experiences, needs and wishes of patients living longer with incurable cancer and of their healthcare professionals (Buiting et al. ). In this specific study, our project group further explored the collaboration between PCPs and oncological medical specialists in a focus group study, with a specific focus on the care for patients living longer with incurable cancer. In doing this, our project group verified with the COREQ checklist that all items necessary to guarantee adequate qualitative research were covered. This was evaluated in accordance with the standards of O'Brien et al. (Tong et al. ). Because the topic of this study was relatively new to research, we chose to explore experiences and attitudes via different focus group sessions. A strength of focus group studies is that different participants are brought together, which increases face validity. To facilitate the focus group discussions, we established a definitional framework beforehand (see Table ).

Recruitment and sampling
Our project group held three focus groups with PCPs: one in November 2017 ( n = 6 PCPs) and two in January 2018 ( n = 5 PCPs; n = 4 PCPs). When we noticed that PCPs were unaware of certain aspects of the hospital setting or felt hampered by medical specialists in having in-depth discussions about this topic, we transitioned to multidisciplinary group sessions. PCPs sometimes reported receiving information from medical specialists only after quite a long delay, which made it more difficult for them to assess the severity of a patient's situation. In March 2018, our project group started with multidisciplinary focus groups that also included medical specialists ( n = 9 PCPs; n = 2 medical specialists), followed by sessions in May 2018 ( n = 4 PCPs, n = 3 medical specialists) and June 2018 ( n = 4 PCPs, n = 1 medical specialist). In the multidisciplinary group sessions, we included PCPs as well as urologists, oncologists, pulmonologists and head and neck surgeons. All participants were recruited by snowball sampling in existing professional networks (telephone or e-mail), assisted by healthcare organisations such as the Netherlands Comprehensive Cancer Organisation (IKNL) and local organisations focusing on palliative care or oncology. Our project group ensured that the PCPs and medical specialists in every focus group covered a variety of experiences. Although research members HMB and TB were present during all focus group sessions, they did not add to the discussion, apart from moderating. Our project group held three focus groups in the eastern/mid part of the Netherlands and three focus groups in the western part of the Netherlands. Participants varied in years of work experience and gender; see also Table . Our project group excluded PCPs and medical specialists with less than one year of experience. The main reason for not participating was time constraints. All doctors consented to the focus groups being audiotaped and transcribed. Our project group checked part of the transcripts against the audio and confirmed that all recordings were adequately transcribed. The transcripts were anonymised to ensure the participants' anonymity. Access to the data was limited to the researchers.

Focus groups
The focus groups were moderated by TB (FG1, FG2, FG4) or HMB (FG3, FG5 and FG6), both experienced moderators. The mean meeting time of the focus group sessions (including breaks) was 2.5 hours.
We sent all participants background information about the study in advance and received their written consent beforehand. Each focus group started with some background oncological information as well as a definition and clarification of patients living longer with incurable cancer. One meeting was held per focus group. Subsequently, our project group presented participants with open-ended questions as well as case descriptions. We discussed four different cases that differed in treating medical specialist, type of cancer and duration of disease. From focus group 4 onwards, our project group added a discussion about the ideal definition of the disease phase in which patients are living longer with incurable cancer and added new questions to the discussion, such as 'Is the role of the PCP clear to patients?' and 'What makes chronic different from palliative?'. During focus group 6, data saturation was reached, as no new themes with respect to the research questions emerged. We did not use an interview guide but provided guidance via a PowerPoint presentation. In this presentation, we (again) briefly introduced the topic of patients living longer with incurable cancer and the extent to which this group differs from terminally ill patients and patients who can still be cured. We also listed the topics we wanted to discuss, such as familiarity with this patient group and experiences with multidisciplinary collaboration.

Data analysis

The transcripts of all six focus groups were coded and analysed using Atlas-ti 8.2. FB and HMB coded all focus groups individually. We discussed the themes and verified coder consensus. We arranged several meetings to discuss themes and underlying themes/items and to develop a scheme for indexing text fragments with similar content (in Atlas-ti 8.2). We eventually settled on three overarching themes: the process of awareness; the definition and marking of patients living longer with incurable cancer; and communication and caring in this disease phase. Underlying themes were, for instance: the proactive role of the PCP, PCPs' wishes, PCPs' experiences, the care for the patient, and communication and collaboration. Through thematic analysis of these themes, hypotheses emerged and were checked against the data. A professional translator translated the quotes chosen to illustrate our results. The quotes are from PCPs, unless stated otherwise. According to Dutch policy, the study did not require review by an ethics committee because the data collection was anonymous with regard to the participants (healthcare professionals) and the content of the discussions was not considered potentially incriminating. The consulted committee provided us with a declaration of no objection. One of the project members sent a short report to the participants straight after each focus group; we will send a Dutch version of this paper to the participants after publication. One participant also took part as a co-author of the paper by reflecting on the general findings and editing the manuscript. Involving participants working in clinical practice is increasingly done to ensure that data are analysed in close connection with actual medical practice (Richards et al.).

Over a period of 8 months, six focus group sessions were held with PCPs and medical specialists. During this period, a switch in mindset seemed to occur among PCPs. Whereas participants in the first focus group initially focussed on mainstream palliative care (e.g. patients living approximately < 1 year), participants in subsequent focus groups slowly started to broaden their scope towards patients with life expectancies of more than 1 year. Because we shared findings from earlier sessions, this probably accelerated participants' recognition that the patient group under discussion was somewhat different from the mainstream group of patients receiving palliative care (where palliative care is usually implemented if patients have life expectancies of less than 6 months).
Awareness of the increasing number of patients living longer with incurable cancer with adequate to high quality of life grew throughout the focus groups, and this was not only a result of the time elapsed between the different groups. We therefore identified this as one of the themes that emerged during the focus group sessions (Theme 1). The other themes were (2) the definition and recognition of a disease phase in which patients are living longer with incurable cancer, and (3) communication and caring in this disease phase.

Awareness of patients living longer with incurable cancer

In the first focus group, most PCPs reported that they considered patients living longer with incurable cancer as similar to those in the palliative disease phase (which, in general, concerns a patient with a life expectancy of ~1–6 months). They described timely marking of the palliative phase as crucial for appropriate care, referring to ongoing projects such as PATZ (a project stimulating palliative home care) (van der Plas et al.). PCPs reported using a negative answer to the 'surprise question' (e.g. 'Would you be surprised if the patient died within the next 12 months?') as the starting point for initiating palliative care. During the first focus group, participants only spoke about patients with an estimated life expectancy of less than one year, as defined by the common palliative care approach. Interestingly, in our third, fourth and fifth focus groups, the disease trajectory we were focusing on seemed to be more accepted as a distinct part of the patient's disease trajectory (e.g. separate from the terminal disease stage). Neither PCPs nor medical specialists associated these patients with having a terminal form of cancer and/or an approaching death, although they were aware of the incurable nature of the disease. The difficulties they described in defining this specific disease phase, e.g. patients living longer with incurable cancer, were partly because they considered this to be a grey zone.

Box 1. Respondent 2: Well, anyway, we were talking about the intermediate phase [patients do not request palliative care specifically, but are aware of the incurable nature of the disease]. This is quite a different phase. [Focus group 5]

Although PCPs acknowledged their added value in this specific disease phase, they also noted that their level of involvement should depend on the patient's preference. Some were aware that if patients preferred to stay in touch with their PCP, the chance of receiving further treatment could be lower than when they (also) preferred to stay in contact with their medical specialist.

Box 2. Put it another way, if you are going to ask for advice, you search to see who you want to ask. And you know that if you go to the medical specialist, you'll get suggestions for further treatment and if you go to your PCP, you will get a more palliative approach, people know that… [Focus group 5]

Medical specialists agreed that patients living longer with incurable cancer should be approached differently than patients in the palliative phase of cancer (e.g. estimated life expectancy < 1 year). They acknowledged this disease trajectory as a distinct phase compared to mainstream palliative care. Interestingly, instead of the PCPs' problems regarding the dichotomy 'incurable/mainstream palliative care' (< one year to live), medical specialists especially focused on the dichotomy 'curable/incurable'.

Box 3.
Respondent (medical specialist): For me, as a medical specialist, of course my preference is to be on time, if anything can still be treated, be cured. […] So I monitor someone closely if possible, or if necessary, whereas if you know someone can't be cured any more, then the focus is on the quality of life and prolonging life if possible. Then you have a very different attitude. [Focus group 4]

Defining and differentiating different disease phases

During the group sessions, the participants did not always speak about the same disease phase, despite our efforts to clearly define this phase beforehand. Participants sometimes used the terms protracted incurable cancer, the terminal disease phase and the palliative disease phase interchangeably. Insecurity about prognosis could result in miscommunication surrounding terminology: during the sessions, there seemed to be no consensus or shared definition among healthcare professionals of patients living longer with incurable cancer. Accordingly, most participants experienced problems in giving this disease phase a specific name. Whereas some of the PCPs and medical specialists reported that 'chronic disease' did not fit the disease trajectory of these patients, others were very much in favour of this term.

Box 4. Respondent 1: Right, 'chronic' doesn't really fit but on the other hand there's no better term for it. […] Respondent 2: Right, diabetes is a chronic condition but you don't die from diabetes, you die from the complications. That's the difference with chronicity. Respondent 1: So, what is a chronic disorder? Respondent 2: Diabetes is a chronic disorder. Respondent 3: Yes, but what's the definition of a chronic disorder? Respondent 2: Something you're stuck with for the rest of your life. […] Respondent 2: Why do you actually have to define […] Respondent 3: It's handy to define something we are all talking about. [Focus group 4]

The major problem both PCPs and medical specialists addressed was that care for patients living longer with incurable cancer could only be distinguished from the terminal/mainstream palliative disease phase in hindsight. Most of the participants considered 'chronic' a correct term.

Box 5. Respondent 1: Can you just look back afterwards or not? Can you already kind of say, "Well, this will be a chronic phase"? Respondent 2 (medical specialist): Yes, that's all quite tricky… yes, pretty awkward. Respondent 3: Because when do you decide that? If a therapy starts to work or whatever after the first phase, do you then say "Now it's stable"? Respondent 2 (medical specialist): Right, then I say "It's stable now, you know". Our policy is to wait and see, then someone comes to the outpatient clinic a few months later. I don't say then, "I hope an intermediate phase is starting but equally it could go wrong in three months". [Focus group 5]

PCPs generally acknowledged their role in assisting patients in the very last stage of life. However, they reported preferring to contact patients at an earlier stage of their disease as well. Some of the medical specialists reported trying to contact the PCP by phone when a patient's condition deteriorated, in line with the preferences of most PCPs in this study. 'We need each other' was a frequent comment. Medical specialists worried that contacting all PCPs about this expanding patient group in a timely manner would become unfeasible in the future.
Other problems that medical specialists themselves faced were time constraints, the struggle to find the appropriate care role, a lack of knowledge of and experience with this specific disease phase, and dealing with an uncertain prognosis.

Box 6. Medical specialist: Well, then you get, um… you get that drug A. 70% chance of you still being around one year on. It works out fine. A year later, you see a recurrence, but you don't take it too hard: ah, resistance. "Oh well, I've still got another drug". Again it's a 60% chance, but this also works out fine. Well, anyway, you carry on like this but it's really the same every time: You go along, you tell them again that they're dying as it were, the progression… then a couple of weeks later you say "No, I've got something for you". I'm really pleased. […] How do you cope emotionally, how can you manage? [Focus group 4]

Communication and care in a trajectory of patients living longer with incurable cancer

The PCPs' main concern was to guarantee optimal communication with patients as well as medical specialists. They often reported barriers when trying to reach medical specialists. Accordingly, approaching their patients on time when medical specialists did not contact the PCPs themselves seemed difficult. Moreover, participants were hesitant to reach out to their patients. They doubted whether patients in fact desired additional contact with their PCP on top of the contact with their medical specialist. As a result, many PCPs decided to adopt a 'passive' attitude in this disease phase, for example waiting to see whether the patient would approach them, even though they themselves were willing to see these patients.

Box 7. Well, the first thing that I find remarkable [reading this case scenario] is that the patient is saying 'I do not need any extra care from the PCP'. As a PCP, I'd actually quite like to know, really find out from such a patient what the developments with respect to her cancer are, so I like to see those patients from time to time to discuss that. [Focus group 1]

This passive attitude of PCPs was partly related to the fact that these patients were regarded as patients in good condition, for example the 'chronic' cancer patients. PCPs mostly explored what was going on in the patient's life – regarding work, relations and their mental and physical condition (if patients requested a consultation with their PCP). PCPs with special education in palliative care felt they generally were more engaged with patients with cancer than colleagues who had not followed such a course or had no special knowledge of palliative care; they seemed more inclined to contact patients themselves. Nevertheless, all of them reported difficulties in tracing these patients at the right time and finding the time to contact these patients if the patients did not contact them first.

Box 8. Yes, but what's tricky is how to keep an eye on everyone. We've got a big practice with just the two of us, 4500 patients and that's quite a lot of people. Then I say "I'll do it (taking care of a patient living longer with incurable cancer)" but occasionally I think "Oh no, I completely forgot"… so that's… you want to (take care of/monitor these patients) but I can't; the way I do things at the moment, I can't keep track of everyone. [Focus group 3]

Some patients asked their PCPs for a 'second opinion'. After having heard the advice of their medical specialist, they discussed their options with their PCP.
A large proportion of PCPs reported feeling somewhat incompetent for this task because they were not up to date regarding the latest developments in anti-cancer treatment. PCPs reported highly appreciating easy-to-understand letters from medical specialists when treatment decisions had been made or serious side-effects emerged.

Box 9. Respondent 1: So if they say, "I'd rather get it from the PCP"… I've also had people saying, "Well, I heard this story from the specialist; now I'm coming to you to discuss this a bit"… Respondent 2: Yes, exactly. Respondent 1: But then of course you need to be informed as a PCP about what they said so that you can respond properly, because PCPs can't keep up with all the new techniques and studies that are going on. [Focus group 1]
Statement of principal findings

In this focus group study, we consecutively held three group discussions with PCPs and three multidisciplinary group discussions with PCPs and medical specialists about patients living longer with incurable cancer. PCPs as well as medical specialists acknowledged that providing care to these patients is both challenging (e.g. patients were living longer) and complex (due to the unpredictable disease trajectory, prognosis and side-effects). They also struggled to find the right label for this specific disease phase, using the terms 'stable', 'chronic' and 'palliative' interchangeably. All participants acknowledged problems in communication, both with patients and with colleagues. Whereas PCPs generally preferred a proactive role with their patients, they reportedly stayed passive in some cases, leaving the initiative for a PCP consultation to the patient.

Strengths and weaknesses of the study

This study explored a relatively new topic and illustrated an important hiatus in (primary) oncologic care. Although the number of patients living longer with incurable cancer is growing, many PCPs did not acknowledge this disease phase as different from the terminal disease phase (< 6 months life expectancy). Although this could be considered a limitation of the study, it was also an important finding. Furthermore, we evaluated how opinions and ideas developed throughout the study period. A feature that increases the validity of this study is the multidisciplinary composition of the focus groups, in which ideas and information were exchanged between PCPs and medical specialists. A great advantage is the immediate exchange of ideas and information when something appears to be unclear to one of the focus group members. We specifically chose a diverse group composition to be able to discuss this topic in the broadest sense (e.g. young/old, palliative/not palliative minded, male/female, etc.).

Our study has limitations too. First, recruiting participants through local contacts is susceptible to 'volunteer bias'. Since we recruited participants via various organisations and persons, we believe that this form of bias is limited. Second, one of the focus groups was rather small (N = 4). Third, socially desirable answers might have been given, especially since some participants were acquainted. Fourth, only PCPs participated in our first three focus group sessions. It might have been better to start with multidisciplinary groups, since the interaction between PCPs and medical specialists significantly improved the outcomes of the discussions. Fifth, focus groups are susceptible to moderator bias. However, we followed a standard scheme and did not try to incorporate our own opinions in any way. Finally, we did not include the patient perspective in this study, and our findings cannot be transferred to all contexts; this would be interesting to explore in future studies.

Findings in relation to other studies

Defining the trajectory of patients living longer with incurable cancer

Awareness that patients may live with incurable cancer for longer than the estimated period of one year (the recommended period for 'mainstream' palliative care) is the first step towards improving care for these patients (Boyd et al.; Buiting and Bolt; Schildmann et al.). Our study showed that even within the 8-month study period, a switch in mindset seemed to arise.
Apart from awareness of this disease phase due to media attention, more clarity about the impact of choosing specific labels for this disease phase on patients' well-being (and accordingly their decision-making capacity) is another important step, also because labels may influence how healthcare professionals themselves act. Most of our participants were unsure what the appropriate disease label for this specific disease phase should be. However, it could be debated whether one specific disease label would be worthwhile for this disease phase, and/or whether labels differ to a certain extent between stakeholders (e.g. healthcare professional, patient, policymaker, etc.). It is probable that a disease label used among healthcare professionals (across different healthcare professionals/in the medical record) and towards patients (during consultations) can have important implications. At first sight, the disease label that is chosen seems fundamental to formulating treatment aims (doctor perspective) as well as coping strategies, well-being and treatment decisions (patient perspective), and this differs to a certain extent across disciplines. Our study, for instance, showed that medical specialists are more inclined to differentiate between the labels curable/incurable than between incurable/the last stage of life. This does not mean that they also communicate this as such towards their patients. In fact, in the very last stage of life (life expectancy of a couple of months), it is generally the PCP and not the medical specialist taking care of the patient, which could be one explanation for this difference in mindset. Moreover, the difference could also be explained by the fact that medical specialists take care of their patients while providing treatment, whereas PCPs have, in particular, a supportive role (if consulted) regarding the patient's life-story. Although it is generally agreed that medical specialists need to prevent overtreatment (Buiting et al.; van Ommen-Nijhof and Sonke; The et al.), it seems logical at the same time that medical specialists have a different mindset and are inclined to advise differently than PCPs, given that they know their patients' treatment trajectory over such a long time (Buiting et al.). They can probably better estimate whether additional treatment could be beneficial or not.

Knowledge

Both PCPs and medical specialists reported a lack of knowledge about patients living longer with incurable cancer. This is not surprising, since this phenomenon is new to many physicians (Buiting et al.). Current literature on survivorship care rarely touches specifically upon patients with incurable cancer (Vijayvergia et al.), while literature on palliative care generally excludes patients living with a metastasised form of cancer for more than one year. However, the number of studies that describe patients living longer with incurable cancer is slowly increasing for, for instance, breast cancer, lung cancer and prostate cancer (Buiting et al.; Harley et al.). It thus seems that patients receiving both anti-cancer treatment and (if wanted) supportive/survivorship care currently rely mainly on the oncologist, for example for anti-cancer treatment, either alone or in combination with paramedical care. With the introduction of new journals such as BMJ Supportive & Palliative Care, attention to the overlap between both disciplines seems to be increasing.
Still, supportive care is primarily focused on patients who can be cured, whereas palliative care is primarily focused on patients in their last year of life. The terminology used seems to determine to a great extent how care is framed; for example, hearing about stage IV disease (medical jargon for metastasised disease) is different for patients than hearing about an incurable form of cancer. Today, new treatment options, such as immunotherapy and checkpoint inhibitors, can have astonishing effects, with longer survival and a lower risk of side-effects (Blank et al.). At the same time, new side-effects are observed, which are to a great extent unknown to both PCPs and medical specialists. It is therefore not unexpected that participants were somewhat uncertain about the effects of these new anti-cancer drugs, and accordingly about treating patients in this specific disease phase. To prevent reluctance among PCPs who want to become involved, a clear role for both specialties needs to be further explored. We previously reported that – at present – the role of PCPs caring for patients living longer with incurable cancer is mostly limited to the psychosocial aspects of the decision-making process and the treatment of common comorbidities (Buiting and Bolt). A study by Klabunde et al reported barriers to effective communication between PCPs and medical specialists in survivorship care (curable and incurable) (Klabunde et al.; Klabunde et al.). Bringing expertise and experience together and weighing up the available options could possibly improve the decision-making process. Combining the strengths of the medical oncologist (adequate provision of anti-cancer treatment, doctor-patient communication) and the PCP regarding oncology patients (supportive care, doctor-patient communication, life course medicine) may be ideal in certain situations. Integrating elements of shared care in a multidisciplinary setting is challenging, and more research in this field is warranted. It requires a comprehensive and multidisciplinary care infrastructure between various healthcare professionals (Doull; Loonen et al.).

Conclusions for research and/or practice

Providing care to patients living longer with incurable cancer (e.g. a life expectancy of at least 1 year) is considered both challenging (e.g. patients were living longer) and complex (due to the unpredictable disease trajectory, prognosis and side-effects). Using specific labels towards patients (next to other elements that determine patients' well-being during consultations (Buiting et al.)) can have a tremendous impact on patients' well-being and, accordingly, on the decisions they would like to make regarding anti-cancer treatment. Both PCPs and medical specialists need to be aware of using terms such as 'stable', 'chronic' and 'palliative' interchangeably (Buiting et al.). Although this exploratory research provides indications that the term 'chronic' would suit patients in this disease phase best, follow-up research that also includes the patient perspective should strengthen these results further. PCPs will have an increasing number of patients living longer with incurable cancer in their practice. However, in a single PCP practice, experience with incurable cancer patients remains limited, partly because patients often prefer to stay in contact with their specialist. PCPs as well as medical specialists are unsure how best to label these patients and how their care can be guaranteed.
The development of an education module could help motivate PCPs and medical specialists to find ways to interact better with each other and to demarcate more clearly between the 'mainstream' palliative disease phase and longer disease phases with metastatic cancer. Medical specialists in particular will become more aware of the group of patients living longer with incurable cancer, including their care needs while receiving anti-cancer treatment. They can play an important role by referring their patients to their PCPs in a timely manner. Throughout the period in which patients live longer with incurable cancer, contact with both PCPs and medical specialists seems preferable.
In this focus group study, we consecutively held three group discussions with PCPs and three multidisciplinary group discussions of PCPs and medical specialists about patients living longer with incurable cancer. PCPs as well as medical specialists acknowledged that providing care to these patients is both challenging (e.g. patients were living longer) and complex (due to the unpredictable disease trajectory, prognosis and side-effects). They also struggled in finding the right label for this specific disease phase, using the terms ‘stable’, ‘chronic’ and ‘palliative’ interchangeably. All participants acknowledged problems in the communication, both with patients and colleagues. Whereas PCPs generally preferred a proactive role with their patients, they reportedly stayed passive in some cases, leaving the initiative of PCP consultation to the patient.
This study explored a relatively new topic and illustrated an important hiatus in (primary) oncologic care. Although the number of patients living longer with incurable cancer is growing, many PCPs did not acknowledge this disease phase as different compared to the terminal disease phase (<6 months life expectancy). Although this could be considered a limitation of the study, it was also an important finding. Furthermore, we evaluated how opinions and ideas developed throughout the study period. A feature that increases the validity of this study is the multidisciplinary composition of the focus groups, in which ideas and information were exchanged between PCPs and medical specialists. A great advantage is the immediate exchange of ideas and information when something may appear to be unclear by one of the focus group members. We specifically chose to the diverse group composition to be able to discuss this topic in the most broadest sense (e.g. young/old, palliative/not palliative minded, male/female, etc.). Our study has limitations too. First, recruiting participants depending on the use of local contacts is susceptible to ‘volunteer bias’. Since we recruited participants via various organisations and persons we believe that this form of bias is limited. Second, one of the focus groups was rather small ( N = 4). Third, social desirable answers might have been given, especially since some participants were acquainted. Fourth, in our first three focus group sessions only PCPs participated. It might have been better to start with multidisciplinary groups since the interaction between PCPs and medical specialists significantly improved the outcomes of the discussions. Fifth, focus groups are susceptible to moderator bias. We however followed a standard scheme and did not try to incorporate our own opinion in any way. Finally, we did not include the patient perspective in this study and our findings cannot be transferred to all contexts, which is interesting to explore in future studies.
Defining the trajectory of patients living longer with incurable cancer Awareness of patients living longer with incurable cancer than the estimated period of one year (the recommended period for ‘mainstream’ palliative care) is the first step to improve care for these patients (Boyd et al . ; Buiting and Bolt ; Schildmann et al . ). Our study convincingly showed that even in those 8 months study period, a switch in mindset seemed to have arisen. Apart from awareness about this disease phase due to media attention, more clarity about the impact of choosing specific labels for this disease phase on patients’ well-being (and accordingly their decision-making capacity) is another important step. Also, because labels may influences how healthcare professionals themselves act. Most of our participants were unsure what the appropriate disease label for this specific disease phase should be. It however could be argued whether one specific disease label would be worthwhile for this disease phase, and/or whether labels to a certain extent differ between different stakeholders (e.g. healthcare professional, patient, policymaker, etc.). It is probable that a disease label that is used among healthcare professionals (across different healthcare professionals/in the medical record) and towards patients (during consultations) can have important implications. At first sight, the disease label that is chosen seems a strong fundamental in formulating treatment aims (doctor perspective) as well as coping strategies, well-being and treatment decisions (patient perspective), which to a certain extent differs across disciplines. Our study for instance showed that medical specialists are more inclined to differentiate between the labels curable/incurable instead of incurable/the last stage of life. This does not mean that they also communicate this as such towards their patients. In fact, in the very last stage of life (life expectancy of a couple of months) it is generally the PCP and not the medical specialist taking care for the patient, which could be one explanation for this difference in mindset. Moreover, the difference could also be explained by the fact that medical specialists take care for their patients while providing treatment, whereas PCPS have in particular a supportive role (if consulted) about the patient’s life-story. Although it is generally agreed that medical specialists need to prevent overtreatment (Buiting et al. ; van Ommen-Nijhof and Sonke ; The et al. ), it at the same time seems logical that medical specialists have a different mindset and are inclined to advise differently than PCPs while they know their patients’ treatment trajectory for such a long time (Buiting et al. ). They probably can better estimate whether additional treatment could be beneficial or not. Knowledge Both PCPs and medical specialists reported a lack of knowledge on patients living longer with incurable cancer. This is not surprising, since this phenomenon is new for many physicians (Buiting et al . ). Current literature on survivorship care rarely specifically touches upon patients with incurable cancer (Vijayvergia et al . ), while literature in palliative care generally excludes patients living with a metastasised form of cancer, for more than one year. However, the number of studies that describe patients living longer with incurable cancer slowly increases in for instance breast cancer, lung cancer and prostate cancer (Buiting et al. ; Harley et al. ). 
It thus seems that patients both receiving anti-cancer treatment and (if wanted) supportive/survivorship care currently mainly rely on the oncologist, for example, anti-cancer treatment either in combination with paramedical care. With the introduction of a new journal, for example, BMJ Palliative & Supportive Care, attention and overlap towards both disciplines seem to increase. Still, supportive care is primarily focused on patients who can be cured, whereas palliative care is primarily focused on patients in their last year of life. Using different terminology to a great extent seems to determine how care is circumvented, for example, for patients hearing about stage IV disease (medical jargon for metastasised disease) is different compared to patients with an incurable form of cancer. Today, new treatment options, such as immunotherapy and checkpoint inhibitors, can have astonishing effects, with longer survival rates and lower risk of side-effects (Blank et al . ). At the same time, new side-effects are observed, which are to a great extent unknown to both PCPs and medical specialists. It is therefore not unexpected that participants were more or less uncertain about the effects of these new anti-cancer drugs, and accordingly about treating patients in this specific disease phase. Preventing reluctance of PCPs wanting to become involved, a clear role of both specialties need to be further explored. We previously reported that – at present – the role of PCPs taking care for patients living longer with incurable cancer is mostly limited to the psychosocial aspects of the decision-making process and treatment of common comorbidities (Buiting and Bolt ). A study of Klabunde et al reported barriers to effective communication between PCPs and medical specialists in survivorship care (curable and incurable) (Klabunde et al . ; Klabunde et al . ). Bringing expertise and experiences together and weighing up the available options could possibly improve the decision-making process. Combining the strengths of the medical oncologist (adequate provision of anti-cancer treatment, doctor-patient communication) and the PCP regarding oncology patients (supportive care, doctor-patient communication, life course medicine) may in certain situations be ideal. Integrating elements of shared care in a multidisciplinary setting is challenging and more research in this field is warranted. It requires a comprehensive and multidisciplinary care infrastructure between various healthcare professionals (Doull ; Loonen et al . ).
Awareness of patients living longer with incurable cancer than the estimated period of one year (the recommended period for ‘mainstream’ palliative care) is the first step to improve care for these patients (Boyd et al . ; Buiting and Bolt ; Schildmann et al . ). Our study convincingly showed that even in those 8 months study period, a switch in mindset seemed to have arisen. Apart from awareness about this disease phase due to media attention, more clarity about the impact of choosing specific labels for this disease phase on patients’ well-being (and accordingly their decision-making capacity) is another important step. Also, because labels may influences how healthcare professionals themselves act. Most of our participants were unsure what the appropriate disease label for this specific disease phase should be. It however could be argued whether one specific disease label would be worthwhile for this disease phase, and/or whether labels to a certain extent differ between different stakeholders (e.g. healthcare professional, patient, policymaker, etc.). It is probable that a disease label that is used among healthcare professionals (across different healthcare professionals/in the medical record) and towards patients (during consultations) can have important implications. At first sight, the disease label that is chosen seems a strong fundamental in formulating treatment aims (doctor perspective) as well as coping strategies, well-being and treatment decisions (patient perspective), which to a certain extent differs across disciplines. Our study for instance showed that medical specialists are more inclined to differentiate between the labels curable/incurable instead of incurable/the last stage of life. This does not mean that they also communicate this as such towards their patients. In fact, in the very last stage of life (life expectancy of a couple of months) it is generally the PCP and not the medical specialist taking care for the patient, which could be one explanation for this difference in mindset. Moreover, the difference could also be explained by the fact that medical specialists take care for their patients while providing treatment, whereas PCPS have in particular a supportive role (if consulted) about the patient’s life-story. Although it is generally agreed that medical specialists need to prevent overtreatment (Buiting et al. ; van Ommen-Nijhof and Sonke ; The et al. ), it at the same time seems logical that medical specialists have a different mindset and are inclined to advise differently than PCPs while they know their patients’ treatment trajectory for such a long time (Buiting et al. ). They probably can better estimate whether additional treatment could be beneficial or not.
Both PCPs and medical specialists reported a lack of knowledge on patients living longer with incurable cancer. This is not surprising, since this phenomenon is new for many physicians (Buiting et al . ). Current literature on survivorship care rarely specifically touches upon patients with incurable cancer (Vijayvergia et al . ), while literature in palliative care generally excludes patients living with a metastasised form of cancer, for more than one year. However, the number of studies that describe patients living longer with incurable cancer slowly increases in for instance breast cancer, lung cancer and prostate cancer (Buiting et al. ; Harley et al. ). It thus seems that patients both receiving anti-cancer treatment and (if wanted) supportive/survivorship care currently mainly rely on the oncologist, for example, anti-cancer treatment either in combination with paramedical care. With the introduction of a new journal, for example, BMJ Palliative & Supportive Care, attention and overlap towards both disciplines seem to increase. Still, supportive care is primarily focused on patients who can be cured, whereas palliative care is primarily focused on patients in their last year of life. Using different terminology to a great extent seems to determine how care is circumvented, for example, for patients hearing about stage IV disease (medical jargon for metastasised disease) is different compared to patients with an incurable form of cancer. Today, new treatment options, such as immunotherapy and checkpoint inhibitors, can have astonishing effects, with longer survival rates and lower risk of side-effects (Blank et al . ). At the same time, new side-effects are observed, which are to a great extent unknown to both PCPs and medical specialists. It is therefore not unexpected that participants were more or less uncertain about the effects of these new anti-cancer drugs, and accordingly about treating patients in this specific disease phase. Preventing reluctance of PCPs wanting to become involved, a clear role of both specialties need to be further explored. We previously reported that – at present – the role of PCPs taking care for patients living longer with incurable cancer is mostly limited to the psychosocial aspects of the decision-making process and treatment of common comorbidities (Buiting and Bolt ). A study of Klabunde et al reported barriers to effective communication between PCPs and medical specialists in survivorship care (curable and incurable) (Klabunde et al . ; Klabunde et al . ). Bringing expertise and experiences together and weighing up the available options could possibly improve the decision-making process. Combining the strengths of the medical oncologist (adequate provision of anti-cancer treatment, doctor-patient communication) and the PCP regarding oncology patients (supportive care, doctor-patient communication, life course medicine) may in certain situations be ideal. Integrating elements of shared care in a multidisciplinary setting is challenging and more research in this field is warranted. It requires a comprehensive and multidisciplinary care infrastructure between various healthcare professionals (Doull ; Loonen et al . ).
Providing care to patients living longer with incurable cancer (e.g., a life expectancy of at least 1 year) is considered both challenging (e.g. patients living longer than expected) and complex (due to the unpredictable disease trajectory, prognosis and side-effects). Using specific labels towards patients (next to other elements that determine patients’ well-being during consultations (Buiting et al. , )) can have a tremendous impact on patients’ well-being and, accordingly, on the decisions they would like to make regarding anti-cancer treatment. Both PCPs and medical specialists need to be aware of using terms such as ‘stable’, ‘chronic’ and ‘palliative’ interchangeably (Buiting et al. ). Although this exploratory research provides indications that the term ‘chronic’ would suit patients in this disease phase best, follow-up research that includes the patient perspective should strengthen these results further. PCPs will have an increasing number of patients living longer with incurable cancer in their practice. However, in a single PCP practice, experience with incurable cancer patients remains low, partly because patients often prefer to stay in contact with their specialist. PCPs as well as medical specialists are unsure how these patients should best be labelled, and how their care can be guaranteed. The development of an education module could help motivate PCPs and medical specialists to find options to interact better with each other and to make clearer demarcations between the ‘mainstream’ palliative disease phase and longer disease phases with metastatic cancer. Medical specialists in particular will then be more aware of the group of patients living longer with incurable cancer, including their care needs while receiving anti-cancer treatment. They can play an important role by referring their patients to their PCPs in a timely manner. During the total period of patients living longer with incurable cancer, contact with both PCPs and medical specialists seems preferable.
|
Defining genomic epidemiology thresholds for common-source bacterial outbreaks: a modelling study
|
b416911f-e6e0-4b32-98e0-016e52b11ffe
|
10156608
|
Microbiology[mh]
|
Epidemics caused by exposure to a single common source (eg, unsafe food or contaminated water) are important targets of epidemiological surveillance and infection control strategies. , Rapid identification of the source enables outbreak control and is therefore crucial to public health. In the simplest and most common cases, a single pathogenic strain contaminates the source and subsequently causes infections (referred to as a clonal outbreak), which is often the case for contaminated food, water, or environmental sources. Such sources generally remain uncontaminated and often exist in the context of strong regulatory measures, especially in high-income countries; however, clonal outbreaks are still major causes of disease. Many countries have surveillance systems to rapidly identify such outbreaks (eg, for foodborne pathogens such as Salmonella spp or Listeria monocytogenes ) that use genome sequencing to identify related strains. , This strategy, named reverse epidemiology, forms the basis of surveillance systems used for foodborne pathogens such as PulseNet, one of the largest surveillance networks of bacterial genotypes worldwide (for PulseNet, see https://www.cdc.gov/pulsenet/about/index.html ). Molecular surveillance (ie, genetic fingerprinting) enables the detection of nearly identical infectious isolates and might trigger epidemiological investigations. These investigations include the search for case-associated risk factors, such as consumption of a particular food item, and microbiological analyses of suspected sources. Such investigations can lead to infection control measures that can prevent further cases. ,
Research in context
Evidence before this study
We searched PubMed for studies published in English from database inception to April 3, 2021, with the terms (threshold OR cut-off OR genetic relatedness) AND (outbreak) AND (cgMLST OR wgMLST OR SNPs) AND (microbial OR bacteria OR bacterial OR pathogen). We found 222 related articles. Most studies define a fixed single-nucleotide polymorphism (SNP) threshold that relates outbreak strains based on previous observations. One original study identifies outbreak clusters based on transmission events. However, this study relies on strong assumptions about molecular clock and transmission processes.
Added value of this study
Our study describes a new method based on a forward Wright-Fisher model to find the most appropriate genetic distance threshold to discriminate between outbreak and non-outbreak isolates. This method is fast and simple to use with only few assumptions, informed by outbreak duration and pathogen mutation rate. By using SNP or core genome multilocus sequence typing pairwise distances and sample collection dates of the outbreak of interest, the algorithm provides context-based guidance to separate outbreak strains from outliers.
Implications of all the available evidence
The fast and easy method developed in this study facilitates hypothesis-driven definitions of outbreak thresholds, in lieu of predefined thresholds. Defining clusters more accurately on the basis of an outbreak's specific epidemiological features, and estimating the most probable duration of the outbreak (time since initial source contamination), provides greatly needed precision for epidemiological surveillance and outbreak investigation. This novel approach might enable more efficient leveraging of molecular epidemiology data for the purposes of uncovering contamination sources.
Distinguishing case cluster isolates from sporadic ones has been the long-standing conundrum faced by molecular epidemiological surveillance. The identification of single-strain clusters of infections is confounded by a background of sporadic cases caused by exposure to unrelated sources. Defining a single strain typically uses a threshold of genetic distance, which discriminates between isolates that are related or unrelated to the same source or transmission event, and many attempts have been made to define such thresholds. , In the whole-genome sequencing era, thresholds have become smaller and more precise than with pre-genomic methods such as pulsed-field gel electrophoresis. , , , , , Threshold definition is usually based on the genetic variability observed within previous, well characterised outbreaks, an approach rooted in the epidemiological concordance principle. , However, consensus on the interpretation of molecular data for strain definition has not been reached. , , From an evolutionary perspective, bacteria that contaminate an initially sterile source can be considered as subpopulations of individual bacteria that have evolved from a single common ancestor (ie, the original strain) for a particular time period (ie, the duration since initial source contamination). Main factors that affect genetic distances between isolates include: (1) the amount of time between initial contamination of the source and the first infection, (2) the mutation rate of the pathogen's genomic markers, and (3) the sampling dates of infected patients. However, the genetic distance between outbreak isolates and the closest detected non-outbreak isolate depends on sampled genomes outside the contamination event. All these parameters considered, a threshold for one outbreak is unlikely to be applicable to another outbreak, even if they involve the same pathogen. For example, it is unrealistic to consider the same genetic threshold for an outbreak lasting 2 years versus 2 weeks, or for two pathogens with mutation rates that are orders of magnitude apart. Instead, using outbreak-specific thresholds defined based on the genetic diversity expected given their particular epidemiological contexts is likely to represent a more successful strategy. Attempts to ground threshold definition in evolutionary biology include the use of coalescent models, transmission models, and Bayesian most recent common ancestor models. , In this study, we aimed to develop a novel modelling framework, which we will refer to as sameStrain, to estimate genetic distance thresholds for single-strain outbreaks from a contaminated environmental or food source, based on outbreak-specific features (ie, pathogen genetic mutation rate and time since initial source contamination), by simulating the accumulation of mutations using these parameters. We embed this model into a Markov Chain Monte Carlo (MCMC) framework to estimate—from data including sampling dates and isolates' genetic variation—mutation rate or time since source contamination.
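To make the notion of a pairwise genetic distance concrete, here is a minimal sketch in Python (the published sameStrain code is an R package; everything below, including the function and isolate names, is illustrative rather than taken from that package). It counts differing sites between aligned sequences of equal length, the quantity that SNP-based thresholds are applied to:

```python
from itertools import combinations

def snp_distance(seq_a: str, seq_b: str) -> int:
    """Count differing sites between two aligned sequences of equal length."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

def pairwise_distances(isolates: dict) -> dict:
    """Pairwise SNP distances, keyed by isolate-name pairs."""
    return {(i, j): snp_distance(isolates[i], isolates[j])
            for i, j in combinations(sorted(isolates), 2)}

# Toy data: C and D differ by 1 SNP; E is more distant from both.
isolates = {"C": "ACGTACGT", "D": "ACGTACGA", "E": "TTGTACGA"}
print(pairwise_distances(isolates))
# {('C', 'D'): 1, ('C', 'E'): 3, ('D', 'E'): 2}
```

For cgMLST data the same idea applies, with allelic differences across loci counted in place of nucleotide differences.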
Definition of an outbreak
We define an outbreak (or cluster of cases) as a group of cases caused by a single strain (monoclonal outbreak), excluding co-occurring cases caused by genetically unrelated strains (ie, from other sources). Note that we focus here on environmental or food outbreaks with a single source (ie, excluding outbreaks involving human-to-human transmission).
Identification of outbreak datasets
To identify datasets we searched PubMed and Google Scholar using the keywords “foodborne disease”, “foodborne investigation”, “foodborne illness”, “food source”, “data from outbreak investigation”, or “outbreak surveillance data”, published from database inception up until April 3, 2021, for studies published in English. We screened the articles for presence of genomic data, sampling dates, and clear epidemiological conclusions as to the inferred relationships of isolates with the infection cluster. We included articles that described only one pathogen.
Evolutionary model
Our evolutionary formalisation ( ) is based on a Wright-Fisher forward model of haploid infectious agent evolution with constant population size. , Each simulation is initialised with a homogeneous population of an infectious agent characterised by five properties: (1) L, the genome length (in base pairs) or the average length of genes of multilocus sequence typing approaches; (2) g, the number of genes; (3) μ, the number of substitutions per site per year; (4) D, the duration (in days) of the outbreak, defined as the time elapsed between the initial contamination of the source and the sampling date of the last isolate; and (5) S_d, the set of isolate sampling dates, which is defined either directly from the source sampling dates or from infection sampling dates, ignoring incubation period and within-host evolution. Substitutions are introduced randomly at each time step, with their number following a Poisson distribution of parameter λ = (μ / 365) × N × L × g, where N is the population size in individual bacteria. At each time step, the next generation is formed by sampling N individuals uniformly with replacement from the population. After simulating over D days, one individual bacterium was randomly sampled for each sampling date in S_d; we then generated the distribution of pairwise genetic distances among these sampled individuals, which we used to define the genetic threshold value. Details of the sameStrain framework are provided in the appendix.
Analysis of published outbreak datasets
We reviewed the published datasets of bacterial source-related outbreaks and analysed the datasets using our modelling framework. Inclusion criteria were: (1) an identified foodborne outbreak, (2) the availability of whole-genome sequence data, and (3) availability of collection dates for described isolates. In a first analysis, we extracted information on D from the original publications describing these outbreaks. We also used previously estimated values of μ and g for the corresponding pathogen from the literature. We label D and μ values taken from the literature as D_lit and μ_lit, whereas those derived from our MCMC estimation (described later) are labelled as D_estimated and μ_estimated.
Statistical analysis
To test the ability of the framework to distinguish between outbreak and non-outbreak cases, we ran a simulation study. We generated synthetic outbreaks from different combinations of D and μ ( ).
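To make the generative model concrete, the following Python sketch implements the forward Wright-Fisher simulation just described under an infinite-sites assumption (each new substitution creates a unique mutation identifier, so the pairwise distance between two isolates is the size of the symmetric difference of their mutation sets). All names, default parameter values, and the 99th-percentile threshold rule are illustrative assumptions; the paper's exact threshold rule is described in its appendix.

```python
import numpy as np
from itertools import combinations, count

def simulate_outbreak(D, mu, sampling_days, N=1000, L=3_000_000, g=1, seed=0):
    """One forward Wright-Fisher run of the model described above (a sketch).

    D: days since initial source contamination; mu: substitutions/site/year;
    sampling_days: days (counted from contamination) on which one isolate is
    sampled. Each individual is a frozenset of mutation identifiers (an
    infinite-sites assumption), so the pairwise genetic distance is the size
    of the symmetric difference between two mutation sets.
    """
    rng = np.random.default_rng(seed)
    lam = mu / 365 * N * L * g           # expected substitutions per day, population-wide
    next_id = count()
    pop = [frozenset()] * N              # homogeneous ancestral population
    samples = []
    for day in range(1, D + 1):
        pop = [pop[i] for i in rng.integers(0, N, size=N)]   # resample with replacement
        for i in rng.integers(0, N, size=rng.poisson(lam)):  # place new substitutions
            pop[i] = pop[i] | {next(next_id)}
        if day in sampling_days:
            samples.append(pop[rng.integers(0, N)])          # one isolate per date
    return [len(a ^ b) for a, b in combinations(samples, 2)]

# An illustrative threshold rule (an assumption): pool distances over
# replicate runs and take an upper quantile of the simulated distribution.
dists = [d for s in range(20)
         for d in simulate_outbreak(60, 1e-6, {20, 35, 50, 60}, seed=s)]
print("simulated 99th-percentile distance:", np.quantile(dists, 0.99))
```

Pooling distances over replicate runs smooths the stochasticity of any single forward run; with mu = 1e-6 substitutions per site per year and L = 3 Mb, the expected per-lineage rate is roughly 0.008 substitutions per day, so simulated distances stay small over a 60-day outbreak.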
We applied our framework to 171 independent simulated outbreaks generated with 19 distinct values of D, each combined with nine distinct values of μ, and including simulated non-outbreak or sporadic isolates ( ). For each simulated outbreak, we assessed the global sensitivity and specificity of the framework ( ). To address uncertainty underlying key model parameters, including the time since initial source contamination and the genetic mutation rate, our model was embedded into a Bayesian statistical inference framework to enable estimation of either the duration (D) or the substitution rate (μ) of studied outbreaks when unknown ( ; ). Briefly, we estimate D or μ from the observed pairwise genetic distance matrix by using a Markov Chain Monte Carlo (MCMC) algorithm. The simulated outbreaks described earlier were used to assess the ability of the model to estimate D and μ, and their impact on estimation of the genetic threshold. We used the 95% highest posterior density (95% HPD) intervals to assess accuracy of the estimates. We used the Kolmogorov–Smirnov test statistic to compare real distributions with simulated distributions as a goodness-of-fit indicator. No ethical approval was needed for this study.
Role of the funding source
The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.
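Before turning to the results, here is a schematic illustration of the estimation step described under Statistical analysis: an ABC-style Metropolis sampler for D, reusing simulate_outbreak from the sketch above. This is an illustration only, not the paper's algorithm (which is specified in its appendix): the pseudo-likelihood exp(-KS/eps), the random-walk step size, and the tolerance eps are all assumptions, as are all names.

```python
import numpy as np
from scipy.stats import ks_2samp

def abc_mcmc_duration(obs_dists, mu, offsets, n_iter=2000, step=5, eps=0.2, seed=1):
    """Schematic ABC-style Metropolis sampler for the outbreak duration D (days).

    obs_dists: observed pairwise distances among outbreak isolates.
    offsets: for each isolate, days between its sampling date and the most
    recent sampling date (the last isolate has offset 0); changing D shifts
    all sampling days relative to the contamination event.
    """
    rng = np.random.default_rng(seed)

    def discrepancy(d):
        days = {int(d - o) for o in offsets}   # sampling days since contamination
        sim = simulate_outbreak(int(d), mu, days, seed=int(rng.integers(2**31)))
        return ks_2samp(obs_dists, sim).statistic

    d_cur = max(offsets) + 1                   # smallest admissible duration
    ks_cur = discrepancy(d_cur)
    trace = []
    for _ in range(n_iter):
        d_prop = d_cur + int(rng.integers(-step, step + 1))
        if d_prop > max(offsets):              # contamination must precede all samples
            ks_prop = discrepancy(d_prop)
            # Metropolis acceptance under the pseudo-likelihood exp(-KS/eps)
            if rng.random() < np.exp((ks_cur - ks_prop) / eps):
                d_cur, ks_cur = d_prop, ks_prop
        trace.append(d_cur)
    return np.array(trace)

# Posterior summary after burn-in, eg a central 95% interval
# (observed_distances and the offsets are hypothetical inputs):
# trace = abc_mcmc_duration(observed_distances, mu=1e-6, offsets=[40, 25, 10, 0])
# lo, hi = np.percentile(trace[500:], [2.5, 97.5])
```

The Kolmogorov–Smirnov statistic serves here as the discrepancy between observed and simulated pairwise-distance distributions, mirroring its use as a goodness-of-fit indicator above.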
To test the ability of the framework to distinguish between outbreak and non-outbreak cases, we generated independent synthetic outbreaks from different combinations of D and μ ( ). As expected, specificity was poor for lower values of μ, especially when the ratio of evolution duration between outbreak and non-outbreak genomes (R_d) was small—ie, when non-outbreak genomes were more related genetically to outbreak genomes ( ). By contrast, as expected, sensitivity was always high (>99%), irrespective of the parameter combinations (not shown; same parameters as ). We also observed that the 95% specificity D-value threshold decreased with increasing values of R_d and μ ( )—ie, less time is needed to accurately discriminate between outbreak and non-outbreak genomes when the non-outbreak genomes are more distinct or when the mutation rate is higher. We next evaluated whether the model and framework could accurately estimate the parameters D and μ from outbreak data. To do so, we simulated synthetic outbreaks for which the values of D and μ were known, and attempted to estimate one or the other. Regarding D estimation, all 95% HPD estimates included the true value, and higher values of D were associated with smaller 95% HPD ( ). Similarly, μ was adequately estimated, and best estimates were closer to the target value for higher μ values ( ). Because higher D or μ values, or both, lead on average to more single-nucleotide polymorphisms (SNPs), greater precision in HPD estimates was expected in these cases. We also investigated the effect of sampling density (ie, the number of isolates sampled divided by the outbreak duration in days) on estimation accuracy and precision ( ). First, increasing sampling density increases precision in the estimation of both D and μ, whatever the duration of the outbreak. Second, when the number of isolates is too low (<10), estimates are generally biased, with underestimation of both D and μ. This effect is shown in the simulation results corresponding to the 60-day outbreaks ( ), when density is low (5–20%), which corresponds to three to 12 isolates. Importantly, we show that sampling densities higher than 10% led to unbiased estimates. We finally used our framework to analyse data from outbreaks found in the literature. 16 outbreaks were included in our analysis ( ), which are described in more detail in the appendix. For each of the 16 identified published outbreaks, we applied our framework to estimate an expected outbreak-specific genetic threshold value (an example of outbreak 11 is shown in the figure; all other outbreaks are shown in the appendix). We found that, for 14 of 16 outbreaks, the classification of isolates as being outbreak related or sporadic is consistent with previously reported proposals stemming from epidemiological information. Four of these outbreaks included outliers (outbreaks 1, 4, 12, and 16), which were correctly classified as being beyond the threshold of exclusion estimated by our model, except for one isolate of outbreak 4 ( ; note that outbreak 4 comprised three different co-contaminating genetic clusters; here the defined outbreak strain was ST528). Ten other outbreaks (2, 3, 5, 6, 7, 9, 10, 13, 14, and 15) had no sporadic cases, and our framework clustered all previously suspected isolates as outbreak related. For two of the 16 outbreaks, conclusions from our model were discordant with published results.
In outbreak 8 ( Listeria monocytogenes , beef), two isolates were classified as outliers by our model ( ), whereas they were initially classified as outbreak related in the associated publication. In outbreak 11 ( L monocytogenes , ox tongue), two isolates came from food and two others from humans. Our algorithm separated the food samples into one cluster and the human samples into another, whereas the isolates were initially grouped together based on epidemiological and genetic evidence. When evaluating the influence of outliers on the inferred threshold by removing them from the analysis, we found that, in all cases, the absence of outliers did not affect the outbreak threshold. For outbreaks 1, 4, and 16, this removal did not change the threshold value, but improved the fit between the pairwise distance distribution from the observed data and from the simulated one ( ). For each of the 16 outbreaks, we used our framework to re-estimate the outbreak duration D (D_estimated) and the substitution rate μ (μ_estimated) separately, and used these values (instead of D_lit and μ_lit taken directly from the literature and used above) to infer the genetic distance threshold ( ). For 10 of the 16 outbreaks, D_lit was well estimated: three D_lit values (outbreaks 1, 13, and 15) were within the corresponding HPD intervals and seven (outbreaks 2, 3, 5, 7, 8, 9, and 12) were just below. For the six remaining outbreaks, we found higher D_estimated values compared with previously reported D_lit ( ). Regarding μ, for 11 outbreaks the HPD intervals included μ_lit, whereas μ_estimated was lower than μ_lit for just one outbreak (outbreak 2), and higher than μ_lit for the four remaining outbreaks (outbreaks 4, 11, 14, and 16). It is important to note that for these four latter outbreaks, the 95% HPD of D_estimated was also higher than D_lit ( ). After re-analysing the outbreaks using our estimated values of either D_estimated or μ_estimated in lieu of D_lit or μ_lit, we observed that the newly obtained thresholds did not affect the attribution of isolates to the outbreak or sporadic categories, with three exceptions. First, for outbreak 4, using D_estimated or μ_estimated increased the threshold from four to 11 SNPs, leading to the addition of the previously missing isolate but still excluding the outliers. Second, for outbreak 15, a decreased genetic threshold (four SNPs instead of five, in both independent estimation analyses for D_estimated and μ_estimated) led to the exclusion of one isolate that was initially identified by the authors as belonging to the outbreak cluster. Finally, for outbreak 11, the genetic threshold was increased from four SNPs to seven SNPs using D_estimated and 10 SNPs using μ_estimated, leading to the grouping of all isolates from both food and human samples ( ). We also observed that, in five of 16 cases, using D_estimated and μ_estimated improved the fit of the genetic distance distribution compared with D_lit or μ_lit ( ).
In this study, we developed an original evolutionary approach to the single-strain threshold conundrum that incorporates epidemiological and microbiological specificities of the outbreak under study. Our framework, which we have named sameStrain, had high sensitivity and specificity for isolate classification when tested using simulations and the results from 16 published datasets from real-world foodborne outbreaks. sameStrain led to consistent isolate classification for most of the 16 outbreaks, and refined the outbreak definition for two outbreaks. Molecular surveillance facilitates the identification of common exposures to a single source of infection, even when dates and places of infection are distant. , , Given the high heterogeneity among the microbiological and epidemiological characteristics of different outbreaks, it is increasingly recognised that no universal single-species threshold exists that can be applied to distinguish between outbreak and non-outbreak isolates. This situation motivates a need for novel methods that estimate the expected genetic relatedness of outbreak isolates of a particular pathogen stemming from a common source, in the context of that outbreak's particular epidemiological characteristics. To our knowledge, Octavia and colleagues were the first to attempt to model the expected genetic distance among foodborne outbreak isolates. Although the authors incorporated mutation rate and outbreak duration in their model, they did not account for sampling dates. Consequently, their proposed thresholds depend on strong assumptions about the duration of the outbreak (referred to as the ex-vivo or in-vivo evolution time). Stimson and colleagues modelled the number of transmission events that separate infection cases, using a probabilistic model that incorporates the transmission process in addition to mutation rate and timing of infections. Because it models between-host transmission, this approach does not apply to point-source food outbreaks. Lastly, Coll and colleagues aimed at defining an SNP threshold above which transmission of Staphylococcus aureus between humans can be ruled out, by incorporating the timing of transmission and within-host diversity. This evolutionary modelling approach provides a robust SNP cutoff applicable to this specific ecological situation. In our study, the simulation showed that our model performed well at grouping outbreak cases. We also observed that high values of D and μ led to more accurate estimates of genetic thresholds: in other words, model specificity increased with genetic diversity. This finding is akin to higher-resolution typing methods being better at discriminating related from non-related cases. We also found an effect of the evolutionary distance between outbreak and sporadic isolates on model specificity, consistent with known uncertainty in ruling out sporadic cases for genetically homogeneous pathogens. Additionally, we found that the sampling density is important, because it influences the number of observed genetic differences: outbreaks with low diversity will require more samples to capture enough pairwise differences for accurate estimation. Our model assumes a constant pathogen population size N over time to avoid potential increases in computation time with growing populations. For accurate parameter estimation, the assumed value of N must be high enough to sufficiently capture the population's genetic diversity throughout the sampling process. 
Indeed, to increase its real-world applicability, our model simulates microbiological sampling processes and does not analyse the whole N population. Because λ, the Poisson parameter, is defined as a function of N , a population of 500 or 1000 individual bacteria is usually enough to capture all bacterial diversity, but higher values should be tested further when extreme substitution rates or durations are explored. In most outbreak investigations, the time since initial source contamination is unknown, and underestimation of D is a common risk given the possibility of cryptic transmission—ie, unreported cases having occurred before initial outbreak detection. Previous knowledge of μ is also subject to uncertainty: this parameter strongly depends on the species, strain, environmental conditions (eg, temperature and cellular stress), and potentially other factors. Our results suggest that, although model-based estimates of D and μ were largely consistent with published information informed by epidemiological data collection, they were nonetheless often larger than previously assumed values from the literature. This finding suggests that assumed parameters were less consistent with the observed diversity, suggesting either a longer duration since source contamination or a faster effective substitution rate. As D and μ both affect the expected genetic diversity in the same direction, it is impossible to know whether it is the rate, or the duration, that was higher than initially suspected. We suggest that, in the absence of evidence for higher μ, fixing μ and estimating D might provide important clues regarding previous cryptic transmission. Considering higher D values than suggested by case recognition is clearly relevant for epidemiological investigations of outbreaks, because it widens the considered time window and might lead to the identification of initially unsuspected sources of contamination. When the sample size is high enough (eg, >10), re-estimation of these parameters is recommended to refine the analysis. The analysis of the 16 published outbreaks led to the definition of genetic thresholds that were largely consistent with previous epidemiological evidence. For outbreaks 4 and 11, our model inferred a lower threshold than initially used in published reports of these outbreaks, defining as sporadic outliers some isolates that were initially considered as part of the cluster by the authors. When estimating the duration or substitution rate for both outbreaks, higher values were obtained by our model than values assumed from the literature. However, our model nonetheless grouped isolates consistently with respect to epidemiological evidence. Outbreak 11 involved foodborne listeriosis with contaminated food, where the two food samples differed by nine SNPs from the human samples, themselves separated by two SNPs. The two food samples were isolated from two food outlets that had the same meat producer. Because the incubation period of listeriosis is between 3 and 70 days, and because intermittent L monocytogenes contamination during meat production was observed, the duration of contamination D might have been higher than initially defined by the authors, suggesting that the true common ancestor of food and human isolates was in fact older than initially estimated in the original publication. This example illustrates how our estimation framework could inform epidemiological investigations. 
Interestingly, when using the model-estimated duration of outbreak or substitution rate, we often observed an improved fit of the pairwise distance distributions ( ). For outbreak 8, low quality of sequence data was observed for three genomes, including the two genomes excluded from the outbreak by our model. Low-quality data might have artificially inflated their genetic distinctness, which underlines the importance of input sequence data quality. It is important to highlight the following limitations of our work. First, all presented results were generated by initialising the models with a fully homogeneous ancestral population. However, the contaminating population might be slightly heterogeneous if it has a non-negligible population size and had itself already evolved previously. In these cases, D might be interpreted as incorporating the diversification time before source contamination. Second, we only modelled mutation, neglecting other evolutionary processes such as genetic recombination. Detection of recombination among very closely related isolates is very challenging and its effect on genetic relatedness of co-occurring isolates would be negligible. However, recombination with genetically distinct co-contaminants might occur, and recombined chromosomal regions should be removed from the analysis, especially when using SNP-based analyses (by design, multilocus sequence typing moderates the effect of homologous recombination). Third, the model does not incorporate demographic events within the contaminated source, including population bottlenecks, which are potentially common in food-processing chains, but which would be challenging to infer and model. This limitation prevents the application of our model to outbreaks involving human-to-human transmission, be they community or hospital outbreaks. Finally, the framework is designed for a single evolving population derived from a single bacterial ancestor. However, when there is more than one contaminating genotype, our framework could be used for each of these separately. We describe an innovative approach to the single-strain definition that uses genomic data and the most relevant epidemiological features of specific outbreaks to estimate an informed genetic distance threshold. This approach is grounded in evolutionary biology and alleviates the need for predefined thresholds, which are often not justified and might be inappropriate in most cases. The inferred, outbreak-specific genetic thresholds provide a reliable, non-arbitrary method of defining epidemiologically related cases of infection, and of excluding non-related sporadic isolates. This approach is fast and easy to use and can be run in real time, generating an optimal threshold from the initial sampling dates against which future samples can be ruled in or out. Upon subsequent source sampling or inclusion of suspected cases, it can be rerun for an updated threshold definition (which is expected to increase with outbreak duration). The additional ability to estimate outbreak duration should also prove useful for common-source disease outbreak studies, by informing an appropriate temporal window for epidemiological investigations aimed at identifying and eliminating the source of contamination.
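As a usage illustration of the real-time classification step described above, the following sketch applies an inferred threshold to candidate isolates and scores the result. The single-linkage rule (minimum distance to the known cluster) and all names are assumptions, not the paper's specification:

```python
def classify(candidate_dists, threshold):
    """candidate_dists: isolate name -> list of distances to confirmed outbreak
    isolates. An isolate is called outbreak-related if its minimum distance to
    the cluster is within the simulated threshold (a single-linkage assumption)."""
    return {name: min(d) <= threshold for name, d in candidate_dists.items()}

def sensitivity_specificity(pred, truth):
    """pred, truth: isolate name -> bool (outbreak-related?)."""
    tp = sum(pred[k] and truth[k] for k in truth)
    fn = sum(not pred[k] and truth[k] for k in truth)
    tn = sum(not pred[k] and not truth[k] for k in truth)
    fp = sum(pred[k] and not truth[k] for k in truth)
    return tp / (tp + fn), tn / (tn + fp)

pred = classify({"F": [1, 2, 0], "G": [14, 12, 15]}, threshold=4)
print(pred)                                                    # {'F': True, 'G': False}
print(sensitivity_specificity(pred, {"F": True, "G": False}))  # (1.0, 1.0)
```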
All data and code used for this manuscript are available online at https://gitlab.pasteur.fr/BEBP/samestrain-r-package .
We declare no competing interests.
|
Bioprospecting and Challenges of Plant Microbiome Research for Sustainable Agriculture, a Review on Soybean Endophytic Bacteria
|
13da3099-5d3b-4773-b926-fd3a07887f84
|
10156819
|
Microbiology[mh]
|
Globally, diverse oilseed crops are cultivated for edible oil production to safeguard humans from malnutrition and related illnesses . Their production rates differ from one country to another because the crops adapt and grow under different regional weather conditions (e.g., temperate, tropical, and subtropical) . The major oilseed crops are canola, groundnut, oil palm, sunflower, soybean, peanut, rapeseed, and cottonseed . For 2020/2021, USDA statistics reported world oilseed production (in million metric tons) of 362.05 for soybean, 68.87 for rapeseed, 49.46 for sunflower seed, 47.79 for peanut, 41.80 for cottonseed, 19.96 for palm kernel, and 5.75 for copra, with soybean estimated to account for about 90% of oilseed production in the USA . Also, in Sub-Saharan Africa, Nigeria produces and exports a large share of soybean annually. Soybeans are leguminous plants in the family Fabaceae. Interest in soybean cultivation rests on its economic value, an edible oil content of about 20%, and a protein content of 20–25% . Notably, soybeans serve as an inexpensive and excellent source of high-quality edible oil and protein for humans compared with other leguminous crops and animal protein , and can supplement feed sources for livestock. Yet soybean's market value and utilization remain under-exploited in many countries . Soybean can be processed into composite food products, substituting for animal proteins such as eggs, meat, and milk. The uncertainties and challenges facing soybean cultivation include poor and inefficient farming systems, drought, disease invasion, pest attack, and a lack of disease-resistant cultivars [ – ]. Diseases such as stem and root blight, bacterial leaf blight, downy mildew, bacterial pustule, rust, purple seed stain, frog-eye leaf spot, brown spot, charcoal rot, and soybean mosaic virus are the most common in soybean . Disease in plants and in crops under storage can be controlled by biological, chemical, or physical means. Therefore, adopting proper control measures against phytopathogens in soybean can sustain plant health and crop productivity. From antiquity, farmers have adopted diverse cropping systems (crop rotation, mixed farming, organic farming, etc.) and agricultural practices (e.g., agrochemicals, irrigation, and harrowing) to mitigate the bottlenecks limiting the cultivation of soybean and other food crops. Over time, agrochemical use has become a major concern to environmentalists, ecologists, and microbiologists because of its negative impact on the ecosystem . These challenges are not peculiar to soybean cultivation alone, but extend to other economical food crops. In recent times, research efforts have increased to devise sustainable means of improving the production of soybean and other food crops, in order to help alleviate food scarcity, hunger, and malnutrition . Because of the environmental threats posed by synthetic fertilizer application and the incessant population increase, the need to employ biorational approaches and sustainable measures to enhance soybean production has become imperative. Naturally, soybean houses endophytic microbes capable of increasing the nitrogen pool in the soil to enhance plant nutrition for higher productivity . The natural occurrence of these nitrogen-fixing bacteria is a promising way to reclaim lost soil nutrients for food production to meet the demand of the ever-growing population, and to relieve farmers of the cost of, and over-dependence on, chemical fertilizers.
Thus, harnessing endophytic bacteria as bioinoculants in place of chemical fertilizers stands out as the best alternative. The plant root endosphere represents a discrete region occupied by diverse endophytic microorganisms , where these microbes exhibit mutualistic, neutral, or antagonistic relationships with the host plants. Root-associated bacteria receive the most emphasis in this review, as soybean root nodules naturally contain diverse nitrogen-fixing bacteria (NFB) . The complementary effects of root-associated bacteria and root-nodule NFB can positively influence plant growth and survival in nitrogen-limited soils . Here we emphasize that the nitrogen-fixing potential of endophytic bacteria in leaves, stems, seeds, flowers, ovules, etc. may be of greater importance to plant growth than that of root-nodule NFB alone. Nevertheless, comparative inoculation studies of these bacteria from various plant organs under greenhouse and field experiments are required to ascertain this claim, and thus further studies are needed. Molecular insights into plant–microbe interactions have unveiled important functions of some endophytic microbes, which argues for their maximum exploration as bioinoculants in sustaining plant growth and health . For instance, a few beneficial nodule endophytic microbes associated with soybeans have been assessed under greenhouse and field trials for enhancing soybean yield, and screened in vitro for antimicrobial properties against phytopathogens . The interdependence of endophytic bacteria with host plants confers beneficial effects in soybeans and other food crops: it stimulates plant growth promoters, provides antibiosis against phytopathogens for plant health, defends against oxidative stress, and enhances yield, without any pathogenic effects . Limited information is available in the literature on the plant-growth stimulation and biocontrol potential of endophytic bacteria inhabiting soybean, thus limiting their ecological services. Nevertheless, exploring endophytic bacteria as bioinoculants can provide several opportunities for mitigating diverse agricultural problems, such as biotic and abiotic stress and climate change. Furthermore, biotechnologically addressing the challenges and uncertainties limiting plant microbiome research will ultimately reveal the benefits of incorporating endophytic resources from soybean and other food crops into agricultural management. Our review brings to light the endophytic microbial dynamics of soybeans and the current status of plant microbiome research for sustainable agriculture.
It is essential to evaluate the diversity and population of endophytes in soybean plants in different environments, as knowledge of this would serve as a background to promote their usage as biofertilizers, soil amendments, plant growth enhancers, and biocontrol agents, with the overall aim of increasing the yield of different plants. Despite the success recorded in soybean endophytic microbiome research, with promises for future agricultural productivity [ , , ], there is still a need for further studies. Sustaining plant health is paramount, as it is directly mirrored in crop yield. The antimicrobial compounds and metabolites naturally found in economical plants, coupled with the biocontrol potential of some endophytic microbes, can contribute to plant health by reducing plant pathogenicity . Inefficient control of plant pathogens results in yield loss in crops . To help ameliorate these threats, in vitro screening of novel endophytic bacteria from economic plants for antimicrobial activity has become important in identifying biocontrol agents targeted at specific pathogens of the host plants . Taking account of the key environmental factors that influence microbial community structure, by monitoring different ecological niches, is vital to ascertain which specific factors shape microbial diversity in plants. For instance, in the phyllosphere, a limited supply of nutrients, ultraviolet light, humidity, temperature, oxygen concentration, pH, etc. influence the microbiome of this niche . In the root endosphere, pathogens, nutrient deposition, and nutrient versatility might be the key factors influencing microbial diversity across different plants. Ultraviolet-B radiation has been reported to influence the bacterial community structure in the soybean phyllosphere . Factors such as geographical location, carbohydrates, amino acids, and other soil nutrients influence the microbial diversity in the root endosphere . Soil-inhabiting microbes and some phyllosphere endophytic microbes can withstand high ultraviolet radiation due to the presence of pigments, i.e., melanin, xanthomonadine, and carotenoids . Microorganisms found in the same ecological niche can be differentiated based on their characterization, genetic composition, and metabolic activities . Plant endosphere ecology comprises microbial domains found in the below-ground (root, sometimes seeds) and above-ground (stem, leaf, seed, flower, and ovule) plant parts . The microbial population and diversity in the plant root may be dissimilar to those of other plant parts. Root endophytes are influenced by the exudates and secondary metabolites released into the soil-root environment . Mina, Pereira, Lino-Neto, and Baptista stated that endophyte diversity in the different organs of a particular plant is mediated by their physical and chemical properties. This claim holds for soybean, as de Almeida Lopes, Carpentieri‐Pipolo, Oro, Stefani Pagliosa, and Degrassi observed a similarity in the diversity of microorganisms across soybean organs.
Endophytic Bacteria Associated with Soybeans
Studies on the functional traits exhibited by endophytic bacteria associated with soybean and Arabidopsis aim to reveal their significance in agriculture, industry, and medicine . The effects of some endophytic bacteria from legumes and other food crops on plant growth are presented in Table .
Hence, the benefits of endophytic bacteria (e.g., for plant production, growth, and secondary metabolites) found in different food crops and plant habitats remain crucial for plant growth promotion, for inducing plant tolerance to harsh environmental conditions, and for disease control. de Almeida Lopes, Carpentieri‐Pipolo, Oro, Stefani Pagliosa, and Degrassi reported Citrobacter freundii and Enterobacter asburiae from the root and stem; Kosakonia cowanii , Pantoea agglomerans , and Variovorax paradoxus from the root and leaf; Staphylococcus aureus from the stem and leaf; and Enterobacter ludwigii from the root, stem, and leaf of soybean. Likewise, Dubey, Saiyam, Kumar, Hashem, Abd_Allah and Khan and Brunda, Jahagirdar, and Kambrekar also isolated Bacillus pumilus from the stem and leaf of the soybean plant, which aligns closely with the observation of similar bacteria in different organs of soybean. Hence, it is crucial to carry out more research to gain a deeper understanding of the inherent factors affecting the diversity of endophytes in different plant parts, for their maximum exploration in solving agricultural problems. The selection of endophytic bacteria based on taxonomy and function can help in understanding the diverse bacterial communities in different plants . Plants of the same species may have different bacterial compositions and associations, depending on location, genotype, cropping system, climatic conditions, and growth stage . The genomic data available for soybean-associated microbes with unique metabolic features reveal their genetic variation. The notable genes involved in flagellar biosynthesis ( flg , flh , fli ), chemotaxis ( che ABRVWZ, mpc ), IAA synthesis ( trp ABCDE), nitrogen fixation ( isc U), and phosphate solubilization ( pst ABCS) identified in the genome of Pseudomonas fluorescens BRZ63, isolated from rapeseed, may be responsible for the bacterium's functions in enhancing plant growth and disease control . A study by Adeleke, Ayangbenro, and Babalola reported genes involved in nitrogen fixation, phosphate transport and solubilization, siderophore production, secretion systems, iron transport, flagellar biosynthesis, and phytohormone production in the genome of endophytic Bacillus cereus T4S isolated from sunflower, which enhanced sunflower yield. Studies should likewise be intensified on soybean to unravel the growth-enhancing genes of its different endophytes. Plant–microbe cooperation can modulate the transfer of certain genetic traits in the host plant through genome modulation, which may assist plants in acquiring novel traits and in boosting their modes of adaptation in diverse environments. The level of genetic communication at the root-soil interface facilitates microbial infiltration into plants . Moreover, the genetic similarity between rhizosphere microbes and endophytic microbes provides new insights into how rhizosphere microbes colonize the root endosphere and become endophytes . Therefore, the mechanisms employed by soybean endophytic microbes in plant growth promotion need to be understood to ascertain their roles in the plant endosphere.
Endophytic Fungi Associated with Soybean
Providing information on endophytic fungi (EF) inhabiting the root of soybean can help unravel the prospects of soybean in sustainable crop production.
The plant growth-promoting attributes of bacteria and fungi inhabiting plant roots may share significant similarities, depending on the sample type, isolation source, and growth conditions . EF employ multifunctional strategies for plant growth and for protection against biotic and abiotic stressors . Unraveling the community structure and the complex plant–microbe synergies in host plants has made the science of endophytes interesting as a way of maximizing their bio-products (bioinoculants) to ensure food security . EF, which form part of the plant lifestyle and show a strong affinity for the root endosphere owing to their mycelia, can be explored in agriculture [ – ]. Despite the ecological services of plant-associated EF, there is still a need to further investigate the EF associated with soybean. For instance, the biocontrol potential of EF isolated from rapeseed against Botrytis cinerea and Sclerotinia sclerotiorum , which cause gray mold and Sclerotinia stem rot, has necessitated their further exploration . A study by Sallam, Ali, Seleim, and Bagy reported antagonistic activity of the endophytic fungus Trichoderma spp. isolated from soybean against Rhizoctonia solani , reducing its effect on soybean yield under greenhouse experiments. Other research findings (to mention but a few) on the plant growth promotion and antifungal attributes of EF against plant pathogens, attributable to phytohormone and metabolite secretion, are evident in the literature [ , – ]. The biotechnological potential of diverse EF in the production of therapeutic agents and antibiotics reveals their beneficial effect on plant immunity and growth enhancement . The mechanisms of action and the factors influencing the diversity of root-associated endophytic bacteria and root-associated EF may be similar, possibly because they are identified from the same source. Some identifiable EF isolated from the root, stem, and leaves of soybean, with detailed biological activities for sustainable plant health, include Trichoderma asperellum , T. longibrachiatum , and T. atroviride , Colletotrichum spp., Pestalotiopsis spp., Botryosphaeria spp., Diaporthe spp., Fusarium spp., and Alternaria spp. . Despite their multifaceted attributes in plant growth promotion, disease suppressiveness, stress alleviation, metal reduction, and nutrient mineralization [ – ], more studies into the EF colonizing the root of soybean are still needed.
The identification of endophytes in their host plants is somewhat difficult because some endophytic microbes might not be easy to culture in the laboratory , while some are viable but non-culturable. Hence, the use of culture-dependent and culture-independent methods remains important, as the case may be. With culture-dependent methods, the population of microbes is easily evaluated, while in contrast, culture-independent methods are more useful in assessing the entire microbiome in the samples . Culture-dependent methods, which involve microbial isolation on nutrient-rich microbiological media under specific growth conditions, are important for determining microbial physiology and genes and for screening for plant growth-promoting traits . However, this technique is laborious and falls short of revealing detailed microbial diversity and networking in econiches. Also, the proliferation of undesirable microorganisms on the cultured plates, which compete for nutrients needed by the desirable microorganisms, has been identified as a major challenge when isolating endophytic microbes by culturing methods . Hence, the application of culture-independent methods is profound in characterizing yet-to-be-cultured microorganisms. Alain and Querellou , Torsvik and Øvreås , and Afzal, Shinwari, Sikandar, and Shahzad stated that culturable bacteria represent about 0.0001–1% of the total endophytes in plants. Hence, researchers should weigh their research aims before selecting a method for isolating endophytic microbes. Endophytes can be cultured on agar plates, and microbial DNA can then be extracted before carrying out the polymerase chain reaction (PCR). Garcias-Bonet, Arrieta, de Santana, Duarte, and Marbà employed a commercial DNA extraction kit specific for plant DNA to extract endophytic microbial DNA, and used primers targeting the bacterial domain to carry out the PCR procedure. However, it should be noted that when amplifying a specific region of bacterial DNA, the mitochondrial and chloroplast DNA found in plants may closely resemble that of endophytes; hence, this method might not be entirely appropriate. In this light, next-generation sequencing is recommended, without denaturing gradient gel electrophoresis (DGGE) analysis. Piccolo, Ferraro, Alfonzo, Settanni, Ercolini, Burruano, and Moschetti demonstrated the use of the fluorescence in situ hybridization (FISH) technique in studying endophytic microbes. However, this can only be done in the natural habitat, which complicates laboratory isolation. On the other hand, Ikeda, Kaneko, Okubo, Rallos, Eda, Mitsui, Sato, Nakamura, Tabata, and Minamisawa developed a procedure to enrich bacterial cells when isolating unculturable endophytes from the stem of soybean, by fractionating the homogenized soybean stem. This was achieved by differential centrifugation and Nycodenz density gradient centrifugation. The method proved more effective than isolating DNA directly from the soybean stem, judging by the higher intensity and number of bacterial amplicons obtained after cell enrichment, as assessed by ribosomal intergenic spacer analysis. Equally, Lundberg, Yourstone, Mieczkowski, Jones, and Dangl also worked on an improved technique for 16S rRNA sequencing, in which unique template molecules are tagged before PCR so that amplicon sequences can be mapped to their original templates, which helps to prevent errors and bias arising from the amplification process.
The same study also introduced a blocking approach that uses a sequence with a higher melting temperature than the primer set, designed to bind the host's DNA and suppress its amplification. Culture-independent methods are more advanced because of the attention drawn to them, which has driven further research to improve them. For instance, modern analytical approaches have been documented that advance the science of the plant microbiome . The use of combined stable isotope probing (SIP) and nanoscale secondary ion mass spectrometry (NanoSIMS), coupled with advanced Raman spectroscopy-based single-cell methods, has been envisaged for studying the plant microbiome in situ and for determining its biological functions in the bioremediation of complex pollutants from metal-polluted soil . More importantly, the specific metabolic functions of endophytic microbes can be better understood by combining SIP with other molecular methods, such as qPCR, fingerprinting, and cloning. Dos Santos and Olivares reported the use of microcosms combined with bacterial stocks as a reference to determine bacterial assemblages in plant roots and their plant growth-promoting potential. Also, Hartman, van der Heijden, Roussely-Provent, Walser, and Schlaeppi reported a microcosm approach for elucidating bacterial diversity and function in the root of red clover. The same authors revealed a significant reduction in the growth of red clover upon mono-inoculation with Flavobacterium , whereas co-inoculation of red clover with the root microbiome enhanced plant growth by reducing the negative effect of the Flavobacterium mono-inoculation. Finally, a microcosm study by Eldridge, Travers, Val, Ding, Wang, Singh, and Delgado-Baquerizo examined diverse microbiomes and their functions across 15 plant species growing in terrestrial habitats, revealing the habitat preferences of plant-associated microbes and their importance in plant germination. It will be interesting to work out how these modern approaches can be employed in the science of endophytes to better understand endosphere biology. Different molecular approaches exist for the identification of endophytic bacteria, and the combination of recent molecular approaches, such as genome sequencing and metagenomics using DNA extracted from plant roots, can be employed in unraveling microbial community structure and functions in soybean. The DNA extracted from plant tissues after surface sterilization with water, hypochlorite, or ethanol for endophytic studies might still contain a certain proportion of plant DNA, which needs to be depleted using appropriate sequencing techniques and platforms (e.g., Illumina, PacBio, and DNA fingerprinting). Many techniques exist for DNA fingerprinting, including restriction fragment length polymorphism (RFLP), simple sequence repeat (SSR), terminal-RFLP, rapid amplified polymorphic DNA, amplified fragment length polymorphism, inter-SSR, single-stranded conformation polymorphism, and DGGE . The analysis of diverse plant microbiomes based on genetic composition can be achieved by real-time polymerase chain reaction, FISH, automated ribosomal intergenic spacer analysis, terminal restriction fragment length polymorphism, and DGGE, and the use of phospholipid and fatty acid analyses has also been documented [ , – ].
It is noteworthy to understand the use of molecular methods in identifying yet-to-be-described microbial endophytes, using appropriate methods to maximally recover endophytic DNA after the extraction process. The advent of PCR-based approaches in the Plant-Microbial Genome Project has provided vast advantages and opportunities for the detection, multiplication, quantification, and synthesis of large numbers of DNA copies that can be differentiated from one another . PCR techniques have been widely employed for the detection of diverse genes responsible for microbial functions . PCR and DNA sequencing aim to measure the presence, taxonomy, and functions of the plant microbiome in various samples. Despite the importance of these techniques, there are limitations surrounding PCR amplification and DNA sequencing, especially when extracting DNA from plant samples. The limitations include (i) contamination during DNA extraction for the PCR reaction and library preparation, which may affect DNA integrity and result in errors and false outcomes; (ii) primer design, which requires some prior sequence information; and (iii) the specific PCR product obtained during amplification, which may vary from one microbe to another owing to non-specific binding of primers to other, nearly identical target sequences . Addressing these sequence-specific limitations may help devise approaches for the normalization of sequenced data to reveal the microbial composition in its entirety. The use of PCR coupled with other sequence-based approaches is promising, offering more insight into plant microbiome gene combinations . The advent of advanced molecular techniques for endophyte identification has moved beyond DNA fingerprinting; for instance, omics approaches retrieve DNA from bacteria to evaluate their diversity, functions, genes, metabolites, transcripts, and proteins with the aid of next-generation sequencing. The DNA fingerprinting methods have been overtaken by more technical procedures, such as metagenomics, which involves DNA extraction from the total bacterial population followed by next-generation sequencing . This method has proven better at unraveling the total endophyte community in plant tissues than the fingerprinting techniques. Aside from omics approaches, the use of microscopy techniques (epifluorescence light microscopy, bright-field light microscopy, interferential and differential contrast light microscopy, and scanning and transmission electron microscopy) for obtaining visual evidence of microbial colonization patterns in plants has been documented [ , , ]. The use of culture-independent techniques, which involve DNA/RNA extraction from environmental samples coupled with omics approaches, has revolutionized endophyte microbiology by generating large sequence datasets. This next-generation sequencing approach, involving no DNA cloning, has been employed to unveil the community structure, diversity, taxonomic and functional profiling, metabolites, and metabolic pathways of the plant microbiome .
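To make the primer cross-reactivity problem described above concrete, the following is a minimal, hypothetical R sketch using the Biostrings package: it screens the universal bacterial 16S primer 27F against a host organellar sequence to flag cross-amplification risk. The "chloroplast" fragment here is an invented placeholder, not a real plastid sequence; in practice a genuine plastid or mitochondrial 16S sequence would be used.

```r
# Sketch: flag possible primer binding to host organellar DNA (placeholder data)
library(Biostrings)

primer_27f <- DNAString("AGAGTTTGATCMTGGCTCAG")  # universal 16S primer; M = A or C

# Hypothetical host chloroplast 16S fragment (made-up placeholder sequence)
host_plastid <- DNAString(paste0(
  "GGCTCAGGATGAACGCTGGCGGCATGCTTAACACATGCAAGTCGAACG",
  "AGAGTTTGATCATGGCTCAGATTGAACGCTGGCGGCAGGCCTAACACA"
))

# fixed = FALSE lets the IUPAC ambiguity code M match A or C in the subject;
# allowing up to 2 mismatches mimics permissive annealing conditions.
hits <- matchPattern(primer_27f, host_plastid, max.mismatch = 2, fixed = FALSE)
length(hits)  # > 0 suggests the primer could also amplify host organellar DNA
```

A non-zero hit count is exactly the situation in which blocking oligos or host-DNA depletion, as discussed above, become necessary.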
So far, the few research efforts utilizing next-generation sequencing in soybean and other food crops have revealed the taxonomic and functional attributes of endophytes in different plant species (Table ). Advances in plant microbiome studies have revealed certain traits that mediate microbial functions, such as secondary metabolites, genetic information, proteins, and transcripts, using culture-dependent and culture-independent techniques [ – ]. Modern approaches to studying diverse endophytic microbes and their functions are being employed to understand colonization patterns in plant–microbe interactions, based on host specificity and the signaling networks for microbial communication linked to root exudation . Genes involved in flagellation, chemotaxis, motility, and biofilm formation have been reported in many bacterial strains and facilitate their attachment/adherence, penetration, and colonization of host plants . The host plant's specific signaling networks and plant–microbe communications can reveal how microbes maintain mutual relationships and act antagonistically toward phytopathogens by triggering host immune responses . Aside from genes involved in beneficial bacterial colonization, other genes have also been documented to partake in microbial biological processes. For instance, genes involved in carbohydrate metabolism, phytohormone synthesis, secretion systems, biocontrol activity, and oxidative stress, identified in the genomes of endophytic bacteria from sunflower, apricot, and poplar, are important in agriculture, biotechnology, and industry [ – ]. In line with the aforementioned approaches and conventional techniques, studying the plant microbiome can become easier. Hence, it is recommended to compare the different methods of recovering and identifying endophytes; this would assist in selecting the best method for identifying endophytic microbes from plant samples. On the other hand, culture-dependent and culture-independent methods of endophyte analysis together can provide a broader view of the diversity and population of plant endophytes and their functional attributes in the ecosystem. Briefly, the advantages and disadvantages of the techniques and approaches employed in the study of plant-associated microbes are highlighted in Table .
The microbes recruited into the plant endosphere and those inhabiting the external root environment contribute to plant growth in diverse ways, as shown in Fig. . Reports by Ku et al. showed root surface and root hair colonization by an endophytic bacterium, B. cereus , in Chinese cabbage, soybean, and wheat, providing evidence for understanding the modes of action of plant microbes and how they influence plant growth. Aside from endosphere and rhizosphere research findings, fewer studies have documented the microbiomes inhabiting the anthosphere, caulosphere, carposphere, and spermosphere. Research into the microbiomes of plant environments such as the rhizosphere, root, seed, and stem has been documented, and their possible use in agricultural biotechnology is profound. For instance, Kumawat et al. reported an increase in the growth, symbiotic efficacy, nutrient acquisition, and yield of soybean co-inoculated with endophytic Pseudomonas oryzihabitans and Bradyrhizobium spp. Also, an increase in the crop yield, oil content, antioxidant content, seed quality, carbohydrates, and chemical composition (protein and lipid) of soybean inoculated with endophytic Bacillus amyloliquefaciens has been reported by Sheteiwy et al. , which suggests their future exploration as bioinoculants for growing soybean under drought stress. Because plants harbor a diverse number of microorganisms, a better understanding of their complexity and functional traits will help unravel their biological activities.
Rhizosphere and Bulk Soil Microbiome
The rhizosphere (a plant microhabitat) represents the soil region closest to the plant root environment . The rhizosphere is often referred to as a “hotspot” for microbial activity owing to the abundant release of root exudates, which supply the energy required for microbial metabolic activity . The response of soil microbes to the diverse chemical compounds and varied soil parameters that favor the soil microflora can be an indicator for selecting some microbes over others. The shaping of the rhizosphere microbiome can be a function of the quantity of exudate released, which differs from one plant to another. Examples of secondary-metabolite organic compounds include amino acids, phenols, organic acids, sugars, siderophores, and polysaccharides; when released from plant roots, they support a higher microbial population in the rhizosphere than in bulk soil . Bulk soil is the soil located away from the rhizosphere region, without root penetration . The microbial inhabitants of bulk soil can be less diverse, owing to the scarcer organic compounds, than the rhizosphere soil inhabitants, although identical species occur in both. Examples of bacteria occupying the rhizosphere root environment include Bradyrhizobium diazoefficiens, Bacillus subtilis, B. velezensis , etc. [ – ]. High microbial colonization, diversity, and activity are mediated by rhizodeposition and decline toward the adjacent bulk soil . Usually, variations in the rhizosphere microbial communities of soybean and other food crops can be linked to geographical location, growing season, crop rotation, plant growth stage, cultivar, farm practices, etc. The identification of diverse bacterial phyla, such as Acidobacteria, Actinobacteria, Bacteroidetes, Chloroflexi, Gemmatimonadetes, and Proteobacteria, from rhizosphere soils under different growing conditions and soil types has been reported, with these conditions exerting a great influence on bacterial diversity .
Hence, there is a need for further research to ascertain whether the microbes present in soybean differ across multiple locations, since there is little information on soybean rhizosphere microbial communities. Owing to nodule formation in the soybean root, the endosymbiotic relationship with nitrogen-fixing bacteria can be an advantage, as the quality and quantity of root exudate released differ from those of non-nodulating plants and help establish a discrete microbial biomass in the rhizosphere . High-throughput sequencing of the rhizosphere bacterial and fungal communities of rapeseed has revealed varied operational taxonomic units at the seedling, flowering, and maturity stages . The assessment of a bacterial community in rapeseed using a molecular ecology network with random matrix theory showed bacterial genera, such as Rhizobium , Flavobacterium , and Pseudomonas , at the network level . Interestingly, research on the rhizosphere microbiome with a view to mapping out strategies for its incorporation into agriculture has been emphasized in recent times [ – ]. Nevertheless, the presence of pathogens may influence rhizosphere microbes in many ecological processes. Furthermore, the source of rhizosphere microbes is important, as many of them may be introduced into the soil through seed planting .
Seed Microbiome
Aside from the rhizosphere microbiome , research advancements have shown that the microbial composition on the surface and in the internal tissue of seeds can be beneficial or pathogenic. The beneficial microbes influence seed growth at the pre-germination, germination, flowering, and maturation stages . The recruitment of the seed microbiome can occur by vertical (from the mother plant) or horizontal (from the environment) transmission . Vertical transmission of seed endophytes is believed to originate from the leaves and flowering parts. Upon planting into the soil, the seed undergoes imbibition, which enables it to absorb soil nutrients and then germinate. During the imbibition process, the release of metabolic compounds in the spermosphere region, i.e., the soil-seed environment, creates an attractive environment in which soil microbes compete with natural soil pathogens . At this stage, the seed microbiome infiltrates, or is released into, the soil environment via horizontal transfer. The modes of transmission of seed endophytic bacteria enable them to occupy diverse niches, such as the pericarp, the seed embryo or cotyledon, and the endosperm . The transmission of seed endophytes may differ depending on the organ location: for instance, endophytes in the pericarp are horizontally transmitted, while those colonizing the endosperm and embryo are transmitted vertically . More research should be done on soybean to understand how endophytes are transferred in the root region. Microscopy and high-throughput sequencing approaches have been employed to characterize the seed microbiome, especially endophytic bacteria, in some leguminous plants . Sánchez-López et al. reported the dominant endophytic bacterial phyla Proteobacteria, Firmicutes, Chlamydiales, and Bacteroidetes while investigating endophytic bacteria in the seed of Low Rattlebox. Information on the community structure of endophytic bacteria in the seed of soybean and other leguminous food crops obtained using high-throughput sequencing is scarce in the literature. Consequently, the distinction between seed endophytes and the soil microbiome is still poorly understood.
Also, seed endophytes can be found in other plant parts via infiltration from the rhizosphere to the aboveground level. Interestingly, the synergistic cooperation between soil microbes and seed endophytes contributes to plant health and nutrition .
Root and Shoot Microbiome
The root and shoot form a key component in the study of the plant microbiome . The microbes found in the root and shoot can be fewer in number than the richer microbial profile of the rhizosphere, owing to its nutrient and exudate secretion attributes . Genes associated with bacterial attachment, via specialized cell structures such as fimbriae, flagella, and pili, assist bacterial adhesion to plant surfaces to form a biofilm . Plant-bacteria interaction and transit within the plant tissue can result from a rise in water flux during transpiration. Across the plant parts, the presence of the targeted microorganisms may be influenced by organ location and access to plant nutrients . For microbes to efficiently colonize the host plant, the sequence of events involved includes (i) adherence to the root surface, (ii) multiplication, (iii) invasion from the external root environment, and (iv) colonization . After the colonization process, endophytic microbes can move from belowground to the shoot through microbial networking. The type and quantity of nutrients available in the plant endosphere can modulate the extent of bacterial diversity. Adeleke et al. and Jie et al. reported the diverse endophytic bacterial phyla Chloroflexi, Nitrospirae, Planctomycetes, Patescibacteria, Acidobacteria, Actinobacteria, Cyanobacteria, Saccharibacteria, Firmicutes, Gemmatimonadetes, Bacteroidetes, and Proteobacteria from the roots of sunflower and soybean. Recently, the endophytic bacterial genera Bacillus , Staphylococcus , Serratia , Stenotrophomonas , Pseudomonas , Enterobacter , and Erwinia from healthy rapeseed have been reported as part of the shoot microbiome . Endophytic bacteria in the external root environment are usually more abundant than in the internal part of roots. Adeleke et al. reported a dominant, high bacterial population in the root of growing sunflower compared with the stem; agricultural practices, geographical location, plant type, organ location, etc., all contribute to bacterial diversity. The reasons for microbial differences among the rhizosphere, endosphere, and phyllosphere can be biological, chemical, or physical factors, which may exert selective pressure on endophytic bacteria infiltrating the root endosphere . The endophytic microbiome tends to adjust to a plant environment with stable biomass, while the rhizosphere microbiome may vary because of niche complexity. Acknowledging that plants harbor a multifunctional microbiome in the root and shoot can be a pointer to understanding the factors that modulate the shape of the plant microbiome.
A beneficial plant microbiome helps sustain the ecosystem . Its ecological services range from plant growth promotion, pathogen control, phytoremediation, biofertilization, and abiotic stress mitigation to human safety . In recent times, the multifunctional attributes of endophytic microbes as plant growth stimulators and bioinoculants have promised to revolutionize agriculture without negative ecological effects . The role of endophytic microbes in agricultural biotechnology has also received attention, yet research is still ongoing to achieve maximum food production with zero ecological threat . Exploration of endophytic resources to provide alternative measures for ensuring a safe environment and sustainable agricultural productivity has been emphasized because of the negative impact of chemical fertilizers on the ecosystem . From a multifaceted application perspective, the mechanisms employed by endophytic microbes contribute immensely to plant growth and health . Microbes employ direct or indirect mechanisms in sustaining plant growth and health . The core direct attributes of endophytic microbes in enhancing plant growth include nutrient acquisition and mineralization, phosphate solubilization, nitrogen fixation, siderophore and enzyme production, and the synthesis of growth hormones such as indole-3-acetic acid (IAA), gibberellic acid, and abscisic acid, while, indirectly, ACC deaminase, exopolysaccharide, and hydrogen cyanide production by endophytic microbes contributes to plant survival under drought stress . All the aforementioned processes have been screened for in endophytic microbes associated with sunflower and soybean . In addition, the suppression of phytopathogens through the induction of systemic resistance and the antibiosis activity of endophytic microbes boosts plant immunity against soil-borne and host-invading pathogens . Also, findings by Zhao, Xu, and Lai reported high inhibitory activity of the soybean nodule endophytic bacterium Acinetobacter calcoaceticus against the pathogen Phytophthora sojae , owing to its close association with the plant root. Endophytic microbes are said to be particularly effective in enhancing plant growth because of their close interaction, colonization, lower community complexity within plant tissues, and non-exposure to harsh environmental conditions . These attributes make endophyte studies interesting compared with studies of rhizosphere microbes. The synergistic effect of the nodule endophytic bacteria Pseudomonas aeruginosa (LSE-2) and Bradyrhizobium sp. (LSBR-3) from soybean has been investigated as a source of bioinoculants and biofertilizers, owing to their root colonization potential through molecular crosstalk, which supports plant growth and nutrition . Some endophytic microbes solubilize naturally occurring phosphate by producing organic acids, which lower soil pH and chelate iron for easy assimilation of phosphate by plants in soluble form . The ability of endophytic bacteria to produce phosphatases also helps in the mineralization of organic phosphorus . In vitro screening of phosphate-solubilizing endophytic bacteria has been carried out in soybean, sunflower, and rapeseed [ – ]. For example, Acinetobacter calcoaceticus , Ochrobactrum haematophilum , B. panacihum , Bacillus subtilis , B. australimaris , B. thuringiensis , B. zhangzhouensis , and Lysinibacillus pakistanensis have been isolated from leguminous crops [ , , ].
Kenasa, Nandeshwar, and Assefa identified the phosphate-solubilizing cowpea root endophytic bacteria Pseudomonas putida and Bacillus subtilis in their study. Also, a study by Yasmeen and Bano reported an increase in the yield of soybean co-inoculated with the phosphate-solubilizing bacteria Rhizobium and Enterobacter . The rhizobacteria in the root nodules of leguminous plants naturally fix nitrogen in the soil, which is needed for plant nutrition . The nitrogen-fixing potential of endophytic bacteria in the root nodules of leguminous crops has effectively enhanced the nitrogen pool in soils deficient in nitrogen . Nitrogen fixation by endophytic bacteria may differ from that of the rhizobacteria found in legume roots . Interestingly, exploration of the endophytic bacterium Gluconacetobacter diazotrophicus , with its exceptional nitrogen fixation in plants, has long been reported for reclaiming nitrogen loss in the soil . The ability of endophytic bacteria to produce siderophores also plays a major role in plant health sustainability . For instance, biocontrol activity that limits iron supply to pathogens, heavy metal reduction, and the induction of systemic resistance can be linked to the siderophore compounds, i.e., catecholates and hydroxamates, produced by endophytic bacteria . Diverse endophytic bacteria associated with soybean have been reported as siderophore producers . Bhutani et al. and Maheshwari et al. reported siderophore-producing endophytic bacterial strains from legumes. The suppressive, biocontrol activity of endophytic Burkholderia contaminans against Macrophomina phaseolina , which causes root rot, stem rot, seedling blight, damping off, and charcoal rot in jute, has been attributed to siderophore biosynthesis . Since the presence of nitrogen-fixing and siderophore-producing bacteria has been established in soybean, other functions of these bacteria should be studied further. Similarly, phytohormones such as ethylene, IAA, cytokinins, and gibberellins, which modulate plant growth via diverse pathways, are evident in endophytic microbes . Notably, IAA biosynthesis facilitates root development, which enables plants to absorb nutrients and water from the soil . Tryptophan, which serves as a precursor for IAA production by endophytic microbes in growth media, helps differentiate IAA-producing bacteria from non-IAA-producing bacteria . Evidence of the production of IAA and other phytohormones, such as gibberellin and cytokinin, by endophytic bacteria to enhance plant growth has been documented . Some endophytic bacteria produce 1-aminocyclopropane-1-carboxylate (ACC) deaminase, which degrades the ethylene precursor ACC, thereby contributing to plant growth and resilience under drought stress . By producing jasmonic acid, antibiotics, salicylic acid, volatile compounds, siderophores, and lipopolysaccharides, endophytic bacteria can circumvent the effects of pathogens, elicit induced systemic resistance, and ameliorate abiotic stress in host plants . The actual modes of action employed by endophytic bacteria in the oilseed crop soybean are yet to be fully understood. Similarly, the biosynthesis and metabolism of reactive molecules as precursors for the synthesis of novel metabolites, or for enhancing already identified metabolites, are poorly understood. The synthesis of secondary metabolites, such as alkaloids, terpenoids, phenols, organic acids, and flavonoids, which induce antibiosis, can be achieved by endophytic microbes specific to their host plants .
Some examples of purified secondary metabolites produced by endophytic bacteria from economically important plants, with their related biological functions, are presented in Table . Information on secondary metabolites sourced from endophytic bacteria associated with soybean is poorly documented in the literature. Hence, research focusing on secondary metabolites from endophytic bacteria associated with soybean, and their exploration, will further reveal their potential in plant disease management. The biomolecules sourced from endophytic bacteria show promise in agriculture, the environment, industry, and human safety. Hence, genomic insights into the plant microbiome aim to reveal microbial functions and activity in plant physiology and metabolism. Additionally, it is imperative to unravel the biological functions and physiological attributes of soybean-associated endophytic bacteria, using culture-dependent and culture-independent techniques to identify secondary metabolites in the bacterial genome ; making information available on the secondary metabolites produced by endophytic bacteria will help find solutions to diverse agricultural problems.
The interdisciplinary synergies among researchers studying plant–microbe interactions continue to progress. Research efforts to study and explore endophytic bacteria from leguminous crops as bioinoculants for plant growth and a sustainable ecosystem have increased tremendously, driven by biotechnological advances and low-cost analysis . Interestingly, the commercialization of endophytic bioinoculants is possible in sustainable agriculture . Computational knowledge of next-generation sequencing and other innovative techniques has provided scientists with accurate information on microbial diversity and related genes . Better still, there is a need to develop robust bioinformatics tools and analytical techniques, alongside the existing technologies, to generate microbiome data as a guide for further experiments. Adeleke et al. and Adeleke et al. reported the genomic characterization of the plant growth-promoting endophytic bacteria Bacillus cereus T4S and Stenotrophomonas maltophilia JVB5 as effective sunflower growth enhancers. Furthermore, the use of these endophytic bacteria as biocontrol agents against phytopathogens is expected to be investigated in future studies. Employing this approach, ecologists, environmental and computational scientists, microbiologists, agriculturists, and industrialists aim to provide insights into plant microbiome research as a reference for further studies. Furthermore, understanding the dynamics and role of endophytic microbes in plants using up-to-date techniques and bioinformatics tools can help develop multiple strategies for understanding their functions in diverse fields, such as agriculture, ecology, medicine, forensics, and exobiology. The dominant bacterial phyla Actinobacteria, Firmicutes, Proteobacteria, Bacteroidetes, and Chloroflexi in the root endosphere of food crops, such as maize, cowpea, sorghum, sunflower, and soybean, have been reported using culture-independent techniques [ , , ]. Yet, there is a need for further investigation, using appropriate techniques, of plant growth-promoting endophytic bacteria in different legumes and other food crops under different climatic conditions. Hence, identifying these bacteria for bioinoculant formulation can serve as a pointer toward achieving ecofriendly, sustainable agriculture.
This review evaluates endophytic bacteria in soybean and other food crops. Bioprospecting these bacteria enhances their potential for sustainable yield improvement. Soybean was discussed as a reference oilseed crop because of its economic importance, high yield, and nutritional value. Soybean harbors endophytic microbes important in agriculture, and beneficial endophytic microbes inhabiting different parts of the plant can potentially contribute to the growth of soybean and other food crops. For instance, root nodule bacteria and endophytic bacteria enhance nitrogen fixation in soybean, which promotes yield and other yield parameters, enhances immunity, and boosts plant defense against diseases. The root endophytes are emphasized because of the high metabolic activity occurring below ground level, where the large quantity of metabolites secreted contributes to plant physiological functions. Different conventional and molecular techniques have been employed in the past to unravel the endophytic microbes of various plants; nevertheless, each method has shortcomings. For instance, some endophytes can be difficult to culture on media despite being viable, such that culturing methods can unravel only a small percentage (on the order of 0.1%) of endophytic populations. Hence, the discovery of endophytic microbes using molecular techniques has proven more promising, although it also faces challenges. Extracted endophytic bacterial DNA might contain traces of plant DNA, including chloroplast and mitochondrial DNA, which are similar to the targeted endophytic bacterial DNA . Host-depletion techniques have been employed to remove the substantial amount of plant DNA that may be present in DNA extracted from plant tissues. Conversely, the use of fluorescence in situ hybridization (FISH) is inefficient, because it can only be carried out in the natural habitat. The mechanisms employed by the endophytes present in the seed, shoot, leaves, and roots, and by the other microbes inhabiting the rhizosphere and bulk soil, in plant growth promotion and disease control still need to be emphasized, although some research information is available on them. The variation in the diversity and population of microbes inhabiting different plant parts can be due to differences in geographical location, cropping system, developmental stage of the plant, and the farming practices adopted. These key factors may affect crop yield, microbial diversity, and the microbes' ability to produce secondary metabolites. It is therefore very important to understand the mechanisms behind the production of secondary metabolites in soybean, as a measure to improve its production, oil content, antioxidant content, seed quality, carbohydrates, chemical composition, and yield in different environments, and also as a model for research on other crops. More research should also be carried out to help understand the use of endophytes in agriculture, industry, and medicine, owing to their production of bioproducts.
Hence, the authors conclude and recommend that the current approaches highlighted in this review will help researchers understand the dynamics, prospects, and potential of endophytic microbes in soybean and other food crops as agricultural bio-inputs to ensure food security and sustainable agriculture.
|
Soil Bacterial Assemblage Across a Production Landscape: Agriculture Increases Diversity While Revegetation Recovers Community Composition
|
32eab8b0-64b8-4ecd-a68a-e063396ef50d
|
10156840
|
Microbiology[mh]
|
The intensification of agriculture that has occurred over the last 100 years, while increasing food production , has led to the degradation of many agricultural and natural landscapes . It is perhaps the conversion of natural ecosystems to production systems that is most profound, directly evident in a reduction in aboveground diversity. Production agriculture can also negatively influence soil condition through depletion of soil organic carbon, acceleration of erosion, reduction of soil fertility, and acidification and salinisation [ , , ], affecting the productivity and sustainability of aboveground ecosystems [ – ]. Soil degradation leads to reduced agricultural output , as well as driving fundamental changes in soil biology [ – ], notably the balance between component groups of microorganisms, many of which play a pivotal role in broader ecosystem function. Bacteria perform a variety of functions critical to soil and plant health [ – ]. Bacteria assist in the conversion and uptake of plant-available nutrients [ – ], act as phytostimulators promoting plant growth and resilience , and serve as biological control agents that protect plants against phytopathogens . Secretions from soil bacteria help form microaggregates by binding soil particles, which affects soil structure . Soil microaggregates are increasingly recognised as a characteristic of healthy soil, improving gas exchange, water infiltration, and the water-holding capacity of the soil . Given the range of functions performed by bacteria, the diversity and composition of their component communities can provide valuable insights into the health and function of associated environments [ – ], including agricultural systems and practices [ , , – ]. The investigation of soil bacterial communities has been suggested as a way to evaluate the condition of soil and the productivity of corresponding ecosystems [ , – ], with particular relevance to production systems, where soil and plant health are intrinsically linked to productivity. Different land systems and management practices can modify vegetation and soil physicochemical properties, which in turn influence aboveground biodiversity and ecological processes such as nutrient cycling and gas exchange [ , – ]. And while there is some evidence that land use practice can modify belowground microbial communities [ – ], it is still unclear how aboveground land systems influence the structure of soil bacterial communities, how much variation exists between and within these communities, and what is driving it [ , , ]. The effect of land use change from natural to managed agriculture on soil bacterial communities is poorly understood, with both positive and negative correlations reported [ , – ], while an assumption of above- and belowground diversity linkage still persists. Further investigation across different environments is required to explore these assumptions . Much research has focused on the implications of aboveground land use for soil microbial communities across temperate and mesic biomes [ , – ], with considerably fewer studies investigating these interactions in less productive arid systems , such as those found throughout much of Australia. Australia's semi-arid zone occurs through the interior of the continent, where average rainfall is between 250 and 500 mm per year. These systems support sclerophyllous vegetation of predominantly low-growing Eucalyptus species (commonly termed mallee or mallee scrub), drought-tolerant understory shrubs (e.g.
Acacia and Chenopod species), and ephemeral grasses and herbs. Here, we investigated the conversion of these systems to production agriculture (vineyards) and the impact of this transition on soil physicochemical characteristics and bacterial community composition. Conversely, we investigated how the restoration of sites with a legacy of agriculture (ex-pastoral land) influences these critical soil components. Given the growing global interest in ecological restoration as a strategy to restore the flow of ecosystem services , such investigations are increasingly important in evaluating the success of restorative actions. A greater understanding of how the recovery of native plant communities (through active revegetation) influences soil microbial communities can help shed light on the significance of such actions for soil condition. We hypothesised that managed agricultural systems would be associated with elevated concentrations of key nutrients (nitrate and phosphorus), and that distinct bacterial communities would be associated with different land use systems (i.e. vineyards, remnant mallee vegetation, revegetation, and ex-agricultural land). Further, we expected to find a positive correlation between above- and belowground diversity, and consequently that the conversion of diverse natural systems (remnant mallee vegetation) to monoculture agriculture (vineyards) would result in a reduction in soil bacterial diversity.
Study site
The study site was a mixed-use agricultural production landscape, encompassing agricultural and natural systems (Fig. ). Located on the River Murray in the New South Wales Murray Darling Wine Region of Australia (34°37′47.8″S, 143°00′56.2″E), the site consisted of two commercial agricultural operations: wine vineyards and dried fruit vineyards. The region is highly productive, producing high-value crops such as grapes, citrus, olives, nuts, stone fruit, cereal crops, and livestock. The area is classified as semi-arid, with most of the annual average rainfall of ~ 300 mm falling during the Austral winter (i.e. June–August) . Soil, aspect, and elevation were consistent across the site, with soils classified as calcarosols or mallee loam, ranging from brown to red-brown loamy sand, sandy loam, or loam . Much of the site (roughly 50%) was dominated by irrigated vineyards, which have replaced the remnant native Eucalyptus mallee that would have occurred across most of the site and the region prior to its conversion to agriculture (Fig. ). The management practices of both vineyard systems were consistent, with both receiving two microbial inoculants (Supplementary ). Along with the active vineyard operations and remaining remnant mallee vegetation, an ex-pastoral/cropping section existed along the north-eastern border of the site (~ 420 ha). This section was abandoned for agricultural use within the last 5 years (assessed as unsuitable for irrigated agriculture), although it still possesses a legacy of past cropping and pastoralism via a system dominated by wheat and mixed native and introduced grasslands.
Ecological systems
Five distinct land use/ecological systems were identified across the study site (landscape units, hereafter) (Figs. and ). Remnant mallee vegetation of mixed Eucalyptus ( E. gracilis , E. brachycalyx , E. leptophylla , E. incrassata ) (RemVeg, hereafter) was identified and used as the natural reference system against which the impact of land use change could be measured. Three agricultural landscape units were identified: established vineyards (OldVineyard, hereafter) consisting of grape vines over 10 years old, new vineyards (NewVineyard, hereafter) consisting of grape vines under 2 years old, and a grassland section (old pasture) that had been abandoned for agricultural use (Excrop, hereafter). Finally, a native revegetation landscape unit (Reveg, hereafter) was identified, consisting of three plantings undertaken within 2 years of sampling (2019–2020): one seedling-planted site, and two direct-seeded revegetation sites sown with a mix of approximately 15 local native plant species that had not yet emerged at the time of sampling. Replicates were identified for each of the five landscape units, totalling 32 individual sampling sites, broken down as follows: 15 replicates of the RemVeg landscape unit, six replicates of the OldVineyard landscape unit, five replicates of the NewVineyard landscape unit, three replicates of the Reveg landscape unit, and three replicates of the Excrop landscape unit (Fig. ). The sampling design was developed with consideration of soil and landscape variation (aspect and elevation). Replicates were chosen based on their location across the study site; where possible, landscape units were identified and sampled in close vicinity to one another and on consistent soil types, allowing for comparative analysis between each.
Soil sampling
Soils were sampled following the Biomes of Australian Soil Environments (BASE) project protocol , in December 2020 (i.e. Austral summer). Briefly, one soil sample was taken from each of the landscape unit replicates ( n = 32), comprising three pooled sub-samples taken from a 30 m radius within the replicate. Soil was collected from the bulk soil surface horizon (0–10 cm depth); a portion (approx. 50 g) was stored in a sterile 50-mL tube to be used for DNA extraction, and another, larger portion (~ 300 g) was stored in a ziplock bag for physicochemical analysis. The top litter layer was carefully removed and a scoop taken to the required depth (10 cm), with the three sub-samples thoroughly mixed prior to portioning. Samples to undergo physicochemical analysis were air dried in the bag and stored at room temperature, while tubes were immediately stored at − 8 °C until microbial analysis was performed. Additional metadata were also collected at each of the landscape unit replicates, consisting of photos of each sub-sampling location, GPS coordinates, and notes on vegetation community variables, including a plant species list. Plant species lists were compiled via visual inventories during a 5-min search of the immediate area surrounding the sampling location (30 m soil sampling radius).
Soil analysis
Soil physicochemical analysis was undertaken by the Australian Precision Ag Laboratory (APAL, Adelaide, Australia). Specifically, ammonium (NH4+), nitrate (NO3−), plant-available (Colwell) phosphorus, potassium, sodium, magnesium, calcium, organic carbon, soil pH (CaCl2), and soil texture were quantified. The Colwell phosphorus test provides a measure of plant-available phosphorus, that being the bicarbonate-extractable phosphorus. The Colwell method is considered to estimate phosphorus quantity and is the most common soil phosphorus test used in Australia . Organic carbon was determined using the Walkley and Black wet oxidation method, providing an approximation of total soil organic carbon by measuring the readily oxidisable/decomposable carbon, which is considered to account for roughly 80% of the total soil organic carbon pool . DNA extraction and sequencing were undertaken by the Australian Genome Research Facility (AGRF, Adelaide, Australia) using the 'DNeasy PowerSoil Pro Kit' from Qiagen . Briefly, soil samples were added to a bead-beating tube for rapid and thorough homogenisation, cells were lysed by mechanical and chemical methods, total genomic DNA was captured on a silica membrane in a spin-column format, and DNA was washed and eluted from the membrane ready for downstream analysis . The bacterial 16S rRNA gene was PCR amplified for each replicate using the forward 27f (AGAGTTTGATCMTGGCTCAG) and reverse 519r (GWATTACCGCGGCKGCTG) primers. Sequence data were analysed using the QIIME 2 (2019.7) platform . The demultiplexed raw reads were primer trimmed using the cutadapt plugin, with a length cut-off of 240 bp for the forward primer (default –error-rate 0.1 –times 1 –overlap 3). DADA2 with default settings (–p-max-ee 2, –p-chimera-method consensus) was used to denoise, dereplicate, and filter chimeras . Taxonomy was assigned to amplicon sequence variants (ASVs) using the q2 feature classifier . Sequences from the Greengenes database (v13.8) were trimmed to the targeted regions (V1–V3) and used as a training dataset for the classifier, resulting in an absolute abundance ASV table for use in downstream analysis.
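For readers working in R rather than QIIME 2, the following is a minimal sketch of an equivalent denoising workflow in the DADA2 R package, mirroring the parameters reported above (240 bp truncation, maximum expected errors of 2, consensus chimera removal). This is not the authors' exact pipeline, and the file and reference-database names are placeholders.

```r
# Sketch: DADA2-in-R analogue of the QIIME 2 denoising step described above
library(dada2)

raw  <- c("sample1_R1.fastq.gz", "sample2_R1.fastq.gz")  # primer-trimmed reads (placeholders)
filt <- file.path("filtered", basename(raw))

# Quality filtering: truncate to 240 bp and discard reads with > 2 expected errors
filterAndTrim(raw, filt, truncLen = 240, maxEE = 2, multithread = TRUE)

err    <- learnErrors(filt, multithread = TRUE)       # learn run-specific error rates
dadas  <- dada(filt, err = err, multithread = TRUE)   # denoise reads into ASVs
seqtab <- makeSequenceTable(dadas)                    # samples x ASVs count table

# Remove chimeras by the consensus method, matching --p-chimera-method consensus
seqtab_nochim <- removeBimeraDenovo(seqtab, method = "consensus")

# Assign taxonomy against a reference training set (file path is a placeholder)
taxa <- assignTaxonomy(seqtab_nochim, "greengenes_13_8_trainset.fa.gz")
```

The resulting `seqtab_nochim` plays the same role as the absolute abundance ASV table produced by the QIIME 2 plugin.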
Statistical analysis
The vegetation data (plant species lists) were used to determine the mean plant functional diversity associated with each of the landscape units (Supplementary and ). Observed species were categorised into one of five functional groups: perennial herbaceous groundcover (0–30 cm height, grasses and forbs); annual herbaceous groundcover (0–30 cm height, grasses and forbs); small shrub (< 1 m height, woody perennial); medium/large shrub (> 1 m height, woody perennial); and tree (woody plants with trunk and canopy over 3 m height). The observed plant functional groups were summed for each replicate, providing a plant functional diversity score for each sample replicate from which a mean plant functional diversity score could be derived for each landscape unit. The majority of statistical analyses were undertaken using R software (v4.03) , employing the microbiome data analysis framework of the phyloseq package (v1.32.1) . Both rarefied and non-rarefied data were analysed, depending on the input and standardisation requirements of the particular analysis. Firstly, rare sequence variants (< 10 sequence reads) were removed from the ASV table using the 'prune_taxa' function of the phyloseq package. A linear model (LM) was used to identify significant relationships between soil variables and landscape units, using the 'lm' function. To investigate differences in community composition between landscape units, ordination of ASV beta diversity was calculated with the 'ordinate' function in phyloseq using unrarefied data. Constrained analysis of principal coordinates (CAP) was performed on the Bray–Curtis dissimilarity matrix, constrained by the soil variables organic carbon, nitrate, phosphorus, sodium, pH, calcium, magnesium, and ammonium, and by plant functional diversity. Potassium was removed from the ordination, as it was highly correlated with other variables (cut-off of > 0.7 or < − 0.7, Pearson's product-moment correlation). Constraining variable significance was assessed non-parametrically via 999 permutations. The 'betadisper' function was used to test for homogeneity of group dispersions. A PERMANOVA (999 iterations) was run with the 'pairwise_adonis2' function of the pairwise.adonis package to test the significance of community compositional variation between landscape units . To investigate the diversity of landscape units, alpha diversity was calculated at the ASV level using observed richness and the Shannon and Simpson diversity indices, via the 'estimate_richness' function in the phyloseq package. Prior to the alpha diversity calculations, ASV-level data were rarefied using the phyloseq package's 'rarefy_even_depth' function, and Shannon and Simpson index values were transformed to effective numbers of ASVs. A negative binomial generalised linear model (GLM) was used to test for differences in alpha diversity between landscape units, followed by a goodness-of-fit analysis using the chi-squared distribution ('pchisq' function). A type II Wald chi-squared test was run with the 'Anova' function of the car package (v3.0–10) to test the main effects of the GLMs. Pairwise comparisons using Holm-Bonferroni P -adjustment were then made between landscape units using the 'pairwise' function. A correlation matrix (Pearson product-moment) was used to identify significant relationships between soil and ecological (plant functional diversity) variables and diversity metrics, and to determine any correlated variables.
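A condensed sketch of the core community analyses described above is given below. It assumes `ps` is a phyloseq object whose sample data contain the soil variables and a landscape-unit grouping column; the column names used here (`OrganicCarbon`, `Nitrate`, `Phosphorus`, `pH`, `LandscapeUnit`) are placeholders, not the authors' actual variable names.

```r
# Sketch: alpha diversity as effective ASV numbers, CAP, PERMANOVA, betadisper
library(phyloseq)
library(vegan)

# Alpha diversity, converted to effective numbers of ASVs (Hill numbers)
alpha <- estimate_richness(ps, measures = c("Observed", "Shannon", "Simpson"))
alpha$EffShannon <- exp(alpha$Shannon)            # exp(H') = effective ASVs
alpha$EffSimpson <- 1 / (1 - alpha$Simpson)       # reported Simpson is 1 - D, so this is 1/D

# Beta diversity: Bray-Curtis CAP constrained by soil variables
cap <- ordinate(ps, method = "CAP", distance = "bray",
                formula = ~ OrganicCarbon + Nitrate + Phosphorus + pH)
anova(cap, permutations = 999)                    # permutation test of the constraints

# PERMANOVA and homogeneity of group dispersions between landscape units
bray   <- phyloseq::distance(ps, method = "bray")
groups <- sample_data(ps)$LandscapeUnit
adonis2(bray ~ groups, permutations = 999)
anova(betadisper(bray, groups))
```

The `betadisper` test matters here because a significant PERMANOVA can reflect differences in dispersion rather than location; running both, as the authors describe, separates the two effects.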
Bacterial community composition was further investigated via a relative abundance stack plot, created by converting the rarefied family abundances to percentages. Rare families (< 2% of total rarefied sequences) were pooled into a single group named ‘pooled (< 2% relative abundance)’.
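A sketch of how such a plot might be produced with phyloseq and ggplot2 is shown below; object names are assumptions, and rare families are pooled here on mean per-sample relative abundance, a close stand-in for the '< 2% of total rarefied sequences' criterion described.

```r
library(phyloseq)
library(ggplot2)

# Agglomerate rarefied counts to family level and convert to percentages
ps_fam <- tax_glom(ps_r, taxrank = "Family")
ps_rel <- transform_sample_counts(ps_fam, function(x) 100 * x / sum(x))

df <- psmelt(ps_rel)                 # long format: one row per sample x family
df$Family <- as.character(df$Family)

# Pool rare families into a single group
rare <- names(which(tapply(df$Abundance, df$Family, mean) < 2))
df$Family[df$Family %in% rare] <- "pooled (< 2% relative abundance)"

ggplot(df, aes(x = Sample, y = Abundance, fill = Family)) +
  geom_col() +
  facet_grid(~ LandscapeUnit, scales = "free_x", space = "free_x") +
  labs(y = "Relative abundance (%)")
```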
The study site was a mixed-use agricultural production landscape encompassing agricultural and natural systems (Fig. ). Located on the River Murray in the New South Wales Murray Darling Wine Region of Australia (34°37′47.8″S, 143°00′56.2″E), the site consisted of two commercial agricultural operations: wine vineyards and dried-fruit vineyards. The region is highly productive, producing high-value commodities such as grapes, citrus, olives, nuts, stone fruit, cereal crops, and livestock. The area is classified as semi-arid, with most of the average annual rainfall of ~ 300 mm falling during the Austral winter (i.e. June–August). Soil, aspect, and elevation were consistent across the site, with soils classified as calcarosols or mallee loam, ranging from brown to red-brown loamy sand, sandy loam, or loam. Much of the site (roughly 50%) was dominated by irrigated vineyards, which have replaced the remnant native Eucalyptus mallee that would have occurred across most of the site and the region prior to its conversion to agriculture (Fig. ). The management practices of the two vineyard systems were consistent, with both receiving two microbial inoculants (Supplementary ). Along with the active vineyard operations and remaining remnant mallee vegetation, an ex-pastoral/cropping section existed along the north-eastern border of the site (~ 420 ha). This section was abandoned for agricultural use within the last 5 years (assessed as unsuitable for irrigated agriculture), although it still possesses a legacy of past cropping and pastoralism via a system dominated by wheat and mixed native and introduced grasslands.
Five distinct land use/ecological systems (landscape units, hereafter) were identified across the study site (Figs. and ). Remnant mallee vegetation of mixed Eucalyptus (E. gracilis, E. brachycalyx, E. leptophylla, E. incrassata) (RemVeg, hereafter) was identified and used as the natural reference system against which the impact of land use change could be measured. Three agricultural landscape units were identified: established vineyards (OldVineyard, hereafter), consisting of grape vines over 10 years old; new vineyards (NewVineyard, hereafter), consisting of grape vines under 2 years old; and a grassland section (old pasture) that had been abandoned for agricultural use (ExCrop, hereafter). Finally, a native revegetation landscape unit (Reveg, hereafter) was identified, consisting of three plantings undertaken within 2 years of sampling (2019–2020): one seedling-planted site, and two direct-seeded revegetation sites sown with a mix of approximately 15 local native plant species that had not yet emerged at the time of sampling. Replicates were identified for each of the five landscape units, totalling 32 individual sampling sites: 15 replicates of the RemVeg landscape unit, six of OldVineyard, five of NewVineyard, three of Reveg, and three of ExCrop (Fig. ). The sampling design was developed with consideration of soil and landscape variation (aspect and elevation). Replicates were chosen based on their location across the study site; where possible, landscape units in close vicinity to one another and on consistent soil types were identified and sampled, allowing for comparative analysis between them.
Aboveground diversity

Remnant mallee vegetation (RemVeg) had the highest plant species richness and plant functional diversity, possessing all five functional groups across replicates (Table ; Supplementary and ). Four plant functional groups were observed across replicates of the revegetation system (Reveg), which was also found to contain some large established and recruiting native vegetation. Seedlings of planted species were not included in the Reveg landscape unit species list, as they were not yet established. The managed vineyard systems (OldVineyard and NewVineyard) had low plant functional diversity, typically consisting of a medium shrub layer (Vitis sp.) and an annual grassy groundcover. Likewise, the ex-pasture system (ExCrop) predominantly consisted of two plant functional groups (perennial and annual herbaceous groundcover).

Soil physicochemical properties

Soil texture was consistent across the study site (Supplementary ). Replicates of the remnant mallee vegetation landscape unit (RemVeg) ranged from sandy loam to silty loam. Similarly, the vineyard landscape units (OldVineyard and NewVineyard), the revegetation unit (Reveg), and the ex-cropping unit (ExCrop) were identified as silty loam, sandy loam, or loam. Linear models (soil variable against landscape unit) revealed a number of statistically significant associations between physicochemical variables and landscape units, including several key nutrients. Nitrate (NO3−) was highest in both vineyard systems (NewVineyard, p < 0.001; OldVineyard, p = 0.004), plant-available (Colwell) phosphorus was elevated in the OldVineyard landscape unit (p < 0.001), and potassium was significantly lower in the RemVeg landscape unit (Fig. ). Magnesium was elevated in the OldVineyard landscape unit (p < 0.00), calcium was elevated in the ExCrop (p = 0.007), NewVineyard (p = 0.003), and OldVineyard (p = 0.001) landscape units (Fig. ), and sodium was elevated in the two vineyard systems (new, p = 0.015; old, p = 0.020).

Bacterial community composition

Landscape units were associated with distinct bacterial communities (Fig. ), with a significant PERMANOVA test on the dissimilarity matrix (Bray–Curtis, F = 2.86, p < 0.001). Pairwise community analysis revealed a significant difference in community composition for all but one of the 10 landscape unit pairwise comparisons, the exception being ExCrop versus Reveg (Supplementary ). Constrained ordination (CAP) indicated a clear shift in community composition from the more natural landscape units (RemVeg and Reveg) to the highly modified systems (ExCrop, OldVineyard, and NewVineyard) (Fig. ). Constraining gradients of soil physicochemistry and plant functional diversity explained 48% of the variance in community composition, with organic carbon, nitrate, phosphorus, pH, and calcium found to be significant (Table ). The beta dispersion test was significant (F = 17.839, p < 0.001), indicating that landscape units had variable species turnover among replicates. The more natural, predominantly unmanaged native vegetation systems (RemVeg and Reveg) separate from the highly modified managed systems (OldVineyard, NewVineyard, and ExCrop) along the primary x axis (CAP1, 18.1% of variation explained) (Fig. ).
This partitioning appears to be strongly influenced by plant functional diversity and agricultural inputs, evident from the vectors for plant functional diversity, nitrate, phosphorus, calcium, and magnesium, with soil pH also an influential variable affecting group partitioning along the x axis (CAP1). A bacterial community shift is also apparent within the highly modified and managed systems along the y axis (CAP2, 9.8% of variation explained): the more natural ExCrop landscape unit (relative to the vineyard systems) is clearly separated from the established vineyard system (OldVineyard), with the newly established vineyard system (NewVineyard) sitting between the two (Fig. ).

Bacterial alpha diversity

Soil bacterial diversity was compared between landscape units at ASV level. Of the three diversity metrics calculated (observed richness, effective Shannon, and effective Simpson), Simpson diversity was found to differ significantly among landscape units, as determined by GLM (Simpson, chi2 = 0.225, p = 0.005; Shannon, chi2 = 0.225, p = 0.078) (Supplementary ). The managed vineyard systems returned the highest bacterial diversity across all measured metrics, with both systems returning significant Shannon diversity (NewVineyard, Z = 2.037, p = 0.042; OldVineyard, Z = 2.483, p = 0.013), while the OldVineyard also returned significant Simpson diversity (Z = 3.496, p = 0.0004) (Fig. , Supplementary ). Alpha diversity pairwise comparisons revealed a significant difference (Simpson diversity) between the RemVeg and OldVineyard landscape units (Z = − 3.496, p = 0.004) (Supplementary ). No significant correlations (cut-off of 0.7, Pearson's product-moment correlation) were found between any of the soil physicochemical or ecological (plant functional diversity) variables measured and any of the calculated diversity metrics (observed richness, Shannon, Simpson) (Table ). Several significant correlations were found among soil physicochemical variables (potassium/calcium, r = + 0.722; potassium/phosphorus, r = + 0.747; potassium/plant functional diversity, r = 0.675).

Taxa analysis

Bacterial taxa analysis was undertaken at the family taxonomic level, on the expectation that family-level groupings would capture broadly similar functional groups (e.g. decomposers, parasites, mutualists) and thereby allow more specific and precise functional inference. The analysis revealed that rare taxa (< 2% relative abundance) dominate the system (34% relative abundance), with only 10 of the 210 families identified across the site found to be abundant (> 2% relative abundance): Rubrobacteraceae (17.4%), Bacillaceae (9.2%), Bradyrhizobiaceae (7.2%), Pseudonocardiaceae (3.6%), Micrococcaceae (3.1%), Geodermatophilaceae (2.7%), Rhodospirillaceae (5.1%), Sphingomonadaceae (3.4%), Sinobacteraceae (2.8%), and Hyphomicrobiaceae (2.7% relative abundance). The relative abundance of bacterial families within landscape units was further investigated, similarly finding that bacterial communities were dominated by rare taxa (< 2% relative abundance), ranging from 38% of community composition in the Reveg landscape unit to 30% in the ExCrop landscape unit (Fig. , Supplementary 8).
Interestingly, the Rubrobacteraceae family was the dominant taxon in all landscape unit communities except the established vineyard system (OldVineyard), where the Bacillaceae family was the most relatively abundant (Fig. , Supplementary 8).
Bacterial community composition and soil physicochemical characteristics of different land systems (landscape units) were investigated across a semi-arid production landscape to explore the impact of land use on soil bacterial communities. As hypothesised, the landscape units differed in their soil physicochemical characteristics. This was linked to a shift in the soil microbiome, such that distinct bacterial communities were associated with land use systems. Interestingly, the highest bacterial diversity was observed in the managed vineyard systems, highlighting that aboveground diversity does not necessarily correlate with belowground diversity. The restoration of native plant communities appears to be acting to recover native bacterial communities, suggesting that such actions have the capacity to influence not only aboveground species composition but also belowground bacterial community assemblage.

Managed vineyard systems associated with elevated levels of key nutrients

Elevated concentrations of nitrate (NO3−) and plant-available (Colwell) phosphorus were identified in the managed vineyard systems (OldVineyard and NewVineyard). The higher concentrations of these nutrients were not surprising, given the addition of soil microbial inoculants containing proportions of these nutrients (product A: nitrogen = 2.66% w/v, phosphorus = 1.2% w/v, potassium = 0.25% w/v; product B: phosphorus = 2.09%), and the addition of other fertilisers that would also likely contain these nutrients. This result highlights the physicochemical changes that accompany the conversion of mallee vegetation to vineyard agriculture in relation to soil nutrition. Although increased concentrations of common agricultural inputs such as nitrogen and phosphorus could be viewed as positive in the context of agricultural productivity, the long-term sustainability of the system could be questioned given the well-recognised negative impacts associated with fertiliser use.

Land systems/practices drive distinct bacterial communities

Our results indicate that bacterial community composition is strongly associated with land use, based on pairwise comparisons and constrained ordination (CAP) analysis of bacterial community composition (Fig. , Supplementary 4). Only one of the pairwise comparisons was not significant (ExCrop versus Reveg). This is likely because the revegetation systems were only recently (within the last 2 years) converted from pastoral land (ExCrop), so the associated bacterial community still resembles that of the ExCrop landscape units. The observed separation of natural systems (RemVeg and Reveg) from the modified agricultural systems (ExCrop, OldVineyard, and NewVineyard) in the CAP analysis indicates that agricultural land use change has modified bacterial community composition across the study system. The observed partition of natural and agricultural communities appears to be strongly correlated with, and potentially driven by, plant functional diversity and management practice. Elevated concentrations of key nutrients in the vineyard systems may be the result of microbial inoculants and additional fertiliser inputs, indicating that these practices, and the associated change in soil nutrients, have influenced community composition, as evident from the statistically significant nitrate and phosphorus constraining variables in the CAP analysis (Fig. ).
Although not found to correlate with bacterial diversity (Table ), soil pH also appears to influence community composition, in line with other studies reporting similar results. Constrained ordination (CAP) also revealed that plant functional diversity influenced community composition in the opposite direction to the key nutrient vectors (nitrate and plant-available (Colwell) phosphorus) in the ordination space (Fig. ). As expected, the conversion of remnant mallee vegetation to vineyard agriculture was found to reduce plant functional diversity, suggesting that land use change is also a key factor influencing bacterial community composition.

The Rubrobacteraceae family was the most abundant family across the study site (~ 17.4% relative abundance) and was the most abundant in all landscape units except the established vineyards (OldVineyard). Recognised as among the most radiation-resistant organisms, and as halotolerant and desiccation tolerant, the Rubrobacteraceae have a selective advantage in extreme environments, including arid soils, permafrost, and saline environments. The greater relative abundance of Rubrobacteraceae across the study system is likely a legacy of the semi-arid soils that much of the site would have consisted of before its conversion to irrigated production agriculture. The reduced abundance of this family in vineyard systems highlights the ability of land use to alter the abundance of specific taxa. Indeed, revegetation of old pasture sites (Reveg landscape unit) has acted to shift the bacterial community back towards a reference state (Fig. ), i.e. the community associated with remnant mallee vegetation (RemVeg). This finding suggests that the restoration of aboveground ecosystems can act to restore belowground bacterial communities, as reported in other studies, with potential implications for soil and wider ecosystem health. For instance, native microbial communities likely harbour a greater proportion of species possessing traits advantageous under local environmental conditions, providing a pool of well-adapted (potentially plant-beneficial) species that may disperse to adjacent production systems, such as vineyards.

To explore the efficacy of management practices employed to improve soil condition (inoculation), inoculated groups were investigated where possible. The inoculated bacterial family Pseudomonadaceae was not found to be abundant (> 2% relative abundance) in any of the landscape units (Supplementary 8), including those in which it was applied (OldVineyard and NewVineyard), indicating that the addition of inoculants has not influenced the abundance of this family. No conclusion could be drawn regarding the efficacy of applied inoculants in proliferating members of the Actinomycetes group, as no information could be sourced regarding which specific taxa (e.g. species, genera, families) the inoculants contained.

Conversion of remnant mallee vegetation to vineyard agriculture increases bacterial diversity

Observed bacterial richness at amplicon sequence variant level did not differ significantly among the landscape units assessed, while both effective Simpson and Shannon diversity metrics (which account for abundance and evenness) were found to be statistically significant (Supplementary 2). Shannon and Simpson diversity metrics are widely recommended and commonly used when analysing microbial diversity, and have been shown to reduce the bias (richness over evenness) often associated with other diversity metrics.
Results revealed that the managed vineyard landscape units had the highest soil bacterial diversity, with both the established and new vineyard systems (NewVineyard and OldVineyard) returning statistically significant diversity results. This result was not in line with our expectations, or with other studies that suggest a positive correlation between plant diversity/complexity and bacterial diversity. The conversion of the more diverse remnant mallee vegetation to monoculture agriculture has, in fact, increased belowground bacterial diversity, suggesting that agriculturally driven land use change has resulted in a decoupling of above- and belowground diversities. Although a positive relationship between above- and belowground diversities has been observed in other studies, and could be considered a broadly accepted principle, we suggest that agriculture can act to disrupt this relationship via major modification of the natural soil system: modification occurring through the removal and replacement of natural plant communities (and their associated inputs), and through the ongoing management practices associated with production systems, such as the planting of crops, chemical/fertiliser application, and soil tillage. In this regard, we propose that agricultural land use (and its associated practices) may be a stronger driver of soil bacterial diversity than aboveground plant diversity. In their analysis of experimental grasslands, Zak et al. (2003) found that plant diversity increased the biomass and altered the composition of soil microbial communities, but attributed this to the increase in plant production associated with greater species diversity, rather than to plant diversity per se. This goes some way to explaining the findings here, given that plant production would likely be greatest in the agricultural systems (due to management practices such as fertiliser and water inputs), where bacterial diversity was also found to be highest (New and OldVineyards), adding support for our suggestion that agriculture fundamentally disrupts the natural processes, such as plant production, that otherwise govern the linkage between above- and belowground diversity. Further support can be found in a global meta-analysis of more than 84 studies: Liu et al. (2020) found that microbial richness showed a moderate but positive correlation with plant diversity, and likewise suggested that plant communities with higher diversity may promote more diverse microbial communities through a greater diversity of inputs (chemical, i.e. root exudates, and physical, i.e. litter) and higher productivity, leading to increased niche space. Another factor likely impacting diversity, and typical of agricultural systems, is disturbance. The intermediate disturbance hypothesis (IDH), under which the diversity of competing species is maximised at intermediate frequencies and/or intensities of disturbance or environmental change, partly explains the diversity results found here. In stable-state environments, a few well-adapted taxa outcompete and dominate those less adapted, reducing diversity. Conversely, intermittently disturbed environments can result in the persistence of a greater number of taxa, owing to increased environmental heterogeneity or habitats (niche space) to which different taxa are suitably adapted and which they can exploit. Although it is recognised that this study was not designed to test the IDH, the results do point to disturbance as a significant factor driving bacterial diversity.
Land use change (from remnant mallee vegetation to managed vineyards) and the associated management practices employed (e.g. cover cropping, chemical/fertiliser application, tillage) could be viewed as having a positive effect on soil health, given that there is some evidence to suggest that soil microbial diversity confers stability against stress and protection against soil-borne disease. However, while microbial diversity is a valuable tool in evaluating change in soil condition, it is acknowledged that increased diversity does not necessarily indicate positive change. For instance, a bacterial community may have higher diversity than another while also consisting of a greater proportion of parasitic taxa, potentially indicating poor soil condition (in the context of plant productivity). Thus, taxonomic community shifts should also be considered in evaluating the impacts of aboveground land use change, as this variable may be of greater significance to soil–plant systems than diversity per se. It is also recognised that the increased bacterial diversity and soil fertility associated with the managed vineyard systems are likely the result of other inputs, such as microbial amendments, fertiliser, and water. Interestingly, nitrate and plant-available (Colwell) phosphorus were found to be elevated in both vineyard systems (Fig. ), while no significant correlation was identified between these nutrients and bacterial diversity (Table ). It could be assumed that the amendments (hydrolysed molasses, amino acids, fulvic acid, seaweed, and liquid fish) and/or the living bacteria within the inoculants are driving diversity in the inoculated systems (New and OldVineyards). However, this assumption lacks the appropriate statistical evidence to be validated here, and further work would be required to assess the impact of these soil amendments on bacterial diversity.
A reduction in aboveground plant diversity is inevitable when natural systems are converted to large-scale production monocultures, as was found here. It is also broadly assumed that this aboveground change results in a reduction in belowground diversity (the above–belowground diversity linkage). Our results stand in contrast to this assumption, with the finding that agricultural systems (with reduced aboveground diversity) had increased soil bacterial diversity. We highlight that restoration of native plant communities can act to rapidly recover natural soil bacterial communities, which in turn could improve soil and plant health. However, the impact of the shifts in bacterial community composition associated with the land use systems detected here cannot be fully understood without a greater understanding of the functional significance of the key bacterial groups identified. Such shifts should be considered in future studies seeking to further our understanding of land use impacts on soil and plant health.
Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 52 KB)
|
COVID-19 in cancer patients: update from the joint analysis of the ESMO-CoCARE, BSMO, and PSMO international databases
|
4802cbfe-48c3-45ae-b2c9-2fcf1c868fc4
|
10156988
|
Internal Medicine[mh]
|
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in December 2019, infecting more than 645 million people and resulting in >6 million deaths from the coronavirus disease (COVID-19). COVID-19 has had important consequences for health systems across the world, with cancer patients especially vulnerable given their increased risk of SARS-CoV-2 infection, morbidity, and mortality, and given the global disruption of cancer care from early detection and diagnosis to optimal care. In January 2020, the European Society for Medical Oncology (ESMO) initiated the ESMO COVID-19 and CAncer REgistry (ESMO-CoCARE) in order to study the effects of COVID-19 in patients with cancer and to propose approaches to mitigate the risks related to COVID-19 and cancer diagnosis/treatment, as well as the evolution of both diseases. ESMO-CoCARE was amongst the largest observational, multicenter registries, including centers from Europe, Africa, and Asia/Oceania. The first analysis of the ESMO-CoCARE registry showed that patient/cancer characteristics related to gender, ethnicity, poor fitness, comorbidities, systemic inflammation, and the presence of an active malignancy were associated with moderate/severe disease and adverse outcomes from COVID-19. These initial findings highlighted the need to adapt daily practice in oncology. With the evolution of the pandemic, prevention of infection and of subsequent severe COVID-19 appeared crucial for patients with cancer, with vaccination being the most effective method of achieving this goal. Fortunately, owing to a massive global effort, several highly effective vaccines, particularly messenger RNA (mRNA)-based vaccines (BNT162b2 and mRNA-1273) and adenovirus-vectored vaccines (ChAdOx1 nCoV-19, Ad26.COV2.S, and Gam-COVID-Vac), were developed at an unprecedented speed. These vaccines were safe and effective in patients with cancer. Moreover, several patients experienced natural SARS-CoV-2 infection and acquired immune protection. As the pandemic has progressed, the availability of effective vaccines and therapeutics, as well as the emergence of new variants, could influence the severity of COVID-19 in cancer patients; we therefore proceeded to a second analysis of ESMO-CoCARE data, jointly with the BSMO (Belgian Society of Medical Oncology) and PSMO (Portuguese Society of Medical Oncology) registries. Herein, we report the results of this updated analysis, which aimed at assessing significant prognostic factors for the COVID-19 outcomes of hospitalization, mortality, intensive care unit (ICU) admission, and overall survival (OS). In addition, subgroup analyses by pandemic phase and vaccination status were carried out.
Study design and participants

This is an observational prospective study, based on longitudinal multicenter surveys of cancer patients diagnosed with COVID-19. The primary aim of the study is to describe the characteristics of COVID-19 in patients with cancer, exploring associations with both cancer and COVID-19 outcomes. The current analysis cohort includes cancer patients with COVID-19 registered in CoCARE, BSMO, and PSMO. In the BSMO registry, only hospitalized patients with cancer and COVID-19 were included. All three registries collected data on clinical features, course of disease, management, and outcomes for both cancer and COVID-19. Data reported here were extracted from medical records of consecutive patients diagnosed with COVID-19 from 1 January 2020.

Study objectives and endpoints

The present analysis focuses on the identification of factors potentially associated with COVID-19 hospitalization and mortality over the different pandemic phases (also named waves) and in subgroups of special interest. The primary endpoints were (i) COVID-19 hospitalization, categorized based on hospitalization requirement and indication for ICU admission (no hospitalization versus hospitalization indicated/took place, with or without ICU indication/admission), and (ii) COVID-19 mortality, including deaths reported for patients who did not recover from COVID-19 as well as deaths reported for patients who recovered but died later due to COVID-19 complications. Secondary endpoints included admission to ICU (ICU indication/admission versus no hospitalization or hospitalization indicated/took place without ICU) and OS (a time-to-event endpoint), defined as the time from the date of formal COVID-19 diagnosis until death from any cause. Of note, the analysis of COVID-19 hospitalization did not include BSMO, since all patients from BSMO were hospitalized, while COVID-19 mortality was analyzed for hospitalized patients only (among the non-hospitalized, only 2.8% died due to COVID-19).

Statistical analysis

Significant risk factors for COVID-19 hospitalization, COVID-19 mortality, and admission to ICU were examined through multivariable logistic regression models, stratified by registry; odds ratios (ORs) are provided (multicollinearity was also checked). Multivariable Cox proportional hazards models, stratified by registry, were fitted for OS; hazard ratios (HRs) are provided (proportionality was explored using Schoenfeld residuals). For the multivariable analyses, a pre-selection of explanatory variables was made to avoid overfitting of the model. Initial variable selection was based on significance in univariable analysis stratified by registry (P < 0.10), possible correlation between variables, importance of factors, and data availability. For all the multivariable models, the factors with significant effects were derived based on the backward elimination method (removal criterion P > 10%). Several important factors were further explored, including (i) phase of the pandemic [phase I (January to May 2020); phase II (June to September 2020); phase III (October 2020 to February 2021); phase IV (March to December 2021)], (ii) vaccination status at COVID-19 diagnosis [no vaccination/vaccination not completed; vaccination completed (at least 2 weeks before diagnosis)], (iii) age at COVID-19 diagnosis (<50; 50-69; ≥70 years), and (iv) ethnicity (Caucasian; Asian; other). The association of these factors with patient/clinical/cancer/COVID-19 characteristics was explored through Fisher's exact test.
Subgroup analyses for the primary outcomes of COVID-19 hospitalization/mortality were carried out to determine whether the effect of these characteristics of interest was consistent across the various subgroups. Separate multivariable logistic regression analyses were carried out for each subgroup. For the subgroup analysis by vaccination status, the propensity score matching method was used to create two cohorts of the same size and similar characteristics, adjusting for confounding factors and reducing the potential bias resulting from imbalances in these factors between the two cohorts ('1 to 1 Greedy Matching' algorithm). All P values are two-sided and considered statistically significant if ≤0.05. Given the exploratory setting of this analysis, no multiplicity adjustment was applied. Data were analyzed using SAS v9.4 and R v4.0.5 software.
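For illustration only, the core models described above could be fitted in R along the following lines. This is a sketch, not the registries' actual SAS/R programs: the data frame `cohort` and all variable names are hypothetical, stratification by registry is expressed via conditional logistic regression and a stratified Cox model, and the 1:1 greedy matching uses MatchIt's nearest-neighbour method.

```r
library(survival)
library(MatchIt)

# Stratified (conditional) logistic regression for COVID-19 hospitalization,
# with registry as the stratum; covariates are illustrative
fit_hosp <- clogit(hospitalized ~ age_group + sex + ecog_ps + active_cancer +
                     strata(registry), data = cohort)
summary(fit_hosp)            # exponentiated coefficients give the ORs

# Cox proportional hazards model for OS, stratified by registry
fit_os <- coxph(Surv(os_months, death) ~ age_group + sex + ecog_ps +
                  active_cancer + strata(registry), data = cohort)
cox.zph(fit_os)              # Schoenfeld residuals check of proportionality

# 1:1 greedy (nearest-neighbour) propensity score matching on vaccination
m <- matchit(vaccinated ~ age_group + sex + ecog_ps + active_cancer + phase,
             data = cohort, method = "nearest", distance = "glm", ratio = 1)
matched <- match.data(m)     # matched cohort for the subgroup analysis
```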
Cohort description

The overall analysis cohort includes 3294 patients with cancer history and COVID-19 diagnosis from January 2020 to February 2022: 2049 (62%) from CoCARE (23 countries, with the UK (31%) and Spain (20%) contributing most, database cut-off date: 17 May 2022), 928 (28%) from Belgian centers (BSMO), and 317 (10%) from Portuguese centers (PSMO) ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ). Overall, 36% of the analysis cases were diagnosed during the first phase of the pandemic, 9% during phase II, 41% during phase III, and 12% during phase IV ( A). This time distribution also holds for CoCARE, whereas in BSMO most of the cases were from phase I (54%), and in PSMO from phase III (62%). Of note, in the BSMO registry, information is available only until January 2021 (i.e. until phase III of the pandemic). A flow chart of the analysis for the overall population and by registry is presented in , available at https://doi.org/10.1016/j.esmoop.2023.101566 .

Cohort demographics, clinical, and cancer disease characteristics are provided in , available at https://doi.org/10.1016/j.esmoop.2023.101566 . Median age of the overall cohort was 66 years (interquartile range 55-75 years), with half of the patients being female. Among patients with known ethnicity, 68% were Caucasian and 8% Asian. Never smokers (38%) and former/current smokers (37%) were almost equally represented, while 60% had Eastern Cooperative Oncology Group performance status (ECOG PS) 0/1. Most of the patients had pre-existing co-morbidities (74%), with cardiovascular (49%) and metabolic (33%) the most common ones, while 69% received at least one concomitant medication ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ). The vast majority (88%) of patients had solid tumors (breast: 21%, colorectal: 13%, lung: 13%, prostate: 7%; other: 34%), with hematological malignancies reported for 9%. Most of the patients had evidence of active cancer at COVID-19 diagnosis (62%), with 21% having no evidence of disease (excluding BSMO, for which cancer status was not available), whereas half (50%) had cancer stage III/IV. Some 60% were on cancer treatment (including any antineoplastic therapies within 3 months before COVID-19 diagnosis). For 31% of the patients, the cancer treatment plan was adjusted due to COVID-19 (25% delay, 3% cancellation).

summarizes the vaccination status of patients. Among the 2366 CoCARE/PSMO patients, 534 (23%) had an initial vaccination and 186 (8%) had also received a booster dose. At COVID-19 diagnosis, 103 patients (4%) had completed vaccination (last dose at least 2 weeks before COVID-19 diagnosis) and 1419 (60%) were either not vaccinated (1058; 45%) or vaccination was not completed (361; 15%).
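As a small illustration of how the vaccination-status categories above could be derived, assuming a two-dose primary schedule and hypothetical columns (`n_doses`, `last_dose_date`, `dx_date`) that are not the registries' actual field names:

```r
## Completed vaccination = full primary schedule with the last dose at
## least 14 days before COVID-19 diagnosis (two-dose schedule assumed).
dat$vax_status <- with(dat, ifelse(
  !is.na(last_dose_date) & n_doses >= 2 &
    as.Date(dx_date) - as.Date(last_dose_date) >= 14,
  "vaccination completed (>=2 weeks)",
  "not vaccinated / vaccination not completed"))
table(dat$vax_status, useNA = "ifany")
```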
COVID-19 diagnosis, course of illness, and outcome

Details on COVID-19 diagnosis and course of illness are provided in , available at https://doi.org/10.1016/j.esmoop.2023.101566 . COVID-19 hospitalization was reported for 65% of the patients, including 14% with ICU admission. Of note, the COVID-19 hospitalization rate (excluding BSMO, where all patients were hospitalized) was 54%. At initial presentation of COVID-19, 76% had at least one symptom, most commonly fever (46%), cough (41%), and dyspnea (31%). Complications occurred in 35% of the patients, most frequently pulmonary (24%), cardiovascular (7%), and systemic (6%). Furthermore, based on CoCARE/PSMO, 13% experienced serious complications and 32% required supplemental oxygen, whereas treatment of COVID-19 or its sequelae was administered to 42%, including azithromycin (19%), anticoagulation (18%), hydroxychloroquine (15%), and corticosteroids (15%).

Regarding clinical outcome ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ), among 2809 patients with available follow-up, 622 (22%) died due to COVID-19. Overall, 1031 (37%) deaths were recorded, the most common reasons being COVID-19 complications (60%) and progressive disease (PD) (cancer) (18%). Among patients who recovered (n = 2437), 7% had major complications, including lung function impairment (3%), pneumonitis (3%), and fatigue (2%). Thirty patients had been re-infected or COVID-19 was re-activated ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ).

Association with baseline factors—multivariable analysis and temporal trends

COVID-19 hospitalization

According to the multivariable logistic model ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ), the COVID-19 hospitalization rate was higher in older patients, those of Asian/other ethnicity compared with Caucasian, and those with worse ECOG PS (≥2) and a higher number of co-morbidities (OR range 1.26-2.90). Patients with breast, colorectal, or other solid tumors had a lower hospitalization rate than patients with hematological malignancies (OR = 0.34, 0.48, and 0.59, respectively), whereas patients with PD (cancer) needed to be hospitalized more often compared with those with no evidence of disease (NED) (OR = 1.67). During the second, third, and fourth phases, lower hospitalization rates were observed compared with the first phase (OR = 0.26, 0.36, and 0.19, respectively). Asymptomatic patients were hospitalized less often, as expected (OR = 0.13), whereas patients in the poorer risk category of neutrophil-lymphocyte ratio (NLR) (≥6), platelet-lymphocyte ratio (PLR) (≥270), OnCOVID inflammatory score (OIS) (≤40), and prognostic index (PI) had higher hospitalization rates (OR range 1.60-5.48).

COVID-19 mortality (among hospitalized patients)

COVID-19 mortality among hospitalized patients (multivariable model; , available at https://doi.org/10.1016/j.esmoop.2023.101566 ) was higher in male patients, older patients, those of ethnicity other than Asian/Caucasian, and those with worse ECOG PS (≥2) and BMI < 25 (OR range 1.34-2.14). Patients with prostate or other solid tumors had fewer COVID-19-related deaths than patients with hematological malignancies (OR = 0.37 and 0.55, respectively). Patients with progressive tumor, however, died more often due to COVID-19 compared with those with NED (OR = 2.50). Finally, as expected, patients with stage I/II or III had lower COVID-19 mortality rates than patients with stage IV (OR = 0.40 and 0.62, respectively). This held also for asymptomatic patients (OR = 0.43).

COVID-19 ICU admission

ICU admission ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ) was higher in patients with worse ECOG PS (≥2) and in patients from centers in lower-middle-income countries compared with high-income ones (OR = 1.87 and 2.02, respectively), whereas older patients had a lower ICU admission rate (OR = 0.89). Patients with solid tumors were admitted to ICU less frequently than patients with hematological malignancies (OR range 0.31-0.63). Patients with PD, however, had higher ICU admission rates compared with those with NED (OR = 1.70), as did patients in the poorer risk category of OIS (≤40) (OR = 1.71).
Asymptomatic patients were admitted to ICU less often (OR = 0.19). Due to the unexpected finding regarding age, a sensitivity analysis was conducted excluding BSMO (which included only hospitalized patients). In this case, the pandemic phase was found significant and remained in the model instead of age (all other variables and their effects were the same) ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ).

OS

Among 2791 patients with available information, median follow-up was 6.05 months (interquartile range 5.95-11.99). A total of 1013 (36%) deaths were recorded, with a 58.8% [95% confidence interval (CI) 56.6% to 61.0%] 1-year OS rate and median OS of 13.6 months (95% CI 12.6-16.5 months) ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ). According to the final stratified multivariable Cox model ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ), males showed a higher mortality risk (HR = 1.40). The risk of death was lower for Asians in comparison with Caucasians (HR = 0.55), whereas it was higher for other ethnicity (HR = 1.31). Worse ECOG PS and lower BMI were also associated with an increased risk of death (HR = 2.09 and 1.32, respectively). Patients with breast and prostate tumors had a lower mortality risk compared with those with hematological malignancies (HR = 0.63 and 0.55, respectively), as did patients with stage I/II and III compared with IV (HR = 0.38 and 0.62, respectively). Immunotherapy/targeted therapy (alone or with chemotherapy) and other cancer treatment or no treatment were also associated with a lower risk of death compared with chemotherapy (HR range 0.70-0.79). Finally, patients in the NLR poorer risk category displayed a higher mortality risk (HR = 1.44). In a sensitivity analysis excluding BSMO, only BMI (compared with the above model) was not found to be significant and was thus not included in the corresponding final model ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ).
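The OS summaries above (1-year rate, median OS, median follow-up) correspond to standard Kaplan-Meier quantities; a minimal sketch, continuing the hypothetical columns used earlier:

```r
library(survival)

## Kaplan-Meier estimate of OS from COVID-19 diagnosis (months).
km <- survfit(Surv(os_months, death) ~ 1, data = dat)
summary(km, times = 12)   # 1-year OS rate with 95% CI
print(km)                 # median OS with 95% CI

## Median follow-up is typically estimated by reverse Kaplan-Meier,
## i.e. treating deaths as censored and censorings as events.
survfit(Surv(os_months, 1 - death) ~ 1, data = dat)
```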
Subgroup analysis by pandemic phase

Hospitalization, ICU admission, and all-cause death rates decreased significantly across the four pandemic phases (hospitalization from 78% in January to May 2020 to 34% in March to December 2021, ICU from 16% to 10%, all-cause death from 41% to 19%). Among hospitalized patients, no significant change was observed for COVID-19 mortality, as opposed to COVID-19 mortality for the overall cohort. Respective rates for the overall cohort are presented in B. Most of the patient/clinical/cancer/COVID-19 characteristics differed significantly among the different pandemic phases ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ). According to the multivariable models for COVID-19 hospitalization and COVID-19 mortality (among hospitalized), by pandemic phase ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ), age, ECOG PS, cancer status, and tumor type were significant prognostic factors for both endpoints in most of the phases. Symptoms were also significant for COVID-19 hospitalization in all phases, as well as in phase I for COVID-19 mortality. The effect of gender was significant only in phase I. Ethnicity and BMI exhibited a significant effect only on the hospitalization rate in phase III. The effect of a country's income level on hospitalization was significant in phases I and III, but in opposite directions: in phase I, the hospitalization rate was higher in upper-middle-income countries (95%) compared with high-income economies (75%), whilst in phase III the opposite association was detected (22% versus 46%).

Subgroup analysis by vaccination status

Vaccination had a protective effect on COVID-19 hospitalization (OR = 0.24), ICU admission (OR = 0.29), and OS (HR = 0.39), whereas no difference was shown in the COVID-19 mortality rate among hospitalized patients. The association of vaccination status with variables of interest is presented in , available at https://doi.org/10.1016/j.esmoop.2023.101566 , with a significantly higher vaccination rate for Caucasians (9%), patients with ECOG PS 0 (9%), centers in Northern/Western Europe (10%), and upper-middle-income economies (13%). Breast (10%) and prostate (9%) cancer patients had the highest vaccination rates, as did patients on active cancer treatment at COVID-19 diagnosis (8%). The vaccination rate, however, was significantly lower among symptomatic patients (6%), those with pulmonary/cardiovascular/systemic complications (2%/1%/2%), those requiring O2 (2%), as well as those requiring COVID-19 treatment (5%). In , available at https://doi.org/10.1016/j.esmoop.2023.101566 , the multivariable logistic models for COVID-19 hospitalization are presented for each vaccination subgroup, after matching for baseline characteristics. For non-completely vaccinated patients, fewer hospitalizations occurred in the upper-middle-income economies compared with high-income countries, possibly due to health system capacity saturation. Age, symptoms, and PLR were significant factors in the vaccinated group, whereas ethnicity, ECOG PS, BMI, and cancer status were significant factors in the non-vaccinated one. No model was fitted for COVID-19 mortality due to the small number of patients and events in each subgroup.

Subgroup analyses by age group and ethnicity

Subgroup analysis by age is based on the following grouping: 543 (17%) '<50 years', 1379 (42%) '50-69 years', and 1324 (41%) '≥70 years'. Older patients had significantly higher rates of hospitalization (70%), COVID-19 death among hospitalized (37%), and all-cause death (45%). Age was significantly associated with most of the factors examined ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ). In , available at https://doi.org/10.1016/j.esmoop.2023.101566 , results from the multivariable models for COVID-19 hospitalization and COVID-19 mortality (among hospitalized) are presented for each age subgroup. ECOG PS was a significant prognostic factor for both endpoints in all subgroups, with a stronger effect for younger patients. Cancer status (PD versus NED) was significant for COVID-19 hospitalization and mortality in the <50 and 50-69 groups, whereas cancer stage had a significant effect on the mortality of patients >50 years old. Symptoms had a significant effect (increasing with age) on hospitalization for all age groups, and on the mortality of older patients (≥70). PI significantly affected the hospitalization of younger patients, PLR and OIS the hospitalization of middle-aged patients, whereas the modified Glasgow prognostic score (mGPS) affected older patients. Ethnicity was found to be significant for COVID-19 hospitalization only, in the <50 years and 50-69 years groups (with Caucasians having a lower hospitalization rate), whereas gender, co-morbidities, and vaccination status were significant prognostic factors for hospitalization in the <50 years age group.
The pandemic phase was significant for COVID-19 hospitalization in all age groups (with fewer hospitalizations during phases II/III/IV compared with I), as well as for COVID-19 mortality in older patients (≥70 years) (with fewer COVID-19 deaths in phase II only compared with I). Subgroup analysis by ethnicity was based on 2036 patients with known ethnicity: 1390 (68%) Caucasian, 163 (8%) Asian, and 483 (24%) other ethnicities. Patients with ethnicity other than Asian/Caucasian had significantly higher rates of COVID-19 hospitalization (62%), COVID-19-related deaths among hospitalized (47%), ICU admission (17%), and all-cause death (41%). Ethnicity was also significantly associated with most of the factors examined ( , available at https://doi.org/10.1016/j.esmoop.2023.101566 ). In , available at https://doi.org/10.1016/j.esmoop.2023.101566 , the multivariable logistic models for COVID-19 hospitalization and COVID-19 mortality among hospitalized are presented for each ethnicity subgroup. Results for the Asian subgroup are mainly descriptive due to the small number of patients.
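The per-subgroup models reported throughout this section could be refitted along the following lines, continuing the hypothetical dataset from the earlier sketches:

```r
## Refit the same multivariable hospitalization model separately within
## each age subgroup (analogous loops apply to ethnicity or phase).
age_models <- lapply(split(dat, dat$age_grp), function(d)
  glm(hosp ~ ecog + cancer_status + tumor_type + phase,
      data = d, family = binomial))
lapply(age_models, function(fit) exp(coef(fit)))  # ORs per subgroup
```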
Discussion

This updated analysis showed a significant decrease across the four pandemic phases in COVID-19-related hospitalization, ICU admissions, and overall COVID-19 mortality; however, no significant change was reported in COVID-19-related mortality among hospitalized patients, which remained relatively stable across pandemic phases. At the time of analysis, the COVID-19-related death rate in our cohort was 22% (622 deaths). This result was similar to the 24.5% reported in the first analysis of ESMO-CoCARE, but higher than the mortality rates in the general population. Although these results were better than those reported initially in COVID-19 patients with cancer, great variability has been observed across studies in the literature, with mortality rates between 13% and 33%. A meta-analysis of 110 studies showed a pooled mortality rate of 14.1% in patients with cancer and COVID-19. By contrast, a different meta-analysis of 33 879 patients yielded a mortality rate of 25.4%, in line with the first CoCARE analysis. This heterogeneity could be explained by geographic location, pandemic phases, as well as access to cancer treatment and COVID-19 care. The stability in COVID-19-related mortality among hospitalized patients across the pandemic phases could be due to the increased risk of infection after the first lockdown, a negative selection bias for the high-risk population, and ineffective antiviral treatments for severe COVID-19. Indeed, modalities of viral transmission changed, and some patients might have had more advanced and severe cancers during phases III and IV of the pandemic owing to disrupted access to care. A study showed a decrease in mortality between the first and the second outbreaks; this finding was also present in our analysis when evaluating COVID-19 mortality over all CoCARE/PSMO patients, whereas stability was observed for COVID-19 mortality among hospitalized patients in our cohort (primary endpoint). The hospitalization rate was 54%, close to the 58% reported by Grivas et al. The actual number of patients with cancer and COVID-19 may not have been accurately captured, as some patients with asymptomatic or minimally symptomatic COVID-19 were not tested and consequently not included in the studies. Another study showed a decrease in mortality across the waves during the acute phase of COVID-19 infection, with a possible benefit of steroids. Possible explanations are differences in the duration of follow-up or the smaller number of participating countries, with less heterogeneity in the management of COVID-19 infection and therefore better management of patients and their complications. The factors associated with an increased COVID-19-related risk of mortality in hospitalized patients were male gender, older age, ethnicity other than Caucasian/Asian, worse ECOG PS, BMI < 25, and an active malignancy, in line with published studies and the first CoCARE analysis. Hematological malignancies carried a significantly higher risk of death than prostate cancer or other solid tumors. Hematological malignancies had already been associated with worse COVID-19 outcomes and reduced immune responses to vaccination, contributing to ongoing unfavorable COVID-19 outcomes.
The HR for 'no treatment' versus chemotherapy with regard to hospitalization was 2.81; this result is consistent with those of CCC19, which suggested that cancer treatment could be continued during the pandemic in view of the benefit-risk ratio, even for cytotoxic chemotherapy if clinically indicated. Interestingly, in our study COVID-19 hospitalizations and ICU admissions decreased across the four pandemic phases. The ORs for the subsequent pandemic phases compared with the first phase (January to May 2020) ranged from 0.15 to 0.30. This result could be explained by better management of COVID-19, acquired anti-SARS-CoV-2 immunity either naturally or through vaccination, early diagnosis and supportive therapy, the presence of less aggressive SARS-CoV-2 variants, and a lower tendency to hospitalize minimally symptomatic patients. Indeed, avoiding hospitalization could also decrease the risk of nosocomial transmission and complications. Our observation that older patients had a lower ICU admission rate could reflect an age limit for ICU admissions during the highest peaks of the COVID-19 pandemic. Because of the effectiveness of COVID-19 vaccines in patients with cancer, we carried out a subgroup analysis in patients with complete vaccination and found, in univariable analysis, a significant decrease in COVID-19 hospitalization and ICU admission, along with improved OS. Of note, the complete vaccination rate in our cohort was only 7%, which did not allow assessment of the effect of vaccination on COVID-19 mortality. Our findings are consistent with previous studies that showed the effect of vaccination on humoral and T-cell-mediated responses or on COVID-19 infection; however, clinical outcome was explored in limited numbers of vaccinated patients. Vaccination against COVID-19 proves to be an effective strategy for protecting vulnerable populations, including patients with cancer, and boosters could further increase its benefit. The OnCovid study recently showed a reduction in the infection's morbidity and mortality in patients with breast cancer with complete vaccination. It is important to highlight that in our study complete vaccination had a significant protective effect on COVID-19 hospitalization, ICU admission, and OS only in univariable analysis; this effect, however, was lost when adjusting for socioeconomic and demographic parameters. In this respect, we report that complete vaccination rates were significantly higher in Northern/Western Europe and in upper-middle-income countries, which confirms previous observations on the health care utilization and health outcomes of populations during the COVID-19 pandemic, especially in terms of morbidity and mortality. Indeed, gross disparities in hospitalization rates and mortality between racial/ethnic groups and geographical locations in the context of COVID-19 highlighted the shortcomings of public health strategies in achieving the best health for all. For instance, several studies have shown disproportionate adverse effects of COVID-19 on African Americans. Progressive pandemic planning in the next decade must be inclusive, aware of the social gradient of risk, and reflect a whole-of-society approach to risk reduction. In addition, our results support the importance of SARS-CoV-2 vaccination in cancer patients. Indeed, vaccine hesitancy was present in all populations, including patients with cancer, as demonstrated by a large meta-analysis that found only 59% vaccine acceptance.
Limitations of our study include potential selection bias due to the observational nature of our registries, missing values, enrichment with mainly severe COVID-19 cases, and heterogeneity in patient management and data collection across individual registries and institutions. Despite these limitations, with >3000 cases from real-world electronic health record data included, our study allowed for a robust statistical analysis, partly mitigating its intrinsic selection bias. In conclusion, we showed a decrease in COVID-19 hospitalization and ICU admission rates across the pandemic phases. Complete vaccination had a protective effect against severe COVID-19, but this effect did not remain significant when adjusting for other socioeconomic and demographic parameters. Our study highlights factors that significantly affect COVID-19 outcomes, providing actionable clues for further reducing mortality. Collectively, our results have risk stratification and resource use implications that may be informative for future public health challenges experienced by patients, clinicians, and health care systems.
Trends and characteristics of fertility-sparing treatment for atypical endometrial hyperplasia and endometrial cancer in Japan: a survey by the Gynecologic Oncology Committee of the Japan Society of Obstetrics and Gynecology
Due to a tendency toward delayed marriage, the age at pregnancy is rising in Japan . This problem overlaps with the occurrence of gynecological cancer in the reproductive age group. Recently, the number of endometrial cancer (EC) patients younger than 40 years has been increasing . There are approximately 500 patients with EC younger than 40 years per year in Japan. Of them, 77% were stage IA . Using the National Cancer Database, Ruiz et al. reported that the proportion of endometrial cancer patients treated with progestin therapy increased from 2.4% in 2004 to 5.9% in 2014. The fertility-sparing (FS) treatments recommended in the National Comprehensive Cancer Network (NCCN) guidelines include hormone therapy (medroxyprogesterone [MPA] and megestrol acetate [MA]) and levonorgestrel-releasing intra-uterine devices (LNG-IUDs) . Japanese treatment guidelines for EC mention that FS treatment is a treatment option for young patients with atypical endometrial hyperplasia (AEH) and EC (endometrioid carcinoma grade 1 [ECG1] with the lesion limited to the endometrium) . However, since a variety of FS treatment regimens have been widely adopted, the current trends in FS treatment are relatively unknown. To elucidate current trends in FS treatment, a questionnaire-style survey regarding FS treatment was performed in Japan Society of Obstetrics and Gynecology (JSOG) gynecological cancer registered institutions. In addition, this study was performed to identify factors correlated with the clinical response to FS treatment, disease recurrence, pregnancy outcome, and any deviations from the eligibility criteria by analyzing the detailed information of each patient, which is difficult to collect from meta-analyses. In our view, this is the largest-scale evaluation to date in a retrospective nationwide study of FS treatment for AEH and EC patients.

1. Study design and patients

This study was conducted by the Committee on Gynecologic Oncology of JSOG in the 2017–2018 fiscal year. A nationwide, retrospective questionnaire-style survey was performed. The survey items included patient demographics (age, body mass index [BMI], complications, family history, desire to have children, etc.), examinations for diagnosis, pathological diagnosis, regimen of FS treatment, adverse events (AEs), presence of myometrial invasion (MI), maintenance therapy (oral contraceptives/low-dose estrogen progestin, estrogen + progestin, or progestin only), outcomes of initial and recurrent FS treatment, and pregnancy outcomes. AEs were assessed using the Common Terminology Criteria for Adverse Events (CTCAE; ver. 4.0; National Institutes of Health, Bethesda, MD, USA). The data of patients with AEH and EC receiving FS treatment between January 2009 and December 2013 were collected from JSOG gynecological cancer registered institutions. These institutions consisted of medical training institutions, cancer specialty hospitals, and local core hospitals. This study was approved by the Institutional Review Board (IRB) of Kurume University and JSOG (IRB registration No. 17310/UMIN No. 000034254). The present study was conducted after obtaining approval from each IRB.

2. Statistical analysis

The endpoint of this study was to examine the current trends in FS treatment for AEH and ECG1 patients in Japan. The secondary objective was to examine the associations of clinical characteristics with the pathological complete remission (CR) rate, recurrence-free survival (RFS), and pregnancy and live birth rates.
RFS was measured from the end date of the initial FS treatment to the date recurrence was confirmed. Time to complete remission (TTCR) was measured from the day initial treatment was started to the day CR was achieved. TTCR was classified into two groups (TTCR <6 and ≥6 months). Survival curves were calculated using the Kaplan-Meier method, and the curves were compared using the log-rank test. A Cox proportional hazards model and logistic regression analysis were used for multivariate analysis. Frequency distributions were compared using the χ² test, unless the expected frequency was <5, in which case Fisher's exact test was used. All statistical analyses were performed using JMP software (version 14; SAS Institute, Cary, NC, USA). A value of p<0.05 was considered significant.
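A minimal sketch of the frequency-comparison rule just stated (the survey itself was analyzed in JMP; this R helper and the hypothetical dataset `fs` with its columns are for illustration only):

```r
## Chi-squared test, falling back to Fisher's exact test whenever any
## expected cell count is below 5, as per the analysis plan above.
compare_freq <- function(tab) {
  expected <- suppressWarnings(chisq.test(tab, correct = FALSE)$expected)
  if (any(expected < 5)) fisher.test(tab) else chisq.test(tab)
}
compare_freq(table(fs$histology, fs$cr))  # placeholder column names
```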
We collected the data of 413 patients from JSOG gynecological cancer registered institutions, consisting of medical training institutions (n=262, 63%), cancer specialty hospitals (n=58, 14%), and local core hospitals (n=93, 22%). A total of 102 institutions had eligible patients. Finally, the clinical information (103 questions/patient) of 413 patients was collected.

1. Patients' characteristics

The median follow-up time was 2,290 days. Patients' median age and BMI were 35 years and 24.5 kg/m², respectively. Most of the histological types were ECG1 (54.7%) and AEH (41.4%), although there were nine ECG2 patients. Major concomitant conditions were diabetes mellitus (DM) (9%), hypertension (8.7%), and polycystic ovarian syndrome (PCOS) (23.5%). We confirmed a family history of cancer (Lynch syndrome suspected clinically) in 30 patients (7.3%). Twenty-six percent of patients had a history of infertility treatment. Thirty-six percent of patients had atypical genital bleeding at the first visit to the hospital. Fifty-two percent of patients had irregular menstrual cycles.

2. Initial treatment

The examinations for pre-initial pathological diagnosis included dilatation & curettage (D&C) (80.6%), endometrial biopsy by hysteroscopy (11.9%), blind endometrial biopsy (8.2%), and endometrial cytology (0.2%). MPA was used in 98.8% for initial treatment. Of the 408 patients treated with MPA, 360 (87.2%) used MPA alone, and 48 (11.6%) received MPA combined with metformin. The main dosages of MPA were 600 mg (79.4%) and 400 mg (18.9%). The dosages of metformin varied from 750 to 2,500 mg. AEs were observed in 4.4% of patients. Grade 3 massive genital bleeding was observed in 2 patients (0.49%). One of these patients underwent hysterectomy to control genital bleeding and discontinued FS treatment. A total of 253 patients (61.3%) took low-dose aspirin during FS treatment to prevent thrombosis. Grade 3 thrombosis was observed in only 1 patient (0.24%), even though she had been taking low-dose aspirin. Body weight gain (20% or more) was observed in three patients (0.73%). There were no grade 4 AEs.

CR after initial treatment was achieved in 78.2% (323/413) of patients. To accurately determine the response to FS treatment in eligible patients, 360 patients who matched the common FS treatment criteria of several guidelines (pathology: AEH or ECG1 without MI; treatment: MPA 400 or 600 mg) were selected. Of these 360 selected patients, CR was achieved in 79.1% (285/360). The response rates to each initial treatment (MPA 400 mg, MPA 600 mg, and MPA + metformin) were higher in AEH patients (78.4%, 83%, and 95.5%) than in ECG1 patients (65.4%, 76.2%, and 83.3%). We performed univariate and multivariate analyses to examine the relationship between clinicopathological factors (age, BMI, PCOS, family history of cancer, DM, histology, treatment, and treatment period) and the clinical response after initial treatment of AEH and ECG1 patients. On univariate analysis, there were significant differences in BMI (<25 vs. ≥25 kg/m²; p=0.028), treatment (MPA 400 mg vs. MPA + metformin; p=0.003), and treatment period (<6 vs. ≥6 months; p=0.002). There were no significant differences, but some trends were seen, in histology (AEH vs. ECG1; p=0.061) and treatment (MPA 600 mg vs. MPA + metformin; p=0.088, and MPA 400 or 600 mg vs. MPA + metformin; p=0.057, respectively). On multivariate analysis, BMI ≥25 kg/m² (hazard ratio [HR]=2.24), ECG1 (HR=2.28), and a treatment period <6 months (HR=2.5) were related to a poor response to initial FS treatment.
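A sketch of the kind of multivariable model behind the CR analysis above (logistic regression for a binary CR outcome, with the covariates dichotomized as in the paper; the paper reports the resulting estimates as HRs). The dataset `fs` and its column names are hypothetical:

```r
## Multivariable logistic regression for CR after initial FS treatment.
fit_cr <- glm(cr ~ I(bmi >= 25) + histology + treatment + I(tx_months >= 6),
              data = fs, family = binomial)
summary(fit_cr)
exp(coef(fit_cr))  # odds ratios per factor
```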
3. RFS in patients with AEH and ECG1

Univariate and multivariate analyses were performed on clinicopathological factors and RFS in AEH and ECG1 patients without MI treated with MPA 400 or 600 mg. On univariate analysis, there were significant differences in BMI (<25 vs. ≥25 kg/m²; p=0.021), histology (AEH vs. ECG1; p=0.016), TTCR (<6 vs. ≥6 months; p=0.007), maintenance therapy (− vs. +; p<0.001), and pregnancy (− vs. +; p<0.001). On multivariate analysis, ECG1 (HR=1.73), TTCR ≥6 months (HR=1.51), no maintenance therapy (HR=2.1), and no pregnancy (HR=2.8) were associated with a significantly higher risk of recurrence in patients with AEH and ECG1 treated with MPA 400 or 600 mg. shows RFS curves by histology, TTCR, maintenance therapy, and pregnancy. The overall recurrence rate (ORR) was 39.5% in AEH and 55.0% in ECG1. In terms of TTCR, the ORR was 38.5% with TTCR <6 months and 55% with TTCR ≥6 months. In patients on maintenance therapy, the ORR after achieving CR with initial treatment was lower (37.4%) than in patients not on maintenance therapy (57.6%). Furthermore, we found a lower ORR in patients who conceived than in those who did not (29.3% vs. 57.3%). To examine the additional therapeutic effect of metformin, we reanalyzed the clinicopathological factors and RFS in patients treated with MPA + metformin or MPA (400 or 600 mg). On multivariate analysis, there were significant differences for MPA 400 mg versus MPA + metformin (HR=0.33; p=0.026) and MPA 600 mg versus MPA + metformin (HR=0.36; p=0.021), in addition to histology, maintenance therapy, and pregnancy. However, there was no significant difference in TTCR (HR=1.38; p=0.093), which had been significantly different in patients treated with MPA 400 or 600 mg.

4. Pathological discrepancies in patients who did not achieve CR after initial treatment

Among patients who did not achieve CR after initial treatment, it was suspected that there were some pathological discrepancies between before and after initial treatment. Therefore, we examined pathological discrepancies between before initial treatment (diagnosed by D&C or other methods) and after initial treatment (diagnosed by hysterectomy). Surprisingly, the rate of pathological discrepancy was 81.3% (13/16) in AEH, but lower in ECG1 (19.6%, 9/46). The rate of diagnostic discrepancy in AEH (n=16) differed by the method of pathological examination (D&C or not). The rate of diagnostic discrepancy in AEH was 75.0% (9/12; AEH: 3, ECG1: 9) with D&C, and 100.0% (4/4; ECG1: 1, ECG2: 2, ECG3: 1) without D&C (endometrial biopsy). Of note, among patients (AEH + ECG1) who did not achieve CR after initial treatment, 8.1% (5/62) had high-grade carcinoma (endometrioid carcinoma grade 3 [ECG3]: 2, clear cell carcinoma: 1, dedifferentiated carcinoma: 2).

5. Cases deviating from the eligibility criteria (patients with MI and ECG2)

In this survey, there were 19 patients (4.6%, 19/413) suspected of having MI before initial treatment on pelvic magnetic resonance imaging (MRI). CR rates were 42.1% (8/19) with MI and 74.6% (132/177) without MI. There was a significant difference in the CR rates between the 2 groups (relative risk=2.28; 95% confidence interval=1.4–3.6; p=0.005). The effectiveness of FS treatment in ECG2 was uncertain. In this survey, there were nine patients with ECG2. The CR rate after initial treatment was 88.9% (8/9), and the recurrence rate among those who achieved CR after initial treatment was 55.6% (5/9).
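To make the MI comparison above concrete, the 2×2 table implied by the reported counts can be re-tabulated; note that the relative risk of 2.28 corresponds to the risk of not achieving CR. An illustrative R check:

```r
## CR by suspected MI, using the counts reported above.
tab <- matrix(c(11, 8,     # suspected MI: no CR, CR  (19 patients)
                45, 132),  # no MI:        no CR, CR  (177 patients)
              nrow = 2, byrow = TRUE,
              dimnames = list(MI = c("yes", "no"), CR = c("no", "yes")))
fisher.test(tab)            # exact test of the CR difference
(11 / 19) / (45 / 177)      # relative risk of non-CR, approx. 2.28
```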
6. Pregnancy outcomes

A total of 217 patients desired children after achieving CR following initial treatment. Of these, 76% underwent infertility treatment, whereas 24% did not. In patients who received infertility treatment, the pregnancy rate was 58.5% and the live birth rate was 50.6%. In patients who did not receive infertility treatment, the pregnancy and live birth rates were only 11.5% and 7.7%, respectively. There were significant differences in the pregnancy and live birth rates between the infertility-treatment and no-infertility-treatment groups (p<0.010).

7. Treatments for patients with recurrence

Treatments were examined for recurrent AEH and ECG1 patients without MI before initial treatment (n=126) who had achieved CR after initial treatment. Most recurrences occurred in the endometrium (98.4%, 124/126), with only 2 cases outside the uterus (1.6%, 2/126). The treatments for patients with recurrence were repeated FS treatment (42.9%, 54/126), surgery (40.5%, 51/126), repeated FS treatment followed by surgery (14.3%, 18/126), and unknown (2.4%, 3/126). Furthermore, we examined the efficacy of repeated FS treatment (MPA or MPA + metformin) for recurrent AEH and ECG1. The CR rates in recurrent AEH patients were 91.7% (22/24) with MPA and 100% (6/6) with MPA + metformin. The CR rates in recurrent ECG1 patients were 90.9% (10/11) with MPA and 100% (1/1) with MPA + metformin.

8. Occurrence of ovarian cancer

There were 15 cases of simultaneous ovarian cancer (3.6%) and one case of peritoneal cancer (0.24%). Ovarian cancer was detected before FS treatment in 7.1%, during FS treatment in 20%, and after FS treatment in 73.3% of cases. Sixty-seven percent were diagnosed as primary ovarian cancer, whereas 13.3% were diagnosed as metastatic cancer from EMCA. The predominant pathology was endometrioid adenocarcinoma (85.7%), the same as the endometrial lesions.

9. Prognosis

The prognosis after FS treatment was examined. The rates of patients with no evidence of disease (NED), alive with disease (AWD), and died of disease (DOD) were 95.6% (395/413), 2.7% (11/413), and 1.5% (6/413), respectively. The pathology before initial treatment of the patients with DOD (n=6) was AEH in two cases and ECG1 in four cases. All of them were diagnosed by D&C before initial treatment. One patient (17%) was suspected of having MI before initial treatment. Two AEH patients (33.3%) achieved CR after initial treatment, although four ECG1 patients (66.7%) had PD. Five patients (83.3%) underwent surgery; all of their surgical pathology specimens showed high-grade carcinoma (carcinosarcoma: 2, ECG3: 1, clear cell carcinoma: 1, dedifferentiated carcinoma: 1).
This study mainly focused on examining the factors that correlated with the response to initial treatment and the risk factors for recurrence in patients who achieved CR after initial treatment. Furthermore, we specifically examined the pregnancy and live birth rates with and without the introduction of fertility treatment, the concurrent occurrence of ovarian cancer, the histological discrepancy before and after treatment in AEH, and deviations from the eligibility criteria. In terms of the response to initial treatment, we confirmed almost the same remission rate as previous reports. Li et al. reported that histology and BMI were significantly associated with a higher likelihood of achieving CR, although age, PCOS, and hormonal agents did not affect CR. In our study, the clinicopathological factors significantly correlated with CR were histology (AEH), BMI (<25 kg/m²), and treatment period (≥6 months). There is no consensus on the optimal duration of the treatment period to date. Based on a previous prospective study in Japan in which the treatment period was set at 6 months, it is possible that a certain number of stable disease and progressive disease cases also completed treatment at 6 months. Wang et al. examined the treatment period in 68 patients (31 AEH and 37 ECG1) to see whether a more extended treatment period affects oncologic results, causing disease progression or recurrence. They found cumulative complete response rates of 59% (≤6 months), 76% (6–9 months), and 95.5% (>9 months). In the present study, we also confirmed that a more extended treatment period (≥6 months) was correlated with CR. Further investigation of optimal treatment periods is warranted. Regarding risk factors for recurrence in patients who achieved CR after initial treatment, we confirmed on multivariate analysis that ECG1, TTCR ≥6 months, maintenance therapy (−), and pregnancy (−) were associated with a significantly higher recurrence risk. Even in responders, the rate of recurrence in EC was high. The results of four meta-analyses showed that 31%–41% of ECG1 patients develop recurrence after the initial response, and the potential risk factors associated with recurrence were BMI ≥25 kg/m², PCOS, and EC. Furthermore, TTCR, family history of cancer, diabetes, pregnancy, progestin type, and lack of maintenance therapy have also been reported in other studies as independent risk factors for recurrence. This study therefore confirmed recurrence risk factors nearly identical to those of previous studies. We confirmed the benefit of maintenance therapy for RFS; 55% of patients without maintenance therapy recurred within 5 years, whereas only 35% of patients on maintenance therapy recurred. The KGOG study also emphasized the importance of maintenance therapy in preventing recurrence. Maintenance therapy should be recommended for patients who do not express the desire to have children at the time of CR after initial FS treatment.
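As an illustration of how such an RFS comparison is typically drawn and tested, the sketch below simulates two synthetic cohorts calibrated to roughly the 5-year recurrence rates reported above (35% with and 55% without maintenance therapy) and compares them with Kaplan-Meier curves and a log-rank test. It assumes the Python lifelines and numpy packages; the data are simulated for illustration only and are not the study data.

# Illustrative only: synthetic recurrence-free survival comparison for
# maintenance therapy (+) vs. (-), calibrated to ~35% and ~55% recurrence
# by 60 months.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

def simulate(n, five_year_recurrence, follow_up=60.0):
    # Exponential recurrence times hitting the target 5-year recurrence
    # rate, administratively censored at 60 months of follow-up.
    rate = -np.log(1 - five_year_recurrence) / follow_up
    times = rng.exponential(1 / rate, n)
    events = (times <= follow_up).astype(int)
    return np.minimum(times, follow_up), events

t_m, e_m = simulate(150, 0.35)  # maintenance therapy (+)
t_n, e_n = simulate(150, 0.55)  # maintenance therapy (-)

kmf = KaplanMeierFitter()
ax = kmf.fit(t_m, e_m, label="maintenance (+)").plot_survival_function()
kmf.fit(t_n, e_n, label="maintenance (-)").plot_survival_function(ax=ax)

result = logrank_test(t_m, t_n, event_observed_A=e_m, event_observed_B=e_n)
print(f"log-rank p = {result.p_value:.4f}")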
In this study, 48 (11.6%) patients used metformin combined with MPA as initial treatment. Metformin increases insulin sensitivity and activates the AMPK pathway, counteracting the PI3K/mTOR/Akt/FOXO1 signaling pathway that promotes endometrial proliferation. We found a better response rate to initial treatment and a reduced recurrence risk, consistent with previous reports. When we analyzed RFS including the MPA + metformin group, TTCR ≥6 months was not a risk factor for RFS, although it was a significant risk factor in patients treated with MPA only. The addition of metformin to MPA might decrease the recurrence risk associated with longer TTCR. Because only a small number of patients were treated with metformin, we could not draw conclusions about the additional effect of metformin from this study. This question may be answered by the ongoing prospective, randomized study of MPA + metformin versus MPA only (FELICIA trial). Regarding the pregnancy and live birth rates in the present study, those with infertility treatment (58.5% and 50.6%) were better than those for spontaneous pregnancy (11.5% and 7.7%). A relatively high pregnancy rate was achieved not only with assisted reproductive technology (ART) but also with timed intercourse and intrauterine insemination. In a previous meta-analysis, the live birth rate of patients who had ART was 39.4%, whereas it was 14.9% in patients who tried to conceive spontaneously. In the present study, the first- and second-year recurrence rates in ECG1 were 25% and 36%, respectively. Therefore, the recommended timing of pregnancy is early after achieving CR to initial treatment. The implementation of in vitro fertilization techniques increases the chance of conception and may also decrease the time to conception. The American Society of Clinical Oncology recommends that physicians ‘should refer patients who express an interest in fertility preservation to reproductive specialists’.
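The significance of the reported difference in pregnancy rates can be checked with Fisher's exact test. The sketch below uses approximate counts reconstructed from the reported percentages (about 165 of the 217 patients received infertility treatment, of whom roughly 96 conceived, versus roughly 6 of the 52 who did not); these counts are illustrative assumptions rather than the authors' raw data.

# Approximate check of the pregnancy-rate comparison (p<0.010 reported).
# Counts below are reconstructed from percentages and are assumptions.
from scipy.stats import fisher_exact

pregnant = [96, 6]                 # with vs. without infertility treatment
not_pregnant = [165 - 96, 52 - 6]  # remaining patients in each group

odds_ratio, p_value = fisher_exact([pregnant, not_pregnant])
print(f"OR = {odds_ratio:.1f}, p = {p_value:.1e}")  # consistent with p < 0.010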
The rate of ovarian cancer during follow-up was reported to be 3.6% in a previous meta-analysis. However, detailed patient information (i.e., primary or metastatic, histology, and response to initial treatment) was poorly reported, because the original primary reports did not include it. In the present survey, there were 15 cases of simultaneous ovarian cancer (3.6%). Of these, 67% were diagnosed as primary ovarian cancer, whereas 13% were diagnosed as metastatic cancer from EMCA. Most ovarian cancers were detected after FS treatment (73.3%). In addition, most patients (80%) achieved CR to initial treatment, and the histology of the ovarian cancer was endometrioid carcinoma (85.7%), the same as the endometrial cancer. Two of the patients diagnosed with primary ovarian cancer had suspected Lynch syndrome (20%, 2/10). The details of the etiology of ovarian cancer are unknown because genetic testing was not performed in this study. In the future, genetic testing of cases with concurrent ovarian cancer may help to elucidate these factors. Throughout the FS treatment period, gynecologists need to be very careful about the simultaneous occurrence of ovarian cancer. If ovarian tumors are detected during or after FS treatment, patients should be carefully examined by contrast-enhanced pelvic MRI and whole-body contrast-enhanced CT to rule out or confirm ovarian cancer and metastasis.

In this nationwide survey, approximately 10% of patients (those with ECG2 or MI) did not meet the standard FS treatment eligibility criteria. This survey revealed that such originally off-label cases are in fact being treated. We found that the efficacy of initial FS treatment in ECG2 patients was equivalent to that in ECG1, but lower in MI cases than in those without MI. However, these results are based on only a few cases, and adherence to the standard indications for FS treatment remains essential. We found that patients with AEH who failed initial FS treatment had more histologic discrepancies. In particular, pathological discrepancies between pre- and post-treatment histology were more common when the pre-treatment pathological diagnosis was made without D&C. This histological discrepancy due to differences in testing methods was also reported in the KGOG study. In our study, 8.1% of the patients with initial FS treatment failure had high-grade carcinoma. It is important to perform D&C for histological confirmation pre- and post-treatment. Especially in initial FS treatment failure cases, we need to thoroughly check for hidden high-grade carcinoma. The limitations of the present study are that it was a retrospective, questionnaire-style survey and that it was performed in a single country. The optimal progestin regimen remains to be elucidated, although MPA and MPA + metformin were the main regimens analyzed in the present study. The strength of the present study is that it was a nationwide, multi-center survey with likely the largest sample size among comparable reports. Although the number of patients was smaller than in a meta-analysis, the uniform questionnaire data, which covered the patients' detailed background characteristics, made it possible to examine accurately the potential predictors of response to initial therapy, the significant risk factors for recurrence, the patients with concurrent ovarian cancer, and the pregnancy outcomes for each assisted reproduction treatment. In summary, the present study provides further insight into the current trends of FS treatment in AEH and ECG1 patients who want to have children. Patients who choose FS treatment should be informed of the relatively low live birth rate, the high chance of recurrence, and the possibility of ovarian cancer, which could be life-threatening. Before FS treatment, a detailed pathological examination by D&C is needed to rule out high-grade histology, and MRI is needed to rule out MI and concurrent ovarian cancer. If CR is achieved, gynecologists need to consult with reproductive specialists about infertility treatment, including assisted reproductive treatments, to maximize the chances of a live birth. The trends in FS treatment shown in this study are clinically meaningful and may influence the future direction of investigation.
Challenges on sexual health communication with secondary school learners, Limpopo province
7052bf35-626a-4bb1-9c30-d2cc2b484d14
10157411
Health Communication[mh]
Traditionally, sexuality is discussed in hushed tones in proverbs and is reserved for adults. On the other hand, adolescents require information about their sexuality to make informed decisions about their sexual behaviour (Baku et al. :1). Science-based, realistic, nonjudgemental information about sex and relationships is provided in sexuality education at an age-appropriate, culturally relevant level (Kemigisha et al. :2; Leung et al. :2). Promoting sexuality education through parent–child communication is a positive and effective strategy for achieving long-term behavioural change and thus a reduction in unintended pregnancy. Sexuality education can reduce sexually transmitted infections (STIs), early sexual debut, multiple sex partners and inconsistent condom use (Bonjour & Van der Vlugt :14; Daminabo, Teibowei & Agharandu :65; Pleaner et al. :4). However, parents do not often discuss sexuality with their children because they find such communication embarrassing and uncomfortable (Rodgers et al. :628). Globally, parent–child communication is recognised as a crucial strategy to reduce sexual health risks (Othman et al. :313). Sexuality communication plays a vital role in preparing teenagers for a safe, productive and fulfilling life. Effective and positive communication between parents and their children about sexual health helps adolescents to establish individual values and make sexually healthy decisions (Venketsamy & Kinear :2). A number of studies have shown that learners who have not been taught sexuality are more likely to engage in high-risk sexual behaviours than those who have. Learners who received sexuality education were less likely to have several sexual partners, participate in unprotected sex, or become pregnant as teenagers. Additionally, they frequently use condoms or other contraceptive methods (Ram, Andajani & Mohammadnezhad :2). Despite the evidence that parent–child communication is globally recognised and that parents are acknowledged as the primary source of information that can best influence decision-making responsive to the adolescent's needs, it remains a hurdle because of various sociocultural and religious challenges, such as a lack of communication skills, low self-efficacy, ignorance, a lack of concrete information and parental underestimation of the child's sexual behaviour (Aventin et al. :3). Learners are explorative and, with the advent of technology and social media content, are often misinformed, as they cannot tell which information is correct. Döring ( :9) indicates that learners may need help and support with sexuality information and strategies to build their confidence. In African countries, culture turns out to be a barrier that prevents parents from openly discussing sexuality with their children. Modise ( :85) indicated that parents felt uncomfortable and believed that discussing sexuality was taboo. However, some parents, especially mothers, did discuss sexual and reproductive health with their children. This indicates that women, as primary caregivers, are closer to their children, while men are culturally detached from their parental role, as they are seen only as providers.
Research suggests that children's age and gender do predict the occurrence of sexual communication with parents; parents are more likely to share messages with female than male teens, focusing on abstinence and resisting a partner's advances, while other research shows that teen girls are more likely than boys to talk with family members about sex (Grossman, Jenkins & Ritcher :6). Parents who broached the subject preferred to address a few topics such as abstinence, menstruation and human immunodeficiency virus (HIV) and acquired immunodeficiency syndrome (AIDS). Dioubaté et al. ( :5–6) highlighted that condom use and contraception were hardly discussed, because parents think that talking about such issues promotes sexual immorality or encourages children to engage in sexual relationships (Shams et al. :5).

Problem statement and research objectives

Sexuality education is regarded as a cultural taboo subject by most African people, especially among black communities. Society feels that it is not appropriate to open such conversations with children; however, learners still get information from friends and social media. The literature has indicated that the timing of education, parents' lack of knowledge and their reluctance to communicate with learners have resulted in sexual ill-health and risky sexual behaviour. The South African government recognised the importance of sexuality education and introduced life orientation in schools, because parents were reluctant to communicate with learners and shifted the responsibility to the schools. However, the status quo remains, because teachers are parents from the same communities, affected by the same sociocultural and religious barriers, and fail to provide comprehensive sexual education. The involvement of parents has the potential to influence adolescents' decision-making, and parents are often referred to as the primary and preferred source of sexual health information. Parents can play an important role in supervising learners' activities as primary caregivers by conveying appropriate sexual health information in a respectful manner. In relation to sexual health practices, role models can exert a considerable influence on adolescents' attitudes, values and beliefs. For that reason, the researchers aimed to determine parents' views regarding challenges of sexual health communication among secondary school learners in the Limpopo province. The objectives of the study were to explore and describe parents' views regarding challenges of sexual health communication among secondary school learners in the Limpopo province.
This study employed a qualitative, explorative, descriptive and contextual design. The study aimed to explore and describe parents' views regarding challenges of sexual health communication among secondary school learners in the Limpopo province. The design was chosen because it provides an extensive discovery of information about promoting sexuality education for Grade 8 learners and was therefore found suitable for this study.

Setting

Mopani and Vhembe Districts in the Limpopo province were the study areas. Mopani District is located on the eastern side of the Limpopo province, bordering Mozambique, while Vhembe District is located on the northern side of the Limpopo province, bordering Zimbabwe (municipalities.co.za). These two districts were purposefully selected because of the high prevalence of sexual health problems in the districts, which might be attributed to poor sexual health communication. The researcher approached public secondary schools with the aim of accessing a population of parents through the school governing bodies and committees.

Population and sampling (participants)

The target population was all parents with learners in the schools where the study was conducted. Nonprobability purposive sampling was employed to recruit parents who were willing to be part of the study, with the assistance of the principals and school governing bodies. The sample consisted of 56 participants from the eight schools within the two districts, resulting in a total of five focus groups.

Data collection

Data were collected from selected schools after making appointments with participants. Each meeting lasted 1 h – 1.5 h. A conducive climate was created for everyone to feel free to share their challenges, and the seating was arranged in a semicircle. Each focus group had 8–12 members. Data were collected by means of focus group discussions, using an unstructured interview method. The purpose of using this method was to understand parents' views regarding challenges of sexual health communication among secondary school learners in the Limpopo province. Different methods were used to enhance the trustworthiness of the data, such as taking notes, observation and using an audiotape to reduce bias (De Vos et al. :361). Each focus group started with a central question: ‘Can you share your challenges regarding sexual health communication with your children?’ The central question was asked in all five focus group discussions, followed by probing questions, because the researcher wanted to understand more about the parents' challenges regarding sexual health communication.
Data saturation was achieved when no new information was forthcoming from the focus group discussions and there was no substantial addition to the codes and themes being developed (Brink, Van de Walt & Rensburg :193). The focus group interviews were terminated after a maximum of two visits for most groups and three visits for some. Data were collected over a period of three months (May 2019 – July 2019).

Data analysis

The data were analysed conceptually, which included reading, coding and developing themes (Brink et al. :193). Raw data were transcribed verbatim, including observational notes collected from the focus group discussions. The data were condensed and organised into themes and subthemes to make sense of them. The researcher approached an experienced qualitative data coder to analyse the data independently, after which agreement was reached. A literature control was presented after the data were analysed to compare the findings of this study (Brink et al. :193).

Measures to ensure trustworthiness

The present study used the model of Lincoln and Guba to ensure trustworthiness. This model is characterised by four strategies for ensuring trustworthiness: credibility, transferability, dependability and confirmability. Credibility was ensured through prolonged engagement with the participants, as more than one visit was made for the focus group discussion interviews, and during transcription of the data, when clarity was needed, the researcher contacted the participants for further clarification; furthermore, field notes and observational notes were captured to enhance credibility. Transferability is the extent to which the results can be transferred to other contexts or settings; this was ensured by the purposive sampling technique, making sure that the selected participants were representative of the different views of parents across the different settings in the Limpopo province. To enhance the dependability of the data, an audit trail was used, in which a track record of the data collection process was developed; during analysis, field notes and observations written during data collection were compared with the data and corroborated with the findings of this study (Brink et al. :172). Confirmability is the extent of the confidence that the results would be confirmed or corroborated by other researchers; this was achieved through reflexivity, where weekly meetings with the promoters and independent coder were held after data analysis to reflect on the transcripts and themes. Feedback from the promoters and independent coder confirmed that all the quotes used in the participants' transcripts supported the identified themes.

Ethical considerations

Permissions

Ethical clearance was granted by the University of Venda Human and Clinical Trial Research Ethics Committee (HCTREC) (reference number SHS/19/PDC/37/2410). The Provincial Department of Education, Limpopo province, and the district managers and principals granted permission. It was agreed that the researcher would not visit schools during examination time and would not disrupt classes. A written informed consent form was obtained from each participant.

Consent

Consent is morally justified primarily on the basis of autonomy, as research participants' autonomy can be protected and supported through the consent process. A brief explanation of the research purpose, and of the fact that participants were not forced to participate, was given to the participants. Based on the information given to them, they made a free choice.
Confidentiality and anonymity

Once the researcher has information, confidentiality pertains to what the researcher does with it, specifically how much he or she discloses to others. The researcher gave an assurance that data would be reported anonymously; anonymity, in contrast, is concerned with the attribution of information. Participants were informed that the researchers would maintain their anonymity and would not report actual names or other identifying information. There is no control over participants breaking internal confidentiality in focus groups, but the researcher relied on ground rules and adherence to consent procedures: when explaining the purpose of the study, the researcher also explained that no information should be discussed outside the focus group meeting.
The following themes emerged during data collection: communication concerns, role shifting in imparting sexuality education and poor parent–child relationships. Eight subthemes also emerged. The narratives of parents' views are presented as direct quotations from participants, and the themes and subthemes are set out in the sections below.

Demographic characteristics of participants

The study included 56 participants in focus groups consisting of parents from the Greater Giyani and Thulamela municipalities. Parents were aged between 35 and 63 years, both female and male. Most of the participants were female (n = 42) and a minority were male (n = 14). The majority of participants were not employed. The highest qualification held by three participants was a degree; 24 participants did not have a Grade 12 certificate. The themes and subthemes identified in the study are discussed in the following sections, supported by direct quotations and the literature.

Theme 1: Communication concerns

Participants expressed that communication concerns inhibited parent–child dialogue about sexuality. However, some participants considered the provision of sexuality education only under certain conditions, while others did not. Communication concerns about promoting sexuality education were further classified into five subthemes, namely: task shifting of responsibilities; cultural barriers; uncertainty of the time or age at which to impart knowledge; fear of embarrassment; and reactive sharing of information after things go wrong.

Subtheme 1.1: Task shifting of responsibilities

In this study, participants indicated that parent–child communication is lacking because they shift their responsibility to teach learners about sexuality.
A parent assigns a family member, such as a spouse, siblings or elders in the family, to discuss information about sexuality with their children. One participant said:

‘My wife is the one who is supposed to talk with children and only report to me [husband] back the child who is naughty.’ (FGD 4 – P7, female, 47 years old)

Another participant concurred:

‘I would rather have another person talk to my child on my behalf. I believe that the other person will be much more open than me because I am afraid the child will disrespect me.’ (FGD 1 – P3, male, 44 years old)

Task shifting has been observed in communities where parents, as primary caregivers, no longer stay with their children and have shifted the responsibility to the grandparents, especially grandmothers.

Subtheme 1.2: Cultural barrier

Participants expressed that cultural barriers are a challenge that restricts parents from communicating with their learners. Participants indicated that it is taboo and insulting for adults to discuss sexuality matters with persons younger than them. One participant said:

‘Culturally, we are not allowed to talk about sex-related issues with young people.’ (FGD 3 – P4, female, 38 years old)

Another participant said:

‘In Venda culture, for us to talk to your child about sex, especially my daughter, is a taboo.’ (FGD 4 – P2, female, 53 years old)

Traditionally, in many African cultures, sexual health communication is considered highly inappropriate; when parents do talk about sexuality matters, they emphasise abstinence without explanations.

Subtheme 1.3: Uncertainty of the time or age at which to impart knowledge

This study revealed that uncertainty about the time or age at which to impart knowledge about sexuality leaves some participants unsure about when to have such a discussion with their children. Participants indicated that it is challenging to impart sexuality matters to Grade 8 learners because they believe that learners are still young and that the information may influence learners to become sexually active. A participant said:

‘I do not know when the right time is and how to start teaching my child.’ (FGD 6 – P2, female, 54 years old)

Another one said:

‘I cannot talk about sex with my children. They are still young. I am unsure if I can start now.’ (FGD 3 – P2, female, 37 years old)

An appropriate age-specific time is difficult to determine, as parents underestimate the sexual activity of their children; recent observations reveal early sexual debuts and incidences of teen pregnancies among the ages of 10–12 years.

Subtheme 1.4: Fear of embarrassment

Parent–child communication is limited by fear of embarrassment and parents' lack of knowledge about sexual matters. Participants pointed out that it is embarrassing to answer sexually intimate questions, such as when the right time is to have an intimate relationship. One participant indicated that:

‘I did not know what to say when my daughter asked me when the right time was to start dating. I felt embarrassed to answer her because it is not culturally accepted. Although I know I should tell her the truth.’ (FGD 5 – P2, female, 56 years old)

Another participant said:

‘Talking about contraception is embarrassing. As a parent, it was difficult for me to explain.’ (FGD 4 – P3, female, 43 years old)

Parents highlighted that sexuality communication is a shameful and embarrassing topic because it is usually associated with humiliation, self-guilt and stigma by society.
Subtheme 1.5: Reactive sharing of information instead of being proactive

Reactive sharing of information is viewed as an obstacle to promoting sexuality education. The study results revealed that parents are hesitant and feel unprepared for, and uncomfortable with, communicating about sexuality with their children. Topics such as intimate relationships, pregnancy and contraception are discussed only after parents realise that their children are sexually active or pregnant. An individual participant, supported by the group, had this to say:

‘The challenge is that we start to talk to a girl child when realising that the child is pregnant. Then the whole family gathers to talk about and with the pregnant child. We wait until the child has become pregnant. We only talk when it is no longer useful.’ (FGD 2 – P1, female, 56 years old)

One participant added that:

‘At our school, a certain pastor was invited to talk with learners after the school management realised that 20 learners were pregnant.’ (FGD 3 – P1, male, 58 years old)

Another participant said:

‘I have communicated with my daughter while she was in grade 6 to see that she is in the puberty stage. We talked about menstruation, how to take care of menstruations, abstinence to prevent pregnancy.’ (FGD 4 – P2, female, 58 years old)

Providing respectful communication in a genuine manner is very important, because learners will then be able to receive such information without feeling judged. However, parents usually comment on issues in a passive manner and only after a sexual problem has occurred.

Theme 2: Role shifting in imparting sexuality education

Participants found it particularly difficult to discuss sexually related matters. Participants acknowledged their role in imparting knowledge about sex education but shifted that role to teachers and other professionals such as nurses and priests. The following subtheme emerged: parents shifting responsibility to school and religious institutions.

Subtheme 2.1: Parents shifting responsibility to school and religious institutions

Shifting roles to schools and churches was cited as a challenge regarding parent–child communication. Some participants believed that the provision of information about sexuality is the role of the teachers. Most participants highlighted a lack of knowledge and skill as a problem that shifted their primary teaching function to teachers. Participants felt there was no reason for them to talk about sexuality at home because learners are taught at school in the subjects of life orientation and life science. An individual participant said:

‘I thought it is the role of the teachers to teach … so they are going to teach children according to the curriculum and age of the learner unlike at home.’ (FGD 5 – P2, female, 35 years old)

Another participant said:

‘Teachers must continue teaching because parents do not have sufficient information about sex education and different types of contraceptives. I do not have that information.’ (FGD 1 – P3, male, 42 years old)

Some participants indicated that it is difficult to have a dialogue with their children because their own parents did not discuss this topic with them as they grew up. An individual participant, supported by the group, said:

‘During my school time, my parents did not tell us anything about sexuality. I was taught about menstruation in school when studying biology.
So, I expect teachers to teach them likewise.’ (FGD 4 – P4, female, 52 years old)

Another participant said:

‘I do not bother myself about teaching my children about sexuality because in our church they conduct workshops to guide youth on how to conduct themselves regarding their sex life and relationships.’ (FGD 3 – P5, female, 38 years old)

Another participant supported this by saying:

‘Children who are being guided at church have good morals. Pastors must emphasise sexuality.’ (FGD 5 – P2, female, 43 years old)

When parents fail to take up their primary teaching role, they shift it to schools and churches, whereas these institutions and parents need to work together to communicate sexual health information to learners. Christian parents tend to relax, thinking that the church will teach their children about sexuality; however, the church emphasises only abstinence to reduce sexual risks, an approach that has failed the learners.

Theme 3: Poor parent–child relationships

Parents play a major role in the lives of their children and should model healthy sexual practices. The literature asserts that children who relate well to their parents usually make good sexual health decisions. Two subthemes emerged: parents' lack of confidence in the subject, and reluctance and avoidance of sexual health discussion.

Subtheme 3.1: Parents' lack of confidence in the subject

Participants indicated a lack of confidence in communicating sexual health information because of poor parent–child relationships. Culturally, mothers are closer to their daughters than their sons, which makes it difficult to broach such a subject. An individual participant said:

‘It is difficult to discuss sexuality. Maybe it is because of my relationship with my son.’ (FGD 4 – P6, female, 42 years old)

Another participant said:

‘I am not confident to confront my children about sexuality education. It is difficult for me to talk or teach my son, but one person whom he relates well with is his elder brother.’ (FGD 2 – P7, male, 37 years old)

Developing good parent–child relationships contributes to positive, open sharing of information; respecting the learner as an individual encourages the learner to develop better sexual health values.

Subtheme 3.2: Reluctance and avoidance of sexual health discussion

Participants verbalised that they are reluctant to communicate about sexuality issues and avoid doing so. Participants highlighted feeling embarrassed when watching television with children when people are kissing. Some said they (parents) changed the channel on such broadcasts:

‘In case of age-restricted shows, we just leave them watching television without supervision to avoid them asking questions.’ (FGD 4 – P2, female, 55 years old)

Another participant added:

‘I avoid making any comment on radio talks or television shows. If we are watching television and something occurs like kissing or sex appears on screen, we look down or try to reach out for a remote to change the channel because we feel embarrassed.’ (FGD 5 – P2, male, 40 years old)

Avoidance and reluctance amount to refusing to take responsibility for parenting, because parents need to control and supervise what their children are watching on media and Internet platforms. Social media and the Internet can negatively influence learners to adopt unhealthy sexual choices when parents do not interrogate media issues with them.
Most of the participants were female ( n = 42) and a minority were male ( n = 14). The majority of participants were not employed. The highest qualification held by three participants was a degree; 24 participants did not have a Grade 12 certificate. Themes and subthemes identified in the study are discussed in the following sections, supported with direct quotations and the literature. Theme 1: Communication concerns Participants expressed that communication concerns inhibited parent–child dialogue about sexuality. However, some participants considered the provision of sexuality education only under conditions, while others did not. Communication concerns about promoting sexuality education were further classified into five subthemes, namely: task-shifting of responsibilities, cultural barriers, the uncertainty of time or age to impart knowledge, fear of embarrassment, increased temptation and communication after things go wrong. Subtheme 1.1: Task shifting of responsibilities: In this study, participants indicated that parent–child communication is lacking because they shift their responsibility to teach learners about sexuality. A parent assigns a family member such as a spouse, siblings or elders in the family to discuss information about sexuality with their children. One participant said: ‘My wife is the one who is supposed to talk with children and only report to me [ husband ] back the child who is naughty.’ (FGD 4 – P7, female, 47 years old) Another participant concurred: ‘I would rather have another person talk to my child on my behalf. I believe that the other person will be much more open than me because I am afraid the child will disrespect me.’ (FGD 1 – P3, male, 44 years old) Task shifting has been observed in communities where parents as primary caregivers are no longer staying with their children and have shifted the responsibility to the grandparents, especially grandmothers. Subtheme 1.2: Cultural barrier: Participants expressed that cultural barriers are a challenge that restrict parents from communicating with their learners. Participants indicated that it is taboo and insulting for adults to discuss sexuality matters with persons younger than them. One participant said: ‘Culturally, we are not allowed to talk about sex-related issues with young people.’ (FGD 3 – P4, female, 38 years old) Another participant said: ‘In Venda culture, for us to talk to your child about sex, especially my daughter, is a taboo.’ (FGD 4 – P2, female, 53 years old) Traditionally, in many African cultures, sexual health communication is considered highly inappropriate when parents talk about sexuality matters but emphasise abstinence without explanations. Subtheme 1.3: Uncertainty of time or age at which to impart knowledge: This study revealed that uncertainty of time or age to impart knowledge about sexuality leaves some participants unsure about the age to have such a discussion with their children. Participants indicated that it is challenging to impart sexuality matters to Grade 8 learners because they believe that learners are still young. The information may influence learners to become sexually active. A participant said: ‘I do not know when the right time is and how to start teaching my child.’ (FGD 6 – P2, female, 54 years old) Another one said: ‘I cannot talk about sex with my children. They are still young. 
I am unsure if I can start now.’ (FGD 3 – P2, female, 37 years old) An appropriate age-specific time is difficult to determine, as parents underestimate the sexual activity of their children; recent observations reveal early sexual debuts and cases of teen pregnancy among children aged 10–12 years. Subtheme 1.4: Fear of embarrassment: Parent–child communication is limited by fear of embarrassment and parents’ lack of knowledge about sexual matters. Participants pointed out that it is embarrassing to answer sexually intimate questions, such as when the right time is to have an intimate relationship. One participant indicated that: ‘I did not know what to say when my daughter asked me when the right time was to start dating. I felt embarrassed to answer her because it is not culturally accepted. Although I know I should tell her the truth.’ (FGD 5 – P2, female, 56 years old) Another participant said: ‘Talking about contraception is embarrassing. As a parent, it was difficult for me to explain.’ (FGD 4 – P3, female, 43 years old) Parents highlighted that sexuality communication is a shameful and embarrassing topic because it is usually associated with humiliation, self-guilt and stigma by society. Subtheme 1.5: Reactive sharing of information instead of being proactive: Reactive sharing of information is viewed as an obstacle to promoting sexuality education. The study results revealed that parents are hesitant and feel unprepared for and uncomfortable communicating about sexuality with their children. Topics such as intimate relationships, pregnancy and contraception are discussed after parents realise that their children are sexually active or pregnant. An individual participant supported by the group had this to say: ‘The challenge is that we start to talk to a girl child when realising that the child is pregnant. Then the whole family gathers to talk about and with the pregnant child. We wait until the child has become pregnant. We only talk when it is no longer useful.’ (FGD 2 – P1, female, 56 years old) One participant added that: ‘At our school, a certain pastor was invited to talk with learners after the school management realised that 20 learners were pregnant.’ (FGD 3 – P1, male, 58 years old) Another participant said: ‘I have communicated with my daughter while she was in grade 6 to see that she is in the puberty stage. We talked about menstruation, how to take care of menstruations, abstinence to prevent pregnancy.’ (FGD 4 – P2, female, 58 years old) Providing genuine and respectful communication is very important because learners will be able to receive such information without feeling judged. However, parents usually comment on issues in a passive manner and only when a sexual problem has occurred. Theme 2: Role shifting in imparting sexuality education Participants found it particularly difficult to discuss sexually related matters. However, participants acknowledged their role in imparting knowledge about sex education but shifted that role to teachers and other professionals such as nurses and priests. The following subtheme emerged: Parents shifting responsibility to school and religious institutions. Subtheme 2.1: Parents shifting responsibility to school and religious institutions: Shifting roles to schools and churches was cited as a challenge regarding parent–child communication. Some participants believed that the provision of information about sexuality is the role of the teachers.
Most participants highlighted a lack of knowledge and skill as a problem that shifted their primary teaching function to teachers. Participants felt there was no reason for them to talk about sexuality at home because learners are taught at school in the subjects of life orientation and life science. An individual participant said: ‘I thought it is the role of the teachers to teach … so they are going to teach children according to the curriculum and age of the learner unlike at home.’ (FGD 5 – P2, female, 35 years old) Another participant said: ‘Teachers must continue teaching because parents do not have sufficient information about sex education and different types of contraceptives. I do not have that information.’ (FGD 1 – P3, male, 42 years old) Some participants indicated that it is difficult to have dialogue with their children because their parents did not discuss this topic with them as they grew up. An individual participant supported by the group said: ‘During my school time, my parents did not tell us anything about sexuality. I was taught about menstruation in school when studying biology. So, I expect teachers to teach them likewise.’ (FGD 4 – P4, female, 52 years old) ‘I do not bother myself about teaching my children about sexuality because in our church they conduct workshops to guide youth on how to conduct themselves regarding their sex life and relationships.’ (FGD 3 – P5, female, 38 years old) Another participant supported by saying: ‘Children who are being guided at church have good morals. Pastors must emphasise sexuality.’ (FGD 5 – P2, female, 43 years old) When parents fail to take up their primary role of teaching, they shift their role to schools and churches, while these institutions and parents need to work together to communicate sexual health information to learners. However, Christian parents tend to relax, thinking that the church will teach their children about sexuality, whereas the church only emphasises abstinence to reduce sexual risks, which has failed the learners. Theme 3: Poor parent–child relationships: Parents play a major role in the lives of their children; they should model healthy sexual practices. Literature asserts that children who relate well with their parents usually make good sexual health decisions. Two subthemes emerged: parents’ lack of confidence in the subject and reluctance and avoidance of the subject. Subtheme 3.1: Parents’ lack of confidence in the subject: Participants indicated a lack of confidence in communicating sexual health information because of poor parent–child relationships. Culturally, mothers are closer to their daughters than their sons, which makes it difficult to broach such a subject. An individual participant said: ‘It is difficult to discuss sexuality. Maybe it is because of my relationship with my son.’ (FGD 4 – P6, female, 42 years old) Another participant said: ‘I am not confident to confront my children about sexuality education. It is difficult for me to talk or teach my son, but one person whom he relates well with is his elder brother.’ (FGD 2 – P7, male, 37 years old) Developing good parent–child relationships contributes to a positive, open sharing of information; respecting the learner as an individual encourages the learner to develop better sexual health values. Subtheme 3.2: Reluctance and avoidance of sexual health discussion: Participants verbalised that they are reluctant and avoid communicating about sexuality issues.
Participants highlighted feeling embarrassed to watch television with children when people are kissing. Some said they (parents) changed the channel during such broadcasts: ‘In case of age-restricted shows, we just leave them watching television without supervision to avoid them asking questions.’ (FGD 4 – P2, female, 55 years old) Another participant added: ‘I avoid making any comment on radio talks or television shows. If we are watching television and something occurs like kissing or sex appears on screen, we look down or try to reach out for a remote to change the channel because we feel embarrassed.’ (FGD 5 – P2, male, 40 years old) Avoidance and reluctance amount to a refusal to take responsibility for parenting, because parents need to control and supervise what the children are watching on media and Internet platforms. Social media and the Internet can negatively influence learners to adopt unhealthy sexual choices when parents do not interrogate media issues with them.
This study revealed the views of parents regarding promoting sexuality education for Grade 8 learners in the Limpopo province. Parents are often embarrassed or feel shy to discuss sensitive topics with their children; hence, they shift the responsibility of communicating with their children about sexuality. Instead, the burden is shifted to a significant other or other family members. This study further highlighted gender differences in how parents navigate difficulties in discussing sexuality education. Evans et al. (2019:182) highlighted that mothers discuss sexuality with the girl child and fathers with the boy child. This implies a need to empower parents in communication skills to avoid shifting responsibility. Children who receive information about sexuality from a parent are more likely to be free to discuss sexual matters than learners who never received information from their parents. Adolescents who are able to communicate with their parents easily are more likely to engage their parents in sexual conversation than adolescents who have trouble talking to them. Klu et al. ( :7) found similar results. The findings of this study cited cultural barriers as a critical obstacle to parent–child communication about sexuality (Yohannes & Tsegaye :4). Talking about sexuality is taboo. The barrier for parents is a perceived social taboo on sexuality discussion and a lack of knowledge about the topic (Mbachu et al. :8; Ram et al. :5). This implies that cultural taboos and cultural beliefs about sexuality are deeply embedded in parents’ lives and obstruct communication. Parents often do not openly discuss the subject because it is culturally sensitive and they lack communication skills, which leaves them unable to discuss it freely with their children. This finding is similar to that of Bikila et al. ( :4), who also showed that parents were not allowed to discuss sexuality.
In contrast, Shumlich and Fisher ( :1118) suggested that clear and unambiguous talks can help reduce sexual risk behaviours and promote healthy adolescent sexual development. The findings further highlighted task shifting to the schools and churches. The study by Mavhandu-Mudzusi and Mhongo ( :11) revealed that some parents believed that external entities, including the educational system, should bear accountability. The findings of this study highlighted that the appropriate age for initiating the discussion was cited as the greatest common barrier, as parents were often unaware that their children were sexually active. Communication should be appropriate to the child’s age, yet most parents only discuss physiological body changes with children (Johnson :8) and withhold important information such as intimacy, its consequences and the responsibility it entails. Parents felt uncomfortable initiating discussions with their children about sexuality because of age uncertainty; findings from various studies revealed that some parents began talking to the learner children as early as 10 years old because their bodies are undergoing physical changes at this age. This dialogue is only initiated to protect children from sexual health risks (Thin Zaw et al. :85). Fear of embarrassment contributed to the lack of parent–child communication; parents felt they were not confident enough to talk about the subjects as they lacked accurate, comprehensive sexual information. It is a common practice in black communities for parents to just say ‘do not play with boys’ without providing accurate and straightforward information (Zulu et al. :23). Parents struggle with their own lack of sexual knowledge and are frequently too embarrassed to discuss sexuality with their children because it is culturally inappropriate; children are usually sent to the aunt or an elder member of the family to talk to the learner child. In support of the current findings, Othman et al. ( :318), Mullis et al. ( :399) and Mekonen et al. ( :5) revealed that a lack of communication is linked to fear, embarrassment at discussing sexuality with children and the taboos attached to it. This study highlighted that parents are reactive instead of being proactive; usually parents start talking about issues when something crops up – for example, on seeing a pregnant teenager, they will comment in a negative way for the learner child to realise that it is not acceptable, without directly communicating fruitful sexual education information. This finding is consistent with the findings of Mbachu et al. ( :7) and Flores et al. ( :541), which showed that unpleasant events were used as opportunities for parents to talk with their children. Jones et al. ( :766) also reiterated that parents began commenting on sexuality issues on various occasions, such as when an indecent television scene appeared. However, other researchers articulated that the discussion was limited to reprimands for abstinence (Mabunda & Madiba :170), preserving virginity and avoiding pregnancy (Rouhparvar, Javadnoori & Shahali :8). Abstinence is promoted, but it is not realistic, as society has transformed, acculturation has occurred and values and beliefs have shifted. The advent of social media and technology has changed the landscape of sexuality information. This does not benefit the children; rather, it exposes them to sexual health risks. Therefore, it is essential for parents to become proactive regarding the provision of sexuality education.
This study revealed that parents lack the knowledge and skills to approach this socially taboo subject. Sexuality education is a comprehensive subject that encompasses not only HIV, AIDS and STIs but also a broad range of topics such as responsible dating, negotiating safe sex and choosing the right contraceptives, to mention a few. Therefore, parents need to be empowered with this information so that they can assist the learner children. This is similar to the findings reported by Ezenwaka et al. ( :11), which indicate that parents’ role as educators is hampered by a lack of knowledge and of an approach to engaging children on sexuality issues (Dagnachew Adam et al. :4–5; Ezenwaka et al. :11). The findings of this study specified that parents did not feel confident or comfortable in communicating with their children on sexuality, partially because of poor parent–child relationships. In support of this finding, Szkody et al. ( :2643) argued that an excellent parent–child relationship must be established with the child when they are young, as this will encourage parent–adolescent communication when children are older. The results of the study conducted by Benharrousse ( :34) suggested that parents were more conservative in giving sexuality education. Maintaining a good parent–child relationship forms a strong foundation for communication skills. This implies that confidence builds an individual’s self-esteem and, hence, the ability to share information. Being shy makes parents ignorant of and reluctant to open up about sexuality education. By contrast, parents who have a good relationship with their children do not encounter the problem of communicating with them about sexuality (Klein, Becker & Štulhofer :1493). Therefore, parents and children need to maintain an excellent relationship to promote therapeutic talks. Strengths and limitations As for the strengths of this study, the community liked the lecturer (nurse educator) who collected data, which made parent discussions on such sensitive topics easier. Data analysis was performed in a reiterative process with supervisors to ensure that the themes yielded were firmly grounded in the collected and transcribed data, which assisted in ensuring the findings’ reliability and validity. Apart from these advantages, there were some drawbacks. Although we anticipated that using focus groups rather than in-depth interviews would allow active engagement of participants for more in-depth dialogue and deliberation, those who held minority viewpoints may have felt uncomfortable to express themselves freely or speak up. Furthermore, participants shared theoretical stories rather than talking directly about their own children to avoid dishonouring their children. Male participants were also more difficult for researchers to probe, so they represent a significant population for future research. In-depth interviews might have been more effective at this early stage for gaining a more comprehensive perspective on certain issues such as contraceptives, sex education, termination of pregnancy and preferences for when to begin sexuality education. This could be a research topic in the future. Our goal-directed selection method was not intended for generalisability to the Vhembe or Mopani Districts.
Rather than reflecting the distribution of both nationalities within the population, the researchers focused on ensuring a sufficient sample size to attain acceptable depth of information from participants’ responses and to reach data saturation. Recommendations This study recommends that the community start forums where sexual health issues are discussed, and parents should be empowered on issues of sexual health. The traditional leadership should revise and revisit the traditional institutions which young boys and girls attend for initiation to include issues of sexual health education. Parents should also be empowered with comprehensive sexual education and the services available to support teenagers and adolescents. Parents need to be involved in their children’s lives by engaging in sexual health topics while watching television with them. The education system should continuously update the curriculum on sexual health education and communicate comprehensive sexual education, rather than limiting it to HIV and AIDS. Health services should also promote the provision of comprehensive sexual health information. There is a need to address sociocultural taboos and religious beliefs that hinder communication. In addition, strategies must be developed to support parents to have confidence in promoting sexuality education.
This study disclosed that parent–child conversation about sexual health matters was limited, if not absent, in the Mopani and Vhembe Districts. Challenges identified as obstructing sexual health communication included cultural and religious barriers, uncertainty in imparting knowledge, role shifting in imparting sexuality education and poor parent–child relationships. Open, supportive communication between parents and young people related to sexual and reproductive health matters has the potential to postpone engagement with sexual activity, protect youth from risky sexual behaviours and support healthy sexual socialisation among youth. This study therefore recommends addressing the cultural norms and religious beliefs that hinder communication, as well as parents’ lack of knowledge and confidence, so that effective parent–learner conversations about sexuality can take place. Furthermore, programmes aimed at supporting parents to become more involved in their adolescents’ lives and to engage in healthier talk with their children about their sexuality need to be implemented in local communities. Educational pamphlets can be of good assistance.
Mastering Your Fellowship: Part 1, 2023
A 70-year-old male patient presents with epistaxis to the emergency centre (EC). The patient is bleeding profusely, and the team cannot localise the source of the bleeding. The patient’s vital signs are as follows: blood pressure = 160/80 mmHg, pulse = 108 beats/min, respiratory rate = 24 breaths/min, temperature = 37.5 °C. He has no other evidence of bleeding. The patient has been pinching his nose for the last 10 minutes. The bleeding continues when the pressure is released. You note that the team on call is a community service medical officer and two interns. They phone you for advice at 23:00. What is the most appropriate next step? a) Administer intravenous tranexamic acid. b) Insert a compressed nasal sponge. c) Insert a Foley catheter and inflate. d) Lower the blood pressure. e) Pack the anterior nasal cavity with gauze. Answer: b) Model answers Epistaxis is a relatively common condition, although the actual incidence is unknown because most cases self-abort and are managed at home. Severe epistaxis requires prompt evaluation in the EC and appropriate resuscitation. Take a focused history, noting the duration and severity of the haemorrhage and the side of initial bleeding. Enquire about previous epistaxis, hypertension, hepatic or other systemic disease, family history, easy bruising or prolonged bleeding after minor surgical procedures. Recurrent episodes of epistaxis, even if self-limited, should raise suspicion of significant nasal pathology. Use of medications, especially aspirin, nonsteroidal anti-inflammatory drugs, warfarin and heparin, should be documented, as these predispose to epistaxis. Examination using a light source is essential in establishing the point of bleeding. Applying vasoconstrictor drops may slow the bleeding, allowing for an accurate source assessment. Patients should be educated about first aid, which includes pinching the nose and applying an ice pack to the forehead while leaning forward. The relationship between hypertension and epistaxis is not well understood. Patients with epistaxis commonly present with elevated blood pressure. Epistaxis is more common in hypertensive patients due to long-standing vascular fragility. Hypertension, however, is rarely a direct cause of epistaxis. More commonly, epistaxis and the associated anxiety cause an acute elevation of blood pressure. Therefore, therapy should focus on controlling the haemorrhage and reducing anxiety as the primary means of blood pressure reduction. Insert pledgets soaked with an anaesthetic-vasoconstrictor solution into the nasal cavity to anaesthetise and shrink the nasal mucosa. Nasal packing is the usual practice in most settings in South Africa but is often poorly done and requires some skill. Packing is commonly performed incorrectly, using an insufficient amount of packing set primarily in the anterior naris. When placed in this way, the gauze is a plug rather than a haemostatic pack. Physicians inexperienced in proper gauze pack placement should use a nasal tampon or balloon instead. A compressed sponge (e.g. Merocel®) is trimmed to fit snugly through the naris. Moisten the tip with surgical lubricant or topical antibiotic ointment. Firmly grasp the length of the sponge with bayonet forceps, spread the naris vertically with a nasal speculum and advance the sponge along the floor of the nasal cavity. Once wet with blood or a small amount of saline, the sponge expands to fill the nasal cavity and tamponade the bleeding.
The procedure requires very little skill and is suitable for all levels of emergency care doctors. Another easy method of gaining control of bleeding in the anterior naris is a nasal balloon, available in different lengths. A carboxymethyl cellulose outer layer promotes platelet aggregation. The balloons are as effective as nasal tampons, easier to insert and remove and more comfortable for the patient. To insert the balloon, soak its knit outer layer with water, insert it along the floor of the nasal cavity and inflate it slowly with air until the bleeding stops. These balloons are not readily available in most public sector hospitals in South Africa. Further reading Naidoo M. Chapter 88: How to manage epistaxis. In: Mash B, et al., editors. South African Family Practice Manual. 4th ed. Braamfontein: Van Schaik; In press 2023. Traboulsi H, Alam E, Hadi U. Changing trends in the management of epistaxis. Int J Otolaryngol. 2015;2015:263987. https://doi.org/10.1155/2015/263987 . Bamimore O, Silverberg MA. Acute epistaxis [Internet]. 2022. New York: Medscape. [cited 2022 Sept 12]. Available from: https://emedicine.medscape.com/article/764719-overview .
You are the family physician working in a community health centre. A medical officer (MO) working in the paediatric clinic alongside primary health care (PHC) nurses commented that she has recently seen a few children with hearing loss as a complication of otitis media (OM). At the same time, it is noted in the Pharmaceuticals and Therapeutics Committee (PTC) meeting that there is an increased need for antimicrobial stewardship in the management of common upper respiratory tract infections (URTIs). As a leader of clinical governance in the clinic, what initial steps would you take to investigate this problem in the clinic? Describe three different approaches you might take. (6 marks) Based on your findings, you decide to do a quality improvement project (QIP) on one of your findings. Describe the process you would follow. Apply a relevant example to this process in line with one of your responses to question 1. (6 marks) You plan a continuing professional development (CPD) meeting to address the knowledge gap. List four important learning outcomes written in the correct format which address pertinent points in the management of OM in children. (4 marks) Acquired antibiotic resistance and antimicrobial stewardship raise several ethical dilemmas regarding public health when it comes to balancing harms and benefits. Over a million deaths per year are attributable to resistant bacterial infections. Describe two ethical dilemmas relevant to primary care practice that you will broach in your CPD meeting to raise awareness. (4 marks) Total: 20 marks Model answers 1. As a leader of clinical governance in the clinic, what initial steps would you take to investigate this problem in the clinic? Describe three different approaches you might take. (6 marks) (Provide any three approaches from the list below with a relevant example) File audit – Determine the current standard of care being provided and whether this aligns with treatment guidelines. Also consider antibiotic stewardship, appropriate prescription of antibiotics, quality of note keeping and the number of children presenting with OM or URTI. Skills assessment and audit – Assess the competence of staff who are new and, on an ongoing basis, assess the turnover of staff and the provision of relevant supervision and training; note attendance at CPD meetings on the topic and observed consultations. Exploring problems in teams – Apply root cause analysis methods, such as asking the 5 whys, using the fishbone template and applying process mapping techniques. This may assist in understanding where breakdowns are occurring regarding health system factors or process issues, health care worker–related factors and patient factors. These may include problems with patient load, lack of access to functional equipment (otoscope), a gap in knowledge of treatment guidelines, poor examination technique and patient medication adherence. Explore learning needs and gaps – This may be at an individual level (doctors and PHC nurses), or it may be a priority and relevant for district health services and outcomes. Analyse and understand your intended audience and clarify their learning needs and gaps, which will in turn assist in developing learning objectives. Any other relevant response. 2. Based on your findings, you decide to do a quality improvement project (QIP) on one of your findings. Describe the process you would follow. Apply a relevant example to this process in line with one of your responses to question 1.
(6 marks) The current situation has been explored in question 2.1. The next steps will be to: (Mention each step and elaborate with a relevant example for the mark) Form a relevant team (including PTC committee members) – For example, family physician, MO and PHC nurse from the paediatric clinic, pharmacist and facility manager. Agree on the problem definition and criteria and set target standards – Apply to one of the examples above. Identify gaps in current provision – Apply to one of the examples above. Analyse causes and explore ways to improve the situation – Apply to one of the examples above. Plan and implement the change – Apply to one of the examples above. Sustain the change – Apply to one of the examples above. The cycle continues until the desired quality is achieved. The criteria used and the performance levels can be adjusted if necessary before the start of a new cycle (as per the principle of continuous quality improvement [QI]). 3. You plan a continuing professional development (CPD) meeting to address the knowledge gap. List four important learning outcomes written in the correct format which address pertinent points in the management of OM in children. (4 marks) Background information (not part of the model answer): In higher education today, teaching activities are not defined in terms of the content but rather in terms of the intended outcomes for the learners (see Bloom’s taxonomy). In other words, a learning outcome should specify what the learner should be able to do at the end of the teaching session. The learning outcome can be for knowledge, skills or attitudes, and the level of Bloom’s taxonomy should be clear from the verb used – list, describe, demonstrate. At the end of your teaching activity, learners should be able to: Know or understand (cognitive domain: knowledge or application of knowledge in problem-solving or critical reflection) – Possible knowledge learning outcomes may relate to indications, contraindications, anatomy, equipment, drugs, fluids and aftercare. Be able to do (psychomotor domain: skills) – Possible learning outcomes related to skill refer to performing the procedure. Attitudes displayed (affective domain: values and attitudes) – Possible learning outcomes related to attitude may relate to communication, caring and consent. The content relating to the South African national guidelines for the management of upper respiratory tract infections should be expressed in the learning outcomes. The model answer should include any four options from the list below, preferably covering each domain: knowledge, skills and attitudes. At the end of this session, you should be able to list the common organisms that cause OM. At the end of this session, you should be able to discuss the primary preventative measures that have reduced the incidence of OM in children. At the end of this session, you should be able to demonstrate the correct examination of the ear using pneumatic otoscopy and tympanometry. At the end of this session, you should be able to list the diagnostic criteria for acute OM. At the end of this session, you should be able to describe an approach to rational antibiotic prescribing for acute OM. At the end of this session, you should be able to list the conditions under which antibiotics should be prescribed for acute OM and when a more conservative approach can be taken.
At the end of this session, you should be able to demonstrate how you counsel a carer or parent on when management with antibiotics may be required and on the issue of antibiotic adherence. 4. Acquired antibiotic resistance and antimicrobial stewardship raise several ethical dilemmas regarding public health when it comes to balancing harms and benefits. Over a million deaths per year are attributable to resistant bacterial infections. Describe two ethical dilemmas relevant to primary care practice that you will broach in your CPD meeting to raise awareness. (4 marks) The model answer should include any two well-described points for 2 marks each. Primordial prevention and social determinants of health – Even when antibiotics are used scrupulously in individual patients, they can still acquire resistant organisms through no fault of their own from contact with infected or colonised people, animals and other environmental reservoirs. The medical fraternity should raise awareness and influence policy as a public health measure, including environmental and infection control policies. Distributive justice – Overuse of antibiotics in general practice may be because of a lack of evidence-based use by health practitioners, other incentives for health care workers or pressure from patients. Overuse in individuals may result in the depletion of a common resource for all. This requires regulation of human behaviour and may even require regulating access to a common resource for the greater good. Beneficence versus nonmaleficence – Antibiotic use is not a free ride; each use involves risk, and risk is more concentrated in the frequent user. Antibiotic consumption should therefore be regulated. However, governance of antibiotic use through idealised prescription guidelines faces multiple real-world challenges – prescribers, agents and conflicts of interest. Clinicians may prioritise their immediate patients over the interests of other, distant or future patients. Antibiotics may also not be in the interest of the individual or the wider community. Further reading Brink AJ, Cotton MF, Feldman C, et al. Updated recommendations for the management of upper respiratory tract infections in South Africa. S. Afr. Med. J. 2015;105(5):345–52. Moodley K. Chapter 10.8: Family medicine ethics – the four principles of medical ethics. In: Mash B, editor. Handbook of Family Medicine. 4th ed. Cape Town: Oxford University Press, 2017; p. 418–422.
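To make the file audit step in the model answer concrete, the following is a minimal illustrative sketch (not part of the model answer) of how criterion-based audit results could be summarised against agreed target standards, written in Python. All criteria, counts and target figures are hypothetical examples, not actual audit standards.

# Illustrative criterion-based audit summary for OM management.
# Each criterion maps to (folders meeting it, folders audited, target %).
# All names and numbers are hypothetical examples.
audit = {
    "Otoscopy findings documented": (34, 50, 90),
    "Diagnosis meets guideline criteria": (28, 50, 85),
    "Antibiotic prescribed only when indicated": (22, 50, 80),
}

for criterion, (met, audited, target) in audit.items():
    performance = 100 * met / audited
    flag = "meets target" if performance >= target else "below target"
    print(f"{criterion}: {performance:.0f}% vs target {target}% ({flag})")

Re-auditing against the same criteria after the change has been implemented closes the audit cycle and shows whether the target standards are now being met.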
Read the accompanying article carefully and then answer the following questions. As far as possible, use your own words. Do not copy out chunks from the article. Be guided by the allocation of marks concerning the length of your responses. Biagio L, Swanepoel DW, Laurent C, Lundberg T. Paediatric otitis media at a primary healthcare clinic in South Africa. S. Afr. Med. J. 2014;104(6):431–5. Total: 30 marks Did the study address a focused question? Discuss. (3 marks) Identify three arguments the authors made to justify and provide a rationale for the study. (3 marks) Explain why a quantitative research methodology may be most appropriate for this research question. Comment on where and how a qualitative data collection methodology might still be applicable. (2 marks) Critically appraise the sampling strategy. (5 marks) Critically appraise how well the authors describe the data collection process. (5 marks) Explain the difference between point prevalence and incidence. (2 marks) Critically appraise the analysis and conclusions of the study. (4 marks) Use a structured approach (e.g. relevance, education, applicability, discrimination, evaluation, reaction [READER]) to discuss the value of these findings to your practice. (6 marks) Model answers 1. Did the study address a focused question? Discuss. (3 marks) The authors aimed to measure the prevalence of otitis media in a South African primary health care (PHC) clinic, Witkoppen Health and Welfare Centre. The question is focused, as it describes the population of interest (the paediatric population attending a PHC clinic) and the condition or phenomenon of interest (the point prevalence of otitis media in this population) in a particular community or area (the Diepsloot community north of Johannesburg, South Africa). The authors wished to diagnose the condition of interest with greater sensitivity and specificity than either otoscopy or pneumatic otoscopy by using otomicroscopy to diagnose and classify otitis media as a cause of middle-ear pathology in children. 2. Identify three arguments the authors made to justify and provide a rationale for the study. (3 marks) Otitis media point prevalence in South Africa has never been measured, and most deaths from complications of otitis media are in sub-Saharan Africa and India. Chronic serous otitis media is also the most common cause of hearing impairment. This makes the study socially and scientifically relevant. Most studies of the prevalence of otitis media measure prevalence in children of school-going age and not in younger preschool children, who are more prone to otitis media. Early medical intervention is indicated in communities where chronic suppurative otitis media rates are more than 4%, as this constitutes a high-risk population. This supports the need to employ diagnostic methods to measure the point prevalence more accurately. 3. Explain why a quantitative research methodology may be most appropriate for this research question. Comment on where and how a qualitative data collection methodology might still be applicable. (2 marks) By definition, prevalence is a quantitative measure of proportion and depicts the proportion of a defined population with a disease or illness at a specified time. Therefore, measuring a proportion requires a quantitative methodology and is impossible to achieve using qualitative data and methods.
Given that otomicroscopy was used for the first time in this setting, the study could conceivably be amended to address the additional objective of assessing the otologist's experiences of otomicroscopy in primary health care. Perhaps the caregivers who brought the children could be interviewed for qualitative data on their experience of the process.

4. Critically appraise the sampling strategy. (5 marks)
The researchers selected a specific primary healthcare clinic for their study. The clinic is a specialist care centre for primary health care paediatric human immunodeficiency virus (HIV) and tuberculosis (TB) patients. This already indicates that it does not represent the more typical primary health care clinics in the country, which serve patients with all forms of illness. A more accurate title for this study would describe measuring the prevalence of otitis media in an HIV and TB primary healthcare clinic. Furthermore, the sampling was not random but consecutive. They recruited 140 children aged 2–16 years as a sample from registered clinic patients known to the service: the participants were recruited from the entire paediatric population attending the clinic for any purpose, whether for a routine clinic appointment or for chronic or acute treatment. They do not indicate on which days they consecutively collected samples or whether they sampled equally across each day of the week; they only specified that on-site data collection continued over the course of 2 weeks. Bias could be introduced by this way of sampling if, for example, a specific type of child (by age or illness) tends to come to the clinic on some days more than others. The researchers do not indicate how they calculated the sample size. This always affects the precision of the prevalence estimate. It is often helpful to use prevalence rates from the literature to calculate sample size estimates.

5. Critically appraise how well the authors describe the data collection process. (5 marks)
The authors described the collection of demographic data under the study population subheading in the methods section and not under the data collection subheading. It would have made more sense to include this data collection step in the data collection subsection, as this information was included in the data set. The authors did not specify who collected this information; it seems that it might have been captured by a research assistant or the specialist otologist, linked to the informed consent process and possibly the otomicroscopy assessment. It is important to note the person(s) who collected the data from the patients and parents or caregivers, as well as the background of the data collectors. It was not clear whether the clinical notes and medical history from the patient's folder were consulted to complement the dataset and verify the accuracy of the comorbid risk factors described in the introduction section (host-related and environmental factors). It would have been useful to present the demographic and medical background data collection instrument as a supplement. Interestingly, even though this clinic served as a specialist HIV and TB centre, the researchers were not able to collect clinical data on HIV status. They mentioned that 'ethical clearance did not allow for this' but do not specify the reasons (whether it was a protocol design flaw or a specification from the ethics review board).
The data collection subsection in the methods section describes the technical process of otomicroscopy, including the type of device used (a Leica M525 F40 surgical microscope). The key elements captured by the specialist otologist are described, as well as the diagnostic criteria and types of otitis media classification. It is not clear if only a single specialist otologist performed the technical evaluations over the 2-week period or if more than one observer was involved. This may have resulted in interobserver bias. Intra-observer bias may also have been possible given the workload of assessing 136 participants. It would have been interesting to know if this microscope allowed for digital photography to facilitate external review by an independent expert observer. It was also not clear if cerumen removal was done consistently by a single operator (the results section mentioned that cerumen was removed manually and was halted in the event of any discomfort). Finally, it was not clear if the technical device required calibration during the fieldwork process; usually, a device used to take repeated measures of several participants over a short span of time requires a calibration protocol to ensure consistency and accuracy.

6. Explain the difference between point prevalence and incidence. (2 marks)
The two measurements can complement each other and provide a full picture but are often confused. Incidence is a measure of the rate at which new cases of disease appear over a time period, whereas prevalence is the total number of cases of a disease at or during a specific point in time. It is often referred to as a 'photograph or snapshot' of a point in time (point prevalence). Prevalence describes the proportion of the population with a specific characteristic, regardless of when they first developed the characteristic. This means that prevalence includes both new and pre-existing cases, whereas incidence is limited to new cases only.

7. Critically appraise the analysis and conclusions of the study. (4 marks)
The authors calculated the prevalence of otitis media appropriately and used well-defined otomicroscopic definitions for the different diagnoses. However, they proceeded to compare prevalence rates between two different age groups using Pearson's χ2 (chi-squared) test without indicating in their original objectives that this comparison would be done. Nor did they indicate that their sample size calculation anticipated an analytical component to the study rather than just a descriptive point prevalence. The authors did find a statistically significant result in this comparative analysis: otomicroscopy-confirmed otitis media was more prevalent in the younger group of participants (preschool) than in the older group (school-going age). The subtypes of diagnosed otitis media confirmed that otitis media with effusion (OME) was more frequently diagnosed in the younger group, while the most severe form of otitis media, chronic suppurative otitis media (CSOM), was more common in the older group. The prevalence of CSOM for the total study sample was 6.6%, which constitutes a high-risk population. The CSOM prevalence in the older group was even higher at 9.3%, which is rated as the highest prevalence category in the World Health Organization (WHO) classification system cited by the authors.
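A worked example may help anchor the prevalence arithmetic in question 6 and in the figures above; the case count of nine is inferred from the reported 6.6% and the 136 children examined, so it is illustrative rather than a figure quoted from the paper:

$$\text{Point prevalence} = \frac{\text{existing cases at a given time point}}{\text{population examined at that time point}}$$

$$\text{e.g.}\qquad \frac{9\ \text{CSOM cases}}{136\ \text{children examined}} \approx 0.066 = 6.6\%$$

Incidence, by contrast, would require follow-up over time, e.g. the number of new CSOM cases per child-year of observation.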
The authors admitted to several study design limitations, including the sample size and the lack of information on comorbid medical conditions such as HIV and TB status, as well as host-related and environmental factors, including nutritional status. Although the authors concur that the HIV prevalence of the population could likely contribute to the higher prevalence of otitis media, they still problematically proceed to engage with the findings as if they represent the larger population of children in primary health care settings. This is most starkly noted in their conclusion, where the HIV positivity of the children in the study is omitted.

8. Use a structured approach (e.g. relevance, education, applicability, discrimination, evaluation, reaction [READER]) to discuss the value of these findings to your practice. (6 marks)
The READER format may be used to answer this question: Relevance – Is it relevant to family medicine and primary care? Education – Does it challenge existing knowledge or thinking? Applicability – Are the results applicable to my practice? Discrimination – Is the study scientifically valid enough? Evaluation – Given the above, how would I score or evaluate the usefulness of this study to my practice? Reaction – What will I do with the study findings?

The answer may be a subjective response but should be one that demonstrates a reflection on the possible changes within the student's practice within the South African public health care system. It is acceptable for the student to suggest how their practice might change within other scenarios after graduation (e.g. private general practice). The reflection on whether all important outcomes were considered is therefore dependent on the reader's perspective (is there other information you would have liked to see?). A model answer could be written from the perspective of the family physician employed in the South African district health system:

R: This study is relevant to the African primary care context, as children presenting to PHC facilities with otitis media are a common phenomenon, and there is a need to diagnose complicated otitis media such as OME and CSOM early to prevent complications.
E: The authors made the case that this is the first otitis media prevalence study in a PHC setting in South Africa, especially given their use of the enhanced diagnostic instrument, the otomicroscope operated by a specialist otologist. The study's novelty is limited by several design flaws, however.
A: It is not possible to generalise the study findings to the wider South African setting, as the study was conducted in a specialist HIV and TB PHC facility using a small sample with a nonprobability sampling method (consecutive sampling).
D: In terms of discrimination, the concern lies in the study design as mentioned above (small sample and sampling method). The diagnostic accuracy is noted as the authors employed a superior diagnostic technique with clearly focused and defined diagnostic criteria. The data collection process and risk for bias are not adequately presented in the methods section. Using a reporting guideline such as the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist for observational studies would have enabled the reader to make a better judgement in terms of assessment of internal validity.
E: The study findings may be relevant to consider when planning coordination of care for children in a similar PHC facility.
It is important to consider the presence of complicated otitis media in children, especially those with comorbid conditions. It is also important to note the low incidence of reported symptoms in the 2 weeks prior to otomicroscopy. However, given the concerns described above regarding the study design and reporting, the findings are not generalisable to the typical South African PHC facility setting.

R: The study findings are limited by the study setting and design flaws. However, this does not detract from the need to ensure appropriate care for children at risk for complicated otitis media. This would include increasing and augmenting routine screening services with specialised otomicroscopy services where feasible. More research in typical PHC settings with larger samples and more comprehensive data collection tools is warranted to strengthen the case made by the authors.

Further reading
Pather M. Evidence-based Family Medicine. In: Mash B, editor. Handbook of Family Medicine. 4th ed. Cape Town: Oxford University Press, 2017; p. 430–453.
Riegelman RK. Studying a Study and Testing a Test. How to read the medical evidence. 5th ed. Lippincott Williams & Wilkins; 2005.
MacAuley D. READER: An acronym to aid critical reading by general practitioners. Br J Gen Pract. 1994;44(379):83–5.
Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147(8):573–577. [cited 2022 Sept 19]. Available from: https://www.equator-network.org/reporting-guidelines/strobe/
The Critical Appraisal Skills Programme (CASP). CASP checklists. [online] 2022. [cited 2022 Sept 19]. Available from: https://casp-uk.net/casp-tools-checklists/
Objective
This station tests the candidate's ability to manage a patient with persistent dizziness.

Type of station
Integrated consultation.

Role player
Simulated patient: male or female adult.

Instructions to the candidate
You are the family physician working at a community health centre. The medical officer asked you to see a patient with persistent dizziness, who presented to the emergency room. Your task: please consult with this patient and develop a comprehensive management plan. You do not need to examine this patient. All examination findings will be provided on request.
This is an integrated consultation station in which the candidate has 15 minutes. Familiarise yourself with the assessor guidelines that detail the required responses expected from the candidate. No marks are allocated. In the marks sheet, tick off one of the three responses for each of the competencies listed. Make sure you are clear on what the criteria are for judging a candidate's competence in each area. Provide the following information to the candidate when requested: examination findings. Please switch off your cell phone. Please do not prompt the student. Please ensure that the station remains tidy and is reset between candidates.

Guidelines to examiner
The aim is to establish that the candidate can diagnose vertigo, identify possible causes (cerebellar stroke with underlying hypercholesterolaemia) and develop an effective and safe management plan. Working definition of competent performance: the candidate effectively completes the task within the allotted time, in a manner that maintains patient safety, even though the execution may not be efficient and well structured. Not competent: patient safety is compromised (including ethico-legally) or the task is not completed. Competent: the task is completed safely and effectively. Good: in addition to displaying competence, the task is completed efficiently and in an empathic, patient-centred manner (acknowledges the patient's ideas, beliefs, expectations, concerns or fears).

Establishes and maintains a good clinician–patient relationship
The competent candidate is respectful, engaging with the patient in a dignified manner. The good candidate is empathic, compassionate and collaborative, facilitating patient participation in key areas of the consultation.

Gathering information
The competent candidate gathers sufficient information to establish a diagnosis (acute vertigo; asks questions aimed at localising the problem and enquires about some psychosocial issues related to the problem). The good candidate additionally has a structured and holistic approach (enquiring about the causes of vertigo and assessing the impact on the emotional, social and occupational aspects of the patient's life).

Clinical reasoning
The competent candidate identifies the diagnosis (acute vertigo due to a central cause, impacting the patient's work performance as a bus driver). The good candidate makes a specific diagnosis (acute vertigo, likely due to a cerebellar stroke, with underlying possible familial hypercholesterolaemia, with major long-term occupational implications).

Explaining and planning
The competent candidate uses clear language to explain the problem to the patient and uses strategies to ensure patient understanding (questions OR feedback OR reverse summarising). The good candidate additionally ensures that the patient is actively involved in decision-making, paying particular attention to knowledge-sharing and empowerment.

Management
The competent candidate makes arrangements for urgent referral to a specialist physician or neurologist for further investigations (computerised tomography [CT] scan or magnetic resonance imaging [MRI]) as an inpatient. The good candidate additionally addresses psychosocial issues comprehensively and may start putting a follow-up plan in place for when the patient returns from the hospital.
Examination findings
Body mass index – 24 kg/m²
Blood pressure – 138/94 mmHg; heart rate – 104 beats/min
Haemoglobin – 13.5 g/dL
Random blood glucose (HGT) – 5.9 mmol/L
Urinalysis – No abnormalities
Ears – Normal hearing bilaterally; no abnormalities on visual inspection, including otoscopy; Dix-Hallpike manoeuvre negative.
Eyes – Xanthelasma on both eyelids; nystagmus on lateral gaze; normal vision, specifically no diplopia.
Cardio-respiratory systems – No abnormalities.
Abdomen – No abnormalities.
Neuro – Marked ataxic gait; fine tremor at rest; unable to write own name; power, reflexes and sensation intact in all limbs.
Appearance and behaviour
Male or female adult, calm, 40–50 years old.

Opening statement
'Hello, Doctor. I'm having this dizziness all the time, since yesterday, and feeling nauseous.'

History
Open responses: freely tell the doctor
■ You were feeling very well yesterday morning. Around lunchtime, you suddenly started getting dizzy and vomited twice. You had to leave work, then slept at home until this morning, but it is not better.
Closed responses: only tell the doctor if asked
■ It feels like the room is spinning around you. This makes it difficult to walk. It is not worsened by any specific positions.
■ Nauseous all the time, especially when you are moving.
■ You have no funny ringing noises or deafness in either ear.
Your medical history
■ Diagnosed with high cholesterol at the age of 34 years. Did not want to use medication – just eating healthily and exercising occasionally. Cholesterol is a family problem; your brother and mother also have cholesterol problems, but you are unsure if they take medication.
■ You do not smoke, drink very little alcohol and exercise by walking once a week.
Ideas, concerns and expectations
■ Your major concern is to get rid of this dizziness.
■ It affects your work as a bus driver.

Further reading
Department of Health. Acute Vertigo. In: Standard Treatment Guidelines, Adult Hospital Level. Pretoria: Department of Health; 2019.
Assessment of Purity, Functionality, Stability, and Lipid Composition of Cyclofos-nAChR-Detergent Complexes from
The nicotinic acetylcholine receptors (nAChRs) are integral membrane proteins and one of the most thoroughly characterized ligand-gated ion-channel superfamilies. The nAChRs are pentameric proteins assembled in different combinations from a pool of seventeen homologous polypeptides (subunits): α1-α10, β1-β4, γ, δ, and ε (Gotti and Clementi; Zoli et al.). The nAChRs are widely distributed in different tissues in mammals and other animal species and have been implicated in various neurological diseases. These include, but are not limited to, congenital myasthenic syndromes, tobacco addiction, Alzheimer's disease, Parkinson's disease, schizophrenia, epilepsy, Tourette's syndrome, inflammation, and, more recently, COVID-19 infection (Gotti and Clementi; Zoli et al.; Lucatch et al.; Farsalinos et al.; Mashimo et al.; Bekdash; Hollenhorst and Krasteva-Christ; Recio-Barbero et al.; Jankauskaite et al.; Tiepolt et al.). There is an overwhelming need for high-resolution structures of each nAChR subtype and their binding sites, to facilitate the design of new therapeutic drugs selective for various regions of the extracellular domain and other domains of these receptors (e.g., binding sites, pore, or putative allosteric sites). The first and only X-ray structure of the heteromeric neuronal α4β2-nAChR was reported in 2016 (Morales-Perez et al.). The structural data from this study were collected from one crystal out of thousands of crystals screened (personal communication with Dr. Morales-Pérez). The difficulties of reproducing high-quality α4β2-nAChR crystals led to the use of cryo-EM, and in 2018, two different stoichiometries of the same α4β2-nAChR were determined (Walsh et al.). Although these α4β2-nAChR structures (X-ray and cryo-EM) have provided substantial information about nicotine binding, cholesterol binding, subunit stoichiometry, and overall oligomerization, they are low-resolution structures (~3.9 Å). The principal limitations of the α4β2-nAChR structures are (1) the inability to reproduce high-quality crystals for drug discovery studies and (2) the very limited information they provide for structure-based drug design. Recently, three additional cryo-EM structures have emerged: (1) the α3β4 in nanodiscs at 4.58 Å resolution (Gharpure et al.), (2) the Torpedo californica (Tc) (muscle-type) nAChR in the closed state (Rahman et al.), and (3) the α7-nAChR in three different channel conformation states (a resting-like closed-channel state, a complex with the positive allosteric modulator PNU-120596, and an epibatidine agonist/α7-nAChR complex) with an average resolution of 3.6 Å (Noviello et al.). Some limitations of the α7-nAChR structures are that the open conformation of the channel was a proxy and that the position of PNU-120596 in the proposed binding site was not defined, owing to the limited resolution of the cryo-EM images. The first obstacle in achieving high-resolution X-ray structures of the nAChRs is the preparation of milligram amounts of pure, homogeneous, functional, and stable nAChR-detergent complexes (nAChR-DCs). Other difficulties in crystallizing nAChRs are: (1) heterogeneous pentamers, (2) multiple stoichiometries, (3) pseudosymmetry of heteropentamers, (4) glycosylation of extracellular domains with diverse sugar compositions, (5) large intracellular domains (M3-M4 loop) with disordered structure, and (6) different conformations (Asmar-Rovira et al.; Delgado-Vélez et al.).
Along these lines, the preparation and reproducibility of nAChR protein crystals suitable for X-ray diffraction studies have become remarkably challenging experimentally and remain the foremost obstacle to attaining high-resolution structures. In the present study, we prepared Tc-nAChR-DCs using three lipid-analog detergents bearing a six-membered carbon ring at the tail end of a homologous series of phosphocholines (Table ). We evaluated the purity, stability, functionality, and lipid composition of the nAChR-DCs. We adopted the lipidic cubic phase (LCP) as a lipid matrix suitable for the stable relocation of the solubilized and affinity-purified Tc-nAChR-DC to assess stability (Padilla-Morales et al.). LCP has become an efficient matrix for harvesting protein crystals of different molecular weights, with over 120 unique protein structures deposited in the Protein Data Bank (Landau and Rosenbusch). To assess protein-detergent complex stability, we used the LCP-Fluorescence Recovery After Photobleaching (LCP-FRAP) assay over a 30-day period to measure parameters, such as the mobile fraction and diffusion coefficient, that correlate with protein stability and aggregation (Padilla-Morales et al.). We examined Tc-nAChR-DC ion-channel functionality through macroscopic ion-channel behavior in Xenopus laevis oocytes via the two-electrode voltage clamp (TEVC) technique. In addition, the nAChR-DC lipid composition was assessed via Ultra-Performance Liquid Chromatography (UPLC) coupled to Quadrupole Time-of-Flight (QTOF) mass spectrometry. The objective of this study was to investigate and characterize the effect of detergents on the Tc-nAChR. Furthermore, we studied the effect of cholesterol in regulating Tc-nAChR-DC complex stability. Assessing lipid-analog Tc-nAChR-DC functionality and stability, and characterizing the conditions under which the receptor resembles and behaves as it does in its native environment, may be fundamental to future structural studies of the nAChR and other membrane proteins.

Materials
All reagents were purchased from Sigma-Aldrich unless otherwise specified. The lipid-like cyclic detergents of the cyclohexyl-alkylphosphocholine family, 4-Cyclohexyl-1-Butylphosphocholine [Cyclofoscholine-4 (CF-4)], 6-Cyclohexyl-1-Hexylphosphocholine [Cyclofoscholine-6 (CF-6)], and 7-Cyclohexyl-1-Heptylphosphocholine [Cyclofoscholine-7 (CF-7)], at 98% purity, were obtained from Anatrace (Maumee, OH, USA) (Table ).

Preparation of Crude Membrane
The nAChR was extracted from rich membranes from the electric organ of Tc (Aquatic Research Consultants, San Pedro, CA), according to the procedure of Asmar-Rovira (Asmar-Rovira et al.), with minor modifications as described previously by Padilla (Padilla-Morales et al.) and Quesada (Quesada et al.). To avoid possible seasonal changes in lipid content, all the experiments were performed with the same Tc electric organ. We incubated 200 g of Tc tissue with 200 mL of buffer H (100 mM NaCl, 10 mM sodium phosphate, 5 mM EDTA, 5 mM EGTA, 5 mM DTPA, 0.02% sodium azide, pH 7.4) mixed with 200 μL of phenylmethanesulfonyl fluoride (PMSF) and 0.187 g of iodoacetamide, in a cold room.

Affinity-Column Purification of Solubilized Tc-nAChR
The solubilized nAChR was purified by affinity column using the protocol of Padilla and Quesada (Cheng et al.; Cherezov et al.; Padilla-Morales et al.; Quesada et al.).
Briefly, the crude membranes were thawed and mixed with a 10% (w/v) detergent solution and DB-1X buffer (100 mM NaCl, 10 mM MOPS, 0.1 mM EDTA, 0.02% NaN3) for a final detergent concentration of 1-2%. The DB-1X buffer was added first, followed by the detergent, and finally the crude membranes, which were added drop by drop. This solution was shaken slowly for 1 h and then centrifuged for 1 h at 40,000 rpm and 4 °C. The supernatant was extracted and used immediately for the affinity-column purification. Approximately 12 mL of previously prepared bromoacetylcholine affinity resin (Bio-Rad Laboratories, Hercules, CA) in a 1.5 × 15 cm Econocolumn (Bio-Rad Laboratories, Hercules, CA) was drained of storage buffer (40% sucrose, 2 mM PMSF) and conditioned with 50 mL of ultrapure water and 50 mL of 1.5 critical micelle concentration (CMC) detergent buffer before the previously prepared supernatant was added to the column. The column was washed with 50 mL of 1.5 CMC detergent buffer before the nAChR was eluted with 50 mL of elution buffer. The sample was then concentrated using a centrifugal filter with a 100 K cutoff (Amicon Ultra Centrifugal Filters Ultracel 100 K, Millipore Co., Billerica, MA) and run through a P-10 desalting column (GE Healthcare, Uppsala, Sweden) to remove the carbamylcholine ligand. The sample was eluted with 5 mL of 1.5 CMC detergent buffer and finally concentrated to 250 μL. Protein concentration was determined using a BCA protein concentration assay (Pierce Biotechnology, Rockford, IL), followed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), which was run to verify receptor purity.

Sodium Dodecyl-Sulfate Polyacrylamide Gel Electrophoresis (SDS-PAGE)
Samples were prepared by mixing 20 μL (1 μg/μL) of purified protein with 20 μL of Laemmli 2× buffer. Gel electrophoresis was performed by loading 20 μL of protein in Criterion TGX precast gels. The samples were run in duplicate for 2 h at 120 V. The gel was stained with 1X Coomassie Blue and left overnight. After 10-12 h, the gel was washed with destaining solution (10% acetic acid, 40% water, and 50% methanol) for 3 h, followed by three washes with distilled water.

LCP-nAChR-Detergent Complex Mobility Assay Using FRAP
FRAP experiments were performed according to the conditions and protocols described by Cherezov (Cherezov et al.) with the minor modifications presented by Padilla-Morales (Padilla-Morales et al.). Briefly, the affinity-purified nAChR-detergent complex was incubated with alpha-bungarotoxin (αBTX) conjugated with Alexa 488, in a 1:2.5 ratio, for 2 h in the dark at 4.0 °C. The nAChR-detergent complex-αBTX was mixed with molten monoolein (1-oleoyl-rac-glycerol, in a 2:3 volume ratio) using a lipid mixer (Hamilton syringe) and mixed until clear (Cheng et al.). The resulting mixture was placed on a 75 mm × 25 mm slide coated with pre-punched holes of 7 mm diameter and 50 μm thickness (3M 9482PC), and the formed wells were then covered by pressing a coverslip against the slide and flattened with a rubber roll (Cherezov et al.; Caffrey and Cherezov). The experimental procedure was conducted in a controlled environment, maintaining the humidity between 40 and 50% at all times.

Lipidic Cubic Phase-Cholesterol Mixture
For any of the assays in which cholesterol was used to supplement monoolein, we used the commercially available Monoolein and Cholesterol (H200) mixture (Anatrace); the ratio was 1-oleoyl-rac-glycerol (10 parts) : cholesterol (1 part).
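As a quick sanity check on the mixing ratios above, the short sketch below computes the component amounts for one LCP bolus. It is a minimal illustration assuming the stated 2:3 (protein solution:lipid, v/v) mixing ratio and a 10:1 (w/w) monoolein:cholesterol host lipid; the 60 μL total volume and the monoolein density are illustrative assumptions, not values from the protocol.

# Minimal sketch (illustrative assumptions noted above): component amounts
# for one LCP bolus, mixing protein solution and host lipid 2:3 (v/v).
MONOOLEIN_DENSITY_MG_PER_UL = 0.94  # assumed approximate density of molten monoolein

def lcp_components(total_volume_ul, cholesterol_parts=1.0, monoolein_parts=10.0):
    """Return (protein uL, lipid uL, monoolein mg, cholesterol mg)."""
    protein_ul = total_volume_ul * 2.0 / 5.0  # 2 of 5 volume parts
    lipid_ul = total_volume_ul * 3.0 / 5.0    # 3 of 5 volume parts
    lipid_mg = lipid_ul * MONOOLEIN_DENSITY_MG_PER_UL
    chol_mg = lipid_mg * cholesterol_parts / (monoolein_parts + cholesterol_parts)
    return protein_ul, lipid_ul, lipid_mg - chol_mg, chol_mg

prot_ul, lip_ul, mono_mg, chol_mg = lcp_components(60.0)  # 60 uL bolus (arbitrary)
print(f"{prot_ul:.0f} uL protein solution + {lip_ul:.0f} uL lipid "
      f"({mono_mg:.1f} mg monoolein, {chol_mg:.1f} mg cholesterol)")

For a 60 μL bolus this gives 24 μL protein solution and 36 μL lipid, keeping the monoolein:cholesterol mass ratio at 10:1.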
The rest of the procedure for cholesterol assays was as described above (Tyler et al.).

FRAP Instrument Setup and Data Collection
Data collection was performed as described in Cherezov et al. and Padilla-Morales et al. Briefly, all data were collected using a Zeiss LSM 510 confocal microscope. The fluorescence baseline was established by pre-bleach images in several areas: 75% laser bleaching power, followed by a sequence of 500 images scanned at 2.6% power with a 600 ms laser scanning delay. All images obtained were processed using the LSM 510 Meta ZEN software. Each sample slide was monitored for 30 days at intervals of 5 days. The data were integrated within a 14.0 µm diameter circular region of interest (ROI1) and corrected and normalized by another 14.0 µm circular region of interest (ROI2) positioned near the bleached ROI1. Fluorescence intensity was adjusted by dividing the integrated intensity value of ROI1 in the bleached spot by the average integrated intensity of ROI2. As described by Cherezov's research group (Cherezov et al.), the fractional fluorescence recovery curves F(t) were calculated using the following equation:

$$F(t) = \frac{f_t - f_0}{f_\infty - f_0} \qquad (1)$$

where f_t is the corrected fluorescence intensity of the bleached spot at time t, f_0 is the corrected/normalized fluorescence intensity of the bleached spot during the 600 ms after bleaching, and f_∞ is the average corrected fluorescence intensity in the five pre-bleach images. Fractional mobility values were obtained by calculating the average of the last 50 values of F(t). The fractional fluorescence recovery curves were fitted with a one-dimensional exponential (Eq. 2):

$$f(t) = \sum_{i=1}^{n} A_i \left(1 - e^{-Kt}\right) + B \qquad (2)$$

where A_i is the amplitude of each component, K is a constant related to the degree of bleaching, t is time, and B is a constant related to the mobile fraction of receptors (Axelrod et al.). The fractional fluorescence recovery curves were fitted with a one-phase exponential equation provided by the GraphPad statistical analysis software. The diffusion coefficient was calculated using Eq. 3, where R is the half width at half maximum of the Gaussian function, R = r(2 ln 2)^{1/2}, and K is the constant obtained from Eq. 2, as described by Cherezov (Pucadyil and Chattopadhyay):

$$D = \frac{R^{2}}{4}\,K \qquad (3)$$

Injection with Crude Membrane or nAChR-Detergent Complex into Oocytes and Two-Electrode Voltage Clamp Assays
We followed the original protocols of Andrés Morales et al. and Ivorra et al., with the modifications described in Padilla (Padilla-Morales et al.) and Quesada (Quesada et al.). Briefly, the Xenopus laevis oocytes used were in developmental stage V or VI. Each oocyte was injected with 50 nL of a preparation of 6 mg/mL crude membrane or 3 mg/mL nAChR-detergent complex in 1.5-fold CMC buffer, according to the CMC of the detergent used in the purification of the Tc-nAChR. Subsequently, oocytes were incubated at 18 °C for 16-36 h in ND-96 solution containing 96 mM NaCl, 2 mM KCl, 1.8 mM CaCl2, 1 mM MgCl2, 5 mM HEPES, and 2.5 mM Na-pyruvate, supplemented with gentamicin (50 mg/mL), tetracycline (50 mg/mL), and theophylline (0.5 mM), and adjusted to pH 7.6 with NaOH.
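Returning to the FRAP analysis above, the sketch below shows how Eqs. 1-3 translate into an analysis routine. It is a minimal illustration on synthetic recovery data: the frame timing mirrors the 600 ms scanning delay described in the text, but the bleach-spot radius r, the raw intensities, and the noise level are assumed placeholders, and SciPy's curve_fit stands in for the GraphPad one-phase fit.

# Minimal sketch of the FRAP analysis in Eqs. 1-3 (synthetic data; r is assumed).
import numpy as np
from scipy.optimize import curve_fit

def fractional_recovery(f_t, f0, f_inf):
    # Eq. 1: F(t) = (f_t - f0) / (f_inf - f0)
    return (f_t - f0) / (f_inf - f0)

def one_phase(t, A, K, B):
    # Eq. 2 with a single component: f(t) = A * (1 - exp(-K * t)) + B
    return A * (1.0 - np.exp(-K * t)) + B

rng = np.random.default_rng(0)
t = np.arange(500) * 0.6                  # 500 frames at the 600 ms scanning delay
f0_raw, finf_raw = 100.0, 400.0           # synthetic post-bleach / pre-bleach intensities
raw = f0_raw + (finf_raw - f0_raw) * one_phase(t, 0.65, 0.05, 0.0)
raw += rng.normal(0.0, 5.0, t.size)       # add measurement noise
F = fractional_recovery(raw, f0_raw, finf_raw)          # Eq. 1

(A, K, B), _ = curve_fit(one_phase, t, F, p0=(0.5, 0.1, 0.0))
mobile_fraction = one_phase(t, A, K, B)[-50:].mean()    # average of the last 50 values

r = 7.0e-4                                # cm; assumed 7 um bleach-spot radius
R = r * np.sqrt(2.0 * np.log(2.0))        # half width at half maximum, R = r(2 ln 2)^0.5
D = (R ** 2 / 4.0) * K                    # Eq. 3, with K the fitted rate constant
print(f"mobile fraction ~ {mobile_fraction:.2f}; D ~ {D:.2e} cm^2/s")

With these assumed inputs, D comes out around 10^-8 to 10^-9 cm²/s, the same order of magnitude as the values reported below; the 7 µm radius is suggested by the 14.0 µm ROI diameter described above but should be verified against the actual instrument settings.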
Lipid Extraction and Separation by High-Performance Thin-Layer Chromatography
The lipid extraction and separation were conducted according to Quesada (Quesada et al.). Briefly, the purified Tc-nAChR-DCs were lyophilized overnight and subjected to lipid extraction using the Bligh and Dyer (B&D) method in the presence of butylated hydroxytoluene (BHT; 2.9 × 10⁻⁵ M), followed by 3.5 h of reflux with MeOH/HCl or MeOH/1 N KOH for complete phospholipid hydrolysis. Phospholipid species were resolved using commercially available high-performance thin-layer chromatography (HPTLC) plates (20 × 20 cm) from Whatman (Fisher Scientific, MA, USA). The plate containing the samples was developed in chloroform:methanol:ammonium hydroxide (60:35:5).

Solid-Phase Extraction of Lipid Samples (Pre-Cleaning)
The separation of lipids prior to mass spectrometry analysis was accomplished using an aminopropyl extraction column (particle size 40 μm; Agilent Bond Elut NH2, Agilent Corporation, Palo Alto, CA, USA), as described by Quesada (Quesada et al.). Briefly, the dry Tc-nAChR-detergent complex was dissolved in CHCl3 and loaded onto the Bond Elut NH2 column, following the manufacturer's instructions. The column was conditioned by passing 6 mL of hexane; then 200 μL of the CHCl3 lipid extract was loaded, followed by sequential elution with four different eluents: 2 mL of CHCl3, 3 mL of diethyl ether with 2% acetic acid, 3 mL of MeOH, and a final 3 mL of 0.05 M ammonium acetate in chloroform/methanol plus 2% (v/v) of 28% aqueous ammonium solution. The four fractions collected contained the non-polar lipids and cholesterol, non-esterified fatty acids, non-acidic phospholipids, and acidic phospholipids, respectively.

Tc Membrane and nAChR-Detergent Cholesterol Quantitation
The cholesterol extraction, isolation, and quantification were achieved according to Quesada (Quesada et al.). Briefly, the cholesterol extracted from the Tc membrane and nAChR-DCs was isolated from the rhodamine 6G-stained silica gel G plates and further quantified using the Wako Cholesterol E kit (Wako Chemicals, Richmond, VA, USA).

Analysis of Phospholipid Molecular Species by Ultra-Performance Liquid Chromatography (UPLC) Coupled to Electrospray Ionization Mass Spectrometry (ESI-MS/MS)
Phospholipids isolated from the Bond Elut NH2 cartridge, as mentioned above, were analyzed by UPLC ESI-MS/MS or MSe with an ACQUITY UPLC coupled to a XEVO G2S quadrupole time-of-flight (QToF) mass spectrometer from Waters Corp., using a BEH HILIC column (1.7 μm, 2.1 mm × 100 mm), as described by Quesada (Quesada et al.). Briefly, the samples were run using the following UPLC and QToF conditions: mobile phase A was 10 mM ammonium acetate in water at pH 3, adjusted with formic acid, and mobile phase B was acetonitrile. The gradient was as follows: 0-0.1 min, 100% B; 0.1-0.5 min, 92% B; 0.5-15 min, 80% B; and then back to 100% B at 15.1 min to re-equilibrate the column for about 1 min. The injection volume was 0.5 μL, and the flow rate was 0.3 μL/min. ESI analysis was performed in positive resolution mode using the MSe continuum method. The instrument was calibrated with a sodium iodide standard solution (2 μg/μL) in 2-propanol/water (50:50). The voltages used were: capillary, 3 kV; sampling cone, 75 kV; and source offset, 40 kV. The source temperature was 100 °C, and the desolvation temperature was 350 °C. The gas flows were: cone, 50 L/h; desolvation, 800 L/h. The acquisition time was 15 min, the mass range was 50 to 1,100 Da, and the collision energy was ramped from 20 V to 30 V.
Leucine enkephalin (2 ng/μL) was used as a reference; a capillary voltage of 2 kV and a flow rate of 3.0 μL/min were employed. Statistical Analysis All data were processed and statistical analyses were conducted using the GraphPad Prism 9 software (GraphPad Software, San Diego, CA, www.graphpad.com ). All samples were analyzed separately using one-way ANOVA followed by Tukey’s multiple comparison test. The activation and deactivation kinetics of CF- Tc -nAChR-DCs were analyzed statistically using a t-test Mann Whitney comparing all the different CF- Tc -nAChR-DCs to crude membranes. All reagents were purchased from Sigma-Aldrich unless otherwise specified. The lipid-like cyclic detergent Cyclohexyl-1-Butylphosphocholine family 4-Cyclohexyl-1-Butylphosphocholine [Cyclofoscholine-4 (CF-4)], 6-Cyclohexyl-1-Hexylphosphocholine, [Cyclofoscholine-6 (CF-6)] and 7-Cyclohexyl-1-Heptylphosphocholine [Cyclofoscholine-7 (CF-7)] at purity 98%, were obtained from Anatrace (Maumee, OH, USA), Table . The nAChR was extracted from rich membranes from the electric organ of ( Tc ) (Aquatic Research Consultants, San Pedro, CA), according to the procedure of Asmar-Rovira (Asmar-Rovira et al. ) and with minor modification as described previously by Padilla (Padilla-Morales et al. , ) and Quesada (Quesada et al. ). To avoid possible seasonal changes in lipid content, all the experiments were performed with the same Tc electric organ. We incubated 200 g of Tc tissue with 200 ml of buffer H (100 mM NaCl, 10 mM Sodium Phosphate, 5 mM EDTA, 5 mM EGTA, 5 mM DTPA, 0.02% Sodium Azide, pH 7.4) mixed with 200 μl of phenyl methane sulfonyl fluoride (PMSF) and 0.187 g of Iodoacetamide, in a cool room. The solubilized nAChR was purified by means of affinity-column using the protocol of Padilla and Quesada (Cheng et al. ; Cherezov et al. ; Padilla-Morales et al. ; Padilla-Morales et al. ; Quesada et al. ). Briefly, the crude membranes were thawed and mixed with a 10% (w/v) detergent solution and DB-1X Buffer (100 mM NaCl, 10 mM MOPS, 0.1 mM EDTA, 0.02% NaN3) for a final concentration of detergent 1–2%. The DB-1X buffer was added first, followed by the detergent, and finally the crude membranes, which were added drop by drop. This solution was shaken slowly for 1 h. and then centrifuged for 1 h. at 40,000 rpm and 4 °C. The supernatant was extracted and used immediately for the affinity-column purification. Approximately 12 mL of previously prepared bromoacetylcholine affinity resin (Bio-Rad Laboratories, Hercules, CA) in a 1.5 × 15 cm Econocolumn (Bio-Rad Laboratories, Hercules, CA) was drained of storage buffer (40% Sucrose, 2 mM PMSF) and was conditioned with 50 mL of ultrapure water and 50 mL of 1.5 critical micelle concentration (CMC) detergent buffer before the supernatant prepared previously was added to the column. The column was washed with 50 mL of 1.5 CMC detergent buffer before the nAChR was eluted with 50 mL of elution buffer. The sample was then concentrated using centrifuge filter with a 100 K cutoff (Amicon Ultra Centrifugal Filters Ultracel 100 K, Millipore Co., Billerica, MA) and run through a P-10 desalting column (GE Healthcare, Uppsala, Sweden) to remove the carbamylcholine ligand. The sample was eluted with 5 mL of 1.5 CMC detergent buffer and finally concentrated to 250 μL. 
Protein concentration was determined using a BCA Protein Concentration Assay (Pierce Biotechnology, Rockford, IL) followed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), which was run to verify receptor purity. Samples were prepared by mixing 20 μl (1 μg/uL) of purified protein with 20 μL of Lamlli 2 × Buffer. Gel electrophoresis was performed by loading 20 μL of protein in Criterion TGX precast gels. The samples were run in duplicates for 2 h at 120 Volts. Gel was stained with 1X Coomassie Blue and left overnight. After 10–12 h, gel was washed with destaining solution (10% acetic acid, 40% water, and 50% methanol), for 3 h., followed by three washes with distilled water. FRAP experiments were performed according to the conditions and protocols described by Cherezov (Cherezov et al. ) with the minor modifications presented by Padilla-Morales (Padilla-Morales et al. ). Briefly, the affinity-purified nAChR-detergent complex was incubated with alpha Bungarotoxin (αBTX) conjugated with Alexa 488 in a 1:2.5 ratio, respectively, for 2 h. in the dark at 4.0 °C. The nAChR-detergent complex- αBTX was mixed with molten monoolein (1-oleoyl- rac -glycerol in a 2:3 volume ratio), using a lipid mixer (Hamilton Syringe) and mixed until clear (Cheng et al. ). The resulting mixture was placed on a 75 mm × 25 mm slide coated with pre-punched holes of 7 mm diameter and 50 μm thickness (3 M 9482PC), and the formed wells were then covered by pressing a coverslip against the slide and flattened with a rubber roll (Cherezov et al. ; Caffrey and Cherezov ). The experimental procedure was conducted in a controlled environment maintaining the humidity between 40 and 50% at any time. For any of the assays in which cholesterol was used to supplement monoolein, we used the commercially available Monoolein (and Cholesterol (H200) mixture (Anatrace), and the ratio was 1-oleoyl-rac-glycerol (10 parts): Cholesterol (1 part). The rest of the procedure for cholesterol assays was one as described above, (Tyler et al. ). Data collection was performed as described in Cherezov et al. and Padilla-Morales et al. . Briefly, all data were collected using a Zeiss LSM 510 confocal microscope. Fluorescence baseline was established by pre-bleach images in several areas: 75% of laser bleaching power followed by a sequence of 500 images scanning at 2.6% power with a 600 ms laser scanning delay. All images obtained were processed using the LSM 510 Meta ZEN software. Each sample slide was monitored for 30 days at intervals of 5 days. The data were integrated within a 14.0 µm diameter circular region of interest (ROI 1 ) and corrected and normalized by another 14.0 µm circular region of interest (ROI 2 ) positioned near the bleached ROI 1 . Fluorescence intensity was adjusted by dividing the integrated intensity value of ROI 1 in the bleached spot by the average integrated intensity of ROI 2 . As described by Cherezov’s research group (Cherezov et al. ), the fractional fluorescence recovery curves F(t) were calculated using the following equation. 1 [12pt]{minimal} $$F(t)=[({f}_{t}-{f}_{0})/{f}_{ }-{f}_{0}],$$ F t = ( f t - f 0 ) / f ∞ - f 0 , where F ( t ) is the corrected fluorescence intensity of the bleached spot, f 0 is the corrected/normalized fluorescence intensity of the bleached spot during the 600 ms after bleaching, and [12pt]{minimal} $${f}_{ }$$ f ∞ is the average of corrected fluorescence intensity in the five pre-bleached images. 
Fractional mobility values were obtained by calculating the average of the last 50 values of F(t). The fractional fluorescence recovery curves were fitted with a one-dimensional exponential Plot (Eq. ). 2 [12pt]{minimal} $$f(t)=_{i=1}^{n}{A}_{i }(1-{exp}^{(-Kt)})+B,$$ f t = ∑ i = 1 n A i 1 - exp - K t + B , where A i is the amplitude of each component, K is a constant related to the degree of bleaching, t is time, and B is a constant related to the mobile fraction of receptors (Axelrod et al. ). The fractional fluorescence recovery curves were fitted with a one-dimensional equation (one Phase Exponential Plot) provided by Graph Pad statistical analysis software. The Diffusion coefficient value was calculated using Eq. ; where R is the half width at half maximum of the Gaussian function [R = r(2ln2) 0.5 ] and K is a constant calculated using Eq. as described by Cherezov (Pucadyil and Chattopadhyay ). 3 [12pt]{minimal} $$D=[{R}^{2}/4K]$$ D = R 2 / 4 K We followed the original protocols by Andrés Morales et al. ; Ivorra et al. ; Andrés Morales et al. ) with the modifications described in Padilla and Quesada (Padilla-Morales et al. ; Quesada et al. ), Briefly, the Xenopus leavis oocytes used were in developmental stage V or VI. Each oocyte was injected with 50 nL of a preparation of 6 mg/mL of crude membrane or 3 mg/mL of 1.5-fold CMC nAChR-detergent complex, according to the CMC of the detergent used in the purification of the Tc nAChR. Subsequently, oocytes were incubated at 18 °C for 16–36 h in ND-96 solution containing 96 mM NaCl, 2 mM KCl, 1.8 mM CaCl 2 , 1 mM MgCl 2 , 5 mM HEPES, 2.5 mM Na-pyruvate supplemented with gentamicin (50 mg/mL), tetracycline (50 mg/mL), and theophyline (0.5 mM); and adjusted to a pH of 7.6 with NaOH. The lipid extraction and separation were conducted according to Quesada. 24 Briefly, the purified Tc nAChR-DCs were lyophilized overnight and subjected to lipid extractions using B&D methods in the presence of butylated hydroxytoluene (BHT; 2.9 × 10 − 5 M), followed by 3.5 h of reflux with MeOH/HCl or MeOH/1N KOH for complete phospholipid hydrolysis. Phospholipids species were resolute using commercially available high-performance thin layer chromatography (HPTLC) plates (20 × 20 cm) from Whatman, Fisher Scientific, MA, USA. The plate containing the samples was developed in chloroform: methanol: ammonium hydroxide (60:35:5). The separation of lipids previous to mass spectrometry analysis was accomplished using an aminopropyl extraction column (particle size 40 μm; Agilent Bond Elut NH 2 , Agilent Corporation, Palo Alto, CA, USA), as described by Quesada (Quesada et al. ). Briefly, the dry Tc nAChR-detergent complex was dissolved in CHCl 3 and uploaded to Bond Elut NH 2 , following the manufacturer’s indications. The column was conditioned by passing 6 mL of hexane, then 200 μL CHCl3 lipid extract was loaded followed by a sequential elution with four different eluents: 2 mL of CHCl 3 , 3 mL of diethyl ether with 2% acetic acid, 3 mL MeOH, and a final 3 mL of 0.05 M ammonium acetate in chloroform/methanol plus 2% (v/v) 28% aqueous ammonium solution. The four fractions collected contained the non-polar lipids and cholesterol, non-esterified fatty acids, non-acids phospholipids, and acidic phospholipids, respectively. The cholesterol extraction, isolation, and quantification were achieved according to Quesada (Quesada et al. ). 
Briefly, the cholesterol extracted from the Tc membrane and the nAChR-DCs was isolated from rhodamine 6G-stained silica gel G plates and quantified using the Wako cholesterol E-Kit (Wako Chemicals, Richmond, VA, USA). Phospholipids isolated from the Bond Elut NH2 cartridge, as mentioned above, were analyzed using UPLC ESI–MS/MS or MSe with an ACQUITY UPLC coupled to a XEVO G2S quadrupole-time-of-flight mass spectrometer (QToF) from Waters Corp., using a BEH HILIC (1.7 μm, 2.1 mm × 100 mm) column as described by Quesada (Quesada et al. ). Briefly, the sample was run using the following UPLC and QToF conditions: mobile phase A was 10 mM ammonium acetate in water at pH 3, adjusted using formic acid, and mobile phase B was acetonitrile. The gradient was as follows: 0–0.1 min, 100% B; 0.1–0.5 min, 92% B; 0.5–15 min, 80% B; and then back to 100% B at 15.1 min to re-equilibrate the column for about 1 min. The injection volume was 0.5 μL, and the flow rate was 0.3 μL/min. ESI analysis was performed in positive resolution mode using the MSe continuum method. The instrument was calibrated with a sodium iodide standard solution (2 μg/μL) in 2-propanol/water (50:50). The voltages used were: capillary 3 kV, sampling cone 75 kV, and source offset 40 kV. The source temperature was 100 °C, and the desolvation temperature was 350 °C. The gas flows were: cone 50 L/h and desolvation 800 L/h. The acquisition time was 15 min, the mass range was 50 to 1,100 Da, and the collision energy was ramped from 20 V to 30 V. Leucine enkephalin (2 ng/μL) was used as a reference; a capillary voltage of 2 kV and a flow rate of 3.0 μL/min were employed. All data were processed and statistical analyses were conducted using GraphPad Prism 9 software (GraphPad Software, San Diego, CA, www.graphpad.com ). All samples were analyzed separately using one-way ANOVA followed by Tukey's multiple comparison test. The activation and deactivation kinetics of the CF-Tc-nAChR-DCs were analyzed statistically using the Mann–Whitney test, comparing all the different CF-Tc-nAChR-DCs to crude membranes.

Effect of Cyclofoscholine Detergent on the Stability of the Tc-nAChR-Detergent Complex in LCP-FRAP

We used FRAP to assess the stability of the Tc-nAChR-DCs in the LCP matrix. The use of LCP was introduced by Landau and Rosenbusch and by Rummel (Landau and Rosenbusch ; Rummel et al. 1998). This approach allows the mobile fraction of the Tc-nAChR-DC in LCP to be measured by FRAP. Our group previously determined the fractional fluorescence recovery and mobile fraction of several lysophospholipid and cholesterol-analog detergents for the Tc nAChR-Alexa 488-DC (Padilla-Morales et al. , ). We also evaluated the effect of the addition of cholesterol to the LCP matrix on the stability of the CF-nAChR-DCs. In order to evaluate the stability of the CF-nAChR-DCs, the fractional fluorescence recovery, mobile fraction, and diffusion coefficient were determined over a period of 30 days at intervals of 5 days. Figure shows the fractional fluorescence recovery for CF-4, CF-6, and CF-7, panels (a), (b), and (c), respectively. Each CF-Tc-nAChR-DC presents differences in the value of the fractional fluorescence recovery measured at intervals of 5 days. The variability during the 30 days, measured at the plateau at 500 s, differs among the three detergents evaluated, with CF-6 showing the least variability (0.60–0.72), followed by CF-7 (0.30–0.50), and CF-4 the greatest (0.45–0.70).
However, CF-4 and CF-6 presented different fluorescence recovery values of 0.64 and 0.55, respectively, at day 30, whereas CF-7 presented a much lower mean value of 0.47 at day 30. Since nAChRs have been shown to have cholesterol-modulated activity and stability, we used a mixture of monoolein and cholesterol in a 10:1 ratio to perform FRAP assays. Figure, middle panels (c), (d), and (e), presents the fractional fluorescence recovery as a function of time for the three CF-Tc-nAChR-DCs in cholesterol-supplemented LCP. At first glance, a decrease in the variability of the average values of fractional fluorescence recovery measured at the plateau at 5-day intervals is observed for the three detergents studied. However, the average fractional fluorescence recovery value at 30 days was slightly lower for CF-4-Tc-nAChR-DC and CF-6-Tc-nAChR-DC when compared to the values obtained for pure monoolein. The exception was CF-7-Tc-nAChR-DC, which maintained a similar value on practically all the days tested. All CF-Tc-nAChR-DCs tested presented diffusion coefficient values in the ranges previously observed for this type of protein in LCP (Cherezov et al. ; Padilla-Morales et al. ). The diffusion coefficient values determined for CF-4-Tc-nAChR-DC, CF-6-Tc-nAChR-DC, and CF-7-Tc-nAChR-DC were 1.33 × 10⁻⁸ cm²/s, 1.13 × 10⁻⁸ cm²/s, and 1.25 × 10⁻⁸ cm²/s, respectively. Although these three detergents showed very similar diffusion coefficients, when the FRAP assay was carried out in LCP supplemented with cholesterol in a 10:1 ratio (monoolein:cholesterol), the CF-4-Tc-nAChR-DC and CF-6-Tc-nAChR-DC displayed significant increases in their diffusion coefficients compared to their non-cholesterol LCP counterparts, to 1.48 × 10⁻⁸ cm²/s and 1.70 × 10⁻⁸ cm²/s, respectively. The CF-7-Tc-nAChR-DC exhibited a modest increase in the diffusion coefficient (1.29 × 10⁻⁸ cm²/s). Based on our mobile fraction analysis presented in Fig., panel g, CF-7 with and without cholesterol exhibited a significant decrease over the 30-day period, with an average of 38% and 40% decrease in mobile fraction, respectively, compared to CF-6 with and without cholesterol, which had the highest average mobile fractions of 70% and 68%, respectively, and did not show significant changes between them. In contrast, CF-4 showed a linear increase in mobile fraction over the 30-day period, from 46 to 65% without cholesterol and from 45 to 50% with cholesterol. According to our results, the length of the carbon chain in each detergent significantly affected the mobile fraction in the LCP. We observed that detergents with longer carbon chains, such as CF-6 and CF-7, showed a lower mobile fraction compared to CF-4. This may be because detergents with longer carbon chains have a higher affinity for molecules such as cholesterol, which plays a crucial role in reducing molecular mobility in the LCP. The interaction between detergents and cholesterol may lead to an increase in the rigidity of the membrane, resulting in a decrease in molecular mobility in the LCP. In addition, the presence of cholesterol in the membrane can lead to the formation of ordered and disordered lipid domains, which can also influence the mobility of molecules in the LCP. It is important to note that changes in the mobile fraction in the LCP can be an indicator of changes in the organization and composition of the membrane.
Therefore, these results may have important implications for understanding cellular dynamics and molecular interactions in different types of cells and biological systems.

Phospholipid Molecular Species of the CF-Tc-nAChR-DCs

Previous studies in our laboratory determined the endogenous lipid composition of the native Tc electric organ and of different complexes of Tc-nAChR with lipid-like detergents, and a correlation was made with their activity measured by two-electrode voltage clamp (TEVC) (Quesada et al. ). We used the same approach here to evaluate the composition of phospholipid molecular species in the CF-Tc-nAChR-DCs and to compare it with the detergents previously studied, as a way to explain the stability and functionality of the Tc-nAChR-detergent complex. Table presents the different phospholipid molecular species detected in ESI positive mode for the three CF-Tc-nAChR-DCs mentioned above. Only zwitterionic molecular species, such as sphingomyelin and alkenyl phosphatidylcholine, were detected under the experimental conditions used. We did not detect measurable levels of any anionic molecular species, even in ESI negative mode. We observed the same situation in a previous study using the phospholipid-analog alkylphosphocholine (FC) and lysofoscholine (LFC) families of detergents (Quesada et al. ). For those two families of detergents, we also observed the same exclusion of negatively charged (acid-rich) phospholipids from the nAChR-DCs. There is no simple explanation for the lack of negatively charged lipids in the nAChR-detergent complexes; however, we hypothesize that these are not essential for the formation of stable mixed micelles containing appropriate levels of protein, lipid, and detergent during solubilization. In this regard, due to its smaller headgroup, PA might not allow for appropriate micellar curvature, and species with negatively charged headgroups could also be overwhelmed by the larger fraction of cationic species in Tc tissue. Nevertheless, their absence would appear to indicate that these acidic phospholipids are not necessary for the presence of a nAChR protein in the respective DC that is capable of specifically binding carbamylcholine in affinity chromatography and α-bungarotoxin. Along these lines, previous studies (Sunshine and McNamee ; Fong and McNamee ; Poveda et al. ) demonstrated that negatively charged (acid-rich) phospholipids can modulate nAChR ion-channel function. The lack of these (acid-rich) phospholipids in the nAChR-DCs could also contribute to the reduced functional responses recorded in oocytes. Cross-correlation of the lipid species detected for the three CF-Tc-nAChR-DCs (Table ) shows similarities in some molecular species. The CF-4-Tc-nAChR-DC has the highest number of retained lipid species, 13 in total, followed by the CF-7-Tc-nAChR-DC and the CF-6-Tc-nAChR-DC with 9 and 6 species, respectively. The three detergents maintain only four identical lipid species: the sphingomyelin SM (d18:1/24:1) at trace levels and the glycerophosphocholines PC (O16:1/18:0), PC (16:0/20:4), and PC (18:2/20:4), the latter having an abundance greater than 3%.

Effect of CF Family Detergents on the Phospholipid and Cholesterol to nAChR Ratios in the Detergent Complex

Due to the possible delipidation caused by the detergent solubilization process of the Tc-nAChR, the cholesterol/nAChR, phospholipid/nAChR, and phospholipid/cholesterol ratios were determined for the three CF-Tc-nAChR-DCs studied here.
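As a rough illustration of how such ratios can be derived, the sketch below converts assay readouts (cholesterol and phospholipid mass from the quantification assays, protein mass from the BCA assay) into mole ratios. All input numbers and the assumed average molar masses are hypothetical, chosen only to reproduce values of roughly the magnitude reported below for CF-4 (about 11 cholesterols and 60 phospholipids per receptor); they are not data from this study.

```python
# Hypothetical readouts for one nAChR-DC preparation (illustrative only):
chol_ug, pl_ug, protein_ug = 4.2, 45.0, 290.0  # micrograms in the same aliquot

MW_CHOL = 386.7        # g/mol, cholesterol
MW_PL = 760.0          # g/mol, assumed average glycerophospholipid
MW_NACHR = 290_000.0   # g/mol, Torpedo nAChR pentamer (~290 kDa, approximate)

chol_mol = chol_ug / MW_CHOL       # micromoles of cholesterol
pl_mol = pl_ug / MW_PL             # micromoles of phospholipid
nachr_mol = protein_ug / MW_NACHR  # micromoles of receptor

print(f"cholesterol/nAChR : {chol_mol / nachr_mol:.1f}")   # ~10.9
print(f"phospholipid/nAChR: {pl_mol / nachr_mol:.1f}")     # ~59
print(f"phospholipid/chol : {pl_mol / chol_mol:.1f}")      # ~5.5
```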
Apparently, the CF detergents studied have a structure that helps to maintain cholesterol associated with the nAChR, and this depends on the separation of the hexane ring from the head group in the CF family assayed (Baier et al. ; Maldonado-Hernández et al. ). The mean cholesterol/nAChR ratios measured were 10.8, 22.6, and 19.5 for CF-4, CF-6, and CF-7, respectively (Fig. a). However, the ability of this family to maintain the phospholipid composition, which is critical for nAChR functionality, varies with the length of the ring-terminated aliphatic chain. According to the analysis of the ratio of phospholipids per nAChR molecule shown in Fig. b, extending the aliphatic chain by two carbons from CF-4 to CF-6 affects the phospholipid to nAChR ratio by more than two-fold. The CF-4 detergent produces a phospholipid/nAChR ratio quite similar to those previously reported for the native membrane nAChR, approximately 60 phospholipids per nAChR (Schmidpeter et al. ). However, an increase in the aliphatic chain by an odd number of carbons that positions the ring in a unique spatial conformation, as is the case for CF-6 versus CF-7, has a substantial effect on the delipidation of the nAChR. The phospholipid/cholesterol ratios for the three CF detergents studied here lie in a range similar to that of the previous lipid-like detergents used in our lab, with CF-6 presenting a ratio three times greater than that obtained for CF-7. However, CF-4 remains the best of the three detergents, since it produces a ratio of approximately five phospholipid molecules per cholesterol molecule, which has been shown to satisfy the requirements for maintaining stability and functionality in the Tc-nAChR-DC (Barrantes ; Hamouda et al. , ; Baier et al. ). The purity of both crude membranes and the CF-Tc-nAChR-DCs was assessed qualitatively by SDS-PAGE. All four nAChR subunits (α, β, γ, and δ) were resolved and migrated as single bands. In addition to the usual co-solubilized protein rapsyn (43 kDa), a much lower level of ATPase (100 kDa) was also observed (Fig. d). A careful inspection of the SDS gels indicates that all of the CF-nAChR-DCs displayed the same degree of impurities (Fig. d).

Effect of CF on the Functionality of the Tc-nAChR Using the TEVC Technique

The TEVC experiments were done following our previously published protocol (Padilla-Morales et al. ). To evaluate the functional effects of the CF family of detergents, the CF-Tc-nAChR-DCs were injected into Xenopus laevis oocytes and compared to crude membrane extracts using a non-saturating concentration of acetylcholine (ACh) to activate the nAChR response, measured by TEVC. A 5-s application of 100 μM ACh in an oocyte injected with Tc-nAChR crude membrane resulted in a mean response amplitude of −247 nA (−247 ± 57 nA; n = 8) (see Fig. a). When the Tc crude membranes were solubilized with the CF detergent family, followed by affinity-column purification and injection into oocytes, the mean amplitude responses evoked by ACh were as follows: CF-4-Tc-nAChR-DC (−312 ± 69 nA, n = 5), CF-6-Tc-nAChR-DC (−37 ± 13 nA, n = 4), and CF-7-Tc-nAChR-DC (−170 ± 42 nA, n = 5) (see Fig. a). The activation and deactivation kinetics of the different CF-Tc-nAChR-DCs were compared to those of crude membrane Tc-nAChRs using the activation half-time and the decay time (90%–10%).
As shown in Fig. b, the activation half-time was 0.4 ± 0.1 s in crude Tc-nAChR membranes, 0.6 ± 0.1 s for CF-4-Tc-nAChR-DC, 0.8 ± 0.08 s for CF-6-Tc-nAChR-DC, and 4.1 ± 1.2 s for CF-7-Tc-nAChR-DC (see Fig. 3c). In order to examine the deactivation kinetics, we analyzed the macroscopic current decay times of the crude and Tc-nAChR-DC preparations. Decay times were 9.5 ± 1.0 s for crude Tc-nAChR membranes, 13.4 ± 2.0 s for CF-4-Tc-nAChR-DC, 4.5 ± 0.7 s for CF-6-Tc-nAChR-DC, and 12.85 ± 3.6 s for CF-7-Tc-nAChR-DC (see Fig. 3d).
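For clarity, the mean amplitudes quoted above can be expressed as percentages of the crude-membrane response; the short sketch below performs this normalization. It is a presentation aid only, using the values just reported.

```python
crude = -247.0  # nA, mean ACh-evoked current, crude-membrane-injected oocytes
dcs = {"CF-4": -312.0, "CF-6": -37.0, "CF-7": -170.0}  # nA, from the text above

for name, amp in dcs.items():
    print(f"{name}-Tc-nAChR-DC: {100.0 * amp / crude:.0f}% of crude response")
# CF-6 retains ~15% of the crude response (an ~85% reduction), and CF-7
# retains ~69%, close to the 68% figure cited in the Discussion.
```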
Studies aimed at obtaining crystallographic structures of membrane proteins must overcome a series of obstacles, particularly the challenge of obtaining the protein of interest as pure as possible and in high yields. Moreover, the detergent selected for the solubilization process needs to produce a stable protein and help to preserve the protein's functionality. Choosing a suitable detergent that meets all the physicochemical requirements to solubilize the membrane protein is one of the most challenging and critical tasks prior to the crystallization step. The detergent must intercalate into the membrane, extract the protein of interest, and in turn provide a host environment for the protein. In the native membrane, the membrane protein is stabilized by the ring of lipids that interact directly with the hydrophobic belt of the protein and by the lateral pressure provided by the lipids and other proteins that make up the native bilayer. This lateral pressure is compromised in detergent micelles and depends on the physicochemical properties of the detergent.
Therefore, the stability of the membrane protein in the detergent complex will depend on the level of preservation of the endogenous annular lipids, in other words, on achieving the least degree of delipidation. For more than two decades, our laboratory has taken on the task of finding the best conditions to produce stable and functional Tc-nAChR-DCs for crystallization trials. Previous work in our laboratory studied the purity and stability of commonly used detergents for the production of solubilized and affinity-purified Tc-nAChR-DCs as a prelude to crystallization. In addition, we measured the functionality using the planar lipid bilayer technique (Cherezov et al. ; Padilla-Morales et al. ). The experience gained over decades of work in our laboratory has shown that the characterization of nAChR ion-channel function using the TEVC technique in oocytes is more effective. In addition, the stability of the Tc-nAChR-αBTX Alexa 488-DC in LCP was assayed by FRAP for phospholipid- and cholesterol-analog detergents (Padilla-Morales et al. , ; Maldonado-Hernández et al. ). Our results showed that native lipid depletion occurred with all detergents within certain ranges, depending on the lipid-analog detergent's structure, triggering different degrees of stability and functionality. However, most of the lipid-like detergents maintain stability and support ion-channel function, such as 3-[(3-cholamidopropyl)-dimethylammonio]-1-propane sulfonate (CHAPS), n-dodecylphosphocholine (FC-12), n-tetradecylphosphocholine (FC-14), n-hexadecylphosphocholine (LFC-16), and 3α,7α,12α-trihydroxy-5β-cholan-24-oic acid (sodium cholate) (Asmar-Rovira et al. ; Padilla-Morales et al. , ). In contrast, non-lipid-analog detergents such as 6-cyclohexyl-1-hexyl-β-d-maltoside (Cymal-6), n-dodecyl-β-d-maltopyranoside (DDM), lauryldimethylamine-N-oxide (LDAO), n-octyl-β-d-glucopyranoside (OG), polyoxyethylene-(9)-dodecyl ether (Anapoe-C12E9), and N,N′-bis-(3-d-gluconamidopropyl) cholamide (BigCHAP) show decreased stability and a significant reduction or loss of ion-channel function (Asmar-Rovira et al. ). Overall, these results indicate that the nAChR can be stable and functional in lipid-analog detergents or in detergents that retain moderate amounts of residual native lipids, while the opposite is true of non-lipid-analog detergents. These data highlight the importance of a careful biophysical characterization of membrane protein-detergent complexes (MP-DCs) for future structural studies (Hamouda et al. , ; Maldonado-Hernández et al. ; Delgado-Vélez et al. ). In this study, we took on the task of assessing the capacity of three short-chain lipid-like detergents containing a six-carbon ring at the end of the hydrophobic tail. All the detergents of the CF family assayed produced a considerable amount of protein, in the milligram range, reproducibly under the same solubilization conditions. The CMCs for CF-4, CF-6, and CF-7 are 8.45, 2.68, and 0.62 mM, respectively (see Table ). Thus, CF-6 and CF-7 produce larger micelles with increased aggregation numbers compared to CF-4 (Anandan and Vrielink ). As shown in Fig. , the CF-4 detergent produced the best Tc-nAChR-DC behavior in terms of its LCP fractional fluorescence recovery at the end of the study period, while its maximum recovery is in the range of some non-cyclofoscholine detergents studied in our laboratory, e.g., FC-12, FC-14, and FC-16 (Padilla-Morales et al. ).
However, increasing the length of the aliphatic chain in those FC detergents from 12 to 16 carbons produced a substantial improvement in the fractional fluorescence recovery value. This does not fully hold for the CF detergent family studied here. When increasing from four to six carbons in length (CF-4 to CF-6), the fractional fluorescence recovery at 30 days decreased by approximately 14% for CF-6 and 24% for CF-7. This behavior can be explained in view of the physicochemical properties of the CF detergents and their capacity to interact with the cholesterol molecule. Based on an examination of the detergent and cholesterol structures, the CF molecules are likely capable of establishing a stabilizing interaction, by means of Van der Waals forces, between the aliphatic chain of the detergent and the A, B, and C rings of the cholesterol molecule. Our results showed that both CF-6 and CF-7 maintain approximately the same amount of cholesterol per nAChR in the Tc-nAChR-DC; however, CF-4 retained at least 50% less cholesterol per nAChR. Although the number of cholesterol molecules in the Tc-nAChR-DC does not differ significantly between CF-6 and CF-7, CF-7 apparently produces some type of exclusion of phospholipids compared to CF-4 and CF-6, with 2.8- and 6.1-fold decreases, respectively. This individual behavior translates into phospholipid/cholesterol ratios for CF-4 and CF-6 of approximately 5 and 7, which are similar to those reported for the lipid-like analog detergents mentioned above (see Fig. ). Studies carried out in the late 1980s, in which the effect of lipid composition on the functionality of the solubilized Tc-nAChR was determined, suggested that a ratio of at least 45 lipids/nAChR should be present to observe activity (Marsh et al. ; Hamouda et al. a). Furthermore, the amount of cholesterol present in model membranes that support nAChR functionality should be approximately 35 mol%, since this value is similar to that found in native membranes (Marsh et al. ). This implies that there must be at least three sterol molecules per receptor subunit, although this extrapolation assumes that there is only one population of lipids, because the lipids adjacent to the nAChR protein (annular lipids) and bulk lipids exchange rapidly. These previous studies did not present data about possible excess cholesterol and its effect on receptor functionality. Previous studies in our laboratory determined the functionality and lipid composition of different Tc-nAChR-DCs formed by phospholipid-like detergents; however, none of those detergents increased the cholesterol/nAChR ratio as much as the CF detergents, especially CF-6 and CF-7, which almost doubled it. Apparently, the excess of cholesterol in the CF-6-nAChR-DC and CF-7-nAChR-DC resulted in a decrease in the macroscopic response of the nAChR ion channel. Specifically, the response value for CF-7 was reduced by about one third, reaching 68% of the crude membrane response value, while the reduction for CF-6 was more pronounced, reaching an 85% reduction compared to the crude membrane response (Fig. ). These results are consistent with a previous study that demonstrated that a physiologically relevant increase in membrane cholesterol concentration produces a remarkable reduction in the macroscopic current responses of the Tc-nAChR as well as other neuronal nAChR subtypes (Baez-Pagan et al. ).
Likewise, the loss of phospholipids and the gain of cholesterol molecules in the CF-7-Tc-nAChR-DC may explain its behavior with respect to its mobility in LCP and in the LCP:cholesterol mixture. We hypothesize that the spatial orientation and transmembrane location of the cyclohexane ring in CF-7 could result in a more efficient interaction with the cholesterol C ring in the LCP:cholesterol mixture than for the other CF detergents studied. This could account for the modest 5% increase in the diffusion coefficient of CF-7-Tc-nAChR-DC, compared to CF-4-Tc-nAChR-DC and CF-6-Tc-nAChR-DC, which increased their diffusion coefficients in the LCP:cholesterol mixture by 10% and 33%, respectively. While the Xenopus oocyte expression system is ideal for these types of studies, it is certainly not without drawbacks. Indeed, when doing these experiments, we found that in most cases there is a loss of functional activity, except for LFC-14 and LFC-16, which were similar to the crude preparation (Padilla et al. ). In the present study, the nAChR-CF-4 complex gave a functional response that was similar to the crude preparation. It is important to mention that the injection of nAChR-DCs containing detergent at 1.5-fold its critical micellar concentration results in a reduction in the viability of the oocytes. This reduction in viability translates into the observed loss of functional nAChR ion-channel activity. We have previously noticed that some detergents (especially cholesterol-analog detergents) caused a reduction in viability within the first 10 h, making it hard to obtain TEVC responses, but for the detergents used in the present study we recorded activity between 16 and 36 h. This time frame gave us a window in which we could maintain oocyte viability, ensuring reliable responses. It has also been previously proposed that the reason for such loss in function could be the viscous nature of the membrane preparations, and that the fluid injected into the oocytes usually contains variable amounts of receptor-bearing membranes (Marsal et al. ). Furthermore, the lack of these (acid-rich) phospholipids in the nAChR-DCs could also contribute to the reduced functional responses recorded in oocytes. Interestingly, the CF-4-Tc-nAChR-DC, which retained the largest number of lipid species (13 in total), displays the best functionality compared to the CF-6-Tc-nAChR-DC and CF-7-Tc-nAChR-DC, which retained only 6 and 9, respectively (see Table ). Furthermore, functional assays of the solubilized Tc-nAChR-DC reconstituted in model membranes at different lipid-to-protein mole ratios showed a progressive decrease in receptor activity as the phospholipid/nAChR ratio decreased below 45. This preparation also showed irreversible inactivation below a ratio of 20, which is the case for the CF-7-Tc-nAChR-DC (Jones and McNamee ; Quesada et al. ; Schmidpeter et al. ). Figure b presents the phospholipid to nAChR ratios for the CF-4-Tc-nAChR-DC, CF-6-Tc-nAChR-DC, and CF-7-Tc-nAChR-DC, with values of 62, 135, and 22, respectively. By correlating these values with the macroscopic current response produced by nAChR-DCs injected into the membranes of Xenopus oocytes, we found that the CF-4-Tc-nAChR-DC, which presented a phospholipid/nAChR ratio (62) in the range previously determined to be functional, was the only one of the three detergents studied that produced an adequate normalized macroscopic current response (Fig. ).
Compared to the CF-4-Tc-nAChR-DC, the CF-6-Tc-nAChR-DC and CF-7-Tc-nAChR-DC produced some response, but only 15% and 49%, respectively, relative to the crude membrane response value. Furthermore, the activation kinetics, measured as the activation half-time, were significantly slower for the CF-7-Tc-nAChR-DC when compared to Tc-nAChRs from crude membranes. Interestingly, neither CF-4-Tc-nAChR-DC nor CF-6-Tc-nAChR-DC had a significant effect on activation kinetics; however, when we looked at deactivation kinetics using the decay time (90%–10%), we found that it was significantly faster for the CF-6-Tc-nAChR-DC. Consistent with the idea that the CF-4-Tc-nAChR-DC is able to maintain the normal function of the Tc-nAChR, none of the parameters measured using TEVC differed significantly from the values obtained from crude membrane preparations. Also, an increase in the phospholipid/nAChR ratio reduced the functional response of the nAChR, as shown in the case of the CF-6-Tc-nAChR-DC, which displayed a ratio of 135. Overall, the present study demonstrates that the selection of a detergent to solubilize a membrane protein is an empirical exercise, and one of the critical factors is the composition of the lipids that remain in the MP-DC after extraction. The structure and physicochemical properties of the detergent sculpt the composition of the lipids that remain in the MP-DC by selectively including and excluding certain critical lipid species. Most importantly, the lipid composition that remains in the MP-DC affects the purity, functionality, and stability of the MP. The present study reveals that for the Tc (muscle-type) nAChR-DC, certain lipid species, such as SM (d16:1/18:0), PC (18:2/14:1), PC (14:0/18:1), PC (16:0/18:1), PC (20:5/20:4), and PC (20:4/20:5), are crucial to retain ion-channel functionality. These results can be interpreted as a discrete lipid composition that supports ion-channel function; however, from a broader and more complex neurophysiological perspective, we can hypothesize that each neuronal nAChR subtype might have specific lipid-species requirements to maintain its diverse ion-channel properties and, ultimately, cholinergic neurotransmission in the central nervous system.
Application of 3D-printed pulmonary segment specimens in experimental teaching of sectional anatomy
Medical imaging is an indispensable tool in disease research and diagnosis, and sectional anatomy is the morphological basis for the observation and analysis of tomographic images . Experimental teaching is the cornerstone of sectional anatomy. It is necessary to establish the tomographic thinking of "from whole to cross-section and from cross-section to the whole". Students must be able to follow and observe consecutive tomographic specimens based on their mastery of stereological structures ; this may be the only way in which medical images can be interpreted correctly. The sectional anatomy of the lung is one of the emphases and challenges of the course; the complex arrangement of intrapulmonary structures such as bronchi, arteries, and veins demands strong spatial imagination from students. The identification and understanding of the anatomical tomography of the pulmonary hilum and segments are necessary for learning how to diagnose pulmonary diseases with medical imaging. Because the time for teaching a specific course is limited, increasing teaching effectiveness within the available time is a central goal for instructors in medical colleges and universities. Three-dimensional (3D) printed models are an excellent teaching resource in anatomy education and useful tools for studying normal, uncommon, and pathological anatomy . Such models can facilitate the visuospatial comprehension of sectional anatomy . Currently, 3D printed models of bronchial trees have been reported to facilitate novice learning of radiologic anatomy , but models of lung segments integrating bronchial trees, pulmonary arteries, and veins have not been reported in teaching applications. In this study, 3D printed specimens of lung segments were applied to the experimental teaching of pulmonary sectional anatomy, and the teaching effects were evaluated.
Research subjects

The subjects of the study were undergraduate students of medical imaging in second-year classes 5, 6, 7, and 8 at Wannan Medical College. Fifty-nine students in classes 7–8 composed the study group, and 60 students in classes 5–6 composed the control group. There were 21 male and 39 female students in the control group, with a mean age of 20.27 ± 0.87 years, and 19 male and 40 female students in the study group, with a mean age of 20.18 ± 0.92 years. There were no statistically significant differences in gender (P = 0.847) or age (P = 0.624) between the two groups.

Printing of the pulmonary segment specimens

A female digital thoracic dataset from Shandong Digital Human Technology Co., Inc., with a voxel size of 0.0384 mm × 0.0384 mm × 0.1 mm, was chosen . The segmentation data were obtained by manually delineating the boundaries and tissue structures in Photoshop software. An improved Marching Cubes algorithm was then used to reconstruct the 3D digital models . The segmentation data were repeatedly modified and validated, ensuring their accuracy. The data obtained were optimized in the software Maya and Magics using tools such as rewiring, thickness adjustment, and hole repair. Finally, the model was split according to the segmental bronchi (Fig. a, b and c). All segmentation and postprocessing were performed by an engineer from Shandong Digital Human Co., Inc., China. The digital models were verified for anatomic accuracy by anatomic experts and a senior cardiothoracic surgeon. The data were imported into a 3D printer (J401Pro, Zhuhai Sailner Technology Co., Ltd., China) to create the specimens. The printer can print full-color models in seven different materials for medical use (Fig. d). After printing, magnet slots were made in the split parts, and magnets were inserted for assembly.

3D printed model

We obtained a set of lung specimens printed in full color at a 1:1 scale based on high-definition digital human anatomy data. It was the same size and shape as real lungs. The lung parenchyma was printed with transparent material, and the bronchial tree was colored; the segmental bronchi on each side of the lung were distinguishable by distinct colors, while the pulmonary segments with the same name on both sides had the same color, for a total of 18 submodels. Magnets were embedded in each submodel to provide a detachable and integrated display of lung parts (Figs. b and b).

Teaching methods

This session of the sectional anatomy experimental course concerned lung cross-sections. The two groups of students were taught successively in the same laboratory, under the supervision of the same two professional instructors. The syllabus outlined identical instruction time and content for both groups. Initially, multimedia presentations were used to teach the lung segments and intrapulmonary ducts (Table ). In the control group, with the guidance of the sectional anatomy atlas and textbook, slicer knives were used to transect isolated lungs. This produced lung slices in accordance with the ten standard cross-sections of the lung found in the textbook. The lung segments and their key structures were then observed and studied by comparing them with computed tomography (CT) and magnetic resonance imaging (MRI) images. Students in the study group were additionally taught using 3D printed specimens of lung segments.
The overall structure of the lung was observed, and the position and shape of each lung segment were examined from different angles, including the rib surface, mediastinal surface, and diaphragm surface (Fig. ). These lung specimens were divided into eighteen segments, and the morphology of each lung segment, including the bronchi, arteries, and veins, was studied (Fig. ). With guidance from the 3D printed specimens and the sectional anatomy atlas, students used the slicer knives to create transverse sections of the lungs to identify and understand key structures in the sections (Fig. ). They also compared their observations with medical images. Overall, both groups of students received the same instruction time and content, but the study group used 3D printed specimens in addition to the traditional methods used by the control group.

Teaching effect evaluation

Both groups of students were given a preclass test before the experimental session and a postclass test and questionnaire survey after the session. The preclass and postclass assessments had the same number of questions and the same difficulty of knowledge points. There were ten fill-in-the-blank questions, covering lung segment location; lung segment bronchi, arteries, and veins; lung segments on cross-section; and identification of key structures in CT images. The contents of the subjective questionnaire survey were designed as shown in Table , including understanding of the morphology and location, the ducts, the cross-sections, and the CT tomography of lung segments, the spatial thinking of sectional anatomy, and overall satisfaction with the teaching. After the course, the course grade, including the final exam scores, was collected on a percentage scale and used to assess the effectiveness of the instruction.

Statistical processing

SPSS 18.0 software was used for data analysis. A t-test was used for comparisons between the two groups. Count data were expressed as numbers or percentages, and the chi-squared (χ²) test was used for intergroup comparisons.
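The study itself used SPSS; purely as an illustration of the same two comparisons (an independent-samples t-test on scores and a chi-squared test on a 2 × 2 excellence-rate table), the following Python sketch shows how they could be run with SciPy. All numbers here are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
study = rng.normal(82, 6, 59)    # placeholder course grades, study group
control = rng.normal(78, 6, 60)  # placeholder course grades, control group
t, p = stats.ttest_ind(study, control)
print(f"t = {t:.2f}, P = {p:.3f}")

# 2x2 table of [excellent, not excellent] counts per group (hypothetical)
table = np.array([[22, 37],
                  [11, 49]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, df = {dof}, P = {p:.3f}")
```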
Results of pre- and postclass tests

There was no statistically significant difference in the preclass test scores of the two groups (P = 0.261). The postclass test scores in both groups were higher than the preclass test scores, and the score increase in the study group was significantly greater than that in the control group (all P < 0.001; Table ).

Evaluation of the teaching effect

All students in this study completed the subjective questionnaire survey. As shown in Table , the comparative analysis showed no difference between the groups in terms of understanding the morphology and location of the lung segments (P = 0.099). However, regarding understanding the ducts within lung segments, identification of lung segments on cross-section and CT tomography, developing spatial thinking for sectional anatomy, and overall satisfaction with the course, the study group was significantly more satisfied than the control group (all P < 0.001; Table ).

The final rating of the course

The scores and the excellence rate of the students in the study group were significantly higher than those of the control group, and the differences were statistically significant (both P < 0.05; Table ).
3D reconstruction based on cross-sectional anatomy began in the mid-1990s with the Visible Human Project (VHP), a resource for teaching human anatomy . The Virtual Human Dissector software, which was developed from the VHP, can help students interpret cross-sectional images and understand the relationships between anatomical structures . Browsing software based on Visible Korean data was used to teach sectional anatomy and proved a valuable tool for teaching medical students . Based on China's Digitalized Visible Human (CDVH) data, an anatomy assistive teaching system was developed that plays a positive role in guaranteeing the effect and quality of anatomy teaching . The digital thoracic dataset in our study was obtained from the CDVH high-resolution datasets, and the reconstructed digital model was rich in detail. There are three well-accepted methods of 3D representation: 3D printing, virtual reality (VR) glasses, and 3D displays . VR glasses and 3D displays require terminal equipment and integrate poorly with hands-on experimental work, whereas 3D-printed specimens are convenient for demonstrating anatomical intricacies and spatial relationships during practical sessions. 3D-printed lung and bronchial tree models could also be applied as simulators for surgical training and preoperative planning . In recent years, the application of high-precision 3D models in medical teaching has enabled students to visually observe fine structures and local positional relationships of the human body, has motivated learning, has encouraged students to shift from passive memorization to active thinking, and has improved learning outcomes and satisfaction . Many knowledge points described in sectional anatomy theory classes need to be further observed in experimental classes. In traditional experimental teaching, students learn through a two-dimensional atlas and slices of specimens. However, the tomographic structure is closely related to the 3D spatial relationships of anatomical structures. In the classroom, it is crucial to find ways to pique the curiosity of learners and help them visualize abstract theoretical concepts. In this study, we made adequate preparations for the control group, but the teaching was not as effective as that for the study group in the same period. It was difficult to stimulate the interest of students in the control group during class. This demonstrates that traditional teaching methods are insufficient. The 3D printed specimen of the lung segments used in the study group is a detachable combination model: the bronchial tree, pulmonary arteries, and pulmonary veins are embedded in transparent material that forms the lung profile, with the pulmonary arteries in blue and the pulmonary veins in red, and each segmental bronchus represented by a differently colored material. The 3D-printed specimens were colorful and eye-catching. The students showed great interest in them, which created an active class climate. The results showed that the 3D printed specimens of lung segments could effectively aid students in comprehending the ducts within lung segments, as well as the sectional anatomy and CT tomography. The model can facilitate the development of tomographic thinking skills for the lung segments, enhance teaching satisfaction, and improve the excellence rate of course grades. However, there are shortcomings.
Our college has 240 third-year students majoring in medical imaging, and the high cost of printing materials makes it difficult to provide enough specimens for all students at the same time. We will seek funding support, and we expect the cost of 3D printing materials to decrease in the future so that experimental needs can be met. Soft materials cannot currently be used to print multicolored intrapulmonary ducts, so we had to use rigid materials to create models for observation. We hope that lung segment specimens can eventually be printed in soft materials that can be sectioned to supplant cadaveric specimens, which would provide an even better experimental teaching effect.
The application of high-precision multicolor 3D printed specimens of lung segments in experimental teaching of sectional anatomy can improve teaching effectiveness and is worth adopting and promoting in sectional anatomy courses.
|
What influences on their professional development do general practice trainees report from their hospital placements? A qualitative study
|
c047403a-38c0-4dc2-ad52-096794cc6eac
|
10158549
|
Family Medicine[mh]
|
The clinical learning environment has been described as the foundation of postgraduate medical education , with the quality of the training environment correlating with the later quality of care provided by graduates . The challenges of providing such training in hospitals, in addition to the primacy of patient need and service provision, are well described . Uniquely for General Practice (GP) trainees, some of this training is in an environment that is not their final workplace, i.e. the hospital, which furthermore may have little understanding of the future professional life of the GP trainee . However, the hospital environment has been reported as providing a high volume and variety of morbidity to assist GP trainees in learning, and as showing future GPs what the hospital can provide in future care collaboration . This model of GP training is standard across Europe . The rise of competency-based medical education emphasises the workplace as a learning environment . Earlier publications on the hospital training component for GP trainees have focused on what GP trainees could learn from individual hospital placements . While the hospital provides experience in the management of acute illness, technical practice and diagnostic procedures , tolerating uncertainty, awareness of psychosocial factors and patient-centeredness are learnt well in the GP learning environment . In addition, professional identity formation is now viewed as an essential aspect of specialty training , which may be more challenging for GP trainees in the hospital environment, where there is occasional denigration of GP or undermining of GP trainees by some hospital specialists . On the other hand, GP trainees have also reported good peer support during hospital training and rated hospital paediatrics and emergency medicine as useful . This international qualitative project seeks the views of GP trainees on how their hospital experience contributes to their professional development as a GP.
A multi-country qualitative study, utilising semi-structured interviews, was undertaken. The research group consisted of GP trainees and supervisors from Belgium, Ireland, Lithuania, and Slovenia. Ethical approval was granted (or waived) by the appropriate body in each of the four countries: Belgium, Antwerp University Hospital (20/46 606); Ireland, The Irish College of General Practitioners (ICGP_REC_2020_T15); Lithuania, not required; Slovenia, Republic of Slovenia Medical Ethics Committee (0120-381/2020/11). Developing the topic guide Following a literature search, nominal group technique was conducted with international educators in a workshop delivered at the WONCA Europe Conference (Berlin 2020) . Results were used to develop the topic guide (Supplementary Appendix 1) in English, then translated into the Slovenian, Dutch and Lithuanian languages. Recruitment Study participants were selected by purposeful sampling to seek a broad range of trainees of different age, gender, prior experience, and country of primary medical degree. Participation was invited through young doctors’ associations, national GP trainee databases, GP trainee social media groups, National Trainee Conferences, and Day Release teaching sites. GP trainees who had less than three months of hospital experience were excluded. Data collection Nine researchers (GP trainees from Belgium, Ireland, Lithuania, and Slovenia; 1 male and 8 female) conducted the interviews, which were face-to-face or via Zoom®. The interviews took place between January 2021 and May 2021 and were conducted in the language of GP training in that country. The interviews were recorded and transcribed (by hand or using Otter® software). The transcripts were anonymised, stored according to the European General Data Protection Regulation (GDPR) and imported into NVivo® software for analysis. Interviews continued until no further new information was forthcoming. Data analysis Thematic analysis, following a six-step process , was employed to identify themes and patterned meanings. Data familiarisation with the transcripts and line-by-line open coding of each transcript were conducted by two researchers in each country, supported by NVivo® (version 12). Initial meetings in each country discussed and refined the codes. Each country’s codebook was translated into English. Meetings with the researchers from all the countries condensed the codes and identified key categories and themes. Findings were verified using reflective conversations, comparing and contrasting the codebooks, and noting and revising the categories in the light of the research question over several meetings, in line with previously published analytic methods . Reflexivity statement Most researchers are career GPs (range of experience 1-31 years). Some researchers had previously embarked on a career as a hospital specialist but had changed careers. One of the researchers is on a hospital medicine career path. We remained aware that as a research group we may have had a vested interest in the promotion of GP as a career and the diminution of hospital medicine, and despite our best efforts to the contrary, our interpretation may be biased.
A total of 43 GP trainees participated: Belgium 18, Ireland 9, Lithuania 14, and Slovenia 6. Participants were spread across different years of training, and the gender split was female:male 1.5:1 ( , Participant Demographics). The average interview duration overall was 29 min: Belgium 30 min, Ireland 37, Lithuania 45, and Slovenia 13. In coding, an overall average inter-rater reliability of 92.5% was achieved: Belgium 90%, Ireland 94%, Lithuania 94%, and Slovenia 86%. Our analysis revealed four themes: 1) supervision, 2) teaching, 3) tension between service delivery and learning, and 4) differing secondary care/primary care paradigms. Illustrative quotes, referred to in the theme discussion, are in Quotations . Supervision The supervision experience was found to vary between rotations, a consistent finding across all countries . Some supervisors seemed uninterested . A sense of responsibility to provide teaching was lacking . Some hospital consultants appeared reluctant to consider what the future professional life of the GP trainee might be like . Other hospital doctors, such as registrars, could fill this supervisory role effectively . Approachability was seen as one of the most important attributes of the supervisor . It was present when the trainee felt that questions on clinical matters were welcomed , when the trainee felt safe from ridicule and felt valued as a GP trainee . Availability of supervision was highlighted as a significant issue in the hospital environment, with concerns for patient safety and trainee well-being as well as training quality. In Ireland and Lithuania, night shifts were noted to have dramatically reduced staffing, with a resulting impact on clinical confidence compared with the daytime . GP trainees described many positive experiences. They reported being allowed to push themselves to make clinical decisions with backup, and opportunities to acquire experience in best clinical practice . They described a rich learning environment among peers and being stimulated to read up on clinical presentations [ and ], useful to them as future GPs. Teaching One-on-one teaching was particularly valued, especially teaching tailored towards the future career as a GP, e.g. clear guidelines on when and how the trainee, as a future GP, should refer a patient to the hospital . While each national GP training body or institution had a formal curriculum, hospital supervisors rarely referenced it, an experience noted across all countries . Sometimes the hospital rotations did not provide any opportunities for learning which matched the curriculum [ and ]. Insufficient teaching was lamented in all countries . Regular assessment was considered lacking in Slovenia and Lithuania. In Ireland and Belgium, the GP trainees leave the hospital clinical environment once weekly for training from GP educators; this was felt to make the hospital experience more relevant . In Lithuania, seminars are delivered by GP educators on site in the hospitals during the hospital rotations. Slovenia has no dedicated GP training during the hospital rotations. Tension between service delivery and learning Administrative work, such as discharge letters, was considered by GP trainees to be less valid to their training . A unique finding in our study is that GP trainees felt they often shouldered a disproportionate amount of such service work to release the hospital specialist trainees on their team for clinical work . GP trainees felt the sheer volume of work to be a hindrance to learning in all countries .
Conversely, on occasion, there was an insufficient number of patients and too many hospital doctors seeking experience . The range of expertise in some specialty wards could be narrow, limiting learning opportunities, e.g. eating disorder or cataract surgery wards . Excessive on-call duties also hindered learning, both in time, by missing educationally richer day shifts, and in fatigue levels . Interviewees demonstrated excellent insight into their training needs and the likely demands of their future role, stating that shorter specialised rotations would create an opportunity for other, more relevant experience . General rotations and those with larger volumes of outpatient experience were mostly highly valued, while some rotations were thought to be of little value to a GP trainee at all . The ability to tailor rotations to learning needs, as in Slovenia, was limited in other countries by clashes with the logistics of service provision . Differing secondary care/primary care paradigms GP trainees noted that approaches to patient care differed in the hospital environment compared with GP. Hospital-based care focuses on completing multiple investigations quickly, in contrast to GP, where these investigations can proceed more slowly, using time as a diagnostic tool . GP training is challenging due to the breadth of what needs to be learnt; this contrasts with the depth of knowledge required for specialist care in hospitals. Some GP trainees felt that their hospital experience immersed them in more detail than they needed to know for a future career in GP . GP trainees valued learning how the hospital system works, providing insight into the patient’s journey from presentation at the emergency department through to the outpatient clinics, in addition to the clinical opportunity of seeing the course of an illness . Trainees also valued learning how to work as a team and building a future professional network . GP placement early in GP training was beneficial, allowing the GP trainee to better self-direct their training towards their future role . Some spoke of an awareness of being ambassadors of General Practice while in their hospital clinical placement . Belgian GP trainees, placed in GP rotations before commencing the hospital part of their training, demonstrated a strong sense of GP identity during hospital rotations, presenting their view of what a GP approach to clinical care would be to hospital colleagues [ and ]. An Irish interviewee specifically participated in the study to express how he felt a GP placement prior to his hospital rotations would have enriched his hospital experience on several levels : by understanding the limitations of services available to a GP, by understanding better what a patient might need from a GP action, e.g. a referral to A&E, by giving greater insight into what was relevant for him to learn during his hospital experience, and finally by providing an opportunity to educate his hospital peers on the context of GP referrals. This view was supported by a Belgian interviewee . Unfortunately, there were negative comments about how some hospitals perceived the contribution of GP trainees. In Lithuania, GP trainees were singled out as not belonging in the hospital, e.g. by derogatory comments from consultants or other hospital team members . This affected the trainees’ sense of being team members and represents a missed opportunity for the positive relationships created between career hospital doctors and career GPs, as noted in Belgium.
Main findings This qualitative study gives insights into the views of GP trainees from different European countries on how their hospital experience contributes to their professional development as a GP. Valued It was clear that GP trainees valued their hospital-based training rotations despite the conditions experienced. This was most noted in Ireland and Belgium. It counters a previous argument by Goldie for situating UK GP training entirely in General Practice. Identity dissonance Cruess et al. (2018) recommend adopting the ‘communities of practice’ theory as the overarching educational theory in medical education . Each hospital department where a trainee is placed is a ‘community of practice’, and our research shows that these were not always the ideal training environment for GP trainees. Support of doctors in training should consist of an inclusive welcome to the community, access to activities appropriate to the level of the learner, instruction, role modelling and mentoring, and charting progress through assessment and feedback . This is not consistently present during hospital rotations for GP trainees. Unique learning opportunities Some of the learning on hospital rotations, e.g. current best practice in a specialty, or the full range of presentations which can occur, could not have been learnt in GP. The hospital rotations supported the development of clinical confidence, learning how to work in teams, and learning what happens to a patient admitted to hospital. The hospital experience also assists in forming professional connections for those who will practise in the future as a GP in the locality. Context An important finding in this study is the need to situate the learning from the hospital experience in the context of general practice. Contact with GP educational supervisors, either through off-site protected half-day release or through GP rotations early in training, assisted identity formation as a GP and helped trainees use learning opportunities better. Belgian trainees described educating their hospital colleagues on primary care approaches. Cross-education between primary and secondary care by hospital-based GP trainees (with experience in General Practice) could be a valuable opportunity for both environments to deepen their understanding of each other. Supervision Quality of supervision is the pivotal aspect affecting the value of the hospital rotation. In keeping with AMEE guidelines , the authors recommend that hospital supervisors be aware of the requirements of the training body, and that supervision be structured with regular timetabled meetings . Based on the GP trainees’ comments, there is room for improvement in the quality of supervision for GP trainees on hospital placements across all countries. Strengths and limitations Strengths of the study include the spread of data collection across four European countries with a range of investment of GP placement time within GP training, from countries with a high proportion (Belgium, Ireland) to countries with lower proportions (Slovenia, Lithuania). Limitations of this study include that the interviews were conducted in four different languages, and some distortion of meaning may have occurred in translation. Individual countries’ GP programmes can be limited by the availability of training positions, which gives significant heterogeneity in resulting experiences. Another limitation is that the subjects interviewed were all trainees.
Widening participation to recently qualified GPs now working in General Practice might have uncovered more recognition of the differences between primary and secondary care. A further limitation is that the COVID-19 pandemic may have affected the responses in our data, as some of the more usual formal teaching was lost and so may be under-reported.
This study shows that GP trainees valued their hospital experience, especially where the approachability and availability of hospital teachers improved the quality of supervision. It uniquely shows that GP trainees face additional challenges in the hospital environment. These include an identity dissonance as a GP trainee in the hospital environment, shouldering a greater administrative service workload than their hospital specialty peers, and, on occasion, being excluded from the community that should support the learner.
|
Thromboprophylaxis for Coagulopathy Related to COVID-19 in Pediatrics: A Narrative Review
|
a62e2f55-8201-4b6b-a621-5bf012a95b27
|
10158690
|
Pediatrics[mh]
|
Children may develop a special complication of Coronavirus Disease 2019 (COVID-19), known as multisystem inflammatory syndrome in children (MIS-C), which can cause vascular damage alongside multiple organ involvement. One of the consequences of MIS-C is thrombosis, and the development of thrombotic events has been associated with worse outcomes . In the current review, the interplay between MIS-C and thrombosis and the suggested prophylactic anticoagulants are discussed. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-related COVID-19 is one of the most life-threatening epidemics in the world since the ‘Spanish flu’ pandemic of 1918, with devastating consequences for both human health and socioeconomic welfare. Since the widespread occurrence of COVID-19 in 2019, over 700 million people have been infected and over 6.8 million people have died during the pandemic, while many countries have also experienced economic losses . The results of a myriad of clinical observations and trials have been reported, all striving to better inform, manage and treat the disease. Evidence shows that COVID-19 is not merely an isolated respiratory infection; it can cause a multisystem inflammatory syndrome and complications that mandate a multi-disciplinary approach. A SARS-CoV-2-related inflammatory syndrome, MIS-C, may develop in children, usually weeks after acute infection. MIS-C is likely to present with arrhythmias, coronary artery aneurysms, myocarditis, and sudden cardiogenic shock, all of which tend to evolve rapidly . Despite the unknown cause of MIS-C, scientists have suggested that a dysfunctional immune response results in cytokine release and organ damage . In this respect, MIS-C has been compared with Kawasaki disease, cytokine release syndrome, and other autoimmune disorders, whose features it can mimic. MIS-C can cause changes in blood parameters. A study has shown that increases in C-reactive protein (CRP), ferritin, D-dimer, white blood cell (WBC) count, and rotational thromboelastometry (ROTEM) parameters in MIS-C patients are connected to a hypercoagulable state caused by endothelial damage in the setting of hyperinflammation, leading to coagulation abnormalities and both microvascular and macrovascular thrombosis in these patients . MIS-C carries a thrombosis risk of approximately 3.5% (see Table ). As an integral component of SARS-CoV-2-related disease pathogenesis, coagulopathy is a predictor of poor prognosis . Deep venous thrombosis of the limbs and other sites, microvascular pulmonary thrombosis, and macrovascular pulmonary artery thrombosis/embolism can occur in patients with coronavirus infections . The pathophysiology of COVID-19-related thrombotic abnormalities can be described using Virchow's triad: endothelial damage, stasis of blood flow, and coagulopathy . A study conducted in pediatric patients revealed that the rates of symptomatic venous thromboembolism (VTE) were 7% and 1.3% among patients 13–21 years old and 5–13 years old, respectively . In another study, conducted in adult COVID-19 patients with a median age of 65 years (range: 36–80 years), the incidence rates of pulmonary embolism (PE) and deep vein thrombosis (DVT) were 16.5% and 14.8%, respectively . The incidence of thrombosis is thus higher in adults than in pediatric patients.
Based on a report from the International Society on Thrombosis and Haemostasis (ISTH), around 71.4% of adult COVID-19 non-survivors, with a mean age of 70 years, exhibited disseminated intravascular coagulation (DIC) . SARS-CoV-2 infects cells via the angiotensin-converting enzyme 2 (ACE2) receptor. Numerous human cell types express this receptor, including endothelial cells (EC), podocytes, cardiac myocytes, and alveolar cells. SARS-CoV-2 therefore directly affects the vascular system, comprising veins and arteries. Damage to EC leads to release of tissue factor, activating the coagulation cascade, initiating the complement system, and triggering an inflammatory response known as a ‘cytokine storm’ . Interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α) are drastically elevated in patients with severe COVID-19. On contact with EC, both cytokines exert prothrombotic effects . IL-6, in particular, amplifies inflammatory responses more than other factors and can promote thrombotic events . Moreover, TNF-α provokes the complement system, in turn stimulating the coagulation system . In patients with severe COVID-19-related systemic inflammation, several markers reflect the imbalance between coagulation and fibrinolysis, with features similar to cytokine storm or macrophage activation syndrome . An intravascular coagulopathy with significant hemorrhagic infarction, vessel wall edema, and capillary thrombosis arises from systemic EC damage at the level of the respiratory system, which serves as the entry point for SARS-CoV-2. Further systemic thromboses and emboli, particularly sizable arterial thromboses, can develop once the pathogen damages EC in other areas . The pathophysiology of thrombosis development is shown in Fig. . As a result, dysfunction of the antithrombotic and anti-inflammatory system is likely the predominant cause of coagulopathy in COVID-19, alongside other complications. Zhang et al. reported that levels of anticoagulant proteins such as antithrombin, protein S, and protein C are decreased in adult COVID-19 patients . The severe inflammatory state known as DIC, which is mainly seen in sepsis, causes activated partial thromboplastin time (aPTT), prothrombin time (PT), and D-dimer levels to rise, as reported in pediatric patients with sepsis and COVID-19 . Assessing the severity of COVID-19 infection is a crucial first step in determining the risk of thrombosis. Among adult patients, worsening condition and higher oxygen needs are related to worse outcomes, including thrombosis . About 95% of children with acute infection show milder clinical manifestations than adults, and many recover without problems. Nevertheless, as in adults, respiratory symptoms and increased oxygen requirements are associated with a higher risk of VTE in children . According to a study by Whitworth et al., the occurrence of venous thromboembolism in patients with MIS-C was higher than in those with COVID-19: in the MIS-C group, the incidence rate was 6.5%, and it was associated with increased fibrinogen and D-dimer levels and the presence of thromboembolic risk factors such as cancer, a central venous catheter, or age older than 12 years .
Although thromboprophylaxis can reduce thromboembolic events in non-critically ill children, the rate of thromboembolic events remained high in critically ill children, defined as pediatric patients hospitalized with MIS-C or severe COVID-19, despite receipt of thromboprophylaxis (more than two-thirds of thromboembolisms occurred in patients who had received it). Thrombotic events were, however, less frequent in patients under 12 years old . The American Society of Hematology (ASH), the Society of Critical Care Medicine (SCCM), and the American Academy of Pediatrics (AAP) have not yet released guidelines or recommendations for thromboprophylaxis in hospitalized children with MIS-C. However, based on expert opinion, the ISTH suggests thromboprophylaxis for children hospitalized with COVID-19, including those with MIS-C, who have significantly increased D-dimer levels (≥5 times the upper limit of normal) or coexisting clinical risk factors (presence of a central venous catheter, obesity, flare of an underlying inflammatory disease, previous history of VTE, first-degree family history of VTE before age 40 years or of unprovoked VTE, known thrombophilia, use of an estrogen-containing oral contraceptive pill, need for mechanical ventilation, etc.) . Because of the limited data available for pediatrics, adopting thromboprophylaxis strategies in children with COVID-19 is challenging. Many reports indicate that coagulation parameters such as D-dimer levels are age-dependent during infancy and childhood, underscoring the necessity of age-specific reference ranges for the accurate evaluation and management of thrombotic events in pediatric populations and making these findings less reliable for diagnosis . The abnormalities identified on extensive coagulation testing, such as elevated levels of D-dimer and fibrin degradation products, prolonged aPTT and PT, lower antithrombin levels, higher von Willebrand factor and factor VIII activity, and thrombocytopenia, have been shown to predict poor prognosis rather than to support diagnosis or risk evaluation in adults and children with COVID-19; D-dimer levels may therefore be a weak guide for therapeutic decision-making regarding anticoagulant prophylaxis . Increased levels of inflammatory markers (e.g., D-dimer), lack of thromboprophylaxis, and the presence of risk factors for clinically relevant bleeding caused by epitheliopathy, platelet dysfunction, and consumptive coagulopathy in COVID-19 patients make it imperative to identify favorable thromboprophylaxis strategies. Pediatric physicians face particular challenges in incorporating the available evidence into thromboprophylaxis strategies for children with COVID-19, because the incidence of severe COVID-19 in children is lower than in adults and, for this reason, pediatric thromboprophylaxis in this situation has been less investigated. Enoxaparin was approved in 1993 . It is a low molecular weight heparin (LMWH) with a mean molecular weight of 4000 to 5000 Daltons. The indications for enoxaparin are thrombotic events, such as treatment of VTE or PE, acute coronary syndromes, and DVT prophylaxis in various circumstances; the medication prevents the formation of blood clots by binding to antithrombin III, forming a complex that irreversibly inactivates factor Xa .
An investigation showed that enoxaparin injected twice daily, with an initial dose of 0.5 mg/kg per dose (maximum 60 mg per dose) in patients under 18 years old (median age 12.1 years [range 1.3–17.5]), adjusted to achieve a target anti-Xa activity of 0.20–0.49 IU/mL, was safe and effective in hospitalized children with COVID-19, without an increased risk of life-threatening adverse effects or bleeding . Anticoagulant thromboprophylaxis is not routinely prescribed in children hospitalized with asymptomatic SARS-CoV-2 infection in the absence of multiple clinical risk factors for hospital-related VTE. In this clinical trial, the median dose needed to achieve goal anti-Xa levels was 0.5 mg/kg/dose, and it did not differ significantly across age ranges (<12 years old or >12 years old). The median dose was, however, significantly higher in patients with MIS-C. Two studies suggest that thromboprophylaxis may not always be effective, given the incidence of thrombotic complications despite its use. One of these studies revealed that about 70% of pediatric patients experienced thrombotic complications in spite of receiving prophylactic anticoagulation . Similarly, in another study, VTE occurred in more than 30% of children . This study was conducted in children aged 2 months to 21 years, and only patients hospitalized with symptomatic COVID-19 were included. Six of the ten patients who received LMWH with the dose adjusted to anti-Xa levels of 0.2–0.4 IU/mL were not diagnosed with VTE, while three of the four patients on a fixed dose of LMWH, either 40 mg daily or a weight-based fixed dose, were diagnosed with VTE . The outcomes of the study showed that, in spite of thromboprophylaxis, the occurrence rate of thrombotic events might be high . These results also suggest that therapeutic dosing of LMWH might provide superior thromboprophylaxis; however, high-quality studies are needed to confirm both the safety and the efficacy of therapeutic over prophylactic dosing in children. In the study conducted by Del Borrello et al., it was shown that although widespread anticoagulant prophylaxis is not recommended for hospitalized children with COVID-19, 10 U/kg/h of unfractionated heparin (UFH) or 100 U/kg/24 h (equivalent to 1 mg/kg/24 h) of enoxaparin may be prescribed for carefully selected patients with multiple risk factors for coagulopathy (the authors used an institutional risk assessment model that included cardiovascular comorbidities, the patient’s mobility, and the risk of bleeding), regardless of D-dimer levels . Similarly, another study confirmed that coagulation irregularities normalized shortly after admission and that thromboprophylaxis did not appear to be beneficial ; it concluded, however, that coagulopathy may not be related to VTE rates and that pediatric patients hospitalized with severe COVID-19 should be evaluated individually. Several risk factors should be investigated before prescribing thromboprophylaxis. Considering the degree of morbidity related to healthcare-associated VTE, protocols have been defined for thromboprophylaxis in hospitalized pediatric patients, including assessments specific to the pediatric intensive care unit setting and analyses of moderately ill, hospitalized children . Loi et al.
recommended LMWH thromboprophylaxis in COVID-19 patients presenting with increased D-dimer levels, increased fibrinogen levels, DIC, or a normal or mildly decreased platelet count, and in children with risk factors for developing VTE . This protocol is shown in Fig. . In a recent study, the results did not show a significant effect of thromboprophylaxis on mortality in pediatric patients with COVID-19, with enoxaparin showing only a non-significant effect on the mortality rate (p = 0.57) . The investigators suggested that, despite these results, thromboprophylaxis could be helpful in pediatric patients with moderate to severe COVID-19, classified according to the World Health Organization (WHO) progression scale; further randomized clinical trials with larger sample sizes are needed to confirm this conclusion. In a recent cohort study evaluating thromboprophylaxis in patients with MIS-C, it was demonstrated that thromboprophylaxis is useful in these patients after first considering multiple factors, including rising laboratory parameters such as D-dimer levels . It is imperative to consider clinical parameters such as illness severity and VTE risk factors before administering thromboprophylaxis and determining the dose of the agent. Tailored-intensity thromboprophylaxis, which adjusts the dose of enoxaparin to a specific anti-Xa level based on the patient's condition, is another crucial element. Studies have shown that the median dose of LMWH was 0.5 mg/kg/dose twice daily for prophylaxis and 1.0 mg/kg/dose twice daily for therapeutic intensity . A practical guideline provided by Karimi et al. states that patients with D-dimer levels over 300 ng/mL should receive LMWH prophylaxis and be evaluated for DVT, and that any irregularity in aPTT, PT, platelet count, fibrinogen, or D-dimer should be noted and followed. Furthermore, the guideline suggests that a high index of suspicion for thrombosis should be maintained. It also recommends that pediatric patients with moderate COVID-19 (fever, respiratory symptoms, and radiographic features) who need hospitalization should receive prophylactic LMWH anticoagulation . In children with severe COVID-19, it is suggested to intensify anticoagulation therapy in combination with ultrasonography screening if D-dimer and serum ferritin levels are > 500 ng/mL and if the patient’s condition deteriorates . Recently updated clinical guidance suggests thromboprophylaxis with LMWH twice daily (enoxaparin 0.5 mg/kg targeted to an anti-Xa activity level of 0.2 to < 0.5 IU/mL) in acute COVID-19 or MIS-C patients with risk factors for VTE or a significant rise in their D-dimer levels . This guidance also recommends against thromboprophylaxis in asymptomatic COVID-19-infected patients without any risk factor for VTE development. In general, LMWH is preferred to UFH in pediatric patients because of its good bioavailability, long duration of action, and the larger number of studies supporting its use in this population . A phase II trial conducted in pediatric COVID-19 and MIS-C patients to investigate the safety and optimal dose of enoxaparin found the medication to be safe, based on the absence of clinically relevant bleeding events and other adverse effects within the prespecified dose range . The results also showed that the dose of enoxaparin did not differ significantly between patients under and over 12 years of age.
This study also showed that the enoxaparin dose required to achieve the anti-Xa target was higher in MIS-C patients than in other pediatric COVID-19 patients . The initial prophylactic dose of enoxaparin in this clinical trial and others was 0.5 mg/kg twice daily for patients over 2 months of age , consistent with the Chest guideline recommendations for the general preventive dose of enoxaparin for thromboprophylaxis in pediatric patients (age-dependent dosing: < 2 months, 0.75 mg/kg per dose twice daily; ≥ 2 months, 0.5 mg/kg per dose twice daily) . Another medication used in the anticoagulation protocol, in addition to LMWH, is aspirin. Aspirin inhibits platelet activity by irreversibly inhibiting cyclooxygenase (COX) function. COVID-19 causes lung damage that can induce thrombotic events, and antithrombotic drugs reduce coagulopathic events . The American College of Rheumatology guidelines recommend the following regarding aspirin in MIS-C patients: low-dose aspirin (3–5 mg/kg/day; maximum 81 mg/day) should be ordered in MIS-C patients with Kawasaki disease features, coronary artery changes, or risk factors for thrombosis . One possible reason for this may be the beneficial effect of aspirin in managing coronary artery aneurysms in pediatric patients with Kawasaki disease, although higher doses are used in the initial management phase of those patients . This antiplatelet therapy should be continued for at least 4 weeks and until platelet counts normalize. After discharge, patients may need further care and follow-up to determine D-dimer levels and the risk of thrombosis. D-dimer levels five times higher than the upper limit of normal are related to thrombosis in children . D-dimer levels, in combination with disease severity, could be useful in determining the duration of thromboprophylaxis . A consensus guideline has recommended continuing pharmacological thromboprophylaxis after discharge in pediatric patients with COVID-19 or MIS-C who have elevated D-dimer levels or VTE risk factors . The recommended duration of anticoagulation in the post-discharge setting was 30 days, or sooner if clinical risk factors resolve. A recent guidance has generally recommended that thromboprophylaxis not be continued routinely after hospital discharge in all hospitalized patients with COVID-19, even in patients who received therapeutic-intensity anticoagulation for thromboprophylaxis during hospitalization , but it has suggested thromboprophylaxis with rivaroxaban 10 mg daily for 35 days following hospitalization in adult patients at increased risk of thromboembolism. The guidance also points out that data on post-hospital thromboprophylaxis are limited in pediatric patients and that each patient should be evaluated for risk factors individually . Although there are some data on the use of direct oral anticoagulants in older children and adolescents, their use is not considered routine practice in these patients, and data are even more limited on their use in pediatric patients with COVID-19 . According to recent studies, it is currently recommended to continue thromboprophylaxis for 7–14 days or until resolution of the risk factors present at discharge, such as a central venous line, ongoing immobility, or elevated D-dimer levels.
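To make the weight- and age-based starting doses summarized above concrete, here is a minimal illustrative sketch in Python; it encodes only the Chest-guideline starting doses, the 60 mg per-dose cap, and the anti-Xa target range cited above, and it is a teaching illustration, not a clinical decision tool.

# Illustrative sketch only: initial prophylactic enoxaparin dosing as
# summarized above (Chest guideline starting doses, 60 mg per-dose cap).
# Not a clinical tool; in practice the dose is then titrated to a
# measured anti-Xa activity of 0.2 to < 0.5 IU/mL.

def initial_enoxaparin_dose_mg(weight_kg: float, age_months: float) -> float:
    """Return the initial subcutaneous dose in mg, given twice daily."""
    mg_per_kg = 0.75 if age_months < 2 else 0.5  # age-dependent starting dose
    return min(weight_kg * mg_per_kg, 60.0)      # 60 mg per-dose cap

def anti_xa_in_target(anti_xa_iu_per_ml: float) -> bool:
    """Check a measured anti-Xa activity against the prophylactic target."""
    return 0.2 <= anti_xa_iu_per_ml < 0.5

# Example: a 14-year-old weighing 50 kg
dose = initial_enoxaparin_dose_mg(weight_kg=50, age_months=168)
print(f"initial dose: {dose} mg subcutaneously twice daily")  # 25.0 mg
print(f"anti-Xa 0.35 IU/mL within target: {anti_xa_in_target(0.35)}")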
Combining low-dose aspirin (3–5 mg/kg per day) with thromboprophylaxis in MIS-C may increase the risk of bleeding; nevertheless, if there are no other bleeding risk factors, this is not a contraindication . In addition to LMWH, low-dose aspirin is recommended in patients with MIS-C, although the intention may not be anticoagulation . In one study, patients with MIS-C who received enoxaparin in combination with aspirin did not develop any thrombosis; the same finding was reported in patients without MIS-C who received aspirin only. Enoxaparin was continued following discharge or switched to apixaban. Within 2 weeks following discharge, laboratory abnormalities had corrected and no patient was diagnosed with thrombosis . Although this was a retrospective study in patients under 21 years of age with and without MIS-C, the mean ages of the two groups were 9 and 8.8 years, placing both in the pediatric range. In another study, patients who received thromboprophylaxis were compared with those who did not, and no significant difference in VTE was reported . Despite all the data, children with complications of COVID-19 (including MIS-C) who have considerably increased plasma D-dimer levels at hospital discharge and superimposed clinical risk factors for VTE may be candidates for continued anticoagulant thromboprophylaxis after hospital discharge. This may be done for a planned period, such as until resolution of the clinical risk factors or 1 month post-discharge, utilizing low-dose LMWH subcutaneously twice daily or therapeutic-intensity LMWH (e.g., targeted anti-Xa activity of 0.5–1.0 U/mL) once daily, if there are no contraindications or increased risk of bleeding . The latest version of the American College of Rheumatology clinical guidance for MIS-C recommends that MIS-C patients with thrombosis or an ejection fraction of < 35% receive low-dose aspirin and therapeutic anticoagulation with enoxaparin for 3 months, until resolution of the thrombosis . Evaluation of thrombosis should be repeated at intervals of 4–6 weeks post-diagnosis, and anticoagulation can be discontinued if the condition has resolved. This guideline suggests that the minimum duration of anticoagulation for this patient population is 2 weeks after discharge . Enoxaparin prophylaxis, at an initial dose of 0.5 mg/kg twice daily, may be safe and could be recommended in pediatric patients aged 1–20 years diagnosed with MIS-C or severe COVID-19, with treatment subsequently adjusted according to anti-Xa levels. Nonetheless, thromboprophylaxis is not suggested in all pediatric patients with COVID-19 and should be used depending on the patient's condition and the presence of risk factors. In addition, larger trials are required to determine the effect of thromboprophylaxis on mortality and disease prognosis.
|
A bibliometric and visualized research on global trends of immune checkpoint inhibitors related complications in melanoma, 2011–2021
|
20f8d580-8674-4f00-a1da-3523207eb2ca
|
10158729
|
Internal Medicine[mh]
|
Introduction Melanoma is a cancer that arises from the malignant transformation of melanocytes, the pigment-producing cells of the skin and mucosa . Although melanoma accounts for only 1% of skin tumors, its mortality rate ranks first among them . Currently, surgery is the preferred treatment for stage I-II melanoma, with up to 80% of patients surviving for 5 years. However, for advanced and distantly metastatic malignant melanoma, the 5-year survival rate is less than 10%, making it the leading cause of melanoma-related deaths . Melanoma is extremely insensitive to radiotherapy, so chemotherapy has been the mainstay of treatment for surgically unresectable or advanced melanoma; however, the low therapeutic efficiency and serious toxic side effects of chemotherapeutic drugs have greatly limited their use . Immune checkpoints are cell surface molecules that regulate the strength and quality of the body’s immune response, including checkpoint molecules that upregulate the immune response, such as CD28 and OX-40, and inhibitory checkpoint molecules that downregulate it, such as CTLA-4 and PD-1 . Immune checkpoints play a crucial role in immune tolerance, allowing tumor cells to escape surveillance by the host immune system . Tumor immune escape mechanisms are central to tumorigenesis . Thus, the use of immune checkpoint inhibitors (ICIs) helps restore and enhance the body’s anti-tumor immune response to eliminate tumor cells. To date, ICIs such as the CTLA-4 inhibitor ipilimumab and the PD-1 inhibitors nivolumab and pembrolizumab have been approved by the U.S. Food and Drug Administration for the treatment of metastatic melanoma and have demonstrated significant efficacy . Despite the significant clinical benefits of ICIs, they are associated with a range of adverse effects that can affect multiple organs throughout the body . For example, gastrointestinal adverse reactions are more common with CTLA-4 inhibitors, while pneumonia and thyroiditis are more common with PD-1 inhibitors. Furthermore, the incidence of adverse events with immune combination therapy is significantly higher than with monotherapy . Numerous studies have been reported on ICIs-related complications in melanoma, but no relevant studies have analyzed the overall trends in these complications. Bibliometric analysis combines mathematics and statistics to quantitatively describe the current state of scientific research, research topics, and research trends based on unique parameters of published literature, such as country, institution, and author . As such, this study provides a comprehensive quantitative and qualitative evaluation of studies on ICIs-related complications in melanoma over the last 10 years using bibliometrics. The aim of this analysis is to explore hot topics and research trends in this field and provide directions for future research.
Materials and methods 2.1 Data collection The Web of Science Core Collection (WoSCC) was searched to collect data on ICI-related complications in melanoma from 2011 to 2021. The search strategy was as follows: TS (title/abstract) = (ICI* OR immune-checkpoint inhibitor* OR immune checkpoint inhibitor* OR immune-checkpoint blockade* OR immune checkpoint blockade* OR checkpoint inhibitor* OR CPI* OR ipilimumab OR nivolumab OR pembrolizumab OR toripalimab OR lambrolizumab OR atezolizumab OR avelumab OR durvalumab) AND TS (title/abstract) = (adverse event* OR complication* OR side effect*) AND TS (title/abstract) = (melanoma OR chronic melanoma OR metastatic melanoma OR malignant melanoma OR advanced melanoma) AND Publication Date = (2011-01-01 to 2021-12-31) AND Language = (English). The studies most relevant to ICI-related complications in melanoma were then screened manually. Details of the search process and the included studies are shown in and . All literature searches were conducted within one day (03 July 2022) to avoid bias related to database updates. Only original articles and reviews were included. The first and second authors independently retrieved the data; all data, including the numbers of publications and citations, titles, countries/regions, and affiliations, were downloaded from WoSCC for further analysis. 2.2 Bibliometric analysis The bibliometric analysis was performed using Microsoft Excel 2021, VOSviewer (version 1.6.17), CiteSpace V (version 6.1.2) and R packages from the online tool website ( http://www.bioinformatics.com.cn/srplot ). Based on the data downloaded from WoSCC, the annual numbers of publications and citations were tabulated in Microsoft Excel 2021. Bibliometric networks and density maps of inter-country and inter-institution cooperation, journal linkage, and keyword co-occurrence were visualized with VOSviewer. For the country/region and journal analyses, the minimum inclusion threshold was set at 5, yielding 33 countries and 49 journals, respectively; for the institution, author and keyword analyses, the threshold was set at 10, yielding 83 institutions, 69 authors and 162 keywords, respectively. Author cooperation, cluster analysis, and burst detection for references and keywords were performed with CiteSpace V. The worldwide geographical distribution of publications and the radar charts of the top 10 productive institutions and journals were generated with R packages. The latest impact factors (IF) and quartiles were obtained from the newest edition of the Journal Citation Reports (JCR, 2021).
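For reproducibility, the Boolean search string above can be assembled programmatically before being pasted into the WoSCC advanced-search interface. The sketch below is one possible way to do this in Python; the term lists are copied verbatim from the strategy above, while the helper name build_ts_clause and the exact TS=() rendering of the topic-search clause are assumptions for illustration.

```python
ICI_TERMS = [
    "ICI*", "immune-checkpoint inhibitor*", "immune checkpoint inhibitor*",
    "immune-checkpoint blockade*", "immune checkpoint blockade*",
    "checkpoint inhibitor*", "CPI*", "ipilimumab", "nivolumab",
    "pembrolizumab", "toripalimab", "lambrolizumab", "atezolizumab",
    "avelumab", "durvalumab",
]
COMPLICATION_TERMS = ["adverse event*", "complication*", "side effect*"]
MELANOMA_TERMS = [
    "melanoma", "chronic melanoma", "metastatic melanoma",
    "malignant melanoma", "advanced melanoma",
]


def build_ts_clause(terms: list[str]) -> str:
    """Join topic-search (TS) terms with OR into one parenthesized clause."""
    return "TS=(" + " OR ".join(terms) + ")"


# AND-combine the three topic clauses; date and language limits are applied
# separately in the WoSCC interface.
query = " AND ".join(
    build_ts_clause(t) for t in (ICI_TERMS, COMPLICATION_TERMS, MELANOMA_TERMS)
)
print(query)
```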
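The minimum-occurrence thresholds described above (5 for countries/regions and journals; 10 for institutions, authors and keywords) amount to a simple filter over item counts. A minimal sketch, assuming the co-occurrence counts have already been exported from VOSviewer into a dictionary (the counts shown are hypothetical):

```python
from collections import Counter


def filter_by_threshold(counts: Counter, min_occurrences: int) -> dict:
    """Keep only items that meet the minimum-occurrence inclusion threshold,
    mirroring the criteria used for the VOSviewer network maps."""
    return {item: n for item, n in counts.items() if n >= min_occurrences}


# Hypothetical keyword counts extracted from the WoSCC export.
keyword_counts = Counter({"ipilimumab": 412, "nivolumab": 388, "rash": 9})
print(filter_by_threshold(keyword_counts, min_occurrences=10))
# -> {'ipilimumab': 412, 'nivolumab': 388}; 'rash' falls below the cut-off.
```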
Results 3.1 Overall distribution and global contributions A total of 890 original articles and 197 reviews on ICI-related complications in melanoma were selected. Annual publications and citations have risen steadily since 2011 (16 publications, 98 citations), with citations peaking in 2021 (221 publications, 14,182 citations), although publications dipped slightly in 2018 (135 publications, 8,775 citations) and 2020 (172 publications, 14,060 citations) . The rapid growth in publications and citations indicates that more researchers are investing in and paying attention to this field. Based on publication counts, as shown in the country/region distribution map, with the top 10 most productive countries/regions listed in , the top three countries were the USA (n = 454, 41.77%), Germany (n = 155, 14.26%) and Italy (n = 139, 12.79%). From 2012 to 2021, the annual publications and citations of the United States increased steadily and remained consistently well above those of other countries . At the publication peak in 2021, the USA (81 publications) produced 2.7 times as many papers as second-placed Italy (30 publications); at the citation peak in 2020, the USA (10,559 citations) was cited 2.09 times as often as second-placed Germany (5,161 citations). The annual publication trend of the top 10 most productive countries other than the USA is generally upward, with small year-to-year fluctuations, and citations for all top 10 countries increased significantly year over year through 2021. Furthermore, an extensive network of cooperation between countries is shown in . The United States sits at the central node of the cooperation network and collaborates closely with many countries, such as England, France, Canada, and Australia. The USA is thus the leader in the field of ICI-related complications in melanoma, with far more publications and citations than any other country. 3.2 Analysis of institutions and authors A total of 1,919 institutions have conducted studies on ICI-related complications in melanoma. The top 10 institutions by publications are listed in . Memorial Sloan Kettering Cancer Center (n = 69) is the most productive institution, followed by the Dana-Farber Cancer Institute (n = 55) and the H. Lee Moffitt Cancer Center and Research Institute (n = 41) . VOSviewer was then used to analyze and visualize the extensive network and density map of inter-institution cooperation . Memorial Sloan Kettering Cancer Center, the Dana-Farber Cancer Institute, the University of Texas MD Anderson Cancer Center and the University of Sydney were the most active publication centers, with the strongest ties to other institutions. A total of 6,140 authors have published articles on ICI-related complications in melanoma; the top 10 authors are listed in . Hodi, F. Stephen was the most productive author with 41 publications, followed by Wolchok, Jedd D. with 40 publications and Robert, Caroline with 35 publications. By citation count, Wolchok, Jedd D. was the most influential author with 17,191 citations, followed by Robert, Caroline with 15,984 citations and Postow, Michael A. with 10,104 citations (29 publications).
The extensive network of author cooperation showed that an author's publication count determines, to some extent, the closeness of collaboration with other authors . 3.3 Analysis of journals A total of 325 journals accepted and published articles on ICI-related complications in melanoma; the top 10 journals are listed in . Melanoma Research (IF = 3.20) was the most productive journal with 79 publications, followed by the Journal for Immunotherapy of Cancer (IF = 12.47) with 62 publications and Cancer Immunology Immunotherapy (IF = 6.63) with 31 publications . The extensive network and density map of journal linkage were analyzed and showed that Melanoma Research and the Journal for Immunotherapy of Cancer were the publication hubs with the strongest ties to other journals. 3.4 Analysis of cited and co-cited references Citation and co-citation analysis of the literature helps identify the most influential research in the field and guides the basic direction of future research. The top 10 publications with the most citations are listed in , and the top 10 publications with the most co-citations are presented in . The article with the most citations (n = 5,104) and co-citations (n = 293), published by Larkin J et al. in 2015, revealed that, compared with ipilimumab, nivolumab alone or in combination with ipilimumab significantly improved progression-free survival in patients with previously untreated metastatic melanoma by enabling complementary activity between PD-1 and CTLA-4 blockade. It was followed by Robert C et al. in 2015 (3,602 citations, 267 co-citations), who showed that pembrolizumab significantly prolongs progression-free survival and overall survival in patients with advanced melanoma and causes less high-grade toxicity than ipilimumab. Based on the co-citation network, a reference burst analysis was performed, and the top 25 co-cited references with the strongest citation bursts are shown in . The article with the highest burst strength (49.15), published by Robert C et al. in 2011, showed that combining ipilimumab with dacarbazine significantly improved overall survival in patients with previously untreated metastatic melanoma but increased the incidence of hepatic impairment. A title cluster analysis was then performed to summarize the references in the co-citation network and identify frontier directions. Thirteen clusters emerged from the co-citation network, including the expanded access programme, antitumour efficacy, adjuvant treatment, motor polyradiculopathy, adjuvant therapy, clinical response, current systemic therapy, review of the literature, clinical outcome, brain metastases of melanoma, metastatic uveal melanoma, targeted nanoparticles and targeting cancer-associated fibroblast . According to the timeline clustering display , anti-tumor efficacy, adjuvant treatment, clinical response, literature review, clinical outcome and metastatic uveal melanoma were recent research topics of strong interest to researchers. 3.5 Analysis of keywords Keywords summarize the central idea of a study, and keyword co-occurrence analysis helps reveal the internal relationships between keywords in the field of ICI-related complications in melanoma. The extensive network of co-occurring keywords was visualized with VOSviewer .
Ipilimumab, metastatic melanoma, survival, nivolumab, pembrolizumab and adverse event were the most frequently co-occurring keywords, with the strongest ties to other keywords. Based on the keyword co-occurrence network, a keyword burst analysis was performed, and the top 25 keywords with the strongest citation bursts are shown in . The keyword with the highest burst strength was untreated melanoma (15.93), followed by safety (11.13) and phase II (8.15). A cluster analysis of keywords was then performed, and nine clusters emerged from the co-occurrence network, including checkpoint blockade, adjuvant therapy, antibody, dabrafenib, immune checkpoint inhibitors, phase I, pd-1 and expanded access . According to the timeline clustering display , checkpoint blockade, adjuvant therapy, antibody, dabrafenib, pd-1 and expanded access have been key research foci since 2011.
Discussion Melanoma is the most invasive and lethal type of skin tumor, and its incidence is increasing . While over 95% of early-stage melanomas go into remission with surgical treatment, the prognosis for advanced and metastatic melanoma is extremely poor, with a median survival of 6-9 months and an overall 5-year survival rate below 10% . Therefore, the development of drugs for progressive or advanced melanoma has always been a hot research topic. Immune escape is regarded as an important mechanism of tumor development . Tumor cells suppress T-cell immune function and escape immune surveillance by activating immune checkpoints and blocking antigen presentation . Tumor immunotherapy recognizes and kills tumor cells by mobilizing the immune system to activate adaptive or innate immunity . Among these approaches, ICI therapy has achieved remarkable efficacy in melanoma patients . Despite the significant clinical benefits of ICIs, melanoma patients experience multiple complications such as rashes, thyroiditis, and colitis . To date, numerous studies have reported on ICI-related complications in melanoma, but none has analyzed the overall trends. To explore the development and trends of the field of ICI-related complications in melanoma, analyze current hot topics and predict future research directions, a bibliometric analysis of the relevant literature from 2011 to 2021 was conducted. This analysis of global publications and citations over the past 10 years found that both showed an overall increasing trend, although the number of papers published in 2020 declined slightly compared with 2019 . Interestingly, to exclude the influence of publication volume on citations, we calculated the annual average citation rate (citations per publication), which was much higher in 2020 (81.74) than in 2019 (66.01) or 2021 (64.17). The annual and overall publication and citation trends of the top 10 most productive countries were then analyzed, and the USA was significantly ahead of other countries in all respects. This result is consistent with the institution analysis, in which 90% of the top 10 most productive institutions are located in the USA . Notably, although Canada has only 83 publications, its average citation rate (342.19) is much higher than that of the USA (133.22), followed by France (307.01). Evidently, although the USA is a leader in the field of ICI-related complications in melanoma, it still needs to improve the overall quality of its publications. In terms of author distribution , Hodi, F. Stephen (Dana-Farber Cancer Institute) of the USA is the most productive author and also has a substantial number of citations. Wolchok, Jedd D. (Memorial Sloan Kettering Cancer Center) of the USA ranked second in publications but has the most citations among the top publishing authors. Robert, Caroline (Institut Gustave Roussy) of France ranked third in publications and citations but has the highest average citation rate among the top publishing authors. Robert's team found that nivolumab significantly improved overall and progression-free survival compared with dacarbazine in previously untreated patients with metastatic melanoma without BRAF mutations .
However, the study found that the combination of nivolumab and ipilimumab increased the incidence of grade 3-4 adverse events, with colitis, diarrhea, and elevated alanine aminotransferase being the most common . This result is consistent with the findings of Larkin's team . In the same year, Robert's team found that pembrolizumab significantly prolongs progression-free survival and overall survival in patients with advanced melanoma and causes less high-grade toxicity than ipilimumab . The most common immune-related adverse events were hypothyroidism and hyperthyroidism with pembrolizumab, and colitis and hypophysitis with ipilimumab . Evidently, combination ICI therapy is more effective than monotherapy, but carries a significantly higher incidence of adverse events. Based on the reference and keyword burst analyses , the article with the highest burst strength in recent years (23.03) was published by Wolchok JD et al. in 2017, who revealed that nivolumab combined with ipilimumab, or nivolumab alone, significantly improved overall survival compared with ipilimumab alone in patients with advanced melanoma. However, that study found that the combination of nivolumab and ipilimumab increased the incidence of adverse events: skin-related events were the most common select adverse events, while gastrointestinal events were the most common grade 3-4 adverse events . Similarly, combined nivolumab is the keyword with the highest burst strength (7.12) to appear in recent years. A title cluster analysis of the references and a keyword co-occurrence cluster analysis of the publications were performed to identify the current hot topics of ICI-related complications in melanoma. In the timeline grouping of references , researchers appeared to focus most on antitumour efficacy, adjuvant treatment, clinical response, literature review, clinical outcome, and metastatic uveal melanoma. For instance, the identification of biomarkers in patients benefiting from ICIs is a current research priority, and ICI-associated adverse events are considered a potential clinical biomarker. Das S et al. found a correlation between immune-related adverse events and the antitumor efficacy of immune checkpoint inhibitors, with patients experiencing adverse events showing significant improvements in progression-free survival, overall survival, and overall response rate. From the timeline cluster analysis of keywords , it is clear that ICIs combined with other adjuvant therapies for melanoma immunotherapy are a current research hotspot. This study identified the relevant publications on ICI-related complications in melanoma in the WoSCC database over the past 10 years and comprehensively analyzed current hotspots and trends. However, this study still has some limitations. For example, we only included publications in English, which excludes many high-quality non-English publications. Future multicentre collaborations with researchers from other countries could therefore enable a broader and more in-depth study.
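As a worked example, the annual average citation rate used above is simply the year's citations divided by its publications. The sketch below recomputes it from the yearly totals reported in the Results; the dictionary layout is an illustrative assumption:

```python
# year: (publications, citations), totals taken from the Results section.
annual_totals = {
    2018: (135, 8_775),
    2020: (172, 14_060),
    2021: (221, 14_182),
}

for year, (pubs, cites) in sorted(annual_totals.items()):
    rate = cites / pubs  # average citation rate = citations per publication
    print(f"{year}: {rate:.2f} citations per publication")
# Prints 81.74 for 2020 and 64.17 for 2021, matching the rates quoted above.
```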
Conclusion In summary, this study offers a thorough quantitative and qualitative evaluation of studies on ICI-related complications in melanoma from 2011 to 2021. Over the past decade, there has been a substantial increase in the number of publications on this topic. ICI-related complications can serve as clinical markers of the anti-tumor efficacy of ICIs; thus, the establishment of related prediction models and the combination of ICIs with other adjuvant therapies for melanoma immunotherapy are future research hotspots.
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
JX and FJ designed the study. LJ and JY performed the experiments and drafted the manuscript. HZ, RG, XZ, YS, and YC revised the manuscript, and all authors approved the final version of the manuscript. All authors contributed to the article and approved the submitted version.
|
Comparative evaluation of
|
08300ab5-f476-429e-a4bd-7ce636c99317
|
10158776
|
Anatomy[mh]
|
Introduction Lung cancer is the leading cause of cancer deaths worldwide . Non-small cell lung cancer (NSCLC) accounts for more than 80% of lung cancer diagnoses. Most patients are diagnosed at advanced, unresectable disease stages . Immune checkpoint inhibitors (ICIs) targeting programmed cell death protein-1 (PD-1) and its ligand, programmed death-ligand 1 (PD-L1), have become a mainstay of treatment in NSCLC, particularly in patients with advanced disease stages who lack druggable molecular alterations . Tumoural PD-L1 expression is still the most useful biomarker for predicting treatment response to ICIs . Therefore, PD-L1 expression is considered a key factor in selecting NSCLC patients who might benefit from treatment with ICIs . Based on its use in randomized clinical trials , the current standard of care is to quantify tumoural PD-L1 expression in histology specimens. Nevertheless, there remains an unmet need to quantify tumoural PD-L1 expression in the considerable proportion of NSCLC patients for whom only cytology samples are available for diagnosis . So far, several studies have investigated the feasibility of assessing tumoural PD-L1 expression in cytology samples . Many were retrospective analyses or compared PD-L1 expression between selected paired and matched histology–cytology samples . However, data from prospective real-world studies elucidating the agreement on PD-L1 expression between unpaired histology and cytology samples obtained from the same tumour lesion are still scarce. In addition, the relationship between the detection rate of circulating tumour cells (CTCs) and their PD-L1 expression, on the one hand, and tumoural PD-L1 expression, on the other, remains uncertain. Since active immune checkpoint receptors represent a potential mechanism of tumour immune evasion and CTCs might be a surrogate marker of tumour immune evasion , an association between CTC detection and tumoural PD-L1 expression might exist. In this prospective study, we therefore investigated the relationship between PD-L1 expression on tumour tissue from standard immunohistochemistry and the PD-L1 expression of site-matched cytology imprints of primary tumour lesions, as well as the detection rate of CTCs and their PD-L1 expression, in patients with NSCLC. This investigation may provide the first evidence of whether alternative sources of tumour cells are informative for the assessment of PD-L1 expression.
Materials and methods 2.1 Study design In this prospective observational single-centre study, we recruited patients with suspected NSCLC who underwent routine procedures for lung cancer diagnosis at the LungenClinic Grosshansdorf. The analysis included subjects with NSCLC who were ≥ 18 years old. Exclusion criteria were diagnoses other than NSCLC or previous treatment with systemic chemotherapy or immunotherapy (Fig. ). Written informed consent was obtained before enrolment. The study was approved by the ethics committee at the University of Luebeck (Az. 17-161) and conducted according to the Declaration of Helsinki. Primary tumour specimens were collected via fibreoptic bronchoscopy; this comprised biopsies from endobronchial visible tumour, tumour mucosal infiltration or transbronchial biopsies. Further tumour specimens were obtained from surgical tumour tissue or via ultrasound-guided percutaneous tumour biopsy. Tumour specimens were smeared during rapid on-site evaluation (ROSE), so matched cytology imprints came from the same tumour site. Different pathologists performed the subsequent PD-L1 immunostaining on unpaired, yet site-matched, cytology and histology samples. 2.2 Tissue immunohistochemistry and immunocytochemistry Immunohistochemical staining was performed on 4-μm-thick sections obtained from formalin-fixed paraffin-embedded tumour tissue. PD-L1 expression was quantified by estimating the number of PD-L1-positive tumour cells as a percentage of all tumour cells in both histology sections and cytology imprints. The tumour proportion score (TPS) was determined as the percentage of PD-L1-positive tumour cells among all tumour cells, in samples containing at least 100 viable tumour cells . Samples were stained with the antibody clone Dako 28-8 according to standard operating procedures . Although the 28-8 antibody clone was used in this study because it is the standard antibody for routine clinical diagnostics of NSCLC at the Department of Pathology in Hamburg, a high degree of agreement between the 28-8 and 22C3 PD-L1 antibody clones for histological and cytological staining results has been described before . A consistently lower positivity rate has been described for the SP142 antibody . 2.3 Circulating tumour cell-based liquid biopsy We used the Parsortix® technology (ANGLE plc, Guildford, UK) to detect CTCs in 7.5 mL of blood collected in Transfix tubes (CTC-TVT tubes, CYTOMARK, Buckingham, UK) as previously described . The Parsortix technology has been extensively evaluated, including in several studies on NSCLC and in multicentre ring trials (CANCER ID consortium) . Cells enriched by the Parsortix® system were directly harvested into cytospin funnels, centrifuged onto a glass slide (RCF 190 g), dried overnight and stored at −80 °C until further processing. For staining, slides were brought to room temperature and fixed with 0.5% PFA for 10 min. Cells were washed with 0.5 mL of 1× PBS three times for 3 min each. 10% AB serum (BioRad, Rüdigheim, Germany) was applied for blocking (20 min). Unconjugated rabbit anti-human PD-L1 antibody, clone HL1041 (GTX635975, 1 : 100), was incubated overnight at 4 °C, after which cells were washed with 0.5 mL of 1× PBS three times for 3 min. BD Horizon™ BV421 goat anti-rabbit (BD Biosciences, San Jose, CA, USA, 1 : 200) was used as a secondary antibody and incubated for 45 min.
Following three additional washing steps, directly eFluor560-conjugated pan-keratin (AE1/AE3, eBioscience, San Diego, CA, USA, 1 : 200), PerCP-labelled CD45 (clone H130, Miltenyi Biotec, Bergisch Gladbach, Germany, 1 : 200) and DRAQ5™ for nuclear staining (BioLegend, San Diego, CA, USA, 1 : 5000) were incubated for 60 min. Subsequently, cytospins were covered with ProLong Gold Antifade Reagent (Thermo Fisher Scientific, Dreieich, Germany), sealed with a cover slip and examined by fluorescence microscopy. Keratin-positive, DRAQ5 (nuclear)-positive and CD45-negative cells with intact morphology were defined as tumour cells. H1975 cells were used as a positive control for PD-L1 expression, while MCF7 cells were used as a negative control. The rabbit anti-human PD-L1 antibody clone 28-8 was optimized to detect cell-surface PD-L1 in formalin-fixed paraffin-embedded human tumour tissue specimens, and its specificity was demonstrated by antigen competition and genetic deletion of PD-L1 in tumour cell lines. It is an approved companion diagnostic antibody. However, its use in the immunofluorescence setting is poorly investigated, as the antibody is mainly used for IHC approaches. The rabbit anti-human PD-L1 antibody clone HL1041 (Genetex, Irvine, CA, USA), which also targets membranous PD-L1, was compared with other antibodies frequently used for immunofluorescent PD-L1 staining, including the PD-L1 clones E1L3N and D8T4X (both Cell Signaling Technology, San Diego, CA, USA). PD-L1 expression was assessed using cell lines with known, differing PD-L1 expression levels . Although these clones performed similarly, a slightly higher signal was observed for the newly released clone HL1041, and this antibody was therefore used for the CTC assays. 2.4 Statistical analysis We used receiver operating characteristic (ROC) analysis to evaluate percent PD-L1 expression from cytology imprints and PD-L1 expression on CTCs as predictors of positive PD-L1 expression (tumour cell expression score ≥ 1%) and high PD-L1 expression (tumour cell expression score ≥ 50%) as defined by standard immunohistochemistry. We used Fisher's exact test to identify differences in clinical variables between the study groups. To examine the correlation between two continuous variables, we used Pearson's test. Statistical analyses were performed using R (version 4.2.1, R Foundation, Vienna, Austria). An alpha error of less than 5% was considered statistically significant.
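As an illustration of the ROC analysis described above, the sketch below evaluates percent PD-L1 expression from cytology imprints as a predictor of histology-defined PD-L1 positivity (TPS ≥ 1%). The patient values are placeholders and the scikit-learn-based Python implementation is an assumption for illustration; the study itself used R (version 4.2.1):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Placeholder per-patient data: percent PD-L1 expression on cytology imprints
# and the histology-defined label (1 if TPS >= 1%, else 0).
cytology_pct = np.array([0.0, 2.0, 15.0, 0.0, 60.0, 80.0, 5.0, 0.0, 40.0, 95.0])
histology_positive = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1, 1])

auc = roc_auc_score(histology_positive, cytology_pct)
fpr, tpr, thresholds = roc_curve(histology_positive, cytology_pct)
print(f"AUC = {auc:.2f}")  # the study reports AUC = 78% for this comparison
```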
Results 3.1 Study population One hundred and thirty-eight subjects with suspected lung cancer were screened. We excluded subjects with a diagnosis other than NSCLC (n = 17) or without histology specimens or evaluable cytology imprints (n = 45), as shown in the flow chart (Fig. ). The final analysis included 76 patients, of whom the majority had non-squamous non-small cell lung cancer at locally advanced or metastatic disease stages (Table ). Nearly 80% of the specimens were collected via fibreoptic bronchoscopy; this comprised biopsies from endobronchial visible tumour, tumour mucosal infiltration and transbronchial biopsies. Further tumour specimens were obtained from surgical tumour tissue or via ultrasound-guided percutaneous tumour biopsy (Table ). 3.2 PD-L1 expression in the tumour tissue samples and in cytological imprints We found a moderate correlation of percent PD-L1 expression between cytology imprints and the matched histology specimens (R = 0.58, P < 0.001). Cytology imprints yielded a similar estimate of the number of patients with positive PD-L1 expression (TPS ≥ 1%) but a higher estimate of the number of patients with high PD-L1 expression (TPS ≥ 50%) than histology specimens (Table ). Compared with percent PD-L1 expression from standardized immunohistochemistry, cytology imprints predicted PD-L1 positivity (≥ 1%) with a positive predictive value (PPV) of 91% and a negative predictive value (NPV) of 33%; AUC = 78% [95% CI: 65–90%]. For high PD-L1 expression (≥ 50%), cytology imprints showed a PPV of 64% and an NPV of 85%; AUC = 79% [95% CI: 67–91%]. The overall agreement on PD-L1 positivity for the whole cohort was 84%. Positive agreement on PD-L1 positivity was seen in 61 cytology imprints out of 67 matched histology specimens, yielding a positive agreement rate of 91.0%. The negative agreement rate was 33.3%, seen in only three cytology imprints out of nine matched histology specimens. The overall agreement on high PD-L1 expression was 82.8%. Positive agreement on high PD-L1 expression was 79%, seen in 23 cytology imprints out of 29 matched histology specimens, and negative agreement was 85%, seen in 40 cytology imprints out of 47 matched histology specimens. Furthermore, the overall correlation of percent PD-L1 expression between cytology imprints and histology specimens was higher in surgical specimens (R = 0.67, P < 0.01) than in non-surgical specimens (R = 0.56, P < 0.01). Moreover, specimens obtained from surgically resected tumour tissue yielded greater cytology–histology agreement than non-surgical specimens, that is, those obtained via fibreoptic bronchoscopy or percutaneous tumour biopsy. The cytology–histology agreement on high PD-L1 expression was 100% versus 75% in surgical versus non-surgical specimens, respectively. Nevertheless, the cytology–histology agreement on PD-L1 positivity in surgical specimens (91%) was comparable to that of non-surgical specimens (89%). 3.3 PD-L1 expression in CTCs Sixty-eight of the 76 samples were assessed for PD-L1 expression on CTCs. Eight samples were excluded because of clogged or non-evaluable blood samples (n = 3), low blood volume (< 5 mL, n = 2) or missing liquid biopsy samples (n = 3). CTCs were detected in 27/68 samples (39.7%).
The detection rate of CTCs was comparable between patients with non-resectable versus resectable disease (OR 1.59 [95% CI 0.49–5.14], P = 0.43), non-squamous versus squamous histology (OR 0.64 [95% CI 0.20–1.94], P = 0.45), negative versus positive PD-L1 expression (OR 0.86 [95% CI 0.13–6.44], P = 1.0), non-high versus high PD-L1-expressing tumours (OR 0.92 [95% CI 0.29–2.78], P = 1.0), and M1 versus M0 disease stages (OR 1.18 [95% CI 0.40–3.50], P = 0.80). Yet the CTC detection rate tended to be elevated in patients with stage IVB versus all other disease stages (OR 3.52 [95% CI 0.90–15.5], P = 0.063) and was significantly higher in patients with stage IVB than in those with stage IVA (OR 5.48 [95% CI 0.98–37.6], P = 0.032; Table ). The average CTC number was 2.7 CTCs per 7.5 mL of blood (range 1–13 CTCs; Table ). PD-L1+ CTCs were detected in 21 blood samples (77.8%), with an average of 1.4 PD-L1+ CTCs per blood sample (range 1–6). In these 21 samples, the PD-L1+ CTC subset represented 10.0% to 100.0% of all detected CTCs. One M0 (stage III) patient had a PD-L1+ CTC cluster of three CTCs, while two stage IV patients each had one CTC cluster with all cells positive for PD-L1. Only one stage IV patient had a cluster negative for PD-L1 (Table ). Examples of single-CTC and CTC-cluster staining with positive versus negative PD-L1 expression are presented in Fig. . We assessed the agreement on positive PD-L1 expression between histology specimens and CTCs in patients who had at least one CTC (n = 27). Here, we found a relatively good overall agreement of 66.7%, with three patients showing PD-L1+ CTCs yet negative PD-L1 expression in histology specimens (Table ). When high PD-L1 expression was considered, the agreement rate dropped to 51.9%. Of patients with high PD-L1 expression in histology specimens, 90.0% had PD-L1+ CTCs; however, PD-L1+ CTCs were also detected in 70.6% of patients with negative PD-L1 expression (< 1%) in histology specimens (Table ). Furthermore, the overall agreement on positive PD-L1 expression between cytology imprints and CTCs was 62.9%. As with histology specimens, the agreement rate dropped to 51.9% when high PD-L1 expression in cytology imprints was considered (Table ). Adding CTC PD-L1 expression markedly improved the capacity of cytology imprints to predict PD-L1 positivity (AUC = 91% [95% CI: 79–100%]) and high PD-L1 expression (AUC = 84% [95% CI: 69–100%]) as defined by standardized immunohistochemistry.
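The agreement rates reported above follow directly from the underlying 2 × 2 cytology–histology table. Below is a minimal sketch reproducing the PD-L1 positivity figures (61/67 positive agreement, 3/9 negative agreement in 76 patients); the function and argument names are illustrative:

```python
def agreement_rates(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Agreement between two classifications of the same patients.
    tp: positive in both; fn: histology-positive but cytology-negative;
    tn: negative in both; fp: histology-negative but cytology-positive."""
    return {
        "positive_agreement": tp / (tp + fn),
        "negative_agreement": tn / (tn + fp),
        "overall_agreement": (tp + tn) / (tp + fn + tn + fp),
    }


# PD-L1 positivity (TPS >= 1%): 67 histology-positive and 9 histology-negative.
rates = agreement_rates(tp=61, fn=6, tn=3, fp=6)
print({name: f"{value:.1%}" for name, value in rates.items()})
# -> positive 91.0%, negative 33.3%, overall 84.2% (reported as 84%).
```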
Discussion The evaluation of tumoural PD-L1 expression is essential for selecting patients with NSCLC who might benefit from treatment with ICIs. So far, the evaluation of tumoural PD-L1 expression is only validated for histology specimens , excluding a considerable proportion of NSCLC patients for whom no tumour tissue is available . In this prospective study, we therefore compared PD-L1 expression from standard immunohistochemistry with the PD-L1 expression of cytology imprints and CTCs. Tumour samples were obtained from the primary tumour site through various biopsy procedures and evaluated independently by different pathologists. With this approach, we sought to avoid sampling bias and to provide real-world data on the potential use of cytology imprints and CTCs for examining PD-L1 expression. Though PD-L1 assessment was confirmed as a predictive biomarker for histological samples, the evaluation of PD-L1 expression on paired cytological specimens has also shown comparable results . However, to our knowledge, this is the first study comparing PD-L1 expression across paired histological, cytological and CTC-based liquid biopsy specimens. Overall, our data indicate good cytology–histology agreement for both PD-L1 positivity and high expression. Furthermore, our study demonstrates the added value of CTC PD-L1 expression, as combining liquid biopsy with cytology markedly improved the capacity to predict PD-L1 positivity and high expression. Noteworthy, cytology imprints yielded excellent positive agreement (91%) yet poor negative agreement on PD-L1 positivity. This might indicate that cytology imprints overestimate PD-L1 positivity, that is, a PD-L1 tumour cell expression score ≥ 1% in tumours that are PD-L1 negative by immunohistochemistry. However, cytology imprints also yielded good negative agreement on high PD-L1 expression, that is, a PD-L1 tumour cell expression score ≥ 50%, with an NPV of 85% and AUC = 79% [95% CI: 68–91%], indicating a good capacity of these imprints to rule out patients who might not qualify for first-line monotherapy with ICIs. Our data also indicate that samples obtained from surgically resected tumour tissue might yield greater cytology–histology agreement than those obtained via fibreoptic bronchoscopy or percutaneous tumour biopsy. Our findings on the PD-L1 cytology–histology agreement are in line with those of previous studies, which reported agreement rates of 65–100% for both PD-L1 positivity and high PD-L1 expression . Many factors contribute to cytology–histology disagreement as well as to the heterogeneity of the reported agreement rates. These include the intra-tumour heterogeneity of PD-L1 expression , the number of tumour cells in cytology samples, and discordance arising from the diagnostic tools used for tumour sampling and from staining procedures, including the antibodies used . Further, the type of cytology specimen might have an impact, as cytological cell blocks have demonstrated better agreement with histology specimens than cytology imprints . In this study, we also compared PD-L1 expression between standard immunohistochemistry and a label-independent, microfluidic-based CTC enrichment system. As previously reported, our data confirm that the Parsortix system reliably detects CTCs in liquid biopsies from patients with NSCLC . The detection rate of CTCs in our cohort was nearly 40%.
Noteworthy was that CTC detection rates were comparable between patients with resectable versus non‐resectable disease, as well as between patients with metastatic (M1) and non‐metastatic (M0) disease stages. Nevertheless, the subgroup analysis revealed that the CTC detection rate was significantly higher in patients with stage IVB (64%) than in those with stage IVA (23%) or patients with non‐metastasized tumours (37%). Here, we also report that most CTCs (nearly 80%) showed positive PD‐L1 expression. PD‐L1 expression in CTCs of patients with NSCLC has already been assessed using various enrichment techniques and PD‐L1 antibodies. Kulasinghe et al. used the microfluidic‐based ClearCell FX system to assess CTCs in a smaller cohort of patients with advanced NSCLC and reported a CTC detection rate of 51%, with 65% PD‐L1‐positive cells. In a further study, the same authors detected CTCs in 60% of patients with stage IV NSCLC, and 56% of selected patients with CTCs were PD‐L1 positive. The absence of clinical relevance of PD‐L1 expression on CTCs prior to therapy has also been reported in the study by Guibert et al. The authors used the size‐based separation ISET platform and found a high CTC positivity of 93% at baseline ( n = 89/96), with 83% of these patients expressing PD‐L1 on at least one CTC. Sinoquet et al. used the EpCAM‐based CellSearch enrichment method and reported a CTC positivity of 43.4% ( n = 54 patients) with a low PD‐L1+ CTC rate of 21.7%. Janning et al. reported a 68.5% CTC positivity rate and 81.9% PD‐L1+ CTCs in late‐stage NSCLC patients using the same system as ours. Overall, with a CTC positivity of nearly 40% and PD‐L1+ CTCs in 78%, our data align with what was previously reported in the literature. We suggest that the observed variability could be attributed to the different enrichment techniques, including the antibodies used. Still, the sample size and the rather low positivity rate are clear limitations of our study that could introduce bias into our data. Furthermore, agreement rates between PD‐L1 expression on CTCs versus cytological imprints yielded results similar to those of CTCs versus histology specimens: 62.9% with PD‐L1 ≥ 1% versus 51.9% with high PD‐L1 expression (≥ 50%). Notably, the agreement rate between PD‐L1 expression on CTCs and tissue was relatively low in almost all of the previously described studies. Only Ilie et al. reported a high agreement (93%) of PD‐L1+ CTCs with matched tissue using the ISET platform. In our study, a higher agreement was observed compared with many other studies. This could be due to the use of a new, more sensitive PD‐L1 antibody that has been shown to perform well in immunofluorescence staining. Using this new staining protocol, a moderate to high agreement of 66.7% was observed when at least 1% of cells expressed PD‐L1. However, the agreement dropped to 51.9% when the threshold of PD‐L1 tissue positivity was increased to ≥ 50%, suggesting that PD‐L1‐positive cells preferentially enter or survive in the blood circulation. For further refinement, and in order to increase predictive accuracy, a few limitations inherent to our study need to be considered. Though we followed standard procedure and kept the cell quantity as previously advocated (at least 100 viable tumour cells), the sample size might be a limitation through which PD‐L1 expression could be underestimated. Nearly 80% of the specimens were collected via fibreoptic bronchoscopy, making the chance of underestimating the PD‐L1 content higher.
This limitation further highlights the importance of a combined approach for PD‐L1 assessment and suggests that the use of a liquid biopsy approach through CTC analysis might improve PD‐L1 predictive accuracy.
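As a companion to the agreement discussion above, the following minimal sketch shows how positive/negative percent agreement and a negative predictive value of the kind quoted for cytology imprints can be computed from paired binary calls; the example labels are hypothetical, not the study data.

```python
# Minimal sketch: positive/negative percent agreement and NPV for a
# binary test (e.g. cytology imprint) against a reference (immunohistochemistry).
# The example labels are hypothetical, not the study data.
test = [1, 1, 1, 0, 1, 0, 0, 1, 1, 0]  # imprint call (1 = PD-L1 high)
ref  = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]  # IHC call   (1 = PD-L1 high)

tp = sum(t == 1 and r == 1 for t, r in zip(test, ref))
tn = sum(t == 0 and r == 0 for t, r in zip(test, ref))
fp = sum(t == 1 and r == 0 for t, r in zip(test, ref))
fn = sum(t == 0 and r == 1 for t, r in zip(test, ref))

ppa = tp / (tp + fn)  # positive percent agreement (sensitivity)
npa = tn / (tn + fp)  # negative percent agreement (specificity)
npv = tn / (tn + fn)  # negative predictive value: capacity to rule out

print(f"PPA: {ppa:.1%}, NPA: {npa:.1%}, NPV: {npv:.1%}")
```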
Conclusions Our study shows that a tissue biopsy and the consequent smear imprint taken at a single tumour site or at a specific time point are insufficient to represent the overall PD‐L1 status of the tumour tissue. There is obvious spatial and temporal heterogeneity of PD‐L1 expression in tumour tissue that cannot be unravelled by conventional tissue biopsy and cytological imprints, which might explain the low agreement rate when considering a 50% positivity threshold. Assessing PD‐L1 expression on CTCs in a minimally invasive approach with real‐time detection using a label‐independent, microfluidic system may represent a complementary source for PD‐L1 immunostaining, which would also make the dynamic monitoring of PD‐L1 during treatment more convenient for physicians and less invasive for patients. Still, future larger prospective studies assessing all these biomarkers in NSCLC patients receiving ICIs are needed to assess the sensitivity and specificity of each approach.
The authors declare no conflict of interest.
MA, YB, HW, KP and MR were involved in study concept and design and drafted the manuscript. MA, YB, HW and MR carried out data analysis. MA, YB, MSc, NH‐O, IW, MSz, JK, TP‐V and HE were involved in patient sample collection. MA, YB, DH, LW, JK, SW and TP‐V performed the experiments. MA, YB, DH, LW, SW, SP, SS, HE and HW carried out data interpretation. All authors have read and agreed to the published version of the manuscript.
|
Evaluation of the implementation progress through key performance indicators in a new multimorbidity patient-centered care model in Chile
|
eaea1f43-5c6b-4148-b860-27d04e24a172
|
10159678
|
Patient-Centered Care[mh]
|
Complex changes in health represent a real challenge for health systems, clinical teams, and individuals, not only because of their inherent complexity but also in terms of ensuring sustainability over time. For example, in recent years, important epidemiological changes have modified the burden of disease, health services use, and life expectancy, making the reorganization of health services a priority . Therefore, complex changes require core elements that allow the change to be executed and sustained, such as changes in associated resources, adequate competencies, clear leadership, and a culture and behaviors that support change . Methodologies and frameworks for implementing complex changes in health usually include stages of theory exploration, development of the clinical intervention, identification of core aspects, as well as feasibility and implementation studies . Their evaluation is often focused on intermediate or final outcomes, but they lack performance indicators that can deliver valuable information on the implementation process itself. Moreover, the challenges of complex interventions arise within the implementation process, from operational barriers and barriers to leading change that need to be appropriately addressed to achieve sustainable incorporation . Measurements during complex interventions are core to monitoring the degree of implementation progress of the proposed intervention. A healthcare Key Performance Indicator (KPI) is a clear-cut measure used to observe, monitor, optimize, manage, and transform the performance of a healthcare process to ensure effectiveness, quality, and efficiency and to increase patient and healthcare provider satisfaction . Therefore, their use in complex interventions could provide a broader perspective, with quantitative and qualitative information that can help decision-makers during the process. For example, when a complex intervention involves organizational, operational, and cultural changes, KPIs can monitor and track progress and make objective comparisons between different contexts, enabling a timely response to those who are experiencing a harder process . However, given the quality and amount of health services data, it is key to choose indicators that can be completed simply and that provide relevant information on the process and progress of the implementation of a complex intervention. There are experiences in Chile and internationally that show how complex profound changes to the organization and delivery of health services are. In the field of multimorbidity, defined as two or more chronic conditions in the same person , the Chilean public health system and its primary care centers are organized around the traditional single-diagnosis approach. They offer fragmented, disintegrated, and inefficient care, which has shown negative outcomes in recent years for the 11 million people (70% of the national population) living with chronic disease . Therefore, the Centro de Innovacion en Salud ANCORA UC (CISAUC), together with the Servicio Metropolitano Sur Oriente (SSMSO) and the National Fund of Health (FONASA), implemented a complex change in health. The objective was to move from health services organized by diagnosis towards patient-centered care organized according to each patient's multimorbidity risk.
The Multimorbidity Patient-Centered Care Model (MPCM) enhances the family and community health model implemented in the primary health care centers (PHC) of the country and adds core elements such as case management, risk stratification, and multimorbidity, as shown in Fig. . The intervention strategies were designed and offered in primary and tertiary care centers according to each person's risk. The implementation process had three stages: preparation, implementation, and evaluation activities. During the preparation process, activities were carried out to disseminate and communicate the model, together with training of health teams and operational preparation. In the implementation, the clinical activities corresponding to the intervention strategy were executed (Fig. ), and the CISAUC expert team monitored the particularities and execution times of each center's implementation. In the evaluation stage, an impact analysis on the use of health services and an evaluation of patient and health team satisfaction were carried out, showing positive results . Still, measuring the impact on avoidable hospitalizations would have complemented those results. Similar interventions have shown a decrease in unplanned hospitalizations . The MPCM intervention decreased the total number of hospitalizations, and we could infer that those results are related to a decrease in avoidable hospitalizations. However, at the time of evaluation, there was a lack of consensus about the definition or list of avoidable hospitalizations, limiting the data extraction and evaluation. Given the multiple barriers and facilitators that influence the progress of implementation and its sustainability over time, the objective of this study was to evaluate the progress of implementing the Multimorbidity Patient-Centered Care Model in seven primary care centers in Chile through key performance indicators.
The study used a quantitative approach to assess the progress of implementing the MPCM in seven primary health care centers in the southeast of Santiago, Chile, in which the intervention reached 22,642 adult patients with multimorbidity. The PHCs are organized according to the family and community model . Their size ranged from 3 to 4 multidisciplinary health teams, offering care to covered populations of 22,000 to 35,000 patients in vulnerable conditions. The intervention strategy shown above (Fig. ) had several components, from which a set of indicators was developed in four main areas: change management, operational items, new roles, and services and activities. In addition, some KPIs were identified from the overall set of indicators to reflect the minimum conditions required for intervention sustainability. Figure shows the process of the setup and monitoring of the KPIs.
Indicators Assignment Areas Four areas were considered for grouping the KPIs according to the challenges of complex interventions and the main characteristics of the intervention strategy, as shown in Fig. . In the change management area, the organization of local governance to plan, lead and coordinate the actions necessary to achieve change is required to activate a gradual, strategic, and responsible process. Therefore, the objective of the items in this area of the intervention strategy was to activate local teams, perform constant communication and dissemination activities, and deliver the necessary training for health teams. To achieve the minimum implementation of this section, the center must have managerial support, internal leadership for the installation of the model, and a local induction plan for the strategy for new employees. The measurement of these last three corresponds to the KPIs. In the operational area, modifications to the structure and organization of health services delivery are necessary to allow the installation of the new care model. The objective was to assess the incorporation of multimorbidity stratification, changes in the protocols for electronic clinical records (ECR), and health services delivery according to each patient's complexity. To achieve the minimum implementation of this section, the center must have stratified the adult population, unified drug prescriptions, activated alerts for consultations in the emergency service and hospitalization, and modified the acts on the agenda toward comprehensive care. Incorporating new roles is expected to support the new activities of the intervention strategy, such as Case Manager, Transition Nurse, Clinical Pharmaceutical Chemist, and High-Risk Family Physicians. The objective was to measure the degree of implementation of the new roles proposed to guarantee the execution of the new clinical services and to improve the efficiency of continuity of care and patient follow-up. To achieve the minimum implementation of this section, the center must have the new roles installed. In the activities and services area, the differentiation of health care delivery by multimorbidity risk is a core aspect of the new care model and reflects the transition from a single-diagnosis to a person-centered approach. The objective was to evaluate the core activities and services that would be the foundation for the sustainability of the change in healthcare delivery.
To achieve the minimum implementation of this section, the center must have included the implementation of agreed plans, telephone counseling, continuity of care with a professional from the team, rescue after hospital discharge, implementation of an induction plan, and transition care.
Key performance identification The objective was to identify the components of the intervention strategy that were core for the change towards a multimorbidity approach and for implementation success. They were chosen based on the minimum conditions required for intervention sustainability, on their representation of the implementation progress, on the availability of measurement information (either because it was already available or because it was simple for the health team to download), and on accessible and sustainable monitoring over time. From a total of 32 components, 17 were identified and assigned key performance indicators to track their implementation progress (Table ).
Complementary indicators identification In addition to the KPIs, we developed another 15 indicators with which the intervention strategy components can be evaluated in greater depth if necessary. In the present study, we only evaluated the KPIs. The performance indicators for the MPCM are available in the supplementary material.
Setup, measurement, and score assignment of KPI The monitoring of the KPIs was self-reported, with dichotomous responses, and was completed by the implementing health care teams, composed of clinicians such as nurses, physicians, nutritionists and physiotherapists. The setup, measurement, and scoring of the KPIs were provided by the study's researchers and the expert team of CISAUC. The KPIs were designed according to each area and component, and a score was defined according to the level of complexity and relevance of the component being tracked. For scoring, the individual KPI scores of each area were summed (for example, a change management score of 3) and divided by the maximum expected (for example, a score of 28) to obtain a percentage of progress for each area (example: (3/28) × 100 ≈ 11%). Finally, an average between areas was calculated for an overall percentage score. Table presents the four groups of KPI scoring. The full description of scoring and measurement for each KPI is in the supplementary material.
Threshold and minimum implementation period An overall threshold of 67% was defined with a group of experts and local teams to determine the minimal expected progress, after 12 months of implementation, in activities that are core to reflecting the change. The implementation of the MPCM represents a complex change, and the implementation of the complete intervention strategy is expected to take longer than the piloting period. Therefore, defining a minimal implementation period and a minimal percentage of implementation progress was relevant.
Review with the primary care team The setup, measurement, scoring, and pertinence of the KPIs were reviewed and discussed with the healthcare teams of the seven PHCs. A new draft was then produced and checked a second time to consolidate a final draft. The objective was to evaluate (i) the correspondence of the KPIs with the minimal required conditions, (ii) the feasibility of monitoring, and (iii) their understanding by a variety of healthcare professionals. Finally, the CISAUC team collected the information and made the necessary adjustments to the components and the indicators. This process was done twice, first after the indicators' preliminary draft (December 2019).
The second consisted of adapting the indicators to the global and national context of the COVID-19 pandemic (November 2020).
Monitoring KPI The seven PHCs had 30 days to monitor, collect the necessary information and complete the indicators. This process was carried out in September 2020. During this period, the process was conducted by a local health care professional in charge of implementing the MPCM, supported by the CISAUC team. In addition, a document was prepared and delivered to the teams to facilitate the monitoring, collection, and completion of the information required and to standardize the process. The resulting data from each PHC that implemented the MPCM were collected and analyzed by the CISAUC expert team.
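For illustration, a minimal sketch of the scoring scheme described above follows: each area percentage is the summed KPI score divided by the maximum expected, and the overall score is the average of the area percentages compared against the 67% threshold. The area scores and maxima are hypothetical examples, not the study values.

```python
# Minimal sketch of the KPI scoring scheme: area % = (summed score / maximum) * 100,
# overall % = average of the area percentages, checked against the 67% threshold.
# The scores and maxima below are hypothetical examples, not the study data.
THRESHOLD = 67.0

areas = {  # area name: (summed KPI score, maximum expected score)
    "change management": (3, 5),
    "operational items": (6, 8),
    "new roles": (7, 9),
    "services and activities": (6, 9),
}

area_pct = {name: score / maximum * 100 for name, (score, maximum) in areas.items()}
overall = sum(area_pct.values()) / len(area_pct)

for name, pct in area_pct.items():
    print(f"{name}: {pct:.0f}%")
print(f"overall: {overall:.0f}% -> {'meets' if overall >= THRESHOLD else 'below'} threshold")
```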
The intervened PHCs were located in the southeast of Santiago, the capital of Chile, and implemented the MPCM between 2017 and 2020. The covered population ranged from 17,487 to 35,240 patients. Three of the PHCs were located in the municipality of La Pintana, two in La Florida and two in Puente Alto. The number of local care teams ranged from two to six for each PHC (each local team comprising physicians, midwives, nutritionists, physical therapists, psychologists, social workers, dentists, nurses and paramedic technicians). The overall results for the seven PHCs in 2020 showed positive implementation progress of the MPCM. The average total score was 22 out of a maximum of 31. The overall threshold was met with a score of 72% (min 45% - max 100%) (Table ). The municipalities that implemented the MPCM offered health services to similar populations but showed differences between centers. In Municipality 1, one of the PHCs obtained the highest level of implementation. On the contrary, the other PHC didn't reach the minimum implementation threshold, scoring 55% of implementation progress, with lower results in activities and services. Municipality 2 had similar results, where two of the three PHCs scored 81% and 100% in implementation progress, with high scores in components across the four areas. The third PHC didn't reach the threshold and scored 45% in implementation progress. Finally, in Municipality 3, both PHCs reached the threshold with scores of 68% and 71%. Regarding the areas of evaluation, the highest scores were in change management and new roles. The lowest score was in services and activities, which is where the indicators reflect substantial changes in real practice and the execution of the components of the intervention strategy of the MPCM. The results by component showed that six scored the highest: Decision-makers support (PHC director and managers), Leaders for the Implementation of the MPCM at the PHC, Alert System Informing PHC Teams of Patients Consulting at the Emergency Room and Hospitalization, High-complexity Primary Physician, Case Manager, and Transition Nurse. In contrast, the components that obtained the lowest scores across all centers were Continuity of Care and Rescue of High-risk Patients After Discharge. Finally, regarding the review process with the primary care teams, adjustments were made to components, mainly deciding whether they were a "minimum or not" for the sustainability of the model. For example, in the change management area, the induction plan was a complementary indicator and after the review it was assessed as a KPI. In operational items, the Integrated multimorbidity scheduled appointments indicator was modified from a percentage of change to a dichotomous yes/no answer. New roles had no modifications. In services and activities, the implementation of an induction plan and transition care were identified as KPIs instead of complementary indicators.
The results of the study showed that the implementation progress of the MPCM intervention strategy can be monitored by the health care teams through key performance indicators. Of the seven pilot centers, five (71.4%) reached the expected threshold, reflecting the presence of the minimum intervention strategies required for the sustainability of the MPCM. Only two (28.6%) didn't meet the threshold, demanding further attention to improve quality and performance. The results of this KPI monitoring delivered relevant information for decision-makers and implementation teams to analyze and optimize the implementation progress. Regarding the territory and the PHCs where the MPCM pilot was implemented, the centers that did not reach the implementation progress threshold are from different municipalities but have in common the absence of important intervention strategy components. For example, integrated multimorbidity scheduled appointments, which refer to health professionals' schedules organized by multimorbidity risk instead of by pathology or program, were absent, as were individualized plans and continuity of care. These three missing components require a deeper, structural change in the organization and operation of the daily routine. Therefore, the barriers within the diagnostic approach for chronic diseases are captured by the KPI monitoring. Thus, strong decision-maker support is needed to authorize and facilitate the transition and a sustainable change in the structural organization. The areas of the KPIs also showed a relationship with the implementation process. The differences between the areas of progress may be related to the stages of the pilot. In the pre-implementation phase, interventions were carried out first with a focus on cultural and paradigm change; therefore, change management, operational, and new role changes began to be executed and obtained the highest scores. In contrast, the services and activities area scored lower, reflecting that structural and operational changes across diverse areas of the health services require a longer time. Hence the importance of investing time and performing actions to properly install the basis for further change. Thus, a gradual process should be followed to ensure success in the implementation and overall sustainability, as described in other studies. Concerning the areas, change management and new roles reached the highest scores. These results could be a consequence of the time invested in the pre-implementation period, where the actions of socializing with the health teams, managers, and local leaders were frequent and essential to the change and its urgency. In addition, these changes don't necessarily involve a structural change in the real context; therefore, their implementation doesn't face the barriers that are more difficult to address. Thus, the human resources inserted by the pilot study for the performance of the new roles were a challenge that met with positive acceptance from the health care teams, which probably positively influenced this area's results. Moreover, the national scale-up of a similar intervention by the Ministry of Health included the new roles piloted. The strength of the indicators is that they provide a simple, quantitative, and practical tool to monitor progress in multicomponent and interdisciplinary complex interventions.
Methodologies described in the literature for the design and implementation of health interventions don't usually include performance indicators or measures from the implementation process itself; rather, they look at health outcomes. Therefore, complementing both could give health professionals and decision-makers a wider perspective, with concrete gaps identified, facilitating the timely planning of quality improvement and the addressing of gaps in core areas to favor sustainability over time. A limitation of the indicators is that they focus on the primary care components of the intervention strategy. Due to the piloting time, indicators for performance in secondary and tertiary care were not provided, but we included the most relevant network coordination activities performed, with indicators such as transition care and rescue after hospital discharge measuring the continuity of care between care levels. Another limitation is that the second measurement was done in the first six months of the pandemic in Chile, which could have affected the results. The validity of the indicators is also a consideration; therefore, the construction of the KPIs was reviewed and discussed twice with health care teams from the pilot centers. Finally, these are self-reported indicators, which could generate bias in their measurement, an inherent limitation of KPIs. Hence, moving to automated monitoring could mitigate bias while maintaining the strengths of a tool that delivers timely, concrete, and relevant information for decision-makers. Finally, the set of key performance indicators has the potential to reflect the progress of a complex intervention in health such as the MPCM, even in a pandemic context. Automation and extrapolation to other complex interventions in other groups of patients could provide early, useful information to make timely, necessary changes and increase the expected outcomes of the intervention. The setup, monitoring and knowledge generated by this study are potentially valuable for the similar intervention that the Ministry of Health is scaling up. Further studies could complement the indicators with measures of performance at the secondary and tertiary levels, providing a complete overview of the implementation progress of complex health interventions.
Below is the link to the electronic supplementary material. Supplementary Material 1
|
Heat generated during dental treatments affecting intrapulpal temperature: a review
|
8fc9b393-58cd-452b-9fc9-749025a08eb1
|
10159962
|
Dental[mh]
|
Human teeth consist of hard components (enamel, dentine, cementum), soft pulp tissue and sensory fibres. Human teeth are regarded as a sensory tissue, with the pulp, a soft connective tissue, containing nerve fibres and nerve endings extending into the dentinal tubules. These pulpal nerve terminals are crucial in sensing thermal stimuli. Although heat transfer in human teeth is a common occurrence in both daily life and clinical dentistry, there is a lack of knowledge regarding the actual amount of heat transfer that takes place during dental procedures. This is important, as trauma must be limited in a stressed pulp, where the accumulation of thermal, microbial, chemical and mechanical insults can compromise its vitality. Zach and Cohen reported that an increase of 5.5°C in temperature can result in irreversible pulpitis, and this has since been the threshold cited by subsequent studies as the maximum temperature increase the dental pulp can endure. Although this value may have limited clinical relevance, it provides a value against which the results of other in vitro studies can be compared. There are various stages during dental treatment which generate heat and affect the intrapulpal temperature: from cutting of the tooth structure by high-speed dental handpieces (HSDH), to exothermic reactions during the polymerisation of light- or self-cured restorative materials, and the polishing step. However, little is known about the effect of the various factors which can increase the intrapulpal temperature. Moreover, since measuring the intrapulpal temperature in human subjects is both unethical and unfeasible, previous studies have adopted in vitro simulation models to conduct research on changes in intrapulpal temperature. This review paper attempts to provide a comprehensive understanding of the heat generated during dental treatments affecting intrapulpal temperatures. To address this, firstly, the human tooth structure and the heat transfer mechanisms of enamel and dentine will be explained. Secondly, factors affecting the intrapulpal temperature during tooth preparation (cutting), crown fabrication, light curing and polishing will be discussed. Lastly, the in vitro and in vivo methodologies used to study intrapulpal temperature will be discussed, along with their opportunities and challenges. The objective of this review is to give an overview of the current research on heat generation during dental procedures and to highlight areas for future research to improve the understanding of the various factors that can affect the intrapulpal temperature.
Enamel is the highly mineralised outermost layer, which is directly affected during restorative treatment. Below this is dentine, a mineralised connective tissue layer composed of an organic matrix of collagenous proteins. Dentine accounts for most of the tooth structure by both weight and volume. It exhibits a complex hierarchical structure of organic and inorganic components, composed by weight of approximately 70% mineral, 20% organic materials (mainly type I collagen) and 10% water. In essence, dentine serves as the elastic foundation that supports the outermost hard and brittle enamel layer, while also acting as a protective medium for the innermost soft tissue, the pulp. However, perhaps the most distinct feature of this layer's microstructure is its network of long channels: the dentinal tubules. These extend outwards from the innermost pulp layer towards the exterior cementum or dentine-enamel junction (DEJ). The dental pulp is a highly vascularised tissue encased in hard dentinal walls, containing a large amount of connective tissue, nerve fibres and sensory nerve endings. Its innate ability to heal and repair itself has been previously studied, with the combination of the inflammatory response and the proliferation and differentiation of numerous cell types achieving the repair of the pulp-dentine tissue. Regardless, the pulp is still vulnerable to impairment, particularly to heat exposure during tooth preparation and extensive restorative procedures. Pulp insults mainly result from heat changes, desiccation, exposure to chemicals and bacterial infection. The normal intrapulpal baseline temperature appears to range between 34 and 35°C, with intrapulpal temperatures exceeding 42 to 42.5°C sufficient to cause irreversible damage. This is of particular importance as an increase in intrapulpal temperature does not necessarily produce an increase in pulpal blood flow. Consequently, for a pulp which may already be dealing with the effects of thermal changes from tooth preparation, any previous inflammatory changes and limited perfusion may lead to the potential loss of pulpal vitality. The effects of different harmful insults are cumulative, and where possible, dental clinicians must avoid materials and procedures which may contribute to the potential for iatrogenic damage to the pulp. For in vitro studies, irreversible biological effects result when the intrapulpal temperature increases by more than 5.5°C (that is, when the intrapulpal temperature exceeds 42.4°C). It was found that 15% of the experimental teeth developed irreversible pulpitis or necrosis when this temperature was reached. This is shared by another study, which determined the temperature range for reversible damage to be between 42 and 42.5°C. Overestimation of pulp temperature changes in in vitro studies is probable, given the lack of blood and dentine fluid flow and the lack of periodontal tissues.
Mechanism of thermal insult to a human tooth When heat is transferred to the pulp, it can cause various histopathological changes which may lead to irreversible injury. Unlike heat transfer in other materials, the thermal behaviour of teeth is a heat conduction process combined with physiological processes, such as dentinal fluid flow and pulpal blood flow. The mechanism of injury includes protoplasm coagulation, expansion of the liquid in the dentinal tubules, increased outward flow from the tubules, vascular injuries and tissue necrosis.
Moreover, because of the variance in thermophysical properties and microstructure between the layers of human teeth, heat transfer may also result in thermal stresses that lead to cracking within the different layers. It is thought that an intrapulpal temperature rise above 43°C activates nerve fibres, leading to a reactive increase in blood circulation which assists in the dissipation of any heat advancing towards the dental pulp. Additionally, the surrounding periodontal tissues could also play a significant role in promoting heat convection, thus limiting the intrapulpal temperature rise. Although the flow of dentine fluid can enhance the heat transfer within the pulp upon heating, it is the microcirculation of blood in the pulp that plays an important role in the thermoregulation of pulpal soft tissue. In essence, the pulp blood flow rate is practically constant within the range of 33 to 42°C but increases significantly when the temperature rises above 42°C. Perfused blood works as a heat sink under heating and as a source of heat when subjected to cooling. Yet, the overall influence of pulpal blood flow on heat transfer is thought to be minimal due to its relatively low blood volume. In addition, several other biological factors affect whether the pulp tissue undergoes irreversible effects. These include the water content of the pulp, changes in pulp blood and dentinal fluid flows, previous injury to the pulp, the health of the tissues, the remaining dentine thickness and its insulating quality, the duration of the insult and the surface area of exposed dentinal tubules. Further consequences, such as necrosis, alveolar bone loss and even ankylosis, can also occur when intrapulpal temperatures increase by 3 to 10°C during tooth preparation. Higher and longer-lasting temperature peaks, specifically those exceeding the 5.5°C increase threshold, may lead to pulpal necrosis, and an excessive temperature increase of 3–10°C can lead to periodontal malformations (e.g. alveolar bone necrosis, bone loss and ankylosis).
Tooth heat transfer The relatively low values for thermal conductivity (TC) and thermal diffusivity (TD) of enamel and dentine help protect the deeper tissues from thermal insults. Additionally, the characteristic arrangement of their inner structures has a significant influence on heat excursion in teeth. Nevertheless, greater attention is given to dentine, since it is often the layer in direct contact with provisional materials and the layer likely to be involved in the heat transfer that takes place from the surface of the tooth preparation to the pulp chamber. Even though both enamel and dentine are hard components with a high percentage of mineral content, their thermophysical properties are different. TC indicates the ability of a material to conduct heat, and TD is a measure of the speed with which a temperature change will proceed through an object. The TD and TC of enamel are approximately 2.5 and 1.6 times larger than those of dentine, respectively. The dental pulp is involved in the maintenance of tooth vitality and is vulnerable to heat changes without the protection of the enamel and dentine layers. The TC and TD of enamel and dentine are relatively low compared to those of the pulp; therefore, these two layers are effectively thermal insulators and protect the pulp from deleterious thermal irritation. The thermophysical properties of the tooth are a factor in its thermal behaviour and depend on the microstructures of each tooth layer (Fig. ).
However, because the human tooth is a living tissue, the heat conduction process occurs in conjunction with physiological processes, including fluid motion in the dentinal tubules and blood circulation in the pulp chamber. Dentinal fluid flow could improve the heat transfer within the pulp during temperature changes. The pulpal blood flow also influences the thermoregulation of pulpal soft tissue. The increased pulpal blood flow rate during extra heating from hot foods or rotary dental procedures (above 42°C) works as a heat sink, while during cooling, e.g. from the water jet spray of a handpiece, the blood flow maintains the temperature as a heating source.
Residual dentine Dentine acts as a thermal barrier against harmful stimuli. The flow of heat through dentine is proportional to the TC of dentine and inversely proportional to the thickness of the residual dentine. The key material properties for heat transfer in teeth, the TC and TD values, are both low for dentine. Residual dentine is a critical factor in reducing heat transfer to the pulp, with its thickness seeming to be the most important factor in determining pulpal protection. A thicker residual dentine layer results in a greater insulating effect, affecting the amount of heat transferred to the pulp chamber during dental procedures. Thus, factors such as the type of tooth preparation (full veneer preparation on molars, three-quarter preparation on molars or premolars) should be carefully considered, as this ultimately determines the amount of residual dentine and therefore the level of potential risk to the pulp arising from intrapulpal temperature rise. However, in the clinical situation, the thickness of prepared dentine is difficult to assess and therefore cannot be used to exclude thermal damage to the pulp.
Dentinal tubules Factors such as the presence of dentinal tubules strongly affect the porosity, density and TC of dentine. Dentinal tubules are a network of channels radiating outwards from the pulp cavity to the DEJ. The thermal conductivity of dentine varies with dentinal tubule density, orientation and structure (normal, transparent and reparative dentine, with reparative dentine being the tissue barrier formed by odontoblast-like cells following pulpal insults). For instance, the TC of dentine decreases with increasing volume fraction of dentinal tubules. Likewise, the specific heat of dentine is said to depend on the orientation of the dentinal tubules. These characteristics of dentine promote a better transfer of heat towards the pulp, where heat-dissipating mechanisms can be activated. Yet, these physical properties differ extensively not only within a single tooth but also between different teeth (incisor, canine, molar) and with age, gender, ethnicity and donor. Previous work has demonstrated that there is a notable increase in the number of dentinal tubules in regions near the pulp chamber, providing a greater overall surface area available for diffusion compared to the much smaller presence of dentinal tubules in regions closer to the DEJ. This spatial variation in the density of the dentinal tubules ranges from about 10,000 lumens/mm² at the DEJ to about 60,000 lumens/mm² near the pulp. Therefore, it could be concluded that the microstructure of human dentine is adapted not only to withstand thermal alterations but also to dissipate heat towards the pulp chamber.
Accordingly, it is postulated that the thickness of the residual dentine layer could determine the density of exposed dentinal tubules, where a thin residual dentine layer would be more prone to intrapulpal temperature increase due to a greater presence of dentinal tubules.
Dentine thermal conductivity By combining the residual dentine thickness with the coefficient of thermal conductivity of dentine, it is possible to establish the rate of heat flow from a thermal exposure at the surface of the cut dentine layer and so establish the potential risk to the pulp tissue. This relationship is represented by a modified thermodynamic equation:

$$H = \frac{K A (t_2 - t_1)}{D}$$

where H is the heat flow through dentine per unit time, K is the thermal conductivity of dentine, A is the surface area exposed to the heat stimulus, D is the thickness of the residual dentine layer and t₂ − t₁ is the temperature difference. This equation demonstrates that heat flow through dentine is directly proportional to the TC and inversely proportional to the residual dentine thickness.
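For illustration, the short sketch below evaluates this relation for a few residual dentine thicknesses; all property values are assumptions chosen only to show the inverse dependence on D (a dentine TC on the order of 0.4–0.6 W m⁻¹ K⁻¹ is commonly cited), not measurements from the cited literature.

```python
# Minimal sketch of the heat-flow relation H = K*A*(t2 - t1)/D.
# All values are illustrative assumptions, not measured data.
K = 0.5          # thermal conductivity of dentine, W/(m*K) (assumed)
A = 4e-6         # exposed surface area, m^2 (assumed: ~2 mm x 2 mm)
dT = 20.0        # temperature difference t2 - t1, K (assumed)

for D_mm in (0.5, 1.0, 2.0):          # residual dentine thickness in mm
    D = D_mm / 1000.0                 # convert to metres
    H = K * A * dT / D                # heat flow in watts
    print(f"D = {D_mm} mm -> H = {H * 1000:.1f} mW")
```

Halving the residual dentine thickness doubles the computed heat flow, which mirrors the clinical emphasis on preserving residual dentine.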
Tooth preparation The restorative process of preparing a tooth to receive a fixed prosthetic restoration requires both clinical and technical considerations, as shown in Fig. . A critical area of concern for the clinician during this often lengthy and involved procedure is minimising the external factors that lead to an increase in heat production and are potentially harmful to the vitality of the tooth. Two specific heat-generating variables are the friction between the HSDH and the tooth, and the exothermic setting reaction of self-polymerising restorative materials used for the temporisation of the tooth preparation or the heat generated from the light curing of dental resins. Studies have shown a direct relationship between the tooth preparation design and intrapulpal temperature rise, especially the thickness of the residual dentine layer.
High-speed handpiece in dentistry In dentistry, the HSDH is a commonly used piece of equipment in any clinical setting. It is used for the fast and efficient removal of tooth structure in restorations. A good high-speed handpiece should be of an ergonomic size and weight, have a suitable head size, adequate power and speed, adequate illumination and sufficient cooling features. Cooling features are important because tooth cutting produces friction and heat between the bur and the tooth surface. Excessive heat can transfer to the pulp, resulting in inflammation and necrosis if not dissipated efficiently, as well as structural changes in the enamel and dentine.
Air turbine versus electrical high-speed handpieces There are two main types of HSDH: an electric micromotor, which utilises an electric motor to generate the required rotational force, and an air turbine, which utilises compressed air. The main advantage of electric micromotor–driven handpieces over air turbine handpieces is the greater cutting efficiency, with a smoother and more even cutting rate due to the maintenance of constant torque under high loads and the lack of 'stalling' seen in air-driven handpieces. While air turbines far outrun electric motor–driven HSDHs in speed, reaching speeds as high as 420,000 rpm, they lack the torque stability of electric HSDHs. Low torque means that there is less rotational force; as the rotational speed decreases, the handpiece may stall at high loads, whereas a consistently high torque will maintain a constant rotational speed that does not decrease under high loads, thereby giving a greater cutting efficiency. The greater cutting efficiency of electric HSDHs applies to a variety of dental materials, including glass ceramic, silver amalgam and high noble alloy. The torque of the handpiece is expressed by the power specification of the handpiece. One study found that electric handpieces resulted in a greater decrease in intrapulpal temperature in comparison to air turbine handpieces, attributed to the improved cutting efficiency and reduced frictional heat production. However, with no other studies validating this result, the impact of handpiece type on intrapulpal temperature changes cannot be concluded. In addition, evidence of the effect of input air pressure and torque on temperature increase is conflicting between studies: Ozturk et al. found a temperature increase with increasing air pressure, but Firoozmand et al. found no difference in pulpal temperature between high-torque and low-torque HSDHs. Both the speed and power of the HSDH are related to the energy generated, so increased HSDH speed results in increased intrapulpal temperature.
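To illustrate the speed-torque trade-off described above, the sketch below computes mechanical power as P = τω (torque times angular velocity); the operating points are rough, hypothetical assumptions rather than manufacturer specifications.

```python
# Minimal sketch of the speed-torque trade-off: mechanical power P = torque * angular velocity.
# The torque and speed values are rough, hypothetical assumptions, not manufacturer data.
import math

def power_watts(rpm: float, torque_ncm: float) -> float:
    omega = 2 * math.pi * rpm / 60        # angular velocity, rad/s
    return (torque_ncm / 100.0) * omega   # N*cm -> N*m, then P = tau * omega

# Hypothetical operating points under load:
print(f"air turbine: {power_watts(300_000, 0.1):.1f} W")  # very high speed, low torque
print(f"electric:    {power_watts(150_000, 0.3):.1f} W")  # lower speed, higher torque
```

Even at half the rotational speed, the higher-torque electric handpiece in this toy comparison delivers more mechanical power, consistent with the cutting-efficiency argument above.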
Coolant ports to reduce the thermal shock to the tooth

Most modern HSDHs incorporate air or air-water coolant ports. These are designed to form a halo around the bur and spray high-velocity water and/or air at the tooth-bur interface. This improves visibility, cutting, polishing and cooling efficiency, and decreases frictional heat and the risk of pulp injury [ , , , , , ]. Schuchard conducted a photographic study of the action of the coolant water droplets while the bur rotates at working speed. He found that, at the water volume and pressure used in the clinical setting, the coolant does not actually reach the cutting part of the bur. Instead, the coolant cools the entire tooth rather than just the area of contact. Dental handpieces differ in the number and location of their coolant ports, with 1-, 3- and 4-port varieties available for air turbines, and 1- and 4-port varieties for electric handpieces. Siegel and von Fraunhofer reported that the 3- and 4-port handpieces were introduced to allow sufficient cooling of the tooth if one or more ports become blocked. Theoretically, a greater number of coolant ports would be expected to give greater cooling efficiency, as the coolant would be better distributed over the cutting surfaces. In general, manufacturers have claimed that more ports enhance cooling efficiency; however, the results from recent studies are inconclusive. Chua et al. demonstrated no statistically significant difference between the intra-pulpal temperatures following the use of high-speed air turbine handpieces with different coolant port designs (1, 3 and 4 ports), whereas a later study by Lau et al. found a statistically significant difference in cooling efficiency between 1- and 4-port coolant designs on electric micromotor HSDHs. The coolant port design also influences the cutting efficiency: HSDHs with a multiport coolant design exhibit greater cutting efficiency than those with a single coolant port, even if the 1-port handpiece has a higher coolant flow rate. Similarly, Lloyd et al. observed that cutting with water results in cutting rates three times those of dry cutting, and Siegel and von Fraunhofer found that 1-port HSDHs had significantly lower cutting rates than 3- or 4-port designs when making groove cuts (with intact edges). However, this difference was observed only when performing groove cuts (surrounded by tooth structure), not when performing edge cuts. Groove cuts differ in that they produce greater increases in temperature, owing to the concentration of generated heat at the bur interface and the decreased accessibility of the water spray. Additionally, the position of the spray ports affects the water supply to the cutting interface and therefore the cutting rate. Siegel and von Fraunhofer observed in previous studies that cutting rates varied with spray port number and positioning, especially if a port was blocked. Yang and Sun conducted a similar experiment on ceramic blocks, utilising both edge and groove cutting; however, they found that only the output coolant flow rate, and not the number of spray ports, affected cutting efficiency.

Type of bur used

There are a variety of burs used by dentists with HSDHs.
Studies have concluded that the type of bur used, whether made of diamond or carbide, and its shape, size and grit size depend on clinician preference and are highly influenced by the equipment used during dental school. Diamond burs are the most popular, followed by tungsten carbide burs. Current studies on the effect of bur type on heat generation and cutting efficiency are inconclusive, with studies reporting contradictory findings on the differences between diamond and carbide burs. Most agree that carbide burs generate less heat and pressure, potentially because of the different cutting mechanisms: diamond burs tend to clog, using a grinding action to remove tooth structure, whereas carbide burs, with their fluted design, use a cleaving action instead. Diamond burs have been found to show poorer cutting efficiency than carbide burs, with a thicker smear layer and greater frictional heat produced [ , , ]. This can be attributed to the action of a diamond bur, in which a large amount of energy is applied over the small cutting surfaces of each diamond grit. Watson et al. found that diamond burs produced greater temperature increases, as there is a greater area of contact and more friction produced. Nevertheless, several studies have found the opposite, with lower heat generation with diamond burs, and greater increases in temperature with deeper cavity preparations using tungsten carbide burs. Numerous other factors, such as bur size, shape, coarseness and the amount of surface wear, can influence the amount of heat generated during mechanical tooth preparation. Diamond burs are available in different grit sizes, which produce different finishes: coarser grit burs produce a less smooth surface and more friction, and thus more heat. In addition, as burs wear out and lose their grit, cutting efficiency is reduced. Similarly, when diamond burs clog with debris or stall, the generated energy creates a significant spike in temperature. There also appear to be differences between the burs produced within a single manufacturer and between different manufacturers. Overall, however, studies found that the increase in intra-pulpal temperature was not clinically significant. In Watson et al.'s study, all tests with different burs resulted in a drop in intra-pulpal temperature. Ercoli et al. and Lau et al. also found that, despite an increase in temperature for some burs, all remained below the critical value for pulpal damage.

Cutting technique

The cutting technique adopted by dentists can be either continuous cutting with no pause or intermittent cutting with periods of pause. A study on cutting techniques showed that intermittent cutting produces greater cutting effectiveness. In intermittent cutting, heat can dissipate during the rest periods when the bur is not in contact, resulting in a lower overall temperature increase [ , , ]. Similarly, an experimental study showed that continuous cutting with high loads resulted in greater temperature increases. Other variables of cutting technique, such as rotational speed, operator pressure, depth of the cavity preparation, duration and even differences in the cutting medium (extracted teeth versus glass slabs), will also affect the heat generated and the results of previous in vitro studies.
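A crude way to see why the pauses matter is a lumped thermal model in which the tooth heats while the bur is engaged and cools toward baseline while it is not. The heat input, heat capacity and cooling constant below are assumed illustrative values, not measurements; the sketch shows only the qualitative effect of the duty cycle.

```python
# Toy lumped-parameter comparison of continuous versus intermittent cutting.
# Coefficients are assumed for illustration and do not come from the cited studies.

def peak_rise(on_s, off_s, total_s, q_w=0.5, heat_cap_j_per_k=2.0, cool_per_s=0.05):
    """Euler-integrate dT/dt = q/C - k*T; return the peak rise above baseline (K)."""
    dt, t, temp, peak = 0.01, 0.0, 0.0, 0.0
    period = on_s + off_s if off_s > 0 else on_s
    while t < total_s:
        q = q_w if (t % period) < on_s else 0.0  # heat input only while cutting
        temp += dt * (q / heat_cap_j_per_k - cool_per_s * temp)
        peak = max(peak, temp)
        t += dt
    return peak

print(f"continuous cutting, 30 s       : peak rise {peak_rise(30, 0, 30):.1f} K")
print(f"intermittent, 2 s on / 2 s off : peak rise {peak_rise(2, 2, 30):.1f} K")
```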
Effect of water coolant on intra-pulpal temperature

Many studies have found that air coolant alone is insufficient, and that a water stream or air-water spray is a more appropriate form of cooling when drilling with HSDHs. This is supported by both histological studies and studies measuring intra-pulpal temperature [ , , – ]. The two main variables that influence the heat absorption ability of the water coolant are its flow rate and its temperature. Currently, flow rates between 30 and 50 mL/min are the standard cooling conditions [ , , , , , , ], with the International Organization for Standardization recommending an upper threshold of 50 mL/min. These coolant flow rates are necessary to decrease thermal injury, providing a cooling effect at both the cutting interface and the handpiece head while maintaining adequate operator visibility. If the coolant flow rate is high enough, the pulp tissues should, in theory, not exceed the temperature of the water coolant used. When sufficient coolant was used, the closer the bur was to the pulp, the further the temperature dropped. Leung et al. also found that the thermal resistivity for air-water spray was lower than for water stream cooling at the same flow rate. The maximum output flow rate of the HSDH varies with the number of spray ports if the water pressure is kept the same: the study found that HSDHs with one spray port had the highest flow rate, followed by two ports and then three spray ports. The temperature of the coolant water also affects the intra-pulpal temperature during cooling. Lower water temperatures have both a greater cooling efficiency and a greater heat absorption capacity. However, their clinical usefulness is limited by the increased risk of pulp damage from reduced pulp blood flow and a decreased waste removal ability, which can occur if the pulp temperature drops below 21°C. In addition, both the operator and the patient may be uncomfortable when cooler water is used, especially during prolonged appointments and in patients who suffer from cold sensitivity. Several studies have recommended the use of room temperature water at coolant flow rates between 25 and 50 mL/min to effectively prevent pulp injury. Farah et al. investigated the impact of three water coolant temperatures, 10°C, 23°C and 35°C, at a coolant flow rate of 50 mL/min. This study concluded that water coolant was essential to prevent injury to the pulp and soft tissues, and that a coolant temperature of 35°C in electric handpieces offers only minimal protection, as temperature increases were still observed. Past studies have shown that electric handpieces increase the water coolant temperature, with heat gained as the water travels through the handpiece, which brings the possibility of soft tissue damage. Friction can be generated by the motor bearings, creating heat that warms the coolant water.
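The interplay of the two variables, flow rate and coolant temperature, can be illustrated with the basic calorimetric relation Q = ṁ·cp·ΔT. The sketch below uses the 30-50 mL/min range and the coolant temperatures studied by Farah et al.; treating 37°C pulp temperature as the ceiling to which the coolant may warm is a simplifying assumption for illustration.

```python
# Heat absorption capacity of the water coolant: Q = m_dot * c_p * delta_T.
# Assumes water density ~1 g/mL and that the coolant may warm up to 37 degC.

C_P_WATER = 4.18  # J g^-1 K^-1

def absorbed_heat_w(flow_ml_per_min, coolant_temp_c, pulp_temp_c=37.0):
    """Heat (W) the coolant can take up while warming to pulp temperature."""
    m_dot = flow_ml_per_min / 60.0           # g/s
    headroom = pulp_temp_c - coolant_temp_c  # K of warming available
    return m_dot * C_P_WATER * headroom

for flow in (30, 50):
    for temp in (10, 23, 35):  # coolant temperatures studied by Farah et al.
        q = absorbed_heat_w(flow, temp)
        print(f"{flow} mL/min at {temp:2d} degC -> up to {q:5.1f} W absorbed")
```

The output makes Farah et al.'s conclusion intuitive: at 35°C the coolant has almost no thermal headroom left, whereas cooler water at the same flow rate can absorb an order of magnitude more heat.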
After tooth preparation, the patient receives provisional crown(s) while the final dental restoration is made in the dental laboratory. The requirement for provisional crowns derives mainly from the methodological process that relies on the indirect fabrication of the definitive restoration in the dental laboratory. Provisional crowns must, with the exception of the type of material from which they are fabricated, resemble the planned final restoration in all regards to satisfy areas of critical concern. These restorations restore function to the prepared tooth with only slight differences from the definitive restoration they precede. The overarching aims of provisional crowns can be summarised as biologic, diagnostic, aesthetic and mechanical. Biologically, provisional crowns must provide adequate contour that stabilises and promotes gingival health, restore functional intercuspal and proximal contacts that prevent migration of the prepared tooth and movement of the adjacent teeth, and provide immediate protection to the pulpal tissues. The latter is of particular interest to this review, as an unfavourable combination of material type and fabrication method can be detrimental to the pulp of a vital tooth. Therefore, clinicians must take extreme caution during the provisional restorative phase to ensure the health of the underlying tissues.

Fabrication methods and types of dental provisional crowns

There are two main methods for fabricating provisional crowns: the direct and the indirect method. The direct method places acrylic resin material onto the prepared tooth, with a risk of thermal injury at pulpal temperature increases of 5.6°C. Most chair-side materials used by clinicians for provisional restorations produce a rise in temperature during polymerisation and may also cause irreversible damage to the gingival and pulpal tissue. Furthermore, free monomer in direct contact with open dentinal tubules can be harmful and cause pulp inflammation if it leaches towards the pulp tissue. With the indirect method, materials can be cured in a hydro flask, which shields the freshly prepared tooth from the heat released by the polymerising resin. Polymers used in provisional restorative materials are classified either by their chemistry or by their method of curing. The chemistry groups include acrylics, composite resin and polycarbonate. Methods of curing include chemical, heat, light or dual activation. The commercial options predominantly available for provisional restorations are either composite resin (bisphenol A-glycidyl methacrylate, bis-acryl, urethane dimethacrylate) or methacrylate resin (methyl methacrylate, ethyl methacrylate, vinyl methacrylate, butyl methacrylate)-based materials. The choice of material should be based on the clinical needs and the required longevity of the provisional restoration.

Exothermic properties of provisional materials

Provisional crown materials available today have in common that they cure by radical polymerisation, resulting in either a non-cross-linked (mono-methacrylates) or highly cross-linked polymer network (di- or multifunctional methacrylates). It is the exothermic character of radical curing that leads to a significant amount of heat being generated during the course of polymerisation. These polymer-based provisional materials react through addition polymerisation, in which carbon-carbon double bonds are converted to new carbon-carbon single bonds.
The exothermic heat released during the polymerisation process results directly from the difference in energy between the two bonds. Thus, the amount of heat generated varies between materials. For example, in a previous study examining the temperature profiles of a direct (Luxatemp) and a preformed (Hi-tempo) provisional crown material, both of which are placed directly on the tooth preparation, a higher temperature increase was noted with the preformed crown system.
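An upper-bound feel for this exotherm can be obtained from the reaction enthalpy per converted double bond. The sketch below assumes a textbook-style value of roughly 55 kJ per mole of reacted C=C for methacrylates, together with an assumed resin heat capacity; real chairside temperature rises are far smaller because heat is lost to the tooth, the matrix and the air, and conversion is incomplete.

```python
# Adiabatic upper-bound estimate of the polymerisation exotherm of a
# methacrylate provisional material. All inputs are assumed illustrative values.

DELTA_H_PER_MOL = 55_000  # J per mol of reacted C=C (typical methacrylate, assumed)
M_MMA = 100.12            # g/mol, methyl methacrylate
CP_RESIN = 1.8            # J g^-1 K^-1, assumed heat capacity of the resin mix

def adiabatic_rise(monomer_fraction, conversion):
    """Temperature rise (K) if all reaction heat stayed in the material."""
    mol_per_g = monomer_fraction * conversion / M_MMA
    return DELTA_H_PER_MOL * mol_per_g / CP_RESIN

# Less monomer per gram (filled bis-acryl) means less heat per gram of material.
print(f"neat methacrylate, 70% conversion : {adiabatic_rise(1.00, 0.70):.0f} K")
print(f"50% filled resin,  70% conversion : {adiabatic_rise(0.50, 0.70):.0f} K")
```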
The placement of the final dental restoration often requires adjustments, with the generated frictional heat transferred to the pulp chamber. This heat transfer depends on the thermal conductivity and diffusivity of the material or materials used in the construction of the final restoration and of the bonding system. Generally, thermal conductivities increase in the following order: polymers < ceramics < metals. A higher thermal conductivity means that the material has a greater ability to transmit thermal energy. However, if the temperature gradient changes with time, thermal diffusivity determines the amount of heat transferred. The thermal diffusivity of a dental restorative material may therefore be more important than its thermal conductivity. This property also depends on the material's density and heat capacity. Thermal diffusivity is not proportional to thermal conductivity, which means that a material might have a low thermal diffusivity and a relatively high thermal conductivity. Gold, sometimes utilised as an alloy material for dental restorations, has about 500 times the thermal conductivity (297 W m⁻¹ K⁻¹) and 600 times the thermal diffusivity (1.18 cm² s⁻¹) of dentine. Hence, compared with dentine, gold restorations provide very little protection to the pulp against thermal stimulation. By contrast, the thermal conductivity of zirconia (2.5-2.8 W m⁻¹ K⁻¹) is extremely low compared with metallic materials and alumina (30 W m⁻¹ K⁻¹), with lithium disilicates having a thermal conductivity of 5.2 W m⁻¹ K⁻¹.
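The practical meaning of diffusivity can be sketched with the characteristic diffusion time t ≈ L²/α for heat to traverse a layer of thickness L. The gold and dentine figures follow the ratios quoted above; the zirconia diffusivity is an assumed representative value, not drawn from the cited studies.

```python
# Characteristic heat penetration time t ~ L^2 / alpha for a 1 mm layer.

materials = {
    # name: thermal diffusivity alpha in cm^2/s
    "gold alloy": 1.18,        # quoted above
    "dentine":    1.18 / 600,  # gold is ~600x dentine (quoted above)
    "zirconia":   9e-3,        # assumed representative value
}

L_CM = 0.1  # 1 mm layer

for name, alpha in materials.items():
    t = L_CM ** 2 / alpha
    print(f"{name:10s}: ~{t:7.3f} s for heat to traverse 1 mm")
```

The contrast is stark: heat crosses 1 mm of gold almost instantly, while the same thickness of dentine buffers the pulp for several seconds, which is exactly why residual dentine thickness matters so much.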
Intraoral polishing of fixed dental restoration

Temperature rise is a common occurrence and can easily exceed the 5.5°C threshold value during intraoral polishing procedures. Zirconia, for example, has much higher hardness, elastic modulus and fracture toughness than other all-ceramic restorative materials. It therefore requires much higher frictional forces (e.g. higher speed and/or harder polishers) to create a smooth surface if it is not glazed, which is known to generate more heat. İşeri et al. studied the temperature changes during clinical procedures, focusing on the periodic and continual grinding of disc-shaped zirconia specimens (15 mm diameter × 1 mm) with a micromotor at 22,000 rpm and a high-speed handpiece at 320,000 rpm. The study showed that dry grinding and adjusting of zirconia produced a temperature rise of 63.4°C, far exceeding the critical temperature known to cause pulp damage. Chavali et al. investigated the influence of two polishing systems and three speeds on the heat production of zirconia. To compare the heat generated by intraoral polishing, three different types of polishing agents were used to polish 4-mm-thick zirconia specimens at either 5000, 15,000 or 40,000 rpm with slow-speed dental handpieces. The results showed that no group generated a surface temperature over 42°C, which is just under the critical temperature for pulp damage reported by Zach and Cohen.

Heat generation during direct restoration with light curing

The heat generated during photopolymerisation using visible light-curing units has the potential to damage pulp tissue. The temperature elevation occurs with increased exposure time to light during irradiation. Studies have identified photopolymerisation as a significant risk to pulp health, demonstrating a temperature rise of between 4.3 and 7.5°C during photopolymerisation of composite discs. Another study recorded intra-pulpal temperature rises ranging from 1.5°C to more than 4°C during light curing of composite resin restorations in extracted teeth. Yet clinical experiments have demonstrated that the pulp appears to recover from transient heating from light-curing units. Some consideration must be given to the combination of the temporary material and the type of light-curing unit used, as its output may influence the final temperature rise. The heat emitted during polymerisation may induce a temperature rise of biological concern. With regard to tooth preparation in prosthodontic dentistry, the probability of damage to the pulp is real when the temperature increase due to polymerisation exceeds the physiologic heat dissipation capacity of the dental periodontal system.

Influence of light intensity on temperature rise of BCRs

As discussed previously, increasing light intensity is associated with increasing concern about heat generation within BCRs and subsequent pulpal injury. Balestrino et al. found differences in heat generation between various types of light-curing units. They concluded that the LEDs produced higher temperature rises than the QTH unit, with the lower-irradiance LED causing higher temperature rises than the higher-irradiance LED. However, the heat dissipation design of a light-curing unit should also be taken into account. Armellin et al. provided an alternative perspective on the heat generation of BCRs, stating that the temperature increase during resin curing is a function of the rate of polymerisation, which is associated not only with the energy from the light-curing unit but also with the exothermic polymerisation reaction and the time of exposure. Par et al. found that the temperature rise during curing ranged from 4.4 to 9.3°C and was significantly reduced by curing with the lower-intensity blue curing unit. This study also showed that correlating the temperature rise with radiant energy, for each combination of material × thickness × curing unit, revealed a highly significant linear relationship. However, there is no direct evidence supporting a relationship between light intensity, the heat generation of BCRs and subsequent pulpal injury. Uhl et al. argued that no considerable difference in the temperature increase within a pulp chamber model was found for different light-curing units and composites.
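Since the reported correlation is with radiant energy rather than with irradiance alone, it helps to compute the radiant exposure H = irradiance × time for a few curing protocols. The unit irradiances and times below are assumed typical figures, not measurements of any specific product.

```python
# Radiant exposure (energy per unit area) delivered by a curing light:
# H [J/cm^2] = irradiance [W/cm^2] * time [s]. Protocol values are assumed.

def radiant_exposure_j_cm2(irradiance_mw_cm2, seconds):
    return irradiance_mw_cm2 / 1000.0 * seconds

protocols = [
    ("QTH,  600 mW/cm2, 40 s", 600, 40),
    ("LED, 1200 mW/cm2, 20 s", 1200, 20),
    ("LED, 3000 mW/cm2,  3 s", 3000, 3),
]
for label, irradiance, seconds in protocols:
    h = radiant_exposure_j_cm2(irradiance, seconds)
    print(f"{label}: {h:4.1f} J/cm^2")
```

Note that the first two protocols deliver the same radiant exposure despite very different irradiances, which is consistent with the view that total delivered energy, exposure time and exotherm together, rather than intensity alone, govern the temperature rise.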
The most commonly used method for measuring heat generation is real-time measurement of temperature change via thermocouples, a reliable and relatively simple way to measure temperature change within dental materials or heat transfer across the tooth structure [ , , , , – ]. There have been variations in the methods of measuring heat generation which could potentially lead to differences in results. For example, the types of thermocouple wires used differed: some studies used J-type thermocouples while others used K- or T-type. However, there is no evidence to suggest that the type of thermocouple significantly influences the measurement of real-time temperature change in dental settings or dental materials. Measurements of heat generation and the related temperature change can also be affected by the position of the thermocouple, as any change in the location of the probe between measurements can introduce variation and inconsistent results; the probe therefore needs to be in the same position at each measurement. Radiovisiography can be adapted to aid proper positioning of the thermocouple probe, as well as to determine the residual dentine thickness. Additionally, a silicone heat-transfer compound injected into the pulp chamber is used to help transfer heat from the walls of the pulp chamber to the thermocouple. Specimens are often prepared either as discs or as actual tooth preparations. As shown in Table , previous studies largely employed molars, with a few using premolars and only two studies using dentine discs. In general, when tooth specimens were used, the design was shaped to represent real clinical applications by using tooth preparations for either crown or cavity restorations. From the literature, there also appears to be a systematic preference for thermocouples to measure changes in temperature. Additionally, most studies adopted some form of metallic material to fill the cavity of the pulp chamber to facilitate heat transfer to the thermocouple. On the other hand, it is notable that most studies had a small sample size (n = 5); only two studies used water baths to simulate intraoral conditions, and only one attempted to model the complex intra-pulpal fluid flow. Many of the studies listed in Tables , , , had a single thermocouple located in the pulp chamber, and were thus interested only in measuring the intra-pulpal temperature change rather than the heat transfer from outside the enamel down to the pulp (during dental procedures or from restorative materials). The differences in the methods adopted across studies make it difficult to compare their results. Furthermore, one common limitation of these in vitro studies is the lack of the blood circulation seen in vital pulp and its associated heat dissipation. Overestimation of pulp temperature changes in in vitro studies is probable given the lack of blood and dentinal fluid flow and the absence of periodontal tissues [ – ]. This could limit the applicability of the results to vital human dentition.
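Whatever the thermocouple type, the quantity reported by these studies is usually the peak rise above baseline. A minimal sketch of that post-processing step is shown below; the readings are made-up illustrative data, and the 5.5°C threshold is the commonly cited critical value mentioned earlier in this review.

```python
# Minimal post-processing of a thermocouple log: report the peak rise above
# baseline and whether it crosses the commonly cited 5.5 degC threshold.
# The sample readings are fabricated for illustration only.

samples = [37.0, 37.1, 37.4, 38.2, 39.6, 41.1, 40.3, 38.9, 37.8, 37.2]  # degC

baseline = samples[0]            # pre-procedure reading taken as baseline
peak_rise = max(samples) - baseline

print(f"peak intra-pulpal rise: {peak_rise:.1f} degC")
print("exceeds 5.5 degC threshold" if peak_rise > 5.5 else "below 5.5 degC threshold")
```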
Experimental setup to simulate intraoral environment

Although thermogenesis during various dental procedures is extremely common, the amount of heat generated needs to be measured to provide clinical implications for better instruction and instrument application. Heat transfer in teeth commonly depends on the geometry of the tooth itself, material properties and biological function. The biological function is the biggest challenge for the experimental setup. In vivo experiments reflect the active processes within a tooth, but the experimental measurement of in vivo temperature changes within the tooth pulp is impractical. In vitro testing is therefore the only choice, and the way the simulation system is built up influences the accuracy and reliability of the results. To mimic the natural intraoral environment, three main factors should be considered: temperature, intra-pulpal blood flow and humidity. Several integrated simulation systems of the intraoral environment used by previous studies could also be referred to when setting up a more realistic and ideal experiment. The various methodologies applied across previous studies to simulate these three factors are summarised in Table .

Effect of pulpal blood flow and microcirculation model

Pulpal blood flow (PBF) helps maintain pulpal temperature by providing circulation and absorbing or providing heat. Kodonas et al. reconstructed pulpal microcirculation by running 37°C water through extracted human teeth and found a significantly lower temperature increase under the microcirculation model. However, PBF varies with external stimuli: it decreases when the pulp is cool, increases significantly when pulpal temperature rises above 42°C, and clinically used vasoconstrictors slow or stop it. Most studies evaluating the heat generated by dental procedures, such as Chavali et al.'s study, were designed and completed at room temperature (24.0 ± 0.3°C) and ambient humidity. However, the surface temperatures of the dentition and soft tissues have been found to lie between 30 and 35°C and 32 and 37°C, respectively. Amsler et al. showed that the temperature range of the oral cavity was 26 to 29°C. Dias et al. investigated the real-time pulp temperature change during temporary crown fabrication, comparing the heat generated by two different temporary crown systems and at different tooth sites. In Chua et al.'s and Dias et al.'s studies, the authors simulated the pulp temperature by adding 37°C water to the container in which the tooth specimens were fixed during the experiment (Fig. ). These two studies highlighted the importance of conducting the experiment with 37°C water to simulate the baseline pulp temperature, as carrying out the experiments at room temperature had a significant impact on the temperature profile. For example, when pulp temperature was measured with and without 37°C water during fabrication of a self-polymerising temporary crown, there was almost a 20°C difference in pulp temperature between the two techniques (Fig. ). When compared with the results of a previous study by Kim and Watts using the same crown material at room temperature, the authors found that, with the pulp temperature stabilised at 37°C, the temperature recorded in the pulp chamber was 69 times lower. In Chavali et al.'s study, the polishing was also assessed without pulp temperature simulation.
They discussed that dry polishing could affect the rate of evaporation and thereby the cooling rate. Their finding that the temperature increased to 42°C during intraoral polishing may reflect the fact that the experiment was conducted at room temperature, and the rise may have been greater under pulp temperature simulation conditions. To determine the reference values of the two intraoral factors, Park et al. assessed the accuracy of two intraoral scanners using a box-shaped intraoral environment simulator to mimic the temperature and humidity of the mouth (Fig. ). In Farah's studies, an incubator at 37°C ± 1°C was used as the simulation chamber for the intraoral temperature, to evaluate the effect of cooling water temperature on temperature changes in the pulp chamber (Fig. ). These two studies successfully simulated the intraoral temperature, but pulp flow and intraoral humidity were not simulated.

Intra-pulpal blood flow and intraoral humidity

Surveys such as those conducted by Goodis et al. and Mülle and Raab showed that pulp blood flow probably mediates an effective homeostatic mechanism within human teeth, while many in vitro heat transfer studies of human teeth were carried out with cleaned and empty pulp chambers. Linsuwanont et al. reported that, under temperature fluctuations, any fluid movement either away from or towards the pulp would inevitably result in redistribution of the pulp chamber temperature. Lin et al. also briefly stated that the TC and heat capacity of teeth with empty pulp chambers were significantly different from those of chambers filled with pulp soft tissue. Kodonas et al. found that heat transfer experiments conducted without pulpal simulation produced temperature increases of a greater magnitude than those with pulpal simulation. To simulate the vital dental pulp, Hannig and Bott filled the pulp chamber with warm water to mimic heat transfer through the soft tissue in the pulp chamber; Attrill et al. filled the dead space of the pulp chamber with a 'pulp phantom', which provided a thermal conduction environment similar to the vital dental pulp; and Chua et al. and Farah utilised a high-density polysynthetic silver thermal compound inside the pulp cavity to improve conductivity. Nevertheless, Hannig and Bott reported that the influence of pulpal blood flow on the thermal behaviour of the dentine-pulp complex cannot be simulated by stationary water inside the testing container. Chua et al. also suggested that a better experimental setup for pulpal simulation would help obtain more exact measurements of the temperature change. However, this is challenging to replicate accurately because of its dynamic nature and the changes in flow following different stimulations, such as increased blood circulation with rising temperature. Previous studies have made many attempts to simulate this flow. In an earlier study, Daronch et al. noted the deficiency of empty pulp chambers, which limited the direct application of the measured data to in vivo situations, and employed an infusion pump connected to the tooth roots through a small-diameter tube. This device delivered water at 0.0125 mL/min to simulate the pulpal blood flow, while the tooth was immersed in a water bath up to the cemento-enamel junction. Farah later used a curved needle connected to a peristaltic pump with a controlled fluid flow rate to simulate the pulp blood flow.
This study also concluded that simulated pulpal blood flow resulted in a lower rise in pulp chamber temperature compared with when pulpal blood flow was not simulated (Fig. ). The relative humidity of the oral cavity has been found to vary in the range of 78 to 94% during operative dental procedures [ , , ]. Breathing through either the nose or the mouth showed no significant effect on the relative humidity; however, the relative humidity decreased once placement of the rubber dam was completed. Bicalho et al. constructed a chamber to mimic the oral environment and evaluated the effect of temperature and humidity. They controlled the humidity with a water spray system that was activated automatically to maintain a pre-set humidity value of either 50 or 90% at 22 or 37°C. According to their study, temperature and humidity had a significant influence on the mechanical properties of teeth restored with composite resins. In another study, the flexural modulus and flexural strength of composite were not negatively influenced by simulated intraoral conditions of 35°C at 90% relative humidity.
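The perfusion effect reported above can be caricatured with a lumped model in which perfusion adds a heat-loss term pulling the pulp back toward blood temperature, in the spirit of bioheat formulations. All coefficients below are assumed illustrative values; the sketch only shows why an empty chamber overestimates the rise.

```python
# Toy lumped model: C*dT/dt = q - (loss + perfusion) * T, where T is the rise
# above the 37 degC baseline. Coefficients are assumed for illustration only.

def final_rise(q_w=0.2, heat_cap=1.0, loss=0.02, perfusion=0.0, steps=6000, dt=0.01):
    """Euler-integrate the rise over steps*dt seconds and return the final value."""
    temp = 0.0
    for _ in range(steps):
        temp += dt * (q_w - (loss + perfusion) * temp) / heat_cap
    return temp

# Same heat input for 60 s, with and without a simulated perfusion term.
print(f"empty chamber        : +{final_rise(perfusion=0.00):.1f} degC")
print(f"simulated blood flow : +{final_rise(perfusion=0.08):.1f} degC")
```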
Although thermogenesis during various dental procedures is extremely common, the amount of heat generation needs to be measured for providing clinical implications for better instruction and instrument application . Heat transfer in teeth commonly depends on the geometry of the tooth itself, material properties and biological function. The biological function would be the biggest challenge for the experimental setup. In vivo experiments reflect the active processes within a tooth, whereas the experimental measurement of in vivo temperature changes within tooth pulp is impractical. Obviously, the in vitro test would be the only choice and the way how the simulation system is built up would influence the accurateness and reliability of the results. For mimicking the natural intraoral environment, three main factors should be considered: temperature, intra-pulpal blood fluid and humidity. Also, there are several integrated simulation systems of the intraoral environment used by previous studies, which could be referred to for setting up a more realistic and ideal experiment. Various methodologies applied across previous studies that stimulated the above three factors are summarised in Table .
Pulpal blood flow (PBF) which varies with external stimuli helps maintain pulpal temperature by providing circulation and absorbing or providing heat . Kodonas et al. reconstructed pulpal microcirculation by running 37°C water through extracted human teeth and found significantly lower temperature increase under the microcirculation model. However, PBF varies with external stimuli. It decreases when the pulp is cool and increases significantly when pulpal temperature increases above 42°C and clinically used vasoconstrictors slow or stop PBF . Most studies evaluating the heat generation via dental procedures, such as Chavali et al.’s study , were designed and completed at room temperature (24.0 ± 0.3°C) and with ambient humidity . However, the surface temperatures of the dentition and soft tissues have been found between 30 to 35°C and 32 to 37°C, respectively . Amsler et al. showed that the temperature range of the oral cavity was 26 to 29°C. In Dias et al.’s study , they investigated the real-time pulp temperature change during temporary crown fabrication, comparing the heat generation during two different temporary crown systems and at different tooth sites. In Chua et al. and Dias et al.’s studies , the authors simulated the pulp temperature by adding 37°C water in the container where the teeth specimens were fixed during the experiment (Fig. ). These two studies highlighted the importance of conducting the experiment with 37°C water to simulate the baseline pulp temperature as the experiments carried out at room temperature had a significant impact on the temperature profile. For example, when pulp temperature was measured with and without 37°C water during a self-polymerising temporary crown fabrication, there was almost 20°C difference in pulp temperature between the two techniques (Fig. ). When compared to the results from a previous study by Kim and Watts using the same crown material conducted at room temperature, the authors found that while the pulp temperature stabilised at 37°C, the temperature recorded in the pulp chamber was 69 times lower . In Chavali et al.’s study , the polishing was also assessed without pulp temperature simulation. They discussed that dry polishing had the possibility to affect the rate of evaporation and thereby cooling rate. Their results of the temperature increased to 42°C from the intraoral polishing may have come from the fact that the experiment was conducted at room temperature and may have increased when it was done under pulp temperature simulation conditions . In order to determine the reference values of the two intraoral factors, Park et al. assessed the accuracy of two intraoral scanners utilising a box-shaped intraoral environment simulator to mimic the temperature and humidity of the mouth (Fig. ). Then, in Farah’s studies , an incubator at 37°C ± 1°C was used as the simulation chamber of the intraoral temperature to evaluate the effect of cooling water temperature on the temperature changes in the pulp chamber (Fig. ). These two studies successfully simulated the intraoral temperature, but pulp flow and intraoral humidity were not simulated in this study.
Surveys such as that conducted by Goodis et al. and Mülle and Raab showed that the pulp blood flow probably mediated the effective homeostatic mechanism within human teeth, while many in vitro heat transfer studies of human teeth were carried out with cleaned and empty pulp chambers. Linsuwanont et al. reported that, under temperature fluctuations, any fluid movement either away from or towards the pulp would inevitably result in the redistribution of the pulp chamber temperature. Lin et al. also briefly stated that the TC and heat capacity of teeth of empty pulp chambers were significantly different from filled chamber with pulp soft tissue. In Kodonas et al.’s research , it found that the heat transfer experiments conducted without pulpal simulation would result in temperature increase of a greater magnitude than those with pulpal simulation. In order to simulate the vital dental pulp, Hannig and Bott filled the pulp chamber with warm water to mimic heat transfer through soft tissue in the pulp chamber. Attrill et al. filled the dead space of pulp chamber with a ‘pulp phantom’ which provided a thermal conduction environment similar to the vital dental pulp and Chua et al. and Farah utilised a high-density polysynthetic silver thermal compound inside the pulp cavity to improve conductivity. Nevertheless, Hannig and Bott reported that the influence of pulpal blood flow on the thermal behaviour of the dentine-pulp complex cannot be simulated by stationary water inside the testing container. Chua et al. also suggested that a better pulpal simulation experimental setup would help to find the more exact results of the temperature change. However, this will be challenging to accurately replicate due to its dynamic nature and changes in flow following different stimulations, such as temperature increases causing an increase in blood circulation . Previous studies have many attempts to simulate this flow. An earlier research, Daronch et al. , noticed the deficiency of empty pulp chambers, which limited the direct application of the measurement data of in vivo situations, and employed an infusion pump connected to the tooth roots through a small diameter tube. This device delivered water at a speed of 0.0125 ml/min to simulate the pulpal blood flow. At the same time, the tooth was immersed into a water bath up to the cement-enamel junction. Then, Farah used a curved needle connected to a peristaltic pump with a controlled fluid flow rate to simulate the pulp blood flow. This study also concluded that simulated pulpal blood flow resulted in a lower increase in the pulp chamber temperature, compared to when pulpal blood flow was simulated (Fig. ). The relative humidity of the oral cavity has been found to vary in the range 78 to 94% during operative dental procedures [ , , ]. Breathing through either nose or mouth showed no significant effect on the relative humidity ; however, the relative humidity would have decreased once the use of the rubber dam was completed . Bicalho et al. constructed a chamber to mimic the oral environment and evaluated the effect of the temperature and humidity. They controlled the humidity by a water spray system which was activated automatically to maintain a pre-set humidity value of either 50 or 90% at 22 or 37 °C. According to their study, the temperature and humidity had a significant influence on the mechanical properties of restored teeth with composite resins . 
In another study, the flexural modulus and flexural strength of a composite were not negatively influenced by simulated intraoral conditions of 35°C and 90% relative humidity .
Various steps of dental restorative procedures have the potential to generate considerable amounts of heat which can permanently damage the pulp, leading to pulp necrosis, discoloration of the tooth, and eventually tooth loss. Thus, measures should be undertaken to limit pulp irritation and injury during procedures. This is especially true as damage to the pulp is cumulative and past insults affect the restorability of the tooth. Despite the importance of this topic, few studies are available that investigate the influencing factors across dental procedures. Most previous studies have simulated the intraoral environment using an incubator at 37°C to mimic the intraoral temperature [ , , ]. However, little research has simulated pulpal blood flow and its temperature using a peristaltic tubing pump . Only one study employed an intraoral humidity chamber to simulate the relative humidity around natural teeth, which is an important variable . This highlights a gap for future research and the need for an experimental setup that can simulate pulpal blood flow, pulp temperature, intraoral temperature, and intraoral humidity in order to accurately reproduce intraoral conditions and record temperature changes during various dental procedures.
|
Patient and clinician-reported experiences of using electronic patient reported outcome measures (ePROMs) as part of routine cancer care
|
7965a6e4-ff5a-4ba9-bdf4-d73bbad8263e
|
10160312
|
Internal Medicine[mh]
|
There are over two and a half million people in the UK currently living with cancer, and this number is set to increase to 4 million by 2030 . The symptom burden for these patients can be high, and even mild side-effects can impact quality of life (QoL) and lead to cessation of treatment, especially with prolonged treatment regimes . The effective management of cancer symptoms or treatment-related side-effects is integral to maintaining a good QoL in patients living with cancer. Patient Reported Outcome Measures (PROMs) are used to gather information about health status, QoL and functioning directly from patients, without any interpretation from a member of clinical staff . PROMs allow patients to report on symptom severity as well as the impact of these symptoms on QoL, functioning and overall well-being. The benefits of integrating remotely-reported PROMs using electronic platforms (ePROMs) within clinical pathways are well documented . Randomised controlled trials have demonstrated that the use of ePROMs leads to improvements in the doctor/patient relationship as a result of enhanced communication and clinical efficiency, better symptom control, reduced emergency department attendance, reduced hospitalisation and improved survival. ePROMs have also been shown to lead to earlier detection and management of symptoms as well as earlier detection of tumour recurrence . Furthermore, automated feedback to patients on completing ePROMs can identify milder symptoms which do not necessarily need clinician involvement and can be managed at home . On the whole, the routine collection of ePROMs can enable a more holistic and patient-centred approach to clinical care . This high-level evidence has been invaluable in this arena for informing shared decision-making as well as economic and regulatory analyses . To date, the implementation of ePROMs in oncology has mostly occurred in the context of clinical trials, while their integration into routine cancer care is still to be established. Patients and clinicians report high satisfaction and acceptability when ePROMs are used as part of routine cancer care . However, a number of patient, clinician and logistical barriers to ePROMs integration in this setting should be taken into consideration in order to make the routine implementation of ePROMs a reality . ‘MyChristie-MyHealth’ was launched in January 2019, integrating ePROM questionnaires routinely into patient care pathways . As part of the evaluation of the ePROMs service, we aimed to assess the acceptability and feasibility of regular ePROMs collection in routine cancer care and explore patient and clinician experiences of the service.
Study design
This was a single-centre, questionnaire-based study which formed part of a service evaluation of the MyChristie-MyHealth initiative. The study focused on patients with lung cancer and head and neck cancer, the two main disease groups in which this service was initially introduced. The aim was to demonstrate the feasibility of ePROMs collection in routine cancer care and to explore patient and clinician experiences of the service. This service evaluation was reviewed and approved by the Christie NHS Foundation Trust Governance Panel.
MyChristie-MyHealth ePROMs service
Patients with an outpatient consultation automatically receive a text message or email containing a personalised link to access the MyChristie-MyHealth platform the day before their first clinic appointment or three days prior to a scheduled follow-up appointment. Patients then log onto the ePROM platform using their personal details (surname, date of birth and postcode) to complete the questionnaire. Patients were able to seek assistance to complete ePROMs via a proxy (e.g. a family member) or a member of the Christie ePROM team. This help was solely of a technical nature, such as logging on to the MyChristie-MyHealth platform, and all ePROMs responses were entirely the patient’s own (Figs. and ). The ePROMs questionnaire consists of symptom items written in lay language, adapted from the Common Terminology Criteria for Adverse Events (v5.0), and quality of life items (using the EuroQol EQ-5D-5L quality of life (QoL) tool) . Symptom items were chosen by the relevant clinical teams. Patients and specialist nurses were involved in the development of the MyChristie-MyHealth ePROM questionnaire. The type and number of symptom items depend on prior treatment received, e.g., systemic anticancer therapy or radiotherapy. Examples of the ePROMs questionnaires used in the lung and head and neck patient groups are provided in Additional file : Appendix 1 and Additional file : Appendix 2. Following completion of an ePROMs questionnaire, patients are presented with colour-coded advice dependent on symptom severity. Patients without any symptoms receive a message reassuring them that no action is required (green). Those with mild symptoms receive an alert with a link to the Macmillan website that includes self-care advice (blue). Moderate symptoms elicit advice to seek medical attention from their oncology team or their General Practitioner within a week (orange). Finally, those with severe symptoms receive an alert advising them to seek urgent medical advice within 24 h (red). Each advice alert is accompanied by the hospital’s 24/7 hotline contact details. For the purposes of this report, data regarding demographics, disease stage, performance status and comorbidity burden (Adult Comorbidity Evaluation (ACE) score) were collected prospectively at the time of consultations by the clinical teams. Missing data were collected by the first author from the electronic patient record. At the time this study was conducted, the results of the ePROMs questionnaires were not integrated into the institution’s electronic patient record. In order to view the completed ePROMs questionnaires, clinicians logged into a separate electronic platform, provided by a digital health company (DrDoctor®), and they were encouraged to do so prior to each clinical encounter. Clinicians were reminded by members of the ePROM team, at the start of each clinic, to log onto the platform and review responses.
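As an illustration of the colour-coded advice logic described above, the following minimal sketch (Python) maps reported symptom severity to the four advice tiers; the function, its severity labels, and the hotline wording are illustrative simplifications, as the internals of the commercial platform are not described in this article:

    # Minimal sketch of the colour-coded advice logic described above. The
    # function name and severity labels are illustrative simplifications.
    HOTLINE_NOTE = "Contact details for the hospital's 24/7 hotline accompany every alert."

    def advice_for(severity: str) -> str:
        """Map the worst reported symptom severity to a colour-coded advice tier."""
        tiers = {
            "none":     ("green",  "No action required."),
            "mild":     ("blue",   "Self-care advice via a link to the Macmillan website."),
            "moderate": ("orange", "Seek advice from your oncology team or GP within a week."),
            "severe":   ("red",    "Seek urgent medical advice within 24 hours."),
        }
        colour, message = tiers[severity]
        return f"[{colour.upper()}] {message} {HOTLINE_NOTE}"

    print(advice_for("moderate"))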
Patient experience
Participants
Patients who attended lung cancer or head and neck cancer clinics between May 2019 and June 2019 and had completed at least one ePROMs questionnaire were invited to complete a Patient Reported Experience Measure (PREM) questionnaire. All consecutive patients who attended these clinics were approached to complete the questionnaire. Participants were excluded if they had not completed an ePROMs questionnaire prior to the assessment period, if they had completed the questionnaire on the day of PREM collection with the help of a member of the ePROMs team, or if they had completed the questionnaire with the assistance of a proxy who was not present at the time of PREM collection.
Questionnaire development and content
The PREM questionnaire was developed in collaboration with the Christie ePROMs Steering Group. Questions were chosen by the ePROMs steering group and formed into draft questionnaires. Clinicians, clinical nurse specialists and patient representatives were asked to review the questionnaires to ensure they were relevant and understandable, and provided modifications where needed. The final questionnaires were reviewed and approved by the ePROMs steering group prior to roll-out. The questionnaire consisted of six questions exploring the usability of the ePROMs questionnaire, the timing of the text messages and the impact on clinical care. These questions were answered using a 4-point Likert scale (from 1 ‘strongly agree’ to 4 ‘strongly disagree’). After discussion within the ePROM steering group, a neutral option was omitted, as research has suggested that 10–20% of those who answer with a neutral option in fact hold a preference, either favourable or unfavourable . A further two dichotomous (‘yes/no’) questions with free-text boxes were added to gain information regarding the advice messages and the frequency of questionnaire administration. A final free-text box was added at the end for further comments about the MyChristie-MyHealth service (Fig. ). Both paper and electronic versions were available to allow as many patients as possible to participate. All paper versions were anonymised, entered onto the ePROMs platform and subsequently disposed of securely.
Clinician experience
Participants
All clinicians involved in lung and head and neck cancer clinics between May 2019 and July 2019 were invited to complete a clinician experience questionnaire. Participants were approached in person and via email.
Questionnaire development and content
The clinician questionnaire was also developed with input from the Christie ePROMs steering committee. Potential questions were discussed with the steering group, constructed into a questionnaire, and then reviewed and adapted by the group. After final review and approval, the questionnaire was uploaded and distributed using an online platform. It included six questions using a 4-point Likert scale, as outlined above. These questions explored the impact of the service on clinical decision-making, communication with patients, duration of consultations, and patient engagement both in their consultation and in their clinical care as a whole. The clinician questionnaire is shown in Fig. below.
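To make the response format concrete, the short sketch below shows one way the 4-point Likert responses could be coded and collapsed into the agree/disagree proportions reported later; the example responses are invented for illustration and do not come from the study data:

    # Illustrative coding of one 4-point Likert item and the agree/disagree
    # collapse used when reporting proportions; the responses are invented.
    from collections import Counter

    LIKERT = {1: "strongly agree", 2: "agree", 3: "disagree", 4: "strongly disagree"}
    responses = [1, 1, 2, 1, 2, 2, 1, 3, 1, 2]  # hypothetical answers to one item

    counts = Counter(responses)
    agree = sum(n for code, n in counts.items() if code <= 2)  # collapse 1-2 into "agree"
    print(f"Agreement: {agree}/{len(responses)} ({100 * agree / len(responses):.0f}%)")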
Patient experience
Study population
Between May and July 2019, 107 patients were approached to complete a PREM. Of these, 100 PREMs were returned completed. Two patients declined due to anxiety around their upcoming appointment, one had completed the ePROM with a proxy who was not present at the time of PREM collection, and four questionnaires were returned with incomplete data insufficient for analysis. The median patient age was 67 years (range 30–80 years) and 50% were female. Seventy-eight patients had lung cancer and 22 had head and neck cancer (Table ). Most patients had an ECOG performance status of 0–1 (86%) and the remainder (14%) had a performance status of 2. Seventy-five percent of patients had an ACE-27 score of 0–1. Almost half (49%) of patients had non-metastatic disease, 42% had metastatic or extensive-stage disease and, for the remainder, the extent of disease was not documented (Table ).
Patient experience data
All patients either strongly agreed or agreed that they found the ePROMs service (MyChristie-MyHealth) easy to understand. Almost all (99%) felt ePROMs were easy to access and that the time taken to complete the questionnaires was appropriate. Finally, 97% reported that the timing of the text or email prompt to complete the questionnaires was appropriate. When investigating the perceived impact of ePROMs on clinical care, 82% stated that using ePROMs improved communication with their oncology team and 88% agreed or strongly agreed that using ePROMs made them feel more involved in their care. Eighty-one participants felt that using ePROMs prompted them to seek medical advice sooner. Eighteen patients reported receiving self-care advice through the MyChristie-MyHealth portal, of whom 14 said they found this advice helpful. An evaluation of the free-text comment boxes found that patients considered the questionnaire to be helpful, easy to use and a good method of aiding communication with their clinical team. Some patients highlighted that they thought it was important that clinicians mentioned and demonstrated that they used the ePROMs responses during clinical consultations. Other patients reported that the questions included were too rigid and suggested the inclusion of a free-text box at the end of the ePROM questionnaire so that they could add other comments on their health (Table ).
Clinician experience
Study population
Between June 2019 and July 2019, 11 oncologists specialising in lung and head and neck cancer completed the clinician experience questionnaire. Due to the set-up of the online platform, demographic data could not be collected. One questionnaire was returned with incomplete data (one question unanswered) but was felt to be sufficiently complete to be included in the analysis.
Clinician experience data
Eight clinicians (72.7%) reported that using ePROMs supported communication with their patients and six noted that their use made consultations more patient-focused. Seven clinicians (63.6%) felt that ePROMs use had led to patients being more engaged during their consultations and five (45.5%) believed that patients using ePROMs were more engaged with their care as a whole. Five clinicians (45.5%) felt that using ePROMs had contributed to their clinical decision-making. Only one clinician reported that using ePROMs shortened their consultation time. Some clinicians commented that whilst they thought the inclusion of ePROMs into clinical care was useful, integration into the electronic patient record would be a valuable step in making ePROMs easier to use.
Clinicians also commented that, due to the lack of integration into the electronic patient record, accessing and reviewing ePROMs was time-consuming and frequently forgotten (Table ).
Historically, the use of PROMs in oncology care has largely been undertaken in the context of clinical research. Recently there has been a drive to incorporate regular ePROMs collection into routine cancer care . This study shows that the real-world collection of ePROMs as part of routine cancer care is acceptable to patients and clinicians and can have a positive impact on patient attitudes towards engagement with their care. In this study, nearly all patients found ePROMs easy to use and understand, which is similar to the findings from the published literature on the use of PROMs in cancer care. Studies across a range of cancer sites, and also in a palliative care setting, have found that between 78.2% and 100% of patients found ePROMs easy to use and 97–100% found them easy to understand . It is worth noting that these studies all used different electronic platforms from the current study, but their findings support the idea that routine collection of ePROMs is acceptable to patients. Our study found that 95% of patients surveyed were happy to continue completing ePROMs at every clinic visit, which is higher than in previously published studies. In a study by Boyes et al., 75% of patients wished to complete a PROM questionnaire at each clinic visit, whilst only 60% of those in a study by Kallen et al. wanted to continue using ePROMs regularly as part of their clinical care . The higher willingness to complete regular ePROMs in our evaluation may reflect the fact that only patients who had filled in at least one ePROMs questionnaire, and who were therefore more likely to remain compliant, were approached in this study, potentially introducing bias to the results. Furthermore, in the current evaluation the ePROMs initiative had been running for less than a year, meaning patients may have been less likely to have experienced questionnaire fatigue than in longer-running studies. Another important finding is that over 80% of patients reported that completing the ePROMs questionnaires helped them to feel more involved in their care. Previous studies by Basch et al. demonstrated that 60–77% of patients felt more in control of their cancer care as a result of using ePROMs . It is possible that the different wording of the question in this study, using ‘involved’ rather than ‘in control’, may have led to the slightly higher agreement with this statement, as patients have been found to experience a ‘lack of control’ whilst undergoing their treatment . One limitation of this study is that it did not specifically investigate the barriers related to the routine collection of PROMs using an electronic platform. The current literature is mixed when looking at the impact of PROMs use on patient–clinician communication. Eighty-two percent of patients in our study felt that using ePROMs improved communication with their clinical team, which is similar to a number of studies showing that between 51 and 95% of participants felt that the use of PROMs supported communication with their clinical team . However, only 37% of respondents in a study by McLachlan et al. reported that PROMs improved communication with their clinical team, and Rosenbloom et al. did not find any statistically significant changes in patient satisfaction regarding communication when using PROMs as part of clinical care . One reason for this difference may be that the patients in the study by McLachlan et al. were not undergoing treatment and only a small proportion were found to have high cancer needs, which may have limited the effect.
Furthermore, baseline satisfaction with communication was high prior to the implementation of PROMs in the Rosenbloom et al. study, which may have led to a ceiling effect. The existing literature on PROMs echoes the comments made by patients in this study. The use of PROMs has been shown to help reassure patients and better focus their thoughts on health-related issues and symptoms during consultations . Patients in this study commented that clinicians were not systematically discussing their ePROMs questionnaire responses during consultations, which has been found to be an issue in other studies. Boyes et al. found that only 3 of the 40 patients in their study recalled clinicians specifically mentioning PROMs responses during their consultations . An important aim of ePROMs service improvement is therefore to raise awareness of the importance of clinicians reviewing the questionnaires and feeding back to patients. Most clinicians in this study reported that the use of ePROMs supported communication with patients (8/11), whilst just over half (6/10) reported that ePROMs led to consultations being more patient-focused. Interestingly, although seven clinicians reported that the use of ePROMs led to patients being more involved in the consultation, only five reported that it improved engagement with their overall care. The current literature regarding the role of PROMs in supporting communication is very mixed, and it appears that whilst 70–100% of clinicians from a nursing or allied healthcare professional background feel that PROMs support communication , only 50–67% of doctors agree with this statement . To our knowledge, no previous literature has looked directly at the impact of ePROMs on making consultations more ‘patient-centred’. However, 60% of clinicians in a study by Berry et al. found that the use of PROMs helped to guide consultations and 67% in a study by Mark et al. reported that PROMs helped to focus consultations . This study found that 45% of clinicians reported that patients’ ePROMs responses contributed to their decision-making. A study by Moore et al., looking at using ePROMs as part of routine cancer care in haematological malignancies, found similar results to this study in that just over 40% of clinicians reported taking action after looking at the results of ePROMs. However, an earlier study conducted at the Christie showed this percentage to be much higher (79.5%) . It is important to note that in the earlier study, patient responses to the ePROMs questionnaires were available within the electronic patient record rather than on a separate platform, as was the case in this study. This was reported by clinicians as a potential barrier to accessing ePROMs responses prior to the consultation and could contribute to the difference in the results. This issue has since been rectified: the ePROMs responses have been available in the electronic patient record for the clinical team to review since March 2020. Approximately two-thirds of the patients in this study who stated they received advice to seek urgent medical help reported that they did not heed this advice. Reasons given by patients for not seeking urgent help were that they were due to see their oncology team in the very near future or that the symptom was long-standing and being managed. This highlights an important area for ongoing study: to further explore patients’ reasons for not heeding the urgent medical advice prompts and whether the thresholds for the alerts need to be altered.
One limitation of this study is that the clinician experience questionnaire was not completed by all clinicians involved in clinics using ePROMs, again potentially introducing bias into the results. Another potential source of bias was that all questions were phrased positively, with no negative phrasing used. This was primarily to ensure the questions were easy to understand and to keep their number as low as possible to avoid questionnaire fatigue, but it is acknowledged that this could lead to more positive responses. In the same vein, although the PREM questionnaires were collected anonymously, patients were approached to complete the questionnaire by a member of the MyChristie-MyHealth team. This could introduce potential bias, as patients may not want to respond negatively about a service that is providing their cancer care. Future directions for the project include gaining experience data from non-completers as well as continued review of patient and clinician experience to aid future development of the MyChristie-MyHealth service. A further roll-out of the initiative to all patient groups and the development of an ad-hoc and responsive ePROMs service can help create an adaptive, patient-centred approach to routine cancer care (Table ).
This study has shown that the regular collection of ePROMs in routine cancer care is not only feasible and acceptable to patients and clinicians alike, but can also lead to improved communication between patients and their oncology teams. Furthermore, ePROMs can help patients feel more involved in their care and be more engaged in consultations. Our findings will help other centres considering the implementation of ePROMs into routine care and provide ideas for further work required in this setting. Moving forward, further research looking specifically at patients who did not complete the ePROMs, enhanced clinician engagement with the service, and continued review and evaluation of the MyChristie-MyHealth initiative are needed to optimise the benefits to patients.
Additional file 1. Examples of lung cancer ePROMs questions
Additional file 2. Examples of head and neck cancer ePROMs questions
|
Patient needs and care: moves toward person-centered care for Graves' disease in Sweden
|
dff35571-5a5a-4e35-881a-fb9ae8d79346
|
10160562
|
Patient-Centered Care[mh]
|
There are five concepts of central importance with respect to person-centered care (PCC) ( , ): patient needs, patient expectations, patient perception, patient care, and patient quality of life (QoL). According to the World Health Organization, health-related QoL is defined as the physical, psychological, and social domains of health, as perceived by the patient, which are influenced by a patient's experiences, beliefs, and expectations of their disease and treatment ( , ). In other words, patient experience may be defined as "…the sum of all interactions, shaped by an organization's culture, that influence patient perceptions across the continuum of care" ( ). Consequently, agreement between the care given to patients and patient needs influences QoL. In Sweden, it is stated in the Patient Law (2014:821) that patients shall participate in their own healthcare. The person who provides information must ensure that the recipient has understood the content and that the information is adapted to age, maturity, experience, language, and other individual conditions. It is only when patients are well informed, fully involved in their situation, and able to influence their own care that high-quality healthcare can be delivered ( , ). Previously, disease outcomes were mostly assessed by biological and physiological measures ( ). While these measures provide important information for the clinician, they often correlate poorly with the functional health perceived by patients ( , ). When the treatment goal is to improve patient function and well-being rather than prolong life, the patient's perception of their care is central and an important part of QoL measurement ( ), and it may require specific supportive actions. The patient's QoL and their medical treatment are equally important ( ). Within the Swedish national system for knowledge-driven management in healthcare, mapping the care process is a general tool for better adapting patient care to the needs of patients and identifying areas for improvement. The care of Graves' disease (GD) is exemplified in , showing the onset of the disease process. Our purpose is to provide an overview of the literature on patient care and needs in GD and to illustrate how the work process in patient care can be improved using the strategic tool System-based driver and association of Health Outcomes in relation to available Resources (SHOR) diagram, which may have implications for four main outcomes: mortality, morbidity, QoL, and patient experience ( ) ( ). In this review, we will address the GD patient experience of healthcare, the importance of addressing patient fears and pre-morbid psychological conditions, the patient need for adequate information, and the evaluation of QoL in GD. Thereafter, we will suggest what may be implemented into GD healthcare and define the gaps in knowledge for further research.
Mapping patient experience at the onset of the disease and during treatment is important for optimizing care and achieving a better QoL for patients; however, the literature on this topic is limited. A New Zealand study ( ) examined the role of patient preference in determining which type of treatment to proceed with. The most important factors in choosing between antithyroid drugs (ATDs), surgery, and radioactive iodine (RAI) were whether the treatment affected activities of daily living, concerns about the use of RAI, the possibility of depression or anxiety, and the doctor's recommendations. Satisfaction was high with all three treatment types. In another study ( ), only 10% of patients reported some hesitation, and 3% major hesitation, in recommending the treatment given to them. Furthermore, others report differences between patients and physicians regarding which treatment they prefer ( ). Patients were more worried about RAI than surgery compared with physicians. It is important to be aware of the differing preferences of patients and physicians that may occur, but through listening and providing the right conditions, such as correct information and a conducive clinical environment, it should be possible to arrive at a joint decision on treatment. More studies are needed on how to understand patient needs, how best to support GD patients during both the short- and long-term course of the disease, how to deliver information in the best way, why some patients do not recover as expected, how patients who do not recover fully should be cared for, and how nurses, dieticians, psychologists, occupational therapists, and physiotherapists can improve the situation so that patients end up with a better disease experience and QoL.
In interviews with GD patients, ambiguous signs of the disease emerged as problematic ( ). Hyperthyroid patients felt that they had to negotiate their sickness. Most often, medical tests failed to validate their illness experiences, which pressured patients to work despite feeling ill. They often hid their disabilities for fear of work-related consequences. Instead of taking sick leave, they used holidays or flextime to compensate for time lost at work. Patients expressed a longing for acknowledgment from significant others. In other interviews, patients with hyperthyroidism reported a loss of physical and emotional control. Patients showed concern about cosmetic changes in their eyes from Graves' orbitopathy (GO) and its connection to self-image, body image, and social avoidance ( ). One of the most common fears was the risk of blindness, which would affect their responsibilities, ability to work, and capability to support their families ( ). There appears to be significant variability in individual perception of living with a disfiguring condition ( ). Patients seem to rate severity higher than endocrinologists when perceptions of appearance are compared ( ), underlining the importance of considering the patient's perspective. It is also important to consider the intensity of symptom perception, which is influenced by the individual's pre-morbid physical, psychological, and societal function ( ). The ability to cope differs between individuals and depends on their resources ( ); it seems to be the psychological processes behind it, rather than objective measurements, that explain the variability in coping ( , ). Depressive coping, trivializing the condition, and higher levels of emotional stress were associated with worse QoL ( ). It is unclear how much fear increases the need for healthcare, information, and security. It is also unclear whether addressing fear and working in a supportive manner can decrease healthcare consumption and improve outcomes in terms of psychiatric morbidity and QoL in GD patients. Outcomes are, however, improved in thyroid cancer patients through psychological support ( ). Generally, sickness absence resulting from mental symptoms is often long-lasting ( ), and return to work is a complex process in which self-efficacy, a positive attitude, and support are most relevant ( ). A lack of mental energy and endurance over time reduces the ability to cope with rehabilitation, to spend time with family and friends, and to be involved in society in terms of both work and leisure ( ). In burn-out patients, increased support with an eHealth intervention improved self-efficacy and reduced burn-out symptoms ( ). GO patients have an even higher risk of work disability than other thyroid patients during the first year after diagnosis ( ). A study by Ponto and colleagues shows that patients with GO suffer from emotional stress and occupational impairment. These patients therefore require more preventive care and rapid rehabilitation ( ). Also, 15–20% of patients with GO report having received psychotherapy ( , ). Can PCC address individual fears and difficulties better than conventional care in GD patients? PCC is based on ethical principles and aims to involve the patient as an active partner in his or her care, treatment, and decision-making process ( ).
PCC will identify and use the patient’s own resources, capabilities, and needs ( ) and has proven effective for patient-related outcomes in other conditions ( ). Also, the effect of different coping strategies in GD patients needs to be determined, as positive coping strategies are associated with a lower frequency of recurrent GD ( ). If coping strategies can be improved, this may result in a more beneficial outcome.
Increased patient knowledge of a disease results in better insight into the condition, reduces anxiety, and increases compliance with treatment ( ). Patients may search online, but it is difficult to obtain appropriate and readable information there ( ). However, in one study ( ), the knowledge level was similar among GD patients with and without GO and was not influenced by disease duration, educational level, language, or demography. Moreover, 3 years after total thyroidectomy, GD patients expressed that they had wanted more information before the surgery, such as the healthcare staff explaining the disease in detail and informing them about possible post-operative side effects such as weight gain, fatigue, and changes in mood ( ). The authors also emphasized the patient's own assessment of the risks after having received adequate information. Obtaining accurate informed consent is necessary according to the Montgomery ruling in the UK ( ) and under Swedish patient law. According to patients, there is a huge need for repeated information during the disease course.
QoL questionnaires are divided into those that are generic and those that are disease-specific. Generic instruments are designed to measure the most important general concepts of QoL. This makes them applicable across diseases and different cohorts; however, their questions are often too general or broad to detect clinically important changes in a specific disease ( ) because they include items of low relevance for the actual patient group ( ). On the other hand, disease-specific questionnaires include concerns of high relevance to the disease and are often designed to detect changes that occur over time. There are two well-known, validated thyroid-specific QoL questionnaires for GD patients: the thyroid-specific patient-reported outcome measure (ThyPRO) ( , , ) for thyroid patients in general and the GO-QoL ( ) for GD patients with orbitopathy. While developing the ThyPRO questionnaire ( , ), it was noted that endocrine experts focused on disease-specific issues, while patients focused on broader, non-specific psychological aspects of the disease, highlighting that healthcare personnel may need to broaden their view of the thyroid-related problems faced by their patients. GO-QoL is short and simple, with two disease-specific scales, one referring to the visual consequences and the other to the perception of appearance ( ). Both questionnaires are used in research today and have good reliability and validity ( , ). While euthyroidism is usually achieved rapidly under treatment, recovery of well-being is delayed for months in many patients, and a negative impact on QoL is noted in the short term ( , , , ). Approximately 20% of patients experience reduced QoL for as long as 1 year after treatment ( , ) and many are not fully recovered after 3 years ( ). A randomized, prospective study has shown that GD patients still had decreased QoL 17–21 years later compared to the general population ( ). Some will never regain full pre-morbid health ( , , , ). The reasons are unclear, but studies on this are ongoing ( ). Not surprisingly, QoL is even more reduced in patients who develop GO ( , , , , , , , , , , , ). Also, after GO treatment, 61% of patients reported that the appearance of their eyes had not normalized, 51% thought the appearance of their eyes was abnormal, and 37% were dissatisfied with the appearance of their eyes ( ). After orbital decompression ( ), although well-being improved in most patients, dissatisfaction was linked to unanticipated effects of surgical care, recovery, or appearance. Hence, these data may be used to improve the information given to patients by offering realistic estimates of expected QoL impairments and treatment effects. Caregivers should also be aware of the way patients form expectations ( ). However, other factors may also contribute to the impaired QoL: the disease is often misdiagnosed (58% of the time) ( ), there is a delay in diagnosis (9 months on average) ( ), and access to joint thyroid-eye clinics is limited (25% of patients) ( ), even though patients seen in such clinics report greater satisfaction with their care. In addition, uncertainty about the future may also play a role, and impaired QoL may have secondary psychological effects ( ). Research is, however, lacking on how these QoL instruments can be applied in healthcare and how they can be used effectively.
Just recently, remaining symptoms such as tiredness, depression, and anxiety have been identified as mental fatigue in GD patients using a validated scale, the mental fatigue scale (MFS) ( ), which has previously been used in patients with stroke and traumatic brain injury ( , ). It consists of 15 questions covering symptoms such as a rapid drain of mental energy during mental activity, impaired concentration, long recovery time, and diurnal variation ( ). A total score ≥10.5 indicates deviation from normality ( ). A validated English translation is found at www.brainfatigue.se . The MFS has also been evaluated in hypothyroid patients, with a good correlation to work ability ( ). It is unclear whether mental fatigue is similar to ‘brain fog’, which has recently been reviewed for hypothyroidism ( ). The MFS covers well the symptoms that were reported by the patients ( ).
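As a concrete illustration of how the MFS cut-off could be applied when screening patients, the minimal sketch below (Python) sums the 15 item scores and flags totals at or above 10.5; the 0–3 item rating with half-steps is an assumption based on the published scale, and the example scores are invented:

    # Minimal scoring sketch for the MFS screening step described above: sum
    # the 15 item scores and flag totals at or above the 10.5 cut-off. Item
    # ratings of 0-3 (half steps allowed) are an assumption based on the
    # published scale; the validated items are at www.brainfatigue.se.
    def mfs_flag(item_scores):
        """Return the MFS total and whether it indicates deviation from normality."""
        if len(item_scores) != 15:
            raise ValueError("The MFS has exactly 15 items.")
        total = sum(item_scores)
        return total, total >= 10.5

    total, flagged = mfs_flag([1.0, 0.5, 1.0, 1.5, 0.5, 1.0, 1.0, 0.5,
                               1.0, 1.0, 0.5, 0.5, 1.0, 0.5, 0.5])  # invented scores
    print(f"MFS total {total}: {'deviates from normality' if flagged else 'within the normal range'}")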
Validate patient symptoms
To relieve ambiguity and shame, healthcare professionals may play an important role in validating diffuse and hidden symptoms as relevant aspects of living with a thyroid disorder ( ). At an invited patient and public involvement meeting, experts, patients, and the public stated that psychological intervention was an unmet and unprioritized need for patients and public participants, especially in GO ( ). This is particularly important as patients diagnosed with GD have an increased risk of suicide ( , ), which is significantly higher in patients with GO, and the risk persists after adjustment for pre-existing somatic and psychiatric disease. A clear association is reported between hyperthyroidism and attention deficit hyperactivity disorder, adjustment disorder, anxiety, bipolar disorder, and depression on the one hand, and suicidality on the other. Healthcare personnel should acknowledge psychiatric symptoms, provide treatment and follow-up, and screen patients for the risk of suicide ( ). Patient organizations provide support and mediate contact with other patients with a similar diagnosis and their relatives. As patient representatives, patient organizations also have a valuable role in providing healthcare with knowledge of what it means to live with thyroid disease. Patient organizations are also an influential force, pointing out shortcomings in healthcare and working for better care. Organizations are also important providers of information and knowledge, including on what rights patients and their relatives have to support.
Support patients more
Patient care may benefit from counseling, support groups, a regular port of call to a nurse specialist, and even disease-specific psychiatric care when necessary ( ). In fact, the Amsterdam Declaration states that the treatment of GD patients with GO should include aspects that improve patient experience and QoL ( ). Coping with the disease relates to having the diagnosis of GD and/or GO as well as handling facial changes and facilitating societal interaction ( , ). However, although there are several coping methods to help patients with disfiguring conditions ( , , ), none have been evaluated in GD or GO. Hence, this is still a gap in knowledge. Moreover, PCC has proven beneficial in many conditions, for example, chronic obstructive pulmonary disease, chronic heart failure, and mental disorders such as depression and anxiety ( , ). It is fundamental to listen to the patient's narrative and, together, create a health plan that includes the patient's goals, how the goals are to be achieved, and what the responsibilities of the patient and healthcare personnel are, respectively. In PCC, the resources of the patient are characterized and any need for support is determined and regularly updated. Using PCC, the resources of society are used more efficiently, and patients feel more secure and more involved in their care. Studies on PCC have shown increased self-efficacy, shorter hospital stays, and cost savings ( ). Studies on PCC in GD are still lacking, and this is therefore also a gap in knowledge. However, implementing a thyroid/contact nurse has proven cost-effective in other diseases ( , , , ) and may improve QoL and patient-experienced outcomes by increasing accessibility within healthcare and patient security, and by increasing the possibility for patients to be involved in their own care ( and ).
The nurse, patient, and physician, as well as any other relevant persons, should form a team to improve healthcare. The responsibility of the thyroid/contact nurse is also to coordinate healthcare, inform about the healthcare situation, mediate contacts, and be the contact person for the patient ( , ). In the national guideline for hyperthyroidism in Sweden, there is a recommendation of thyroid/contact nurses for this patient group to increase security and strengthen opportunities to be involved in one's own care ( ). Therefore, we suggest that GD patients with or without GO should be given a thyroid/contact nurse in the more intensive phases of the disease course. In general, physical activity promotes well-being and is cost-effective. In a retrospective, non-randomized study ( ), exercise reduced fatigue, promoted disease remission, and reduced the relapse rate in euthyroid GD. A mechanism for this may be that exercise reduces stress ( ), and less stress correlates positively with thyroid-stimulating hormone receptor antibody levels and thyroid volume ( ). Also, about two-thirds of GD patients with GO perceived that psychological stress worsened GO, although there is no evidence for this in the literature ( ). Supporting patients in being physically active once euthyroidism has been regained may be beneficial for QoL and patient experiences, but there is a lack of knowledge on the effect of physical activity in GD patients.
Institute a rehabilitation process for GD patients in need
Rehabilitation addresses the impact of a health condition on a person's everyday life by optimizing their functioning and reducing their experience of disability ( ). Indeed, GD is to be defined as a condition with persistent symptomatology, in some cases regarding both mental fatigue and disabling eye complaints. Additional mental suffering with anxiety and depression can develop when life drastically changes and does not return to normal. Although medication is not currently available in practice, limited research shows that mental fatigue can be alleviated in patients who have suffered a stroke or traumatic brain injury with medication ( , , ), mindfulness, or cognitive behavioral therapy ( , , ), while no such studies have so far been conducted for endocrine disorders. Systematic analyses show that mindfulness can be helpful in alleviating depression and anxiety ( , ). Patients characterized by mental fatigue may find important information on how to manage it in daily life on the webpage https://brainfatigue.se/ . However, there is still no research on the efficacy of such supporting interventions in GD patients. Optimizing non-medical, non-surgical ophthalmological treatment with prisms is a well-known therapy for improving double vision. Psychological support, aid tools with different glasses, computer filters, computer settings, light adjustments ( ), and ophthalmology pedagogues are beneficial in other diseases with similar symptomatology. However, evidence is lacking in GO. Rehabilitation together with multiprofessional teams, to ensure that people can remain as independent as possible, is rare. Rehabilitation depends on patient symptoms: we suggest implementation of a rehabilitation process for GD patients 3 months after diagnosis if the patient is still on sick leave for an indefinite period and in cases of disabling GO symptoms. A rehabilitation process has recently been proposed for hypothyroid patients ( ).
Provide GD patients with more information that is accurate and readable
Healthcare needs to detect the areas where knowledge is insufficient and effectively target those needs with tailored educational efforts and readable materials, with the aim of increasing safety, reducing anxiety, and avoiding misconceptions ( ). There also needs to be education for patients. We recommend that such information materials are uniform for a country, are constructed to cover different needs, and can be given on different occasions ( ). For patients with visual difficulties, information should also be presented in an auditory format so that they can take part. Improving information may increase compliance, improve adequate decision making, and benefit outcome and patient experience, as outlined in the SHOR diagram ( ).
Implementation of QoL measurements into routine healthcare
It is time to implement QoL measurements in practical, everyday healthcare to detect those patients who need extra support and rehabilitation, as well as to follow recovery individually. Disease-specific questionnaires may also be used to identify specific symptoms that need further evaluation. In a recent publication, key points to facilitate the implementation of QoL measurements into routine care were highlighted ( ). It could be argued that QoL measurements may lead to frustration as there are limited therapies at hand; however, identifying symptoms and making them measurable is an important way to acknowledge the existence of symptoms that cannot be measured through blood sampling ( and ). In GD patients with GO, symptoms such as excessive lacrimation, photophobia, changes in refraction, and various degrees of dysmotility may escape objective measurement but may cause the patient significant visual dysfunction ( ) and impact their vision-related daily functioning such as reading, watching TV, and driving ( , , ). Similarly, orbital discomfort is common ( , , ) and may have profound effects on QoL ( ). Another example is that mental fatigue alone may be the major complaint and should not be mixed up with depression or anxiety, as it can be treated separately (Birgitta Johansson, manuscript in preparation). Specifically, full or partial sick leave may be necessary for a long period of time, and the patient needs to learn how to manage in daily life (see https://brainfatigue.se/ ). QoL and MFS measurements can be used to follow functional disability. We suggest that three questionnaires are implemented into regular care: ThyPRO, GO-QoL, and MFS. Preferably, the questionnaires should be sent to patients electronically to be filled in at home and resubmitted to the thyroid/contact nurse for evaluation and further team discussions with the physician. ThyPRO may be measured in the initial course of GD, after 6 and 12 months, and before and 6 months after termination of ATDs, surgery, or RAI. GO-QoL may be filled in at the onset of GD, at the onset of GO, every 6 months in the first year, and thereafter yearly as necessary. We recommend that the MFS be completed in parallel with QoL measurements. To further facilitate use of the ThyPRO questionnaire, a short-form variant with preserved validity has been developed ( ). It is also valuable to know that a difference of 6 points on one or both subscales of GO-QoL may be noted by patients as beneficial, with important changes in daily functioning.
A change of at least 10 points compared with before invasive therapies, such as orbital decompression, is needed to constitute an important clinical difference ( , ). Moreover, disease-specific QoL questionnaires may also be an important quality indicator within regular healthcare to appropriately address these outcomes.
Based on experiences and traditions, treatment strategies and care may differ within a country. To harmonize care, national guidelines are pivotal. Is medical advice the only thing you expect from a guideline? Looking into international, national, and regional guidelines in our field, the answer is yes. The Swedish national system for knowledge-driven management in healthcare provides a different view and requires patient involvement in national and regional task forces to include all issues of importance for patients. This may, per se, promote a better outcome concerning morbidity, mortality, QoL, and patient experience. Three patients, representing patient organizations in Sweden, were included in our national guideline task force together with nurses, physicians, and physicists. These patients are also co-authors of this paper, together with nurses and physicians. The patients' mission through participation was to provide their individual and collective perspectives. In the context of hyperthyroidism, this highlighted the importance of information, individualization, approaching fears, aid tools, rehabilitation, structured healthcare, and QoL instruments, so that patients are always met with competence. In addition to medical issues, our national guidelines ( ) have chapters on nursing, long-term consequences and rehabilitation, patient information, lists of priorities, and gap analyses (scientific and structural). Taking these guidelines to the regional level meant not only incorporating them into regional medical guidelines but also improving the education of personnel, creating thyroid/contact nurses, creating patient and relative education, and forming a rehabilitation program. These issues were those expected to create the largest benefit for patients. This prioritization was performed in a structured manner using a SHOR diagram, to avoid creating an even larger disparity in healthcare services contrary to the intention ( ). The SHOR diagram is a pedagogic tool to identify the variables with the highest impact on mortality, morbidity, and QoL ( ), which is, according to the over-arching principle of Swedish healthcare governance, to provide as much care as possible for as many patients as possible given the available resources.
The limited evidence of research on patient experience and PCC for patients with hyperthyroidism, and the low use of QoL questionnaires in ordinary healthcare, were the incentives for this review. Going through this literature, we realize that it is time to introduce QoL measurements into national guidelines, including the disease-specific QoL tools ThyPRO and GO-QoL as well as the MFS. While awaiting research to fill the gaps in knowledge ( ) and evaluate healthcare in a more patient-orientated way, we must focus on how to promote a better QoL outcome through supportive actions besides medical treatment, creating another cornerstone in establishing high-quality healthcare. Research within these areas is strongly warranted, not least by patients.
H F N has received lecture fees from Siemens Inc., IBSA, Oripharm, AstraZeneca, and Bristol-Myers Squibb. The remaining authors declare no competing financial interests.
This work was financed by grants from the Swedish state under the agreement between the Swedish government and the county councils, the ALF-agreement (ALFGBG-717311, ALFGBG-790271), local FOU 2020 (VGFOUGSB-941595, 110 000 SEK), the Healthcare Board, Region Västra Götaland (Hälso- och sjukvårdsstyrelsen), and The Knut and Alice Wallenberg Foundation. The University of Gothenburg, Sweden, is acknowledged for generous support.
H F N and A L wrote the manuscript. All authors contributed to the project concept, developed the figures, and approved the final version of the manuscript.
|
Editorial: Molecular crosstalk between endocrine factors, paracrine signals, and the immune system during aging
|
a6d41d0e-3244-429b-86db-fa8eb2f7b448
|
10160671
|
Physiology[mh]
|
Aging is a complex biological process that leads to a gradual decline in physiological function and increased susceptibility to disease. Various factors, including genetics, lifestyle, and environmental factors, influence this process. One key factor that has emerged as a significant contributor to aging is the interaction between the endocrine, paracrine, and immune systems. The endocrine and immune systems are closely interconnected and work together to maintain homeostasis in the body. Hormones produced by the endocrine system, such as insulin, growth hormone, prolactin, and thyroid hormone, have a significant effect on the immune system. For example, insulin increases production of inflammatory cytokines like IL-6 during LPS stimulation in macrophages. Conversely, immune cells, such as T and B cells, and senescent cells can produce hormones that regulate immune function and interact with the endocrine system. For example, senescent cells, via the senescence-associated secretory phenotype (SASP), release growth factors and cytokines that interact with the local and systemic environment. The endocrine and immune systems utilize paracrine signaling to coordinate cellular responses to stress in distant tissues of the body. There is a significant gap in the field regarding our understanding of the interaction between these pathways during aging and diseased conditions. The current Research Topic, “Molecular Crosstalk Between Endocrine Factors, Paracrine Signals, and the Immune System During Aging,” helps to bridge this gap by highlighting recent research findings at the intersection of the endocrine-immune axis, aging, and age-related pathologies, as well as opportunities for therapeutic interventions. Endocrine factors regulate many physiological processes, including growth and development, metabolism, and reproduction. With age, the levels of many endocrine factors change, leading to alterations in these processes. For example, insulin receptor (InsR) signaling is a well-conserved pathway regulating longevity. Makhijani et al. reviewed the function of InsR signaling pathways in different immune cell subsets and their impact on cellular metabolism, differentiation, and effector versus regulatory function. With ample evidence from the literature, the authors provided mechanistic links between altered InsR signaling and immune dysfunction in various disease settings and conditions, focusing on age-related conditions such as type 2 diabetes and cancer. The immune system plays a key role in defending the body against infection and disease. As we age, the immune system undergoes significant changes, including a decline in the production of new immune cells and a decrease in the ability of immune cells to respond to infection. This decline can lead to an increased susceptibility to infections and a reduced ability to clear infections once they occur. King et al. provided a brief report on the relationship between aging, reproductive health, and immune function. Based on studies from their lab, the authors report that transplanting young ovaries into old mice increased healthspan and lifespan. However, the results from Mason's lab suggest that the protective effect of the ovarian transplant was not due to hormonal activity, as hormone-depleted ovaries from young mice also extended lifespan. The authors claim that additional factors other than ovarian hormones account for the health benefits.
In their current report, the authors specifically focused on the influence of young ovarian tissues on immune function in post-reproductive female mice in the presence or absence of ovarian follicles. Hormones play an essential role in the immune system and can significantly impact the development and progression of rheumatic disorders. Bertoldo et al. reviewed the interaction between endocrine hormones and the immune system from the perspective of rheumatic disorders. The review article covers recent data describing the role of bone-related hormones and cytokines. The pituitary gland produces growth hormone (GH), which plays a key role in growth and development during childhood and adolescence. While some studies have suggested that GH replacement therapy may improve markers of health and longevity in older adults, other studies have raised concerns about GH treatment's potential risks and side effects, such as an increased risk of cancer and diabetes. As a part of this Research Topic, Bartke reviewed the relationship between growth hormone and longevity. He suggested that a slower pace of life is associated with extended longevity within and between species. This review motivates future studies on energy metabolism and nutrient-dependent signaling at different stages of life. Paracrine signals are molecules produced by one cell that act on neighboring cells to regulate their function. These signals are vital in maintaining tissue homeostasis and responding to damage or injury. In aging, the production of and response to paracrine signals can become dysregulated, leading to tissue dysfunction and disease. For example, senescent cells' production of inflammatory cytokines can lead to chronic inflammation, a hallmark of aging that is associated with many age-related diseases. Kuehnemann et al. reported a new senescence-associated secretory phenotype marker. Nicotinamide Phosphoribosyl Transferase (NAMPT), the enzyme involved in the rate-limiting step of NAD biosynthesis, is increased in senescent cells. Their results show that senescent cells display increased NAMPT, a response distinct from the classical DNA damage response and occurring without a further increase in NAD. Based on the observed results, the authors believe that increased extracellular NAMPT (eNAMPT) during senescence is another SASP marker that could regulate metabolic functions in distant cells. Further, the authors showed that diabetic mice displayed elevated levels of eNAMPT and that treatment with the senolytic drug ABT-263 rescues these elevated levels. In conclusion, the crosstalk between endocrine factors, paracrine signals, and the immune system is a complex and dynamic process that plays a crucial role in aging. The interaction between these vital pathways has important implications for aging research and interventions. For example, targeting endocrine factors such as growth hormone and IGF-1, or paracrine signals such as inflammatory cytokines, may provide new therapeutic strategies to improve immune function in the elderly population. The review articles and research manuscripts presented in this Research Topic have highlighted this crosstalk's importance and identified new intervention targets. Further research is needed to fully understand the dynamic interactions between biological pathways and develop effective interventions to improve health and prevent age-related diseases.
All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
|
Zoom and its Discontents: Group Decision Making in Pediatric Cardiology in the Time of COVID (and Beyond)
|
a24be72b-966f-49a3-b963-f7245c78499a
|
10160710
|
Internal Medicine[mh]
|
Group decision making through multidisciplinary teams is an essential component of modern healthcare. By providing remote access to multidisciplinary meetings, online communication has enabled clinician participation while minimising contagion risk during the Covid-19 pandemic. Despite the many expressed shortcomings of virtual participation, the majority of our clinicians favoured the availability of a hybrid format for post-pandemic MDT meetings, primarily because of the greater opportunity to access the meeting. It is important not to conflate the effectiveness of the process with its practical implementation. As a new generation of clinicians emerges, attuned to online technologies and with a preference for remote working, online meetings within healthcare will likely continue and develop. However, no communication medium can yet replicate the high quality of collocated human interaction. As media richness declines, so does the quality of image perception, group interaction, and potentially group decision making. Undoubtedly, near-future technologies will advance to address some of the current limitations of video conferencing and of accessing high-quality imaging data. Those systems that preserve important non-verbal cues and enhance social presence should facilitate interactive behaviour and promote optimal working practices. Concurrently, as virtual work-groups gain the experience of working together, they will learn to develop strategies and evolve social behaviours that adapt to this new environmental condition. At present, the evaluation of virtual MDT effectiveness has been limited to survey reports, with an absence of empirical studies comparing these alternative modes of MDT functioning. Therefore, healthcare should carefully consider the implications of online group decision making, adapt accordingly, and evaluate before replacing in-person, face-to-face multidisciplinary meetings.
|
Multi-Cause Calibration of Verbal Autopsy–Based Cause-Specific Mortality Estimates of Children and Neonates in Mozambique
|
47c75b28-b361-417c-9337-8a9038c43e44
|
10160855
|
Forensic Medicine[mh]
|
In Mozambique, the Countrywide Mortality Surveillance for Action (COMSA) platform provides continually updated statistics on mortality and cause of death (COD) for the country. The goal is to conduct COD analyses stratified by age and province to inform the government of Mozambique and other stakeholders. This is important for the nation's public health because Mozambique does not have a comprehensive civil registration and vital statistics system. COMSA has implemented a sample registration system (SRS) of births and deaths, but accurately determining the COD is challenging because many deaths occur outside hospitals. COMSA has trained Community Surveillance Assistants (CSAs), deployed to interview families of the deceased individuals registered in the SRS. The CSAs conduct verbal autopsies (VAs), standardized series of questions that establish the health history and the signs and symptoms of the fatal illness. These questionnaires can be examined by physicians to establish a likely COD, but such a protocol is costly and difficult to standardize. Instead, computer-coded VA (CCVA) algorithms like InSilicoVA, InterVA, expert algorithm (EAVA), Tariff or SmartVA, and the naive Bayes classifier can be used to automatically infer COD. Computer-coded VA data can be used directly to estimate cause-specific mortality fractions (CSMFs; i.e., the proportion of deaths attributable to a set of causes) at the population level. However, the outputs of the CCVA algorithms are merely statistical predictions and not definitive measurements. Any biases present in individual-level COD predictions will be propagated to the aggregated estimates (i.e., the population-level CSMF). In this manuscript, we conduct analyses to obtain CSMFs for neonates aged 1–28 days and children aged 1–59 months in Mozambique using the COMSA VA data. There are a priori reasons to suspect the presence of large biases in the results from CCVA algorithms used for the COMSA VA data. For example, major CCVA algorithms have been trained on data from the Population Health Metrics Research Consortium (PHMRC). These data date to 2011 and were collected in countries other than Mozambique. Due to varying causes of disease and cultural differences in the communication underlying the VA methods, the relationship between reported symptoms and underlying causes is likely substantially different between the PHMRC data and the COMSA data. CCVA accuracy is known to depend strongly on the training data. This means CCVA algorithms that might be accurate in the PHMRC cohort may be quite biased in the COMSA cohort. It has been demonstrated that biased CSMF estimates from CCVA algorithms may be improved by incorporating auxiliary data sources with more comprehensive COD information for the purpose of calibration. , Specifically, we use a comparatively small set of data from the Child Health and Mortality Prevention Surveillance (CHAMPS) project collected in Bangladesh, Ethiopia, Kenya, Mali, Mozambique, Sierra Leone, and South Africa. For deaths assessed in this project, a panel of experts determines the COD and the chain of events leading to death using medical history and records of the terminal illness, together with post-mortem multiple-pathogen screening and biopsy pathology data obtained through minimally invasive tissue sampling (MITS) procedures.
The CODs determined by the CHAMPS process will be referred to as “MITS-COD,” recognizing that the use of MITS is an important addition that improves the validity of medical certification of COD. COD determination informed by MITS has been shown to be highly concordant with COD from full diagnostic autopsies. The deaths studied by CHAMPS also have VA data, and it is possible to obtain the CCVA-predicted COD, henceforth referred to as “VA-COD.” This paired dataset of MITS-COD and VA-COD allows us to estimate the misclassification rates of the CCVA algorithms in this cohort. Combining this evidence with the raw COMSA CCVA results using a Bayesian algorithm, as implemented in the calibratedVA R package, we are able to provide calibrated CSMF estimates that account for the misclassification of causes by the CCVA methods. In addition, we provide results from an ensemble calibration algorithm that combines the input of multiple CCVA algorithms. The VA calibration procedure is simplest when the CCVA and MITS autopsy results identify a single COD. However, CCVA algorithms like InSilicoVA and InterVA may output probabilistic predictions, suggesting multiple plausible CODs, each with an assigned score. Ignoring the uncertainty in COD prediction reflected in this multi-cause VA-COD output wastes information. To understand this, note that when probabilistic predictions are converted to single-cause predictions, a plurality rule is used (i.e., the cause with the highest score is assigned as the VA-COD for that case). Therefore, a COD with relatively low probability may be assigned as the definitive cause. Additionally, for MITS results, many individuals are considered to have multiple CODs, including an underlying cause (the one precipitating the chain of events ultimately leading to death) and an immediate cause (the one closest to the fatal event). A single-cause analysis would only use the underlying cause, missing important information on the causal chain of illnesses leading to death. Due to the multi-cause nature of both the VA-COD and MITS-COD data, we implement a multi-cause procedure for CCVA calibration, as described by Fiksel et al., which sensibly incorporates multiple causes in both the VA and the MITS results. This is accomplished through a novel generalization of the notion of misclassification rates for multi-cause data and by using a “generalized Bayes” estimation technique that replaces the full probability likelihood model with the solution of an estimating equation incorporating a loss function , because the latter is easier to deal with for multi-cause data. Two different CCVA algorithms, InSilicoVA and EAVA, are considered for the analysis, and the final CSMF estimates are obtained using an ensemble calibration that uses data from both algorithms. We show that use of the multi-cause COD output improves the sensitivities of both CCVA algorithms with respect to the MITS-COD. The calibrated CSMFs show significant differences from the uncalibrated ones and offer a substantially improved fit to the data.
Overview of single-cause VA-calibration
To estimate the prevalence of various CODs in Mozambique, we have the results of VAs for 1,841 child deaths (aged 1–59 months) and 818 neonatal deaths. These autopsies are analyzed by two CCVA algorithms of inherently different nature: InSilicoVA, which assigns conditional probabilities of COD, and EAVA, an algorithm that follows logical rules related to reported signs and symptoms of the fatal illness to move through a hierarchical decision tree to assign a COD. To introduce the calibration procedure, we will begin by assuming that one COD is identified by the algorithms. In the case of InSilicoVA, this is accomplished by selecting a cause by “plurality rule” (i.e., the COD with the highest predicted probability). EAVA, by default, offers only a single COD. Given these predictions from either of these algorithms, we can estimate the CSMF with the sample proportions. However, these uncalibrated CSMF estimates will exhibit substantial bias due to systematic misclassification in the VA predictions. For each pair of causes i, j (and a specific algorithm, InSilicoVA or EAVA), M_ij denotes the rate at which a subject with true COD i (as diagnosed by a more comprehensive diagnostic procedure like MITS) will be predicted as having COD j by the CCVA algorithm. If i = j, then M_ij is the probability of a correct classification of an individual with condition i. Otherwise, M_ij is the probability of misclassifying such an individual as having condition j. Stacking the unknown M_ij's into a matrix, we have a parameter M that represents the misclassification rates of the CCVA algorithm. M will be close to the identity matrix (with ones on the diagonal and zeros elsewhere) for an accurate algorithm and far from the identity matrix for an inaccurate algorithm. To overcome the bias in raw CSMF estimates, we estimate the misclassification rates M for each of the two CCVA algorithms with respect to the MITS-COD. The CHAMPS data used in this analysis contain MITS-COD for the deaths of 426 children and 614 neonates across all sites. For the same subjects, we have the results of the automated algorithms (EAVA and InSilicoVA) run on their VAs and can obtain the VA-COD for the respective algorithms. These paired data are used to estimate the misclassification rates. We do not assume that the cause-specific COD proportions are the same for COMSA and CHAMPS, but rather that the conditional misclassification rates of the VA algorithms with respect to MITS-COD are equivalent between the two. We first illustrate how the calibration works using a simple example with three causes. Let (p_1, p_2, p_3) denote the true CSMF for these causes in the population of interest and (q_1, q_2, q_3) denote the uncalibrated CSMF estimated from a CCVA algorithm. One can obtain an estimate of the error rates of the algorithm from a paired dataset of VA- and MITS-COD. These rates are summarized in an error matrix M = (M_ij), where M_11 is the sensitivity of VA identifying the first cause, M_21 is the error rate of the VA-COD being cause 1 when the MITS-COD is cause 2, and the other entries are similarly defined.
Then, following the law of total probability, we have

q_1 = P(VA-COD = cause 1)
    = P(MITS-COD = cause 1) * P(VA-COD = cause 1 | MITS-COD = cause 1)
    + P(MITS-COD = cause 2) * P(VA-COD = cause 1 | MITS-COD = cause 2)
    + P(MITS-COD = cause 3) * P(VA-COD = cause 1 | MITS-COD = cause 3)
    = p_1*M_11 + p_2*M_21 + p_3*M_31

We can develop similar equations for q_2 and q_3. To generalize this to the case with more than three causes, we denote by p the target parameter of interest (i.e., the population CSMF); p is a vector whose i-th component is the CSMF for the i-th cause. We let q denote the apparent CSMF as provided by the raw (uncalibrated) and biased CCVA algorithms. Following the law of total probability, the apparent CSMF for each cause j is a weighted estimate of the true CSMF for all causes, weighted by the proportion of times those causes are misclassified as cause j by CCVA. In mathematical terms, this yields the equation

q = M′p (1)

Note that q and M are both directly estimable from the available data: q is measured by the aggregated COMSA raw CSMF estimates from the CCVA algorithms, and M is measured by comparing the CCVA and CHAMPS results. If q and M were known with certainty, then p could be calculated by solving the system of linear equations q = M′p. Because q and M are instead associated with statistical estimates based on data, one can use a Bayesian procedure for the calibration, as described by Datta et al. The approach uses multinomial likelihoods for the data and conducts Bayesian inference using Markov Chain Monte Carlo, which ensures propagation of uncertainty.
Multi-cause CCVA outputs
As mentioned earlier, for each VA record, the InSilicoVA algorithm outputs predicted conditional COD probabilities. Although this prediction can be summarized into one class (the class with the highest predicted probability), such a procedure wastes information. For example, it considers deaths with a 60% probability of a COD to contain the same information as deaths with a 100% probability, even though the latter is clearly stronger evidence for that COD. Also, with more than two CODs, the largest predicted probability could even be well under 60%. In addition, a disease that shares a symptom profile with a more common COD might never be identified as the most likely COD in any individual case; the single-cause procedure would incorrectly indicate a prevalence of zero for such a disease. We aim instead to use the multi-cause output of the complete set of InSilicoVA predicted probabilities for each individual case, without recourse to the plurality rule. EAVA is a deterministic algorithm designed to yield single-class COD predictions. We used a novel modification of the EAVA algorithm to generate multi-cause predictions for cases where more than one COD is compatible with the VA responses. This is achieved by running the algorithm first normally to identify the most likely cause, then running the algorithm a second time with the most likely cause removed from the COD hierarchy. The cause selected by this second run of the algorithm is identified as the second most likely cause. We then create a multi-cause EAVA output assigning the most likely cause a probability of 75% and the second most likely cause a probability of 25%. All other causes are assigned a probability of 0%. Sensitivity analysis was conducted to study the impact of the choice of the weights.
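For concreteness, the two-pass EAVA scoring just described can be expressed in a few lines of code. This is an illustrative Python sketch only (the published implementations are R packages): `eava_multi_cause` and `eava_top_cause` are hypothetical names, and `eava_top_cause` stands in for the deterministic EAVA decision tree, which is not reproduced here.

```python
def eava_multi_cause(record, causes, eava_top_cause, w=(0.75, 0.25)):
    """Two-pass multi-cause EAVA scoring (sketch).

    `eava_top_cause(record, candidate_causes)` is a hypothetical stand-in
    for the deterministic EAVA decision tree: it returns the most likely
    cause among `candidate_causes`, or None if inconclusive.
    """
    scores = dict.fromkeys(causes, 0.0)
    first = eava_top_cause(record, causes)
    if first is None:
        return None  # inconclusive; handled later by imputation
    # Second pass with the top cause removed from the COD hierarchy.
    second = eava_top_cause(record, [c for c in causes if c != first])
    if second is None:
        scores[first] = 1.0
    else:
        scores[first], scores[second] = w  # 75% / 25% by default
    return scores
```

Passing a different `w` corresponds to the sensitivity analysis over the choice of weights.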
MITS underlying and immediate causes
Death is a complex process involving a causal chain encompassing multiple causes. In fact, for each death assessed by the CHAMPS process that is enrolled with MITS, all CODs are captured, which includes both an underlying and an immediate cause being identified. The single-cause analysis would only use the “underlying” cause because it is the first one to appear and leads to the chain of events causing death. However, in many deaths, the immediate cause, besides being the final cause in the causal chain, carries important information that may affect policy or clinical decisions. For example, if we only knew that deaths had HIV (underlying cause) and ignored that the terminal events involved tuberculosis or pneumonia, we would not be well informed. Therefore, we aim to allow the use of up to two MITS-CODs (underlying and immediate) for each death (recognizing that there may be even more causes for some deaths). If MITS does not identify two distinct causes (i.e., if both the underlying and immediate causes are the same), the MITS-COD remains a single cause as before.
Multi-cause misclassification matrix
With these aims in mind, it is necessary to generalize the notion of CCVA misclassification rates described above to allow for multiple causes for both the CCVA outputs and the CHAMPS diagnosis. We first extend the definition of the misclassification matrix for multi-cause CCVA outputs but with single-cause MITS output. Recall that for a single-cause analysis, each entry of the misclassification matrix M_ij is the proportion of deaths that belong to class i (i.e., have MITS-COD i) that are identified as belonging to class j (i.e., have VA-COD j). For multi-cause CCVA output, the misclassification rate M_ij is defined as the average score assigned to cause j by the CCVA algorithm among all deaths that would be attributed to MITS-COD i. Because for binary data averages are the same as proportions, the multi-cause definition of misclassification rates agrees with the previous definition of M if all the CCVA outputs were single cause. The calibration framework also allows for multiple MITS causes (underlying and immediate causes). This extension follows from recognizing that for multi-cause MITS-COD, the VA-COD for a death is a mixture of the misclassification rates of CCVA for the two possible MITS-CODs (underlying and immediate) for that death. To formally understand this, for a total of C causes considered, we can summarize the multi-cause MITS-COD for a case in a C-length vector x. The entries of x will be 1 (if a cause is identified as both the underlying and immediate cause of death), 0.5 (if the cause is identified as either the immediate or the underlying MITS-COD but not both), or 0 (if the cause is not identified as either COD). Consider a case where MITS identifies cause 1 as the immediate cause and cause 2 as the underlying cause. This MITS diagnosis can be expressed as a C-length vector x = (0.5, 0.5, 0, …, 0). The multi-cause calibration interprets the MITS diagnosis as assigning 50 out of 100 individuals with such a MITS diagnosis to cause 1 and the remaining 50 individuals to cause 2. Hence, the proportion of times the CCVA will predict cause j for such a case will be the weighted average of the VA misclassification rates for causes 1 and 2 (i.e., 0.5*M_1j + 0.5*M_2j, the j-th entry of M′x).
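A toy numeric illustration of this mixture interpretation is sketched below; the 3×3 misclassification matrix is invented for illustration, not estimated from CHAMPS data.

```python
import numpy as np

# Invented 3x3 misclassification matrix (rows: MITS-COD, columns:
# VA-COD); each row sums to 1.
M = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

# MITS identifies cause 1 as immediate and cause 2 as underlying:
x = np.array([0.5, 0.5, 0.0])

# Expected multi-cause VA-COD score vector for such a death, M'x:
print(M.T @ x)  # -> [0.5  0.35 0.15]
```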
Estimating the misclassification matrix thus corresponds to a linear regression y = M′x of the multi-cause VA-COD y on the multi-cause MITS-COD x, and the misclassification matrix M can be interpreted as the multi-dimensional regression line. This notion of the misclassification matrix is formally defined by Datta and implemented in the codalm R package. Note that if MITS identifies a single cause (i.e., one entry of x is 1), then this formulation agrees with the previous single-cause definition of M.
Multi-cause calibration
The generalization of the misclassification matrix allows us to extend the VA calibration to allow multi-cause VA-COD and MITS-COD. The CHAMPS data of paired MITS and VA records are used to estimate the misclassification matrix using the regression method described in “Multi-cause misclassification matrix.” The COMSA VA data allow estimating the raw (uncalibrated) CSMF q. Letting p denote the calibrated CSMF, the equation q = M′p presented in “Overview of single-cause VA-calibration” remains valid for multi-cause data. Hence, subsequent to estimation of M and q, one can solve for p using the equation above. However, unlike the single-cause data, which are categorical (discrete) and amenable to multinomial modeling, the multi-cause data (also termed compositional or fractional data) cannot be modeled using a multinomial distribution. This is because both the CCVA and MITS COD outcomes are no longer discrete variables. Multi-cause CODs are now compositional variables (i.e., vectors of probability scores, with each probability representing the chance that death occurred due to the corresponding cause, summing up to 1). Rather than maximizing a likelihood function, we minimize a loss function to connect parameters to the multi-cause data. The Kullback-Leibler divergence, or relative entropy loss, is a popular measure of dissimilarity connecting compositional or multi-cause data to parameters. Previous research provides a guide for Bayesian-style inference using a loss function rather than a full likelihood. , With a given loss function and prior, we have the generalized Bayes posterior for multi-cause VA calibration:

generalized posterior ∝ exp[−loss(parameters | data)] × prior

Hence, instead of a likelihood formulation, the multi-cause calibration is conducted in a generalized Bayes rule by using the relative entropy (Kullback-Leibler) loss function for multi-cause data (see Fiksel et al. for details). This loss agrees with the multinomial likelihood for single-cause data, and thus the multi-cause VA calibration is a generalization of the single-cause one. With this set-up, computations to find the optimal estimates of unknown parameters can proceed in a manner analogous to the single-cause paradigm. To arrive at an ensemble calibrated estimate, the loss function is taken as the sum of the loss functions for EAVA and InSilicoVA. The purpose of combining the loss functions for both algorithms is to yield estimates that are consistent with the results of both algorithms. Note that taking the sum of the loss functions for the ensemble calibration is equivalent to placing equal importance on each VA algorithm. If there is a priori knowledge that one algorithm is more accurate than the other, one can also consider a weighted sum of the loss functions, placing higher weight on the more accurate algorithm. However, it is hard to quantitatively assess the superiority of an algorithm a priori and assign a number to this.
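To make the loss-based formulation concrete, a minimal Python sketch is given below. It assumes that, up to additive constants not involving p, the relative-entropy loss reduces to a cross-entropy between the multi-cause VA scores and the implied distribution M′p; the actual calibratedVA implementation is a full generalized-Bayes sampler, not this point-estimate objective, and the function names here are illustrative.

```python
import numpy as np

def calibration_loss(p, M, va_scores):
    """Cross-entropy between multi-cause VA scores and M'p (sketch).

    p         : candidate CSMF, length-C vector on the simplex
    M         : C x C misclassification matrix (rows sum to 1)
    va_scores : n x C matrix of multi-cause VA outputs (rows sum to 1)
    """
    fitted = M.T @ p  # implied multi-cause VA-COD distribution
    return -np.sum(va_scores @ np.log(fitted + 1e-12))

def ensemble_loss(p, M_eava, M_insilico, scores_eava, scores_insilico):
    # Equal-weight ensemble: the sum of the two algorithms' losses.
    return (calibration_loss(p, M_eava, scores_eava)
            + calibration_loss(p, M_insilico, scores_insilico))
```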
Also, although the prior weights are equal for the ensemble calibration, the final estimated CSMF will often tend to align with the estimate from calibrating the most accurate CCVA algorithm. , Previously, extensive validation studies of the multi-cause VA calibration method were presented using the benchmark PHMRC dataset. These validation studies compared the calibrated and uncalibrated CCVA algorithms using the cause-specific mortality fraction accuracy (CSMFA) metric. However, this metric requires knowledge of the true CSMF for the dataset. This is available for the PHMRC dataset, which also contains more comprehensive COD information for each record in addition to the VA data. For the COMSA analysis, the true CSMF is unknown and is the quantity of interest that needs to be estimated. Hence, CSMFA cannot be calculated. To evaluate statistical models without knowledge of the true parameter values, a common strategy is to compare the out-of-sample prediction performance of the candidate models on hold-out data. This strategy cannot be adopted for VA calibration because the algorithm does not work at the individual level but only at the population level. In other words, the calibration does not calibrate the COD for individual deaths or produce individual-level calibrated COD predictions; it only calibrates the estimate of the population-level CSMF. To compare uncalibrated models with their calibrated counterparts, we use the Widely Applicable Information Criterion (WAIC), a goodness-of-fit measure that uses in-sample model fit to estimate the model's ability to predict future observations. WAIC provides an estimate of out-of-sample (prediction) error using only in-sample (training) data. WAIC also harmonizes well with Bayesian inference, making use of the entire posterior distribution available from the Markov Chain Monte Carlo runs. Calculating the WAIC for the multi-cause calibrated models is not substantially different from calculating the WAIC for the single-cause calibrated models, except that the log-likelihood function is replaced with the negative of the loss function.
Data adjustments and exclusion
Given the raw output of the MITS and CCVA algorithms, we have instituted a few pre-processing steps to ensure the accuracy and validity of our procedure. Because conditions for neonates in South African hospitals differ from those in the other countries of interest, we exclude these 274 South African neonates from our analysis. The EAVA algorithm is inconclusive for a significant proportion of individuals, failing to identify any particular COD. These deaths would by necessity have to be excluded from a single-cause analysis, but the multi-cause calibration accommodates imputed values for these data with the best available estimates. For the COMSA data, inconclusive EAVA deaths are assigned the average EAVA scores of all other (conclusive) EAVA scores. For a death with an inconclusive EAVA diagnosis in the CHAMPS data, we impute the EAVA COD as the row of the EAVA misclassification matrix corresponding to the subject's MITS underlying COD, where the misclassification matrix is calculated relative to the single-cause MITS. These choices are made to represent our best estimate of how the EAVA algorithm would classify such subjects.
Thus, deaths with inconclusive EAVA diagnoses, which had to be removed from a single-cause analysis, are assigned multi-cause imputed scores, which are then used in the multi-cause calibration. With C CODs, learning the misclassification matrix requires estimating a total of C*(C − 1) parameters. To minimize the dimensionality of the problem while retaining its important aspects, we group some CODs into more general labels based on evidence on the predominant CODs among children and neonates. , For the children, we use the following seven classifications: malaria, pneumonia, diarrhea, severe malnutrition, HIV, other infections (which includes meningitis, typhoid fever, hepatitis, etc.), and other. For the neonates, we use the following five classifications: congenital malformation, infection (neonatal tetanus, meningitis and encephalitis, diarrhea, pneumonia, and sepsis), intrapartum-related events (IPREs), other, and prematurity.
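To summarize the single-cause machinery in executable form, the sketch below gives a simplified plug-in (method-of-moments) version of the calibration: estimate the misclassification matrix M from paired CHAMPS records, estimate the raw CSMF q from the COMSA VA predictions, and solve q = M′p for p on the probability simplex by constrained least squares. This Python sketch is illustrative only; the estimates reported here come from the Bayesian and generalized-Bayes procedures described above, which also propagate uncertainty.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_M(mits_cod, va_cod, C):
    """Row-normalized confusion matrix from paired single-cause data
    (integer-coded MITS-COD and VA-COD labels, 0..C-1). In practice,
    rows with no CHAMPS deaths would need smoothing to avoid 0/0."""
    M = np.zeros((C, C))
    for i, j in zip(mits_cod, va_cod):
        M[i, j] += 1.0
    return M / M.sum(axis=1, keepdims=True)

def calibrate(q, M):
    """Solve q = M'p for p on the probability simplex (least squares)."""
    C = len(q)
    obj = lambda p: np.sum((M.T @ p - q) ** 2)
    res = minimize(obj, np.full(C, 1.0 / C),
                   bounds=[(0.0, 1.0)] * C,
                   constraints=({'type': 'eq',
                                 'fun': lambda p: p.sum() - 1.0},))
    return res.x  # plug-in calibrated CSMF
```

Here q would be the vector of raw CSMF proportions from the COMSA VA predictions for one algorithm; the constrained solve keeps the calibrated p nonnegative and summing to 1.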
To estimate the prevalence of various CODs in Mozambique, we have the results of VAs for 1,841 child deaths (aged 1–59 months) and 818 neonatal deaths. These autopsies are analyzed by two CCVA algorithms of inherently different nature – InSilicoVA, which assigns conditional probabilities of COD, and EAVA, an algorithm that follows logical rules related to reported signs and symptoms of the fatal illness to move through a hierarchical decision tree to assign a COD. To introduce the calibration procedure, we will begin by assuming that one COD is identified by the algorithms. In the case of InSilicoVA, this is accomplished by selecting a cause by “plurality rule” (i.e., the COD with the highest predicted probability). EAVA, by default, offers only a single COD. Given these predictions from either of these algorithms, we can estimate the CSMF with the sample proportions. However, these uncalibrated CSMF estimates will exhibit substantial bias due to systematic misclassification in the VA predictions. For each pair of causes i,j (and a specific algorithm, InSilicoVA, or EAVA), M ij denotes the rate that a subject with true COD i (as diagnosed by a more comprehensive diagnostic procedure like MITS) will be predicted as having COD j by the CCVA algorithm. If i = j , then M ij is the probability of a correct classification of an individual with condition i . Otherwise, M ij is the probability of misclassifying such an individual as having condition j . Stacking up the unknown M ij ’s into a matrix, we have a parameter M that represents the misclassification rates of the CCVA algorithm. M will be close to the identity matrix (with ones on the diagonal and zeros elsewhere) for an accurate algorithm and far from the identity matrix for an inaccurate algorithm. To overcome the bias in raw CSMF estimates, we estimate the misclassification rates M for each of the two CCVA algorithms with respect to the MITS-COD. The CHAMPS data used in this analysis contains MITS-COD for the deaths of 426 children and 614 neonates across all sites. For the same subjects, we have the results of the automated algorithms (EAVA and InSilicoVA) run on their VA and can obtain the VA-COD for the respective algorithms. These paired data are used to estimate the misclassification rates. We do not assume that the cause-specific COD proportions are the same for the COMSA and CHAMPS, but rather that the conditional misclassification rates of the VA algorithms with respect to MITS-COD are equivalent between the two. We first illustrate how the calibration works using a simple example with three causes. Let ( p 1 , p 2 , p 3 ) denote the true CSMF for these causes in the population of interest and ( q 1 , q 2 , q 3 ) denote the uncalibrated CSMF estimated from a CCVA algorithm. One can obtain an estimate of the error rates of the algorithm from a paired dataset of VA- and MITS-COD. These rates are summarized in an error matrix M = ( M ij ), where M 11 is the sensitivity of VA identifying the first cause, M 21 is the error rate of VA-COD being cause 1 when the MITS-COD is cause 2, and the other entries are similarly defined. Then, following the law of total probability, we have q 1 = P ( VA COD = cause 1 ) = P ( MITS COD = cause 1 ) * P ( VA COD = cause 1 | MITS COD = cause 1 ) + P ( MITS COD = cause 2 ) * P ( VA COD = cause 1 | MITS COD = cause 2 ) + P ( MITS COD = cause 3 ) * P ( VA COD = cause 1 | MITS COD = cause 3 ) = p 1 * M 11 + p 2 * M 21 + p 3 * M 31 We can develop similar equations for q 2 and q 3 . 
To generalize this to the case with more than three causes, we denote by p the target parameter of interest (i.e., the population CSMF); p is a vector whose ith component is the CSMF for the ith cause. We let q denote the apparent CSMF as provided by the raw (uncalibrated) and biased CCVA algorithms. Following the law of total probability, the apparent CSMF for each cause j is a weighted sum of the true CSMFs of all causes, weighted by the proportion of times those causes are misclassified as cause j by CCVA. In mathematical terms, this yields the equation

q = M'p   (1)

Note that q and M are both directly estimable from the available data: q is measured by the aggregated COMSA raw CSMF estimates from the CCVA algorithms, and M is measured by comparing the CCVA and CHAMPS results. If q and M were known with certainty, then p could be calculated by solving the system of linear equations q = M'p. Because q and M are instead statistical estimates based on data, one can use a Bayesian procedure for the calibration, as described by Datta et al. The approach uses multinomial likelihoods for the data and conducts Bayesian inference using Markov chain Monte Carlo, which ensures propagation of uncertainty.
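To make the back-solving step concrete, here is a minimal sketch that recovers p from point estimates of q and M under a nonnegativity constraint. The actual calibratedVA procedure replaces this point solve with the Bayesian model described above, which also propagates the uncertainty in q and M; the numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical estimated quantities for a three-cause problem.
M = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.10, 0.20, 0.70]])     # rows: true COD, columns: VA-COD
q = np.array([0.480, 0.325, 0.195])    # raw (uncalibrated) CSMF

# Solve q = M'p by nonnegative least squares, then renormalize so the
# calibrated CSMF lies on the probability simplex.
p_hat, _ = nnls(M.T, q)
p_hat /= p_hat.sum()
print(p_hat)  # recovers [0.5, 0.3, 0.2] for this consistent example
```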
As mentioned earlier, for each VA record the InSilicoVA algorithm outputs predicted conditional COD probabilities. Although this prediction can be summarized into one class (the class with the highest predicted probability), such a procedure wastes information. For example, it treats a death with a 60% probability of a COD as carrying the same information as a death with a 100% probability, even though the latter is clearly stronger evidence for that COD. Also, with more than two CODs, the largest predicted probability can be well under 60%. In addition, a disease that shares a symptom profile with a more common COD might never be identified as the most likely COD in any individual case; the single-cause procedure would then incorrectly indicate a prevalence of zero for such a disease. We aim instead to use the multi-cause output, the complete set of InSilicoVA predicted probabilities for each individual case, without recourse to the plurality rule.

EAVA is a deterministic algorithm designed to yield single-class COD predictions. We used a novel modification of the EAVA algorithm to generate multi-cause predictions for cases where more than one COD is compatible with the VA responses. This is achieved by first running the algorithm normally to identify the most likely cause, then running it a second time with the most likely cause removed from the COD hierarchy. The cause selected by this second run is identified as the second most likely cause. We then create a multi-cause EAVA output assigning the most likely cause a probability of 75% and the second most likely cause a probability of 25%; all other causes are assigned a probability of 0%. A sensitivity analysis was conducted to study the impact of the choice of these weights.
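A minimal sketch of this two-pass weighting scheme follows. The `eava_assign` function below is a toy stand-in for the deterministic EAVA decision rules (the real algorithm walks a symptom-based hierarchy), and the 75/25 weights are the defaults described above.

```python
def eava_assign(record, hierarchy):
    # Toy stand-in for the EAVA rules: return the first cause in the
    # hierarchy whose (hypothetical) symptom flag fires for this record.
    for cause in hierarchy:
        if record.get(cause, False):
            return cause
    return None  # inconclusive

def multi_cause_eava(record, hierarchy, w_top=0.75):
    """Two-pass EAVA: assign w_top to the most likely cause and
    1 - w_top to the runner-up found after removing the top cause."""
    top = eava_assign(record, hierarchy)
    if top is None:
        return None  # inconclusive; handled by imputation downstream
    second = eava_assign(record, [c for c in hierarchy if c != top])
    scores = {c: 0.0 for c in hierarchy}
    if second is None:
        scores[top] = 1.0
    else:
        scores[top], scores[second] = w_top, 1.0 - w_top
    return scores

hierarchy = ["malaria", "pneumonia", "diarrhea", "other"]
print(multi_cause_eava({"pneumonia": True, "diarrhea": True}, hierarchy))
# {'malaria': 0.0, 'pneumonia': 0.75, 'diarrhea': 0.25, 'other': 0.0}
```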
Death is a complex process involving a causal chain with multiple causes. In fact, for each death enrolled with MITS and assessed by the CHAMPS process, all CODs are captured, including both an underlying and an immediate cause. The single-cause analysis uses only the "underlying" cause, because it is the first to appear and initiates the chain of events leading to death. In many deaths, however, the immediate cause, besides being the final cause in the causal chain, carries important information that may affect policy or clinical decisions. For example, if we only knew that deaths involved HIV (the underlying cause) and ignored that the terminal events involved tuberculosis or pneumonia, we would not be well informed. Therefore, we allow the use of up to two MITS-CODs (underlying and immediate) for each death, recognizing that there may be even more causes for some deaths. If MITS does not identify two distinct causes (i.e., if the underlying and immediate causes are the same), the MITS-COD remains a single cause as before.
With these aims in mind, it is necessary to generalize the notion of CCVA misclassification rates described above to allow for multiple causes in both the CCVA outputs and the CHAMPS diagnosis. We first extend the definition of the misclassification matrix for multi-cause CCVA outputs with single-cause MITS output. Recall that for a single-cause analysis, each entry M_ij of the misclassification matrix is the proportion of deaths that belong to class i (i.e., have MITS-COD i) that are identified as belonging to class j (i.e., have VA-COD j). For multi-cause CCVA output, the misclassification rate M_ij is defined as the average score assigned to cause j by the CCVA algorithm among all deaths attributed to MITS-COD i. Because for binary data averages are the same as proportions, this multi-cause definition of misclassification rates agrees with the previous definition of M whenever all the CCVA outputs are single cause.

The calibration framework also allows for multiple MITS causes (underlying and immediate). This extension follows from recognizing that for a multi-cause MITS-COD, the VA-COD for a death is a mixture of the CCVA misclassification rates for the two possible MITS-CODs (underlying and immediate) for that death. To formalize this, for a total of C causes, we summarize the multi-cause MITS-COD for a case in a C-length vector x. The entries of x are 1 (if a cause is identified as both the underlying and immediate cause of death), 0.5 (if the cause is identified as either the immediate or the underlying MITS-COD but not both), or 0 (if the cause is not identified as either COD). Consider a case where MITS identifies cause 1 as the immediate cause and cause 2 as the underlying cause. This MITS diagnosis can be expressed as the C-length vector x = (0.5, 0.5, 0, ..., 0). The multi-cause calibration interprets this diagnosis as assigning 50 out of 100 individuals with such a MITS diagnosis to cause 1 and the remaining 50 to cause 2. Hence, the proportion of times the CCVA will predict cause j for such a case is the weighted average of the VA misclassification rates for causes 1 and 2, i.e., 0.5 * M_1j + 0.5 * M_2j = (M'x)_j. Thus, estimating the misclassification matrix corresponds to a linear regression y = M'x of the multi-cause VA-COD y on the multi-cause MITS-COD x, and the misclassification matrix M can be interpreted as the multi-dimensional regression line. This notion of the misclassification matrix is formally defined by Datta and implemented in the codalm R package. Note that if MITS identifies a single cause (i.e., one entry of x is 1), this formulation agrees with the previous single-cause definition of M.
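The following sketch shows one way to encode the MITS vector x and fit the multi-cause regression y = M'x by minimizing a Kullback-Leibler criterion over row-stochastic matrices. It is a simplified stand-in for the estimator implemented in the codalm R package, and all data below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def encode_mits(underlying, immediate, C):
    """C-length vector x: 1 if a cause is both underlying and immediate,
    0.5 if it is exactly one of the two, 0 otherwise."""
    x = np.zeros(C)
    if underlying == immediate:
        x[underlying] = 1.0
    else:
        x[underlying] = x[immediate] = 0.5
    return x

def fit_misclassification(X, Y):
    """Estimate a row-stochastic M minimizing the total KL divergence
    between observed VA score vectors Y[k] and predictions M'X[k]."""
    n, C = X.shape

    def unpack(theta):               # softmax rows keep M on the simplex
        Z = np.exp(theta.reshape(C, C))
        return Z / Z.sum(axis=1, keepdims=True)

    def loss(theta):
        P = np.clip(X @ unpack(theta), 1e-12, None)  # rows are (M'x_k)'
        mask = Y > 0
        return np.sum(Y[mask] * np.log(Y[mask] / P[mask]))

    res = minimize(loss, np.zeros(C * C), method="L-BFGS-B")
    return unpack(res.x)

# Hypothetical example with C = 3 causes; real data would have hundreds
# of paired CHAMPS deaths rather than four.
pairs = [(0, 0), (0, 1), (1, 2), (2, 2)]  # (underlying, immediate)
X = np.vstack([encode_mits(u, i, 3) for u, i in pairs])
Y = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.5, 0.1],
              [0.1, 0.3, 0.6],
              [0.0, 0.2, 0.8]])            # multi-cause VA scores
M_hat = fit_misclassification(X, Y)
```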
The generalization of the misclassification matrix allows us to extend the VA calibration to multi-cause VA-COD and MITS-COD. The CHAMPS data of paired MITS and VA records are used to estimate the misclassification matrix using the regression method described in "Multi-cause misclassification matrix." The COMSA VA data allow estimating the raw (uncalibrated) CSMF q. Letting p denote the calibrated CSMF, the equation q = M'p presented in "Overview of single-cause VA-calibration" remains valid for multi-cause data. Hence, after estimating M and q, one can solve for p using the equation above.

However, unlike single-cause data, which are categorical (discrete) and amenable to multinomial modeling, multi-cause data (also termed compositional or fractional data) cannot be modeled using a multinomial distribution. This is because neither the CCVA nor the MITS COD outcome is any longer a discrete variable. Multi-cause CODs are compositional variables (i.e., vectors of probability scores, each probability representing the chance that death occurred due to the corresponding cause, summing to 1). Rather than maximizing a likelihood function, we minimize a loss function to connect parameters to the multi-cause data. The Kullback-Leibler divergence, or relative entropy loss, is a popular measure of dissimilarity connecting compositional or multi-cause data to parameters. Previous research provides a guide for Bayesian-style inference using a loss function rather than a full likelihood. With a given loss function and prior, we have the generalized Bayes posterior for multi-cause VA calibration:

Generalized posterior ∝ exp[−loss(parameters | data)] * prior

Hence, instead of a likelihood formulation, the multi-cause calibration is conducted under a generalized Bayes rule using the relative entropy (Kullback-Leibler) loss for multi-cause data (see Fiksel et al. for details). This loss agrees with the multinomial likelihood for single-cause data, so the multi-cause VA calibration is a generalization of the single-cause one. With this setup, computations to find the optimal estimates of the unknown parameters proceed in a manner analogous to the single-cause paradigm.

To arrive at an ensemble calibrated estimate, the loss function is taken as the sum of the loss functions for EAVA and InSilicoVA. Combining the loss functions for both algorithms yields estimates that are consistent with the results of both. Note that taking the sum of the loss functions for the ensemble calibration is equivalent to placing equal importance on each VA algorithm. If there is a priori knowledge that one algorithm is more accurate than the other, one can instead consider a weighted sum of the loss functions, placing higher weight on the more accurate algorithm. However, it is hard to quantitatively assess the superiority of an algorithm a priori and assign a number to it. Also, although the prior weights are equal for the ensemble calibration, the final estimated CSMF will often tend to align with the estimate from calibrating the most accurate CCVA algorithm.

Previously, extensive validation studies of the multi-cause VA calibration method were presented using the benchmark PHMRC dataset. These validation studies compared the calibrated and uncalibrated CCVA algorithms using the cause-specific mortality fraction accuracy (CSMFA) metric. However, this metric requires knowledge of the true CSMF for the dataset.
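As a sketch of how the ensemble loss combines the two algorithms, assume point estimates of each algorithm's misclassification matrix and raw CSMF are in hand. This is a simplified population-level sketch; the actual procedure defines the loss on individual-level multi-cause records (see Fiksel et al.), and all inputs here are placeholders.

```python
import numpy as np

def kl(y, z, eps=1e-12):
    """Relative entropy KL(y || z) between probability vectors."""
    mask = y > 0
    return float(np.sum(y[mask] * np.log(y[mask] / np.clip(z[mask], eps, None))))

def ensemble_loss(p, M_insilico, q_insilico, M_eava, q_eava, w=(1.0, 1.0)):
    """Equally weighted sum (by default) of the relative-entropy losses:
    each term compares an algorithm's observed raw CSMF q with the CSMF
    implied by the candidate calibrated CSMF p, namely M'p."""
    return (w[0] * kl(q_insilico, M_insilico.T @ p)
            + w[1] * kl(q_eava, M_eava.T @ p))

# Candidate values of p (on the simplex) can then be scored by
# exp(-ensemble_loss(p, ...)) * prior(p), as in the generalized posterior.
```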
This is available for the PHMRC dataset, which contains more comprehensive COD information for each record in addition to the VA data. For the COMSA analysis, the true CSMF is unknown; it is the quantity of interest to be estimated. Hence, CSMFA cannot be calculated. To evaluate statistical models without knowledge of the true parameter values, a common strategy is to compare the out-of-sample prediction performance of the candidate models on hold-out data. This strategy cannot be adopted for VA calibration because the procedure does not operate at the individual level, only at the population level. In other words, the calibration does not calibrate the COD for individual deaths or produce individual-level calibrated COD predictions; it only calibrates the estimate of the population-level CSMF. To compare uncalibrated models with their calibrated counterparts, we use the Widely Applicable Information Criterion (WAIC), a goodness-of-fit measure that uses in-sample model fit to estimate the model's ability to predict future observations. WAIC provides an estimate of out-of-sample (prediction) error while using only in-sample (training) data. WAIC also harmonizes well with Bayesian inference, making use of the entire posterior distribution available from the Markov chain Monte Carlo runs. Calculating the WAIC for the multi-cause calibrated models is not substantially different from calculating it for the single-cause calibrated models, except that the log-likelihood function is replaced with the negative of the loss function.
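For reference, WAIC can be computed from posterior draws of the pointwise log-likelihood (or, in the generalized Bayes setting here, the negative pointwise loss). A standard sketch using the −2 * (lppd − p_WAIC) convention:

```python
import numpy as np

def waic(loglik):
    """WAIC from an (S x n) array of pointwise log-likelihoods at S
    posterior draws for n observations; lower values indicate better
    estimated out-of-sample fit."""
    S, _ = loglik.shape
    # log pointwise predictive density, computed stably per observation
    lppd = np.sum(np.logaddexp.reduce(loglik, axis=0) - np.log(S))
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))  # effective parameters
    return -2.0 * (lppd - p_waic)
```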
Given the raw output of the MITS and CCVA algorithms, we instituted a few pre-processing steps to ensure the accuracy and validity of our procedure. Because conditions for neonates in South African hospitals differ from those in the other countries of interest, we exclude these 274 South African neonates from our analysis.

The EAVA algorithm is inconclusive for a significant proportion of individuals, failing to identify any particular COD. These deaths would by necessity be excluded from a single-cause analysis, but the multi-cause calibration accommodates imputed values for these deaths using the best available estimates. For the COMSA data, inconclusive EAVA deaths are assigned the average of the EAVA scores of all other (conclusive) records. For a death with an inconclusive EAVA diagnosis in the CHAMPS data, we impute the EAVA COD as the row of the EAVA misclassification matrix corresponding to the subject's MITS underlying COD, where the misclassification matrix is calculated relative to the single-cause MITS. These choices represent our best estimate of how the EAVA algorithm would classify such subjects. Because the imputed values are multi-cause in nature, they cannot be included in the single-cause analysis. Thus, deaths with inconclusive EAVA diagnoses, which had to be removed from the single-cause analysis, are assigned multi-cause imputed scores, which are then used in the multi-cause calibration.

With C CODs, learning the misclassification matrix requires estimating a total of C*(C − 1) parameters. To minimize the dimensionality of the problem while retaining its important aspects, we group some CODs into more general labels based on evidence on the predominant CODs among children and neonates. For the children, we use the following seven classifications: malaria, pneumonia, diarrhea, severe malnutrition, HIV, other infections (which includes meningitis, typhoid fever, hepatitis, etc.), and other. For the neonates, we use the following five classifications: congenital malformation, infection (neonatal tetanus, meningitis and encephalitis, diarrhea, pneumonia, and sepsis), intrapartum-related events (IPREs), other, and prematurity.
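A minimal sketch of the two imputation rules for inconclusive EAVA records described above, with NaN rows marking inconclusive records; the array layouts are hypothetical.

```python
import numpy as np

def impute_comsa(eava_scores):
    """COMSA side: replace each inconclusive record (a NaN row) with the
    average multi-cause score vector of all conclusive records."""
    S = np.array(eava_scores, dtype=float)
    bad = np.isnan(S).any(axis=1)
    S[bad] = S[~bad].mean(axis=0)
    return S

def impute_champs(eava_scores, mits_underlying, M_single):
    """CHAMPS side: impute an inconclusive record as the row of the
    single-cause EAVA misclassification matrix corresponding to the
    death's MITS underlying COD."""
    S = np.array(eava_scores, dtype=float)
    for k in np.where(np.isnan(S).any(axis=1))[0]:
        S[k] = M_single[mits_underlying[k]]
    return S
```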
Raw data.

The timeframe for the COMSA VA dataset used for all the analyses presented here is May 2018 to May 2021. summarizes the CSMF results using the InSilicoVA and EAVA algorithms for children who died in the COMSA surveillance area. The estimated prevalences and the rankings of the causes differ between the two algorithms; the largest difference is that InSilicoVA recognizes many more malaria deaths than EAVA. summarizes the results for the 818 neonatal deaths in COMSA for the same algorithms. The prevalence of infection is quite high for both algorithms, accounting for a majority of deaths by EAVA. IPRE and prematurity also account for large percentages of deaths. Congenital malformation is rare in the judgments of both algorithms.

is a contingency table of the relationship between underlying (labeled horizontally) and immediate (labeled vertically) CODs of children as identified by CHAMPS. Because there are many entries off the diagonal, we can see that many deaths have multiple MITS causes. Pneumonia and other infections are relatively common as immediate causes, whereas severe malnutrition and other are more common as underlying than as immediate causes. The combination of other as an underlying cause and other infections as an immediate cause is particularly frequent. is the contingency table of MITS underlying and immediate causes for the neonate cohort. For most causes, the number of deaths where they were the underlying cause (row totals) is similar to the number of deaths where they were the immediate cause (column totals). The combination of infection as an underlying cause and prematurity as an immediate cause is relatively frequent.

displays the estimate of the multi-cause misclassification matrix M comparing the MITS-COD (row headers) and each VA-COD (column headers). Conditional on the MITS-COD classification, each row gives the frequency of VA-CODs among individuals with that MITS-COD, for both InSilicoVA and EAVA. Because the diagonal entries are far from 100%, it is apparent that misclassification is frequent. Only a few CODs are correctly identified in more than half of the deaths (diarrhea by both algorithms and HIV by EAVA). Malaria is identified relatively often by InSilicoVA but not by EAVA. Severe malnutrition is misclassified very often by both algorithms. displays the same information for the neonatal deaths. Infection and prematurity are correctly identified in more than half of the deaths by both algorithms, and IPRE is correctly identified by InSilicoVA for a majority of deaths. However, EAVA mislabels IPRE as infection in 59% of deaths. Note that congenital malformation, up to rounding error, is never identified as a COD by the InSilicoVA algorithm.

compares graphically the diagonal entries (i.e., the sensitivity values) of the single-cause and multi-cause misclassification matrices. For both EAVA and InSilicoVA, and for both children and neonates, the data points generally lie above the 45-degree line. We conclude that the multi-cause analysis has somewhat higher sensitivity; that is, according to the multi-cause weighting and misclassification calculations we have used, the MITS-COD and VA-COD are in better agreement for multi-cause data than for single-cause data. Causes where the multi-cause analysis leads to around 5% or more increased sensitivity for CCVA in children are HIV (both algorithms), diarrhea (InSilicoVA), and other (EAVA).
For neonates, the multi-cause analysis leads to increased sensitivity for infection (both algorithms), IPRE (InSilicoVA), other (InSilicoVA), and congenital malformation (EAVA). For some of the other cause–algorithm combinations, the difference is not dramatic, and a few conditions (malaria for children; IPRE and other for neonates) are slightly better identified in the single-cause data for EAVA.

Calibration.

We present the calibration results for the child deaths first. Supplemental Table 1 gives uncalibrated and calibrated CSMF estimates for InSilicoVA, EAVA, and the ensemble algorithm, with 95% credible intervals for the calibrated models. displays this information graphically. The pre- and post-calibration CSMF values are strongly related, yet there are some important differences. The calibration causes the estimated pneumonia CSMF to decrease for both algorithms; this can be understood from because pneumonia is identified with relatively high sensitivity but is often incorrectly assigned to deaths attributable to other causes, particularly malaria (which is itself a common COD). Therefore, the calibrated model recognizes that many deaths identified as due to pneumonia by the CCVA should properly be allocated to other causes. Conversely, for the EAVA and ensemble algorithms, the estimated CSMF of other infection increases through calibration because sensitivity for this cause is low; the COD is often misclassified as pneumonia or diarrhea. Thus, the calibration recognizes that many deaths identified as due to pneumonia or diarrhea should properly be labeled other infection. For EAVA, the jump in the other infection CSMF after calibration is rather dramatic, but the credible interval is quite wide. This indicates high uncertainty in the posterior distribution, likely resulting from near-singularity of the misclassification matrix; that is, uncertainty in the estimation of the misclassification matrix causes even larger uncertainty in the estimation of the CSMF.

As reported in Supplemental Table 1, the calibrated CSMF estimates of the EAVA and InSilicoVA algorithms are similar, although InSilicoVA identifies more deaths as diarrhea and malaria, whereas EAVA identifies more deaths as other infection. As expected, the results of the ensemble algorithm, which uses data from both InSilicoVA and EAVA, generally lie between the results of the two algorithms. However, the calibrated CSMF from the ensemble algorithm aligns much more closely with the calibrated InSilicoVA CSMF ( ). In the plot of the error matrices for children in , we see that the sensitivities for InSilicoVA are higher than those of EAVA for every cause except HIV. Therefore, the ensemble calibration agrees with the more accurate algorithm here. The ensemble model identifies other infection as the most common COD.

Supplemental Figure 1 displays WAIC values for the calibrated and uncalibrated models for each algorithm. In each case, the calibrated model has a lower WAIC, indicating that the calibrated models fit the data better than the uncalibrated models. compares the posterior distribution of the ensemble CSMFs for the single-cause and multi-cause analyses. The analyses are largely in agreement, with some important changes: malaria is identified as less prevalent in the multi-cause analysis, whereas diarrhea and other are identified as slightly more prevalent.
indicates that results are not sensitive to the choice of probability weights assigned to the primary and secondary COD as identified by EAVA.

We discuss the findings for the neonatal deaths next. We can see from that the InSilicoVA CSMF for infection is increased through calibration. This is somewhat surprising because the sensitivity for the identification of infection is relatively high. However, true infection deaths are frequently mislabeled as causes that are common (prematurity and IPRE), whereas only uncommon causes (congenital malformation and other) are frequently labeled as infection, suggesting overall underreporting. This results in the increase in the CSMF for infection after calibration; the increase is less pronounced for EAVA because it is partly offset by adjusting for EAVA's misclassification of a large percentage of MITS IPRE deaths as infection. Calibration decreases the estimated CSMF of prematurity for both algorithms. This can be understood by noting that the sensitivity for prematurity is very high and that common causes (infection and IPRE) are frequently mislabeled as prematurity.

As given in and Supplemental Table 2, the calibrated CSMF estimates of the EAVA and InSilicoVA algorithms are very similar, and the results of the ensemble algorithm are generally close to the average of the results of the two algorithms. All models identify infection as the most common cause of death. Supplemental Figure 2 indicates, as with the child data, that model fit is improved by calibration for all algorithms, as measured by WAIC. Supplemental Figure 3 demonstrates that the posterior distributions of CSMF are nearly identical for the multi- and single-cause analyses for neonates. Supplemental Figure 4 indicates that results are not sensitive to the choice of probability assigned to the primary COD as identified by EAVA.
In this paper, we have described the application of a multi-cause VA calibration method to improve quantification of CSMFs from CCVA data for child and neonatal deaths identified in COMSA. By cross-tabulating the results of CCVA algorithms with respect to the MITS-COD, we can learn the misclassification patterns of the CCVA algorithms and thus correct the CSMF estimates. This paper has focused on the use of multiple causes (up to two) in both the VA-COD and the MITS-COD. We see this as preferable to the single-cause analysis because it more faithfully incorporates the data sources, given that both the VA-COD and the MITS-COD may identify multiple causes of death.

With respect to the COMSA data, we find that calibrated models are consistently better fits to the combined CHAMPS-COMSA data than their uncalibrated counterparts, as measured by WAIC. We find that, among other changes, calibration increases the estimated CSMF of malaria and other infections and decreases the estimated CSMF of pneumonia in children. For neonatal deaths, calibration increases the CSMF of infection and decreases the estimated CSMF of prematurity. We have attempted to explain these changes intuitively based on the misclassification matrices, but we note that the calibration involves solving a large system of equations, and the point estimate of the misclassification matrix may not match the Bayesian estimates of the calibration procedure; thus, it may be difficult to understand the calibration results fully by intuition alone, and it is important to keep in mind the basic principle of calibration, which is to adjust for the imperfect sensitivities of the CCVA algorithms. In the final results, infection is the most common cause of death in neonates, and other infections is the most common cause of death in children.

Some historical data on CSMFs for children and neonates in Mozambique are available from estimates published in the report of the INCAM VA survey in 2007 and those published in Perin et al., which is informed by data from both the INCAM survey and COMSA. For child deaths at 1–59 months, the multi-cause ensemble calibration estimated a higher CSMF for diarrhea (21%, with a credible interval of 16–27%) compared with INCAM (6%) and Perin et al. (11%). This is due to InSilicoVA identifying a high proportion of diarrhea cases in the COMSA data ( ). The CSMF for malaria was generally similar (> 20%) across all three estimates, as was the CSMF for pneumonia (∼10%). For neonates, both previous reports estimated a significantly higher CSMF of prematurity (INCAM: 35%; Perin et al.: 48%) compared with the calibrated CSMF (10%, with a credible interval of 7–13%). We see from the error matrices in that both CCVA algorithms produced a large number of false positives for prematurity. The calibration adjusts for this overcounting of prematurity deaths, thereby reducing the CSMF for prematurity. This, in turn, increases the CSMF for infection after calibration (62%, with a credible interval of 56–69%), which is higher than the INCAM result (27%). The CSMF for IPRE is similar (∼20%) for all three sets of estimates.

There are several differences between our study and the INCAM study that make these estimates not directly comparable. They correspond to different time periods, are based on datasets that may not be comparable in terms of representativeness, and used different methods for cause-of-death diagnosis.
Therefore, the observed differences in estimates are possibly due to all of the aforementioned differences between the two settings. Neither the INCAM nor the Perin et al. estimates had considered VA misclassification, and adjusting for it presumably would have changed those results in the same way our raw results change after calibration. The calibrated CSMFs from our study emphasize the need to account for this misclassification and to reevaluate priorities for health services based on the changes after calibration. However, before undertaking such actions, which of course would have resource implications, it would be important to seek confirmatory evidence of this VA bias, perhaps from health facility monitoring of deaths or even from demand for health care at the facility level (and the community level, if there are services that could be monitored) for illnesses due to the same causes. The frequent misclassification of some causes by VA also has implications for care and underscores the need to consider the causal chain (e.g., the possibility that deaths in premature babies may be due to infection, as highlighted by the large proportion of infection cases misclassified as prematurity by VA) ( , left panel).

The current estimates use all the available COMSA data collected across the 11 provinces to produce a pooled calibrated CSMF estimate for Mozambique. One can also conduct the calibration on subsets of the data corresponding to specific provinces to produce subnational estimates. Such an exercise would be useful only if there is an adequate sample size per province to yield estimates that are not too imprecise. Similarly, one can stratify the analysis by time to produce yearly CSMF estimates and study trends over time; this would require the yearly subsets of the data to have uniform and representative geographical coverage across the country. We also expect the CSMF estimates to evolve with further COMSA and CHAMPS data collection and with refinement of the CCVA algorithms.

The multi-cause calibration approach has certain limitations. The single-cause calibration is simpler to implement, but the assignment of a multi-cause COD can be difficult using any method. The current implementation of the multi-cause calibration uses a simplified representation of the multi-cause MITS diagnosis, retaining only up to two MITS-CODs (underlying and immediate causes). Future work will expand the multi-cause calibration framework to incorporate information more comprehensively about the entire set of (possibly more than two) causes present in the causal chain leading to death. Also, both the single- and multi-cause calibration procedures rely on two main assumptions: 1) that the causes of death identified by MITS in the CHAMPS cohort are accurate and 2) that the CCVA misclassification rates in the COMSA population match the misclassification rates in the CHAMPS data. Future work needs to scrutinize these assumptions. For example, to understand the representativeness of the MITS-VA error matrix estimated from the pooled CHAMPS data, it would be important to compare error matrices from individual countries with MITS-VA pairs once enough MITS are conducted locally. If the MITS-VA error matrices reveal substantial heterogeneity across countries, this would highlight the need for a local dataset of MITS-VA pairs to estimate the VA misclassification rates.

The multi-cause analysis has important advantages over the single-cause analysis.
Death is a complex process, and assigning any one cause, although procedurally simpler, can be problematic, especially when using a probabilistic CCVA algorithm like InSilicoVA, which provides rich multi-cause output. Furthermore, multi-cause calibration allows use of all the data, including deaths for which one of the CCVA algorithms is inconclusive. For such inconclusive deaths, the calibration requires imputation, and the imputed value is a multi-cause COD estimate that cannot be used in a single-cause analysis. Thus, a single-cause analysis can lead to the loss of a significant amount of data (in our case, ∼15–20% of the COMSA data due to inconclusive EAVA diagnoses). We also see that the multi-cause analysis leads to better agreement between the VA-COD and the MITS-COD ( ). This is because the single-cause calibration, using only the MITS underlying COD, would regard deaths where the VA-COD agrees with the MITS immediate COD but not with the MITS underlying COD as complete misclassifications, leading to higher estimates of misclassification. The multi-cause calibration treats such deaths as only partial misclassifications and thus better captures the degree of false classification in the CCVA algorithms.

For the COMSA-Mozambique data analysis, the final calibrated mortality fractions from the multi-cause analysis do not differ much from those of the single-cause analysis for most causes. However, the advantages of the multi-cause analysis can manifest more strongly in VA datasets from other populations. For example, if for a large fraction of deaths in a population the VA-COD is inconclusive between two causes but always assigns one cause a higher score than the other, then in a single-cause analysis the CSMF for the cause with the lower score will always be zero, although many of the deaths likely occurred due to this cause. Also, besides calibrating the VA-based CSMF, another future utility of the misclassification matrices is to help understand why VA misclassifies such a large proportion of cases. For such a task, the multi-cause misclassification matrices help exclude apparent misclassifications that arise only because the single-cause VA or the single-cause MITS leaves out the matching cause. Hence, one can focus on studying cases with a true mismatch between VA and MITS and try to improve the VA algorithms. Thus, it is advisable to use the multi-cause analysis rather than the single-cause one for studying and calibrating VA data.
Financial support: The COMSA Mozambique project is funded by the Bill & Melinda Gates Foundation (Grant no. OPP1163221).
Correcting for Verbal Autopsy Misclassification Bias in Cause-Specific Mortality Estimates
Accurate and credible cause-of-death (COD) data are critical to understand, interpret, and address the burden of diseases and to tailor public health policymaking at subnational, national, and regional levels. A complete diagnostic autopsy (CDA) is the gold-standard procedure for determining COD. When a full autopsy is not affordable or feasible, medical certification of COD (MCCOD) is often conducted using all medical information relevant to the terminal illness. In low- and middle-income countries (LMICs), CDAs are very rarely conducted due to cultural, religious, and infrastructural constraints, whereas MCCOD has suboptimal coverage, is usually limited to deaths that occur in health facilities, and is of variable quality. For settings without the capacity to conduct CDAs, or where they are infrequently done, a nonclinical approach called "verbal autopsy" (VA) is commonly used. VA is a systematic postmortem interview of the relatives of the deceased about the health history, signs, and symptoms of the fatal illness that can potentially identify the COD. Although the reliability of VA at the individual level is questionable, it is often the only feasible option and has become a key source of COD data in LMICs that do not have fully developed civil registration and vital statistics systems with MCCOD information. In addition, VA-based results are often useful for studying population-wide trends in causes of death.

There are two ways to assign a COD from a VA report. One practice is to have physicians review the VA (physician-coded VA [PCVA]). This process is time- and resource-intensive, and PCVA results can be inaccurate or hard to standardize across physicians, countries, or regions. A scalable alternative to PCVA is an automated algorithm, termed "computer-coded VA" (CCVA), that takes a VA record as input and outputs a probable COD. The format of the VA instrument has been standardized by the WHO and is compatible with many automated CCVA diagnostic algorithms such as InterVA, InSilicoVA, EAVA, SmartVA, and the Naive Bayes Classifier. The automation afforded by CCVA, offering COD diagnosis for large databases of VA records, has led to its increased adoption in large-scale VA studies.

The "raw" estimates of cause-specific mortality fractions (CSMFs), the percentage of deaths attributable to a given cause, are obtained as the proportion of the total number of deaths in the VA database that are predicted to be from that cause by the CCVA algorithm. The CSMF estimates can be stratified by age group, sex, geographical region, or other subgroups. CSMF estimates from CCVA algorithms can produce results similar to those from physician review. However, this widespread practice of aggregating CCVA outputs to obtain CSMFs has ignored the fact that CCVA algorithms are not perfect; their accuracy depends on both the quality and geographical coverage of the training data and the modeling assumptions used in creating the algorithm. The COD determination from CCVA is not the true COD; it is only a predicted one and is prone to misclassification. Multiple studies have now shown that the CCVA-predicted COD (VA-COD) suffers from misclassification bias; for a significant proportion of deaths (often > 50%), the predicted cause from CCVA differs from the cause obtained using more comprehensive information.
The misclassification of CCVA can be assessed by comparing CCVA outputs with medical certification of COD obtained from CDA, minimally invasive tissue sampling (MITS; also called minimally invasive autopsy, or MIA), or some "reference standard" combination of laboratory, pathology, and medical imaging results such as that used by the Population Health Metrics Research Consortium (PHMRC). The misclassification bias of CCVA is propagated into the raw (uncalibrated) CSMF estimates based on the CCVA-determined COD. Biased CSMF estimates from CCVA can mislead public health professionals and decision-makers and potentially cause the misallocation of resources intended to prevent mortality. This misclassification is also prevalent for PCVA, which has been shown to perform worse than CCVA at identifying COD at both the individual and the population level in some settings. In this manuscript, we focus on calibrating CSMFs from CCVA, but the calibration approach can also be applied to CSMFs from PCVA.

Datta et al. developed a method called "calibratedVA" that calibrates the initial raw CSMF estimate from a CCVA algorithm by adjusting for the algorithm's misclassification bias. The method requires a paired dataset of deaths with both a CCVA COD and a reference-standard COD based on more comprehensive medical and laboratory information, which is used to learn the misclassification rates of the CCVA algorithms. The misclassification rates are then used to calibrate the raw CSMF estimates in a hierarchical Bayesian modeling framework. Calibration has been shown to substantially improve CSMF estimates over the raw (uncalibrated) estimates from CCVA.

In this manuscript, we offer a statistical primer on how to use calibratedVA to correct for the misclassification bias of CCVA algorithms. We provide a complete workflow of the methodology that estimates the raw CSMF and the misclassification rates, combines them to produce calibrated CSMF estimates, and provides data-driven model comparison metrics to compare and choose between the raw and calibrated CSMF estimates. Finally, we discuss how calibratedVA can also combine predictions from multiple CCVA algorithms to produce a single CSMF estimate based on an ensemble calibration method. The ensemble method is preferable to the use of a single CCVA algorithm because it guards against incorrect results produced by a poorly performing algorithm. We apply calibratedVA to obtain CSMF estimates for child (aged 1–59 months) and neonatal deaths in Mozambique. The methodology can be used to correct for COD misclassification bias in VA-based projects in other countries.
COMSA Mozambique verbal autopsy data.

We use VA data from the nationally representative Countrywide Mortality Surveillance for Action (COMSA) program in Mozambique to obtain raw (uncalibrated) CSMF estimates. COMSA provides CSMFs at the national and subnational levels for Mozambique based on active surveillance for deaths in 700 clusters of approximately 300 households each, with a total population of 923,031 people. We collected 11,614 VAs on deaths across all age groups that occurred from 2017 to 2021. The majority of deaths registered in COMSA occur outside of a hospital and thus are not assigned an official COD. For each registered death in COMSA, a VA is conducted. The dataset used in this analysis includes records for 1,841 deaths of children (1–59 months old) and 818 neonatal deaths from May 2018 to May 2021 from all 11 provinces of Mozambique. The VA questionnaire used for COMSA corresponds to the WHO 2016 VA tool. The forms have been programmed into the Open Data Kit software for data collection on a tablet. In-person interviews are conducted with a respondent determined to have been the child's usual caregiver, most often the mother.

Child Health and Mortality Prevention Surveillance (CHAMPS) Network MITS data.

To estimate the misclassification rates of COD predictions for the CCVA algorithms, we use data from the CHAMPS network. CHAMPS is an ongoing comprehensive child mortality surveillance project that performs MITS to inform determination of COD for children (1–59 months), neonates, and stillbirths at sites across several countries, including Mozambique. MITS COD assignments in these age groups have been shown to be accurate (∼75% concordance) when compared with full diagnostic autopsies. The CHAMPS data used in this manuscript contain records for 426 child (1–59 months) and 614 neonatal deaths that occurred within the CHAMPS network hospitals in Bangladesh, Ethiopia, Kenya, Mali, Mozambique, Sierra Leone, and South Africa from July 2017 through December 2020. MITS is only conducted for "disease-related" deaths and not for trauma or accidental deaths. The MITS-COD was determined through review of postmortem biopsy pathology and screening tests for a large array of pathogens, as well as medical history and clinical records, by a panel of physicians (including pediatricians), pathologists, microbiologists, and public health specialists. In CHAMPS, the COD report using MITS provides a full chain of events, initiated by the underlying cause, followed by the morbid or antecedent condition(s), and finalized with the immediate cause. For each death in CHAMPS, the VA record was also available in addition to the MITS-COD. Because CCVA typically provides only the underlying cause of death, to estimate the misclassification rates of VA (see "Misclassification bias of CCVA algorithms" below) we pair the "underlying" cause from the MITS-COD with the VA-COD for each of the deaths in the CHAMPS dataset.

CCVA algorithms and uncalibrated CSMFs.

To obtain the raw (uncalibrated) estimates of age group–specific CSMFs, we use COD diagnoses from two CCVA algorithms, InSilicoVA and EAVA, for each COMSA child and neonate record. These algorithms were chosen because of their fundamentally different approaches to decision-making. InSilicoVA is a probabilistic (Bayesian) method that assigns a COD to a VA record based on the likelihood (probability) of the reported VA responses (illness, signs, and symptoms) for that record given each COD.
InSilicoVA is broadly similar to InterVA, another popular CCVA algorithm, but offers a more statistically principled treatment of the binary (yes/no) and missing VA responses; hence, we used InSilicoVA instead of InterVA. The second CCVA algorithm, EAVA, is not a statistical algorithm; it is based on medical decision-making rules. The approach relies on expert-derived algorithms of VA illness signs and symptoms for each COD and a hierarchy to select the main COD from among all identified comorbidities. EAVA does not use a probability framework, a training dataset, or a symptom-given-cause matrix like InterVA or InSilicoVA. Instead, it is a deterministic algorithm that produces a single most likely COD for each death. It is, however, driven by the ordering of causes in the hierarchy of all causes of interest. More details on the implementation of the two algorithms to obtain CODs are provided in Supplemental Section 3.

Once the specific cause has been determined by InSilicoVA and EAVA for each death, causes are grouped into broader cause categories (see "Aggregation of causes into broad categories" below) to be used for raw estimation or calibration. For each neonatal and child (1–59 months) death record in the data, we obtained the top (most probable) COD from InSilicoVA. These are then aggregated to obtain the raw (uncalibrated) InSilicoVA CSMFs simply as the proportion of all VA records assigned to a given cause. The same procedure is repeated with EAVA to obtain the raw EAVA CSMF. More formally, for an age group and a chosen CCVA algorithm, the raw CSMF estimate for a cause j is given by

Raw CSMF for cause j = (number of VA records with CCVA-predicted COD j) / (total number of VA records)   (1)

Misclassification bias of CCVA algorithms.

Misclassification occurs when a CCVA algorithm assigns an individual a COD that differs from that individual's reference COD (in this case, the MITS-COD). Previous work has shown that using the misclassification rates of a VA algorithm to obtain a calibrated CSMF estimate can greatly improve accuracy over the uncalibrated CSMF estimate. Because the misclassification rates of the CCVA algorithms are not known for COMSA, we use the CHAMPS data to estimate them. For each CHAMPS record, we use the MITS-COD paired with the VA-COD. We can then estimate the misclassification rates (cause-specific true-positive and false-negative rates) of the CCVA algorithm as described below. For a cause i, we calculate the true-positive rate of the CCVA algorithm as the proportion of CHAMPS deaths with MITS-COD i that are also assigned COD i by the CCVA algorithm (VA-COD). Similarly, for a pair of causes i and j, we calculate the cause pair–specific false-negative rate as the proportion of CHAMPS deaths with MITS-COD i that are assigned COD j by the CCVA algorithm (i.e., VA-COD is j). We collect these true-positive and false-negative rates in a misclassification rate matrix M whose entry M_ij in the ith row and jth column is given by

M_ij = (number of CHAMPS cases with MITS-COD i and CCVA-predicted COD j) / (total number of CHAMPS cases with MITS-COD i)   (2)

The diagonal entries of the misclassification matrix are the cause-specific true-positive rates (sensitivities), and higher values indicate higher accuracy for the CCVA algorithm.
The off-diagonal entries of the matrix contain the cause pair–specific false-negative rates, and lower values indicate higher accuracy for the CCVA algorithm. A perfect CCVA algorithm with no misclassification bias would have 1 (100%) on the diagonal and 0 on the off-diagonals of M.

Aggregation of causes into broad categories.

Estimating the misclassification rates of a CCVA algorithm requires estimating all the entries of the misclassification matrix. For C causes, this implies inferring C^2 true-positive or false-negative rates (one for each cause pair). If we wanted to use the full set of more than 30 causes, this would mandate estimating a 30 × 30 misclassification rate matrix (i.e., 900 cause pair–specific misclassification rates). Such a task is impossible with only a few hundred MITS deaths (426 for children 1–59 months, 614 for neonates). Hence, to ensure stable estimation of the misclassification rates, we grouped the original larger set of causes into a smaller set of broad cause categories. For children, we use seven broad causes of death in our study: pneumonia, malaria, diarrhea, severe malnutrition, HIV, other infections, and other causes of death. Other infections in children include meningitis, typhoid fever, and hepatitis. Other causes in children include cancer, injury, and congenital malformation. For neonates, we use five broad causes: congenital malformation, infection, intrapartum-related events (IPREs), prematurity, and other. Infection in neonates includes neonatal tetanus, meningitis and encephalitis, diarrhea, pneumonia, and sepsis. The other category for neonates includes causes such as injury. These broad causes represent the main causes of death of young children and neonates known from the extensive literature on child mortality in LMICs.

Correcting for misclassification bias using calibratedVA.

The misclassification matrix of a CCVA algorithm can be used to correct for its misclassification bias in the raw CSMF estimates. The calibration is essentially a back-solving procedure that adjusts for the CCVA sensitivities. We elucidate this with a simple hypothetical example. Suppose there are only two causes, A and B, and we know that a given CCVA has sensitivities of 95% and 65% for the two causes, respectively. This knowledge about the sensitivity of the CCVA may be derived from a paired dataset of VA records and a reference COD (like the MITS-COD) from an auxiliary dataset (like the CHAMPS data in this application). Also suppose that, from the unpaired data of VA records only, the uncalibrated CSMFs are 53% for cause A and 47% for cause B. It is evident that these uncalibrated CSMFs are biased: the sensitivity for cause B is only 65%, so the CCVA mistakenly assigns 100% − 65% = 35% of people who truly die of cause B to cause A. This leads to a higher uncalibrated CSMF for cause A than its true CSMF. We can use these sensitivities to calibrate for the true CSMFs p_A and p_B = 100% − p_A of causes A and B, respectively, as follows:

53% = p_A * 95% + p_B * (100% − 65%)   (3)
47% = p_A * (100% − 95%) + p_B * 65%   (4)

These equations allow us to calibrate (back-solve) for the unknown CSMFs p_A and p_B. The calibrated CSMFs are p_A = 30% and p_B = 70%, reflecting the substantial bias in the uncalibrated CSMF.
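The counting definitions in Equations (1) and (2) and the two-cause back-solve in Equations (3) and (4) can be illustrated in a few lines. This is a hypothetical sketch with integer-coded causes, not the calibratedVA package itself.

```python
import numpy as np

def raw_csmf(va_cod, C):
    """Equation (1): share of VA records assigned to each cause."""
    counts = np.bincount(va_cod, minlength=C)
    return counts / counts.sum()

def misclassification_matrix(mits_cod, va_cod, C):
    """Equation (2): M[i, j] = share of MITS-COD-i deaths with VA-COD j."""
    M = np.zeros((C, C))
    for i, j in zip(mits_cod, va_cod):
        M[i, j] += 1
    return M / M.sum(axis=1, keepdims=True)

# Back-solving the two-cause example of Equations (3) and (4):
M = np.array([[0.95, 0.05],   # rows: true COD (A, B); columns: VA-COD
              [0.35, 0.65]])
q = np.array([0.53, 0.47])    # uncalibrated CSMF
p = np.linalg.solve(M.T, q)   # solve q = M'p
print(p)                      # [0.3, 0.7] -> p_A = 30%, p_B = 70%
```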
When more than two causes are being considered, the back-solve is not straightforward, and direct attempts to back-solve this multivariate system of equations may lead to unstable and absurd estimates (estimated cause proportions lying outside of 0–100%). Hence, the calibration approach was formalized into a probability model that avoids both these problems. The model has two parts for the two data sources: COMSA and CHAMPS. The model for the COMSA data models the raw CSMF as a weighted sum of the calibrated CSMF weighted by the misclassification rates similar to the equations above. The model for the CHAMPS data helps estimate these misclassification rates using Equation ( ). The two parts are jointly used in a Bayesian framework that simultaneously estimates both the misclassification matrix and the calibrated CSMFs. Being a Bayesian algorithm, calibratedVA offers both point estimate of the CSMFs as well as 95% credible intervals which are used for inference about changes in CSMF after calibration. The COMSA national sample includes sampling weight to correct for the selection of clusters with probability proportionate to size and oversampling in four provinces. However, for the analysis and description in this paper, only unweighted CSMFs were used. See Supplement Section 1 for a technical overview of the calibration method. The calibratedVA method is made publicly available as a software via Github R-package. The Github repository contains all the code as open-access and a vignette with example scripts to use the software is also publicly available. Ensemble calibration method. The available CCVA algorithms generally do not agree with each other for a substantial proportion of deaths, and, for a given VA data point, it is challenging to know a priori which CCVA algorithm will be most accurate. Hence, Datta et al. developed an ensemble calibration approach that uses COD predictions from multiple CCVA algorithms. The ensemble method estimates the misclassification rates of each CCVA algorithm separately and then calibrates by back-solving for the CSMF that agrees best with all the data (i.e., the misclassification rates and the raw CSMFs from each of the CCVA algorithms). The ensemble calibration estimates the misclassification rates of the different algorithms and weights the more accurate ones favorably. Hence, the ensemble calibration has been shown to guard against inadvertent use of a poor performing CCVA algorithm and, therefore, performs better than VA calibration using a single CCVA algorithm. In our analysis, in addition to conducting individual VA calibration with each CCVA algorithm to present the respective calibrated CSMFs, we also implement the ensemble calibration by simultaneously using the predicted COD data from both InSilicoVA and EAVA to produce a unified CSMF estimate. As recommended by Datta et al., we present the estimate from the ensemble calibration as our final CSMF estimate for each age group. We compared the calibrated estimate from each respective CCVA algorithm with the corresponding uncalibrated estimate. For the ensemble method, we compare the calibrated ensemble estimate with the uncalibrated ensemble estimate, which is simply the equally weighted average of the uncalibrated CSMF estimates from the different CCVA algorithms. Overview of VA calibration pipeline. We provide a summary of the entire VA calibration procedure in . 
For each VA record in the dataset (in our case, the COMSA VA dataset), the predicted COD is obtained and aggregated, leading to the raw uncalibrated CSMF estimates. This is repeated for each CCVA algorithm considered (InSilicoVA and EAVA in this analysis), and the resulting CSMFs are averaged to obtain the uncalibrated ensemble estimate. From the CHAMPS data of paired VA-COD and MITS-COD, we obtain the misclassification rates for each CCVA algorithm. We feed both the uncalibrated CSMFs and misclassification rates for both algorithms into the VA calibration pipeline to obtain the ensemble calibrated estimate. To compare the results from the calibrated models to the uncalibrated CSMFs for each CCVA algorithm (InSilicoVA, EAVA, ensemble), we use the widely applicable information criterion (WAIC). WAIC is an estimate of a model’s ability to model future data but using only already collected data. Lower WAIC is better. Details of how the WAIC is calculated are provided in Supplemental Section 2 .
We use VA data from the nationally representative Countrywide Mortality Surveillance for Action (COMSA) program in Mozambique to obtain raw (uncalibrated) CSMF estimates. COMSA provides CSMFs at the national and subnational levels for Mozambique based on active surveillance for deaths in 700 clusters of approximately 300 households each, with a total population of 923,031 people. We collected 11,614 VAs on deaths across all age groups that occurred from 2017 to 2021. The majority of deaths that are registered in COMSA occur outside of a hospital and thus are not assigned an official COD. For each registered death in COMSA, a VA is conducted. The dataset used in this analysis includes records for 1,841 deaths of children (1–59 months old) and 818 neonatal deaths from May 2018 to May 2021 from all 11 provinces of Mozambique. The VA questionnaire used for COMSA corresponds to the WHO 2016 VA tool. The forms have been programmed into the Open Data Kit software for data collection on a tablet. In-person interviews are conducted with a respondent determined to have been the child’s usual caregiver, which is most often the mother.
To estimate the misclassification rates of COD predictions for the CCVA algorithms, we use data from the CHAMPS network. CHAMPS is an ongoing comprehensive child mortality surveillance project that performs MITS to inform determination of COD for children (1–59 months), neonates, and stillbirths at sites across several countries, including Mozambique. MITS COD assignments in these age groups have been shown to be accurate (∼75% concordance) when compared with the full diagnostic autopsies. , The CHAMPS data used in this manuscript contain records for 426 child (1–59 months) and 614 neonatal deaths that occurred within the CHAMPS network hospitals in Bangladesh, Ethiopia, Kenya, Mali, Mozambique, Sierra Leone, and South Africa, from July 2017 through December 2020. MITS is only conducted for “disease-related” deaths and not for trauma or accidental deaths. The MITS-COD was determined through review of postmortem biopsy pathology and screening tests for a large array of pathogens, as well as medical history and clinical records, by a panel of physicians (including pediatricians), pathologists, microbiologists, and public health specialists. In CHAMPS, the COD report using MITS provides a full chain of events, initiated by the underlying cause, followed by the morbid or antecedent conditions(s), and finalizing with the immediate cause. For each death in CHAMPS, the VA record was also available in addition to the MITS-COD. Because the CCVA typically only provides the underlying cause of death, to estimate misclassification rates of VA (see “Misclassification bias of CCVA algorithms” below), we pair the “underlying” cause from the MITS-COD with the VA-COD for each of the deaths in the CHAMPS dataset.
To obtain the raw (uncalibrated) estimates of age group–specific CSMFs, we use COD diagnoses from two CCVA algorithms, InSilicoVA and EAVA, for each COMSA child and neonate record. These CCVA algorithms were chosen because their decision-making approaches are fundamentally different. InSilicoVA is a probabilistic (Bayesian) method that assigns a COD to a VA record based on the likelihood (probability) of the reported VA responses (illness, signs, and symptoms) for that record given each COD. InSilicoVA is broadly similar to InterVA, another popular CCVA algorithm, but offers a more statistically principled treatment of the binary (yes/no) and missing VA responses; hence, we used InSilicoVA instead of InterVA. The second CCVA algorithm, EAVA, is not a statistical algorithm: it is based on medical decision-making rules. The approach relies on expert-derived algorithms of VA illness signs and symptoms for each COD and a hierarchy to select the main COD from among all identified comorbidities. EAVA does not use a probability framework, training dataset, or symptom-given-cause matrix like InterVA or InSilicoVA. Instead, it is a deterministic algorithm that produces a single most likely COD for each death. It is, however, driven by the ordering of causes in the hierarchy of all causes of interest. More details on the implementation of the two algorithms to obtain COD are provided in Supplemental Section 3. Once the specific cause has been determined by InSilicoVA and EAVA for each death, causes are grouped into broader cause categories (see "Aggregation of causes into broad categories" below) to be used for raw estimation or calibration. For each neonatal and child (1–59 months) death record in the data, we obtained the top (most probable) COD from InSilicoVA. These are then aggregated to obtain the raw (uncalibrated) InSilicoVA CSMFs simply as the proportion of all VA records assigned to a given cause. The same procedure is repeated with EAVA to obtain the raw EAVA CSMF. More formally, for an age group and a chosen CCVA algorithm, the raw CSMF estimate for a cause $j$ in that age group is given by

$$\text{Raw CSMF for cause } j = \frac{\text{Number of VA records with CCVA-predicted COD as cause } j}{\text{Total number of VA records}} \qquad (1)$$
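As a concrete illustration of Equation (1), the following minimal R sketch tabulates raw CSMFs from a vector of CCVA-predicted causes; the `va_cod` vector and its cause labels are hypothetical stand-ins for the algorithm output, not actual COMSA data.

```r
# Hypothetical top (most probable) CODs, one per VA record, as returned
# by a CCVA algorithm such as InSilicoVA or EAVA after cause aggregation.
va_cod <- c("malaria", "pneumonia", "diarrhea", "malaria",
            "other_infections", "pneumonia", "malaria", "diarrhea")

# Raw (uncalibrated) CSMF: the proportion of VA records assigned to each
# cause, exactly as in Equation (1).
raw_csmf <- prop.table(table(va_cod))
round(raw_csmf, 2)
```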
Misclassification bias of CCVA algorithms. Misclassification occurs when a CCVA algorithm assigns an individual a COD that differs from that individual's reference COD (in this case the MITS-COD). Previous work has shown that using the misclassification rates of a VA algorithm to obtain a calibrated CSMF estimate can greatly improve accuracy over the uncalibrated CSMF estimate. Because the misclassification rates of the CCVA algorithms are not known for COMSA, we use the CHAMPS data to estimate them. For each CHAMPS record we use the MITS-COD paired with the VA-COD, and we estimate the misclassification rates (cause-specific true-positive and false-negative rates) of the CCVA algorithm as described below. For a cause $i$, we calculate the true-positive rate of the CCVA algorithm for that cause as the proportion of CHAMPS deaths with MITS-COD $i$ that are also assigned to COD $i$ by the CCVA algorithm (VA-COD). Similarly, for a pair of causes $i$ and $j$, we calculate the cause pair–specific false-negative rate as the proportion of CHAMPS deaths with MITS-COD $i$ that are assigned to COD $j$ by the CCVA algorithm (i.e., VA-COD is $j$). We collect these true-positive and false-negative rates in a misclassification rate matrix $M$ whose entry $M_{ij}$ in the $i$th row and $j$th column is given by

$$M_{ij} = \frac{\text{Number of CHAMPS cases with MITS-COD cause } i \text{ and CCVA-predicted COD as cause } j}{\text{Total number of CHAMPS cases with MITS-COD cause } i} \qquad (2)$$

The diagonal entries of the misclassification matrix are the cause-specific true-positive rates (sensitivities), and higher values indicate higher accuracy for the CCVA algorithm. The off-diagonal entries of the matrix contain the cause pair–specific false-negative rates, and lower values indicate higher accuracy for the CCVA algorithm. A perfect CCVA algorithm with no misclassification bias would have 1 (100%) on the diagonal and 0 on the off-diagonals of $M$.
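Given paired reference and predicted causes, the matrix in Equation (2) is a row-normalized cross-tabulation. Here is a sketch, with hypothetical `mits_cod` and `va_cod` vectors standing in for the paired CHAMPS records:

```r
# Hypothetical paired causes for a few CHAMPS deaths: the MITS reference
# COD and the CCVA-predicted COD for the same death.
mits_cod <- c("malaria", "malaria", "pneumonia", "diarrhea", "diarrhea", "malaria")
va_cod   <- c("malaria", "pneumonia", "pneumonia", "diarrhea", "pneumonia", "malaria")

# Entry [i, j]: fraction of deaths with MITS-COD i that the CCVA algorithm
# assigned to cause j (Equation 2). Diagonal = cause-specific sensitivities.
M <- prop.table(table(MITS = mits_cod, CCVA = va_cod), margin = 1)
round(M, 2)
```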
Aggregation of causes into broad categories. Estimating the misclassification rates of a CCVA algorithm requires estimating every entry of the misclassification matrix. For $C$ causes, this implies inferring $C^2$ true-positive or false-negative rates (one per cause pair). If we wanted to use the full set of more than 30 causes, we would have to estimate a 30 × 30 misclassification rate matrix (i.e., 900 cause pair–specific misclassification rates). Such a task is impossible with only a few hundred MITS deaths (426 for children 1–59 months, 614 for neonates). Hence, to ensure stable estimation of the misclassification rates, we grouped the original larger set of causes into a smaller set of broad cause categories. For children, we use seven broad causes of death in our study: pneumonia, malaria, diarrhea, severe malnutrition, HIV, other infections, and other causes of death. Other infections in children include meningitis, typhoid fever, and hepatitis. Other causes in children include cancer, injury, and congenital malformation. For neonates, we use five broad causes: congenital malformation, infection, intra-partum related events (IPREs), prematurity, and other. Infection in neonates includes neonatal tetanus, meningitis and encephalitis, diarrhea, pneumonia, and sepsis. The other category for neonates includes causes like injury. These broad causes represent the main causes of death of young children and neonates known from the extensive literature on child mortality in LMICs.
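In code, this aggregation is a simple lookup from specific to broad causes. A sketch for the child categories described above (the specific cause labels are illustrative):

```r
# Lookup from (illustrative) specific causes to the seven broad child categories.
broad_map <- c(meningitis              = "other_infections",
               typhoid_fever           = "other_infections",
               hepatitis               = "other_infections",
               cancer                  = "other",
               injury                  = "other",
               congenital_malformation = "other",
               pneumonia               = "pneumonia",
               malaria                 = "malaria",
               diarrhea                = "diarrhea",
               severe_malnutrition     = "severe_malnutrition",
               hiv                     = "hiv")

specific_cod <- c("meningitis", "malaria", "cancer", "pneumonia")
unname(broad_map[specific_cod])
# "other_infections" "malaria" "other" "pneumonia"
```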
Correcting for misclassification bias using calibrated VA. The misclassification matrix of a CCVA algorithm can be used to correct for its misclassification bias in the raw CSMF estimates. The calibration is essentially a back-solving procedure that adjusts for the CCVA sensitivities. We elucidate this with a simple hypothetical example. Suppose there are only two causes, A and B, and we know that a given CCVA algorithm has sensitivities of 95% and 65% for the two causes, respectively. This knowledge about the sensitivity of the CCVA may be derived from a paired dataset of VA records and a reference COD (like the MITS-COD) from an auxiliary dataset (like the CHAMPS data in this application). Also suppose that, from the unpaired data of VA records only, the uncalibrated CSMFs are 53% for cause A and 47% for cause B. These uncalibrated CSMFs are evidently biased: the sensitivity for cause B is only 65%, so the CCVA mistakenly assigns 100% − 65% = 35% of people who truly die of cause B to cause A, inflating the uncalibrated CSMF for cause A above its true value. We can use these sensitivities to calibrate for the true CSMFs $p_A$ and $p_B = 100\% - p_A$ of causes A and B as follows:

$$53\% = p_A \cdot 95\% + p_B \cdot (100\% - 65\%) \qquad (3)$$
$$47\% = p_A \cdot (100\% - 95\%) + p_B \cdot 65\% \qquad (4)$$

These equations allow us to calibrate (back-solve) for the unknown CSMFs $p_A$ and $p_B$. The calibrated CSMFs are $p_A$ = 30% and $p_B$ = 70%, reflecting the substantial bias in the uncalibrated CSMFs. When more than two causes are being considered, the back-solve is not straightforward, and direct attempts to back-solve the multivariate system of equations may lead to unstable and absurd estimates (estimated cause proportions lying outside 0–100%). Hence, the calibration approach was formalized into a probability model that avoids both problems. The model has two parts, one for each data source: COMSA and CHAMPS. The COMSA part models the raw CSMF as a sum of the calibrated CSMFs weighted by the misclassification rates, as in the equations above. The CHAMPS part estimates these misclassification rates using Equation (2). The two parts are combined in a Bayesian framework that simultaneously estimates both the misclassification matrix and the calibrated CSMFs. Being a Bayesian algorithm, calibratedVA offers both point estimates of the CSMFs and 95% credible intervals, which are used for inference about changes in the CSMF after calibration. The COMSA national sample includes sampling weights to correct for the selection of clusters with probability proportionate to size and for oversampling in four provinces; however, only unweighted CSMFs were used for the analysis and description in this paper. See Supplemental Section 1 for a technical overview of the calibration method. The calibratedVA method is made publicly available as an R package on GitHub. The GitHub repository contains all the code as open access, and a vignette with example scripts for using the software is also publicly available.
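Returning to the two-cause example above, the back-solve is an ordinary linear system; here is a minimal R sketch using the numbers from that example:

```r
# Misclassification matrix: rows = true (reference) cause,
# columns = CCVA-predicted cause.
M <- matrix(c(0.95, 0.05,
              0.35, 0.65),
            nrow = 2, byrow = TRUE,
            dimnames = list(true = c("A", "B"), predicted = c("A", "B")))

raw <- c(A = 0.53, B = 0.47)  # uncalibrated CSMF

# The raw CSMF satisfies raw = t(M) %*% p, so back-solve for the true CSMF p.
p <- solve(t(M), raw)
p  # A = 0.30, B = 0.70
```

As the text notes, this direct back-solve does not generalize safely beyond a couple of causes, which is why the full method embeds the same relationship in a Bayesian model that constrains the CSMFs to be valid proportions.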
Ensemble calibration method. The available CCVA algorithms generally do not agree with each other for a substantial proportion of deaths, and, for a given VA data point, it is challenging to know a priori which CCVA algorithm will be most accurate. Hence, Datta et al. developed an ensemble calibration approach that uses COD predictions from multiple CCVA algorithms. The ensemble method estimates the misclassification rates of each CCVA algorithm separately and then calibrates by back-solving for the CSMF that agrees best with all the data (i.e., the misclassification rates and the raw CSMFs from each of the CCVA algorithms). The ensemble calibration estimates the misclassification rates of the different algorithms and weights the more accurate ones favorably. It has been shown to guard against inadvertent use of a poor-performing CCVA algorithm and therefore performs better than VA calibration using a single CCVA algorithm. In our analysis, in addition to conducting individual VA calibration with each CCVA algorithm to present the respective calibrated CSMFs, we also implement the ensemble calibration by simultaneously using the predicted COD data from both InSilicoVA and EAVA to produce a unified CSMF estimate. As recommended by Datta et al., we present the estimate from the ensemble calibration as our final CSMF estimate for each age group. We compared the calibrated estimate from each CCVA algorithm with the corresponding uncalibrated estimate. For the ensemble method, we compare the calibrated ensemble estimate with the uncalibrated ensemble estimate, which is simply the equally weighted average of the uncalibrated CSMF estimates from the different CCVA algorithms.
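The uncalibrated ensemble estimate mentioned here is just the equal-weight average of the algorithm-specific raw CSMFs; a short sketch with hypothetical CSMF vectors follows (the calibrated ensemble itself requires the full Bayesian model, whose interface is documented in the calibratedVA vignette rather than reproduced here):

```r
# Hypothetical raw CSMFs from two CCVA algorithms over the same broad causes.
insilico_csmf <- c(malaria = 0.17, pneumonia = 0.18, diarrhea = 0.25, other = 0.40)
eava_csmf     <- c(malaria = 0.07, pneumonia = 0.21, diarrhea = 0.19, other = 0.53)

# Uncalibrated ensemble: equally weighted average of the raw CSMFs.
(insilico_csmf + eava_csmf) / 2
```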
Overview of VA calibration pipeline. We provide a summary of the entire VA calibration procedure in . For each VA record in the dataset (in our case, the COMSA VA dataset), the predicted COD is obtained and aggregated, leading to the raw uncalibrated CSMF estimates. This is repeated for each CCVA algorithm considered (InSilicoVA and EAVA in this analysis), and the resulting CSMFs are averaged to obtain the uncalibrated ensemble estimate. From the CHAMPS data of paired VA-COD and MITS-COD, we obtain the misclassification rates for each CCVA algorithm. We feed both the uncalibrated CSMFs and the misclassification rates for both algorithms into the VA calibration pipeline to obtain the ensemble calibrated estimate. To compare the results from the calibrated models with the uncalibrated CSMFs for each CCVA algorithm (InSilicoVA, EAVA, ensemble), we use the widely applicable information criterion (WAIC). WAIC estimates a model's ability to predict future data using only the data already collected; lower WAIC is better. Details of how the WAIC is calculated are provided in Supplemental Section 2.
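For orientation, WAIC can be computed from a matrix of pointwise posterior log-likelihoods using the standard estimator; the sketch below uses toy values and is not necessarily the exact implementation in Supplemental Section 2.

```r
# ll: S x n matrix of log-likelihoods, one row per posterior draw and one
# column per observation (filled here with toy values).
set.seed(1)
ll <- matrix(rnorm(2000, mean = -1, sd = 0.1), nrow = 100)

lppd   <- sum(log(colMeans(exp(ll))))  # log pointwise predictive density
p_waic <- sum(apply(ll, 2, var))       # effective number of parameters
waic   <- -2 * (lppd - p_waic)         # lower WAIC indicates better fit
waic
```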
Child (1–59 months) results. Raw VA summary statistics. All data analyses were conducted, and figures produced, using R software. We present the summary (predicted counts and distributions) of the different cause categories among the COMSA VA data using both InSilicoVA and EAVA. Among the 1,841 child deaths in the dataset, the analysis excludes VA records for 252 child deaths for which the EAVA diagnoses were inconclusive. presents these numbers for the remaining 1,589 VA records for under-five child deaths. According to InSilicoVA, diarrhea and other infections each contributed nearly one-quarter of the deaths (25%); malaria and pneumonia were also significant causes, each contributing > 15% of deaths; and severe malnutrition, HIV, and other each contributed < 10% of the deaths. The EAVA distributions differed somewhat from the InSilicoVA results. According to EAVA, > 30% of the COMSA child (1–59 months) deaths were attributed to other infections, which stood out as the single largest cause category. Pneumonia (21.5%) and diarrhea (19.4%) were also major categories, whereas deaths attributed to malaria, severe malnutrition, HIV, and other were each < 10%.
CHAMPS MITS data and VA misclassification rate matrices. To evaluate the misclassification rates of the two CCVA algorithms, we used the paired dataset of MITS-COD and VA-COD for child (1–59 months) death records from CHAMPS. Thirty-two child deaths in the CHAMPS/MITS study had inconclusive EAVA diagnoses and were excluded from the analysis. The misclassification rates of the CCVA algorithms on the MITS data are presented in . The diagonal entries of the misclassification matrices are the cause-specific true-positive rates (sensitivities) of the VA-COD agreeing with the MITS-COD, and higher values indicate higher accuracy for the CCVA algorithm. For example, the entry in the first row, first column of the InSilicoVA misclassification matrix in (left) is 44%: of the deaths for which the MITS-COD was malaria, the VA-COD was also malaria for 44% of them. The off-diagonal entries of the matrices contain the cause pair–specific false-negative rates (i.e., the fraction of cases with a specific MITS-COD that are assigned to a different COD by the CCVA algorithm); lower values of these off-diagonal entries indicate higher accuracy. As another example, the entry in the first row, second column of the InSilicoVA misclassification matrix in (left) is 23%, indicating that 23% of the child deaths assigned to malaria by MITS were assigned to pneumonia by InSilicoVA. Both CCVA algorithms have very large misclassification rates, with cause-specific sensitivities ranging from very low (∼10% for severe malnutrition for both algorithms) to moderate (∼60% for the MITS diarrhea deaths for InSilicoVA and the MITS HIV deaths for EAVA). Many of the cause pair–specific false-negative rates are > 20% for either algorithm.
Calibrated CSMFs. presents the results of the VA calibration. We present both the uncalibrated and calibrated CSMFs, along with the 95% interval estimate for the calibrated CSMF. The exact CSMFs corresponding to these plots are provided in Supplemental Table 1. In addition to results for the individual CCVA algorithms (InSilicoVA and EAVA), we provide the results from the ensemble calibration, which gives the final CSMF estimate. For InSilicoVA ( , top left), the main changes after calibration are an increase in the CSMF for malaria and a decrease in the CSMF for pneumonia. We observe from that a large proportion of MITS-COD malaria deaths are falsely classified by InSilicoVA as pneumonia or other infections. Because these two causes combined account for 40% of the COMSA deaths according to InSilicoVA ( ), the calibration adjusts for this misclassification, thereby increasing the proportion of malaria deaths. For pneumonia, many of the deaths attributed by MITS to malaria, diarrhea, and other infections are falsely classified as pneumonia, whereas among the MITS pneumonia deaths the only substantial misclassification is the diagnosis of diarrhea for 20% of deaths. These results imply that overall there are more false positives than false negatives for pneumonia, and hence the CSMF decreases after calibration. For EAVA, after calibration, the CSMFs for malaria and other infections increase, and those of pneumonia and diarrhea decrease ( , top right). The reason for the change in the EAVA CSMF for malaria is the same as for InSilicoVA: substantial undercounting of true malaria deaths by EAVA. For pneumonia, a large percentage of MITS pneumonia cases are misclassified as HIV; because HIV is a relatively small category ( ), this implies undercounting only a small absolute number of pneumonia cases. On the other hand, a large percentage of other-infection deaths is misclassified by EAVA as pneumonia; because other infections contribute a high percentage of deaths, this implies substantial overcounting of pneumonia cases. The net effect is overcounting for pneumonia, and hence the calibrated CSMF for pneumonia is considerably lower than the uncalibrated CSMF. Additionally, for EAVA we see a large increase in the CSMF for other infections and decreases in the CSMFs for diarrhea and HIV. A high percentage (28%) of deaths with MITS-COD other infections were misclassified by EAVA as pneumonia, analogous to the misclassification rate for InSilicoVA. However, compared with InSilicoVA, EAVA has lower rates of deaths from other MITS causes being misclassified as other infections (especially for MITS severe malnutrition deaths). Hence, there is less overcounting and similar undercounting for other infections for EAVA compared with InSilicoVA, and after calibration the EAVA CSMF for "other infections" increases substantially. The ensemble estimates are presented in (bottom left). The uncalibrated ensemble CSMF is simply the equally weighted average of the uncalibrated InSilicoVA and EAVA CSMFs. The ensemble calibrated CSMF lies between the calibrated CSMFs for InSilicoVA and EAVA, but the weights are cause specific and data driven. For malaria, the CSMF from the ensemble calibration agrees more with the CSMF from calibrated InSilicoVA, which had much more certainty than the calibrated EAVA malaria CSMF with its wide credible interval.
The final CSMF estimate from the ensemble is presented in the pie chart of (bottom right) and assigns 36% to other infections, 27% to malaria, 19% to diarrhea, and < 10% to each of the remaining causes. The 95% Bayesian credible intervals for the calibrated CSMF help assess for which set of causes the calibrated CSMF is substantially different from the uncalibrated ones. We see from Supplemental Table 1 that for the ensemble method, the 95% interval for calibrated CSMF for malaria (19–33%) lies above the uncalibrated estimate (14%), showing an increase in the CSMF after calibration. The interval for calibrated CSMF for pneumonia (5–12%) lies below the uncalibrated estimate (19%), showing a significant decrease. The uncalibrated CSMF for “other infections” is at the lower end of the interval for calibrated CSMF for this category (27–46%), showing an increase in CSMF for “other infections” after calibration. In , we evaluate the uncalibrated and calibrated CSMF using WAIC for each of the three methods: InSilicoVA, EAVA, and ensemble (see Supplemental Section 2 for details of WAIC). The WAIC for the calibrated CSMF is consistently lower, offering evidence that the uncalibrated CSMF is incompatible with the observed misclassification rates and that adjustment via calibration substantially improves model fit for the combined COMSA and CHAMPS data.
Neonatal results. Raw VA summary statistics. Among the 818 neonatal deaths, the analysis excludes VA records for 186 deaths for which the EAVA diagnoses were inconclusive. presents the VA-COD distributions for the remaining 632 COMSA neonate VA records. Both InSilicoVA and EAVA attribute most deaths (∼50%) to infection, with IPREs and prematurity being the two other major categories, each attributed ∼20–25% of the deaths. Both algorithms assigned < 5% of deaths to either congenital malformation or other.
CHAMPS MITS data and VA misclassification rate matrices. The misclassification rates of the two CCVA algorithms for neonates were calculated based on CHAMPS/MITS data for neonatal deaths; 79 deaths were excluded from the analysis due to inconclusive EAVA diagnoses. Additionally, all the neonatal deaths from the CHAMPS site in South Africa were excluded due to a high proportion of nosocomial infections documented at the newborn intensive care unit: the signs and symptoms of the presenting illness, reported by the parents in the VA, may not correspond well to an illness caused by an infection acquired after hospitalization. presents the misclassification rates. For InSilicoVA, prematurity has the highest sensitivity, with 85% of the MITS prematurity deaths correctly diagnosed by InSilicoVA. Infection and IPREs had moderate sensitivity (∼50%), whereas sensitivities for congenital malformation and other were low. There were also a few large misclassification rates, the most prominent being InSilicoVA falsely diagnosing prematurity for ∼20–30% of deaths for each of the other four MITS causes. The misclassification rates for EAVA were broadly similar. The major difference was that the sensitivity of EAVA for diagnosing prematurity was lower (63%). Also, 52% of MITS IPRE deaths were misdiagnosed by EAVA as infection.
Calibrated CSMFs. presents the calibrated CSMF point estimates and 95% credible intervals along with the uncalibrated CSMFs for neonates. The impacts of calibration on the CSMFs are similar for all three methods: InSilicoVA, EAVA, and ensemble. The main changes after calibration are an increase in the CSMF for infection and a large decrease in the CSMF for prematurity. This is expected because a large proportion of infection deaths are misclassified as prematurity by both algorithms ( ). For EAVA, the increase in the CSMF for infection is more moderate than for InSilicoVA: a considerable proportion of true IPRE and prematurity deaths are misdiagnosed by EAVA as infection, so the calibration adjusts for this, and the gain in the infection CSMF from accounting for the misclassification of infection deaths as prematurity is partly offset by this adjustment. The uncalibrated ensemble CSMF is simply the average of the uncalibrated InSilicoVA and EAVA CSMFs. The final estimate of the CSMFs from the calibrated ensemble is presented in the bottom right of . The calibrated ensemble attributes 62% of neonatal deaths to infection, 22% to IPREs, 8% to prematurity, and < 5% to each of congenital malformation and other. The exact numbers are provided in Supplemental Table 2. The 95% Bayesian credible intervals for the calibrated CSMF help assess for which causes the calibrated CSMF differs substantially from the uncalibrated one. For the ensemble method, for infections, the 95% interval for the calibrated CSMF (54–69%) lies above the uncalibrated CSMF for infection (49%), showing an increase in the CSMF after calibration (Supplemental Table 2). For prematurity, the 95% interval for the calibrated CSMF (6–12%) lies below the uncalibrated CSMF (23%), showing a considerable decrease. For the other causes, the 95% interval for the calibrated CSMF covers the uncalibrated CSMF. We compare the performance of the uncalibrated and calibrated CSMFs for neonates using WAIC in . As in the child results, the WAIC for the calibrated CSMFs is consistently and substantially better (lower) than for the uncalibrated analogs. This demonstrates that the uncalibrated CSMFs do not provide an accurate description of the combined COMSA and CHAMPS data and that calibration is necessary to adjust for the large misclassification rates of the CCVA algorithms.
This paper outlines the complete statistical workflow for using a limited dataset of paired VA records and a reference-standard COD (in this case including results of MITS) to calibrate raw CSMF estimates obtained from CCVA algorithms applied to abundant VA data from a nationally representative survey (in this case COMSA). We show that for neonates and children aged 1–59 months, and for two choices of CCVA algorithm, the COD predictions from CCVA have large misclassification rates. Naive estimates of the CSMF that do not account for this misclassification will be biased, and calibration is necessary to mitigate the bias. For child deaths in Mozambique, the calibration results in higher estimated mortality from malaria and other infections and lower estimated mortality from pneumonia. For neonates, the calibration results in an increased CSMF for infection and a decreased CSMF for prematurity. We provide insight into why the calibration produced these changes to the CSMF based on the misclassification rate matrices. In general, however, a simple explanation for the changes to the CSMF after calibration may not always be possible, because the calibrated estimate reflects the net effect of multiple different misclassification rates. This underscores the need for clear communication between statistical practitioners and government officials and stakeholders to convey the general principles of the calibration model, which are intuitive and interpretable. The entire analysis in this manuscript used the top predicted COD from InSilicoVA. In practice, many probabilistic algorithms like InSilicoVA offer not just the most likely COD but probability scores for each cause to be the COD for an individual. Reducing such rich probabilistic output to a single top cause wastes valuable information. Additionally, the analysis excluded some deaths because of inconclusive EAVA diagnoses; ideally, these would be imputed using the population proportion for each cause. Such imputed CODs would also constitute multi-cause COD output and cannot be accommodated in the single-cause format. Finally, MITS offers both an immediate and an underlying COD, and for many deaths these differ. The current analysis used only the underlying COD. An approach that accommodates multi-cause MITS output would be able to use information from both the underlying and the immediate COD. Fiksel et al. extended the VA calibration method to accommodate multi-cause output for both the VA and the reference COD, based on a generalized definition of misclassification rates for such multi-output data. Subsequent work will apply this approach to calibration of CSMFs based on multi-cause COMSA-VA and CHAMPS-MITS data. To use multi-cause calibration for EAVA, and also for the ensemble that uses EAVA, one innovation will be a modified EAVA algorithm that offers multi-cause output, as opposed to the EAVA algorithm used here, which offers only a single COD. The calibration does not depend on the cause-specific composition of the MITS deaths, which is not representative of the population COD composition; only the misclassification rates of VA for a given MITS cause are estimated from the CHAMPS data and used for calibration of the COMSA data. There is a need to increase the number of community MITS deaths for better representation of the VA misclassification rates.
Hospital deaths, and especially NICU deaths, may even exhibit signs and symptoms not seen in community deaths due to effects of treatment, nosocomial infections, prolonged life, etc., and VA responses for these deaths may differ from those for community deaths because of their exposure to health care and medical information. Also, the pooled CHAMPS data across all sites are used to improve the sample size for estimation of the misclassification rates. This increased sample size is critical to improve the precision of the analysis but may come with a loss of representativeness of the estimated misclassification rates for Mozambique. The impact of this tradeoff on the performance of the calibration needs to be quantitatively assessed. In the future, if more data are available on the VA-MITS pair for community deaths in Mozambique, the misclassification rates may be estimated solely using Mozambique CHAMPS data and may be more representative of these rates in the population. Due to the limited sample size of the CHAMPS data, the calibration aggregated causes to a smaller set of broad cause categories (see “Aggregation of causes into broad categories” above) and produced calibrated CSMF at this broad resolution of causes. In the future, when more MITS data are available to estimate misclassification rates, these lists could be expanded to include more specific causes encompassed by these broad categories. For children, this would add injury and three neonatal causes that can still lead to death in the first year of life (i.e., prematurity, IPREs, and congenital malformation) to the current list. For neonates, the infection category would be replaced by pneumonia, sepsis, meningitis, and other infections. With even more data, the child list could be further expanded to include causes such as specific injury types, childhood cancer, hemorrhagic fever, and other major conditions. The neonatal list could also include tetanus and injuries. Despite these important unsolved challenges for producing calibrated CSMF estimates, given the large misclassification rates we observe for both VA algorithms, our method produces more informed CSMF estimates than simply aggregating VA algorithm predictions. The methodology adopted for calibration of COMSA CSMF using MITS offers a general template for calibration of VA-based CSMF in other studies. The calibratedVA software only requires as input the VA-COD for the unpaired data and both the VA-COD and the COD based on more extensive information (in our case, MITS-COD) for the paired data. The software works with any number of different CCVA algorithms (can be more than two) and with any type of reference COD in the paired data (e.g., a different COD based on the PHMRC data was used for VA calibration in Datta et al. ). COD information is fundamental to prioritizing and planning disease control strategies and health services. VA is the major data source for COD information for 90% of child deaths globally and for all high-mortality countries. The use of calibration for nationally representative VA data can make these estimates more accurately reflect the true causes in a national population of children, better guiding national and international responses to reduce child deaths. As more countries begin implementing VAs within national mortality surveillance systems, there will be a need to obtain data on a smaller number of deaths with both VA-COD and some reference COD (like MITS-COD) based on more comprehensive information.
This will inform the misclassification rates of VA for that country and, in turn, improve CSMF estimates via calibration. Projects such as the global symptom-cause archive may help to establish misclassification rates for many algorithms and regions of the world to produce accurate COD information for low- and middle-income countries.
Financial support: The COMSA Mozambique project is funded by the Bill & Melinda Gates Foundation (Grant no. OPP1163221).
A Qualitative Assessment of Community Acceptability and Its Determinants in the Implementation of Minimally Invasive Tissue Sampling in Children in Quelimane City, Mozambique
Despite reductions over the past two decades, child mortality remains unacceptably high, particularly in low-income settings in sub-Saharan Africa, where 1 in 13 children dies before age 5, a rate 16 times higher than the average of 1 in 199 in high-income countries. In Mozambique, the child mortality rate fell gradually from 190.8 deaths per 1,000 live births in 1998 to 73.2 deaths per 1,000 live births in 2018. An accurate understanding of global child mortality and health is severely limited by inadequate methods and measurements. Less than 20% of the 192 countries in the world have high-quality death registry data, and more than one-third do not have any specific mortality registry data. For this reason, tracking the mortality of under-five children is at the forefront of public health priorities. In low- and middle-income countries (LMICs), including Mozambique, children often die without a documented medical history and are frequently buried before a cause of death (COD) determination has been conducted. , The Countrywide Mortality Surveillance for Action (COMSA) project is an initiative that aims to determine the causes of under-five mortality, including stillbirths, in Mozambique by measuring and monitoring mortality and COD utilizing verbal autopsies, a methodology that, despite some limitations, is widely used in areas where death certification remains inaccurate. , In Zambézia province (central Mozambique), the surveillance activities include the additional implementation of minimally invasive tissue sampling (MITS), which is considered a valuable tool for COD investigation and for generating data to prioritize research and prevention strategies aimed at reducing child deaths in under-resourced settings. , , The MITS procedure consists of a series of post-mortem punctures with fine biopsy needles to obtain tissue samples and body fluids from a corpse within the first hours after death, which are then submitted for a thorough histopathological and microbiological investigation of the underlying COD. , This technique is being used in several contexts in LMICs and appears promising in terms of acceptability because it is considered a fast, simple, and user-friendly technique that can be performed by minimally trained staff, although some factors are considered important barriers that may put its acceptability at risk. – Existing evidence on the anticipated and experienced acceptability of the MITS procedure thus demonstrates the importance of recognizing the cultural mores and practices of the context where MITS is implemented. , Death and related post-mortem procedures are embedded in complex social, cultural, and religious environments. , In the case of Zambézia province (central Mozambique), the existence of myths, rumors, and negative perceptions of certain public health initiatives, as was the case with cholera control, could challenge the success of the implementation of MITS in this setting. Acknowledging this risk, it is critical to understand local attitudes and perceptions in relation to death, as well as the potential barriers and facilitators to the uptake of interventions involving post-mortem procedures, which could determine the acceptability of the implementation of MITS in Quelimane.
Study design. This was a qualitative study that was conducted as part of the formative research to inform the COMSA program for investigating COD in Mozambique and other LMICs. COMSA is a sample registration system with community surveillance assistants who prospectively report birth, death, and COD data from a representative sample of communities in the country. Following an ethnographic approach, the study sought to understand how people in this region form and sustain their experiences and customs in relation to death. To do so, we explored this phenomenon from the socio-cultural perspective to understand the meaning and local practices related to health, disease, and death, with a view to later incorporating a phenomenological approach to capture individuals’ understandings of the meanings of death of children embedded in their own lived experiences. , Study site and population. This study was conducted in Quelimane city, the capital of Zambézia province, central Mozambique. Quelimane is a district covering an area of 117 km², with 193,343 inhabitants. It is located by the Bons Sinais River and about 20 km from the Indian Ocean ( ). The main economic activities are fishing and agriculture. The population is mostly of Chuabo ethnicity, the dominant ethnic group, and Christianity is the main religion (60.2%), although a considerable part of the population (18.9%) is Muslim. The district, which administers the powers of the central government, incorporates a municipality with an elected local government with five Urban Administrative Posts. These administrative posts are designated by numbers, and most of them are rural areas where the predominant occupations are artisanal fishing and agriculture. Quelimane hosts 21 health facilities, including 17 health centers, two peripheral health posts, a general hospital, and a central hospital (tertiary level). The Central Hospital of Quelimane (HCQ) was also included as the point for selection of some key informants. Target groups and sampling. This study targeted individuals with relevant roles and experiences related to the study questions (i.e., direct experience with caring for dead children’s bodies or with events related to the death of children and/or stillbirths). Thus, study participants consisted of community leaders (community representatives with central or local government legitimacy and/or influence and power over communities; such power can be political, either by election or appointment (e.g., neighborhood secretaries, heads of blocks), or traditional by lineage (e.g., regulators)); traditional birth attendants (who care for pregnant women, support them when they deliver outside health facilities, and care for newborns); nharrubes (traditional authorities who are in charge of washing the bodies and lead prayers before the funeral); and healthcare providers, such as medical doctors (pediatricians, neonatologists, and obstetricians), nurses, and hospital support teams. These target groups were purposively selected to include diverse perspectives on the experience of caring for dead children’s bodies. During data collection, we used snowball sampling to identify participants from each group. Snowball sampling was continued until the study team concluded that saturation had been reached and there was no need to interview more participants. Data collection. We conducted a combination of qualitative data collection strategies.
To better understand the local context, we carried out transect walks around the community accompanied by observations and informal conversations while identifying potential participants. During these observations and informal conversations, social science assistants generated field notes on participants’ behavior and reactions when discussing issues related to child mortality in the community. Focus group discussions (FGDs) were carried out with community leaders and traditional birth attendants (TBAs), and semi-structured interviews (SSIs) were conducted with nharrubes and with health professionals directly linked to the HCQ in the pediatric, obstetrics, and maternity services. Data were collected by four social science assistants, specifically trained for the purposes of the study, and one social science researcher, who also acted as the overall study coordinator. The study was overseen by a senior social scientist based remotely. Each data collection strategy followed its own topic guide or script, consisting mostly of open-ended questions about the death of children and procedures for the care of a deceased child’s body. These general questions served to unravel potential alignments and tensions between current local practices and the procedures proposed by the mortality surveillance. Specifically, they sought to identify the perceived relevance and interest in knowing the COD, perceptions about the objectives of the MITS, and the role of community leaders and heads of families in caring for the bodies of deceased children. The SSIs focused on individual perspectives based on participants’ own experiences, whereas the FGDs explored the perceptions and experiences of the community at large, also taking into account local norms and values. The FGDs were composed of a minimum of 8 and a maximum of 12 participants belonging to the same target group. Data collection was carried out in Portuguese or in the local language (Chuabo), depending on participants’ preference. Semi-structured interviews and FGDs lasted approximately 60–90 minutes; informal conversations lasted approximately 20–30 minutes and took place during walks around the communities while identifying potential study participants. Data analysis. Semi-structured interviews and FGDs were digitally recorded and fully transcribed by three experienced transcribers who were fluent in Chuabo and received specialized training in transcription techniques based on the study’s standards. Interviews conducted in Chuabo were simultaneously translated into Portuguese during transcription. The notes from informal conversations and observations were digitized by social science assistants after their return from fieldwork and were later triangulated with the SSI and FGD data during the analysis. Data analysis was performed in two stages. The first stage was the pre-analysis of the field notes, carried out while the interviews and focus group discussions were still being transcribed verbatim, which led to the development of analytic categories and to the definition of the theoretical saturation of data. The second stage was the coding and analysis of interview transcripts. In this phase, new categories were added, removed, and changed. This process was done using NVIVO12 ® software (QSR International, Inc., Melbourne, Australia), which supported not only coding but also extraction of coded text, organization of categories and subcategories in a codebook, and establishment of relationships between ideas.
Codifications were subjected to content analysis, then summarized and tabulated into a matrix format in MS Excel for framework and content analysis. The coding of the transcripts in NVivo was conducted independently by two coders hired for the purpose and a junior investigator. The two sets of codes were compared to ensure consistency. Ethical aspects. The study followed a protocol that received ethical approval (CIBS-CISM/013/2018) from the Institutional Committee on Bioethics in Health of the Manhiça Health Research Center (CIBS-CISM). Administrative authorizations were provided by the Ministry of Health, the Zambézia Provincial Health Directorate, and the Municipal Authority of Quelimane City. Authorization to engage with community members and perform the study was also first requested from neighborhood secretaries and the heads of each administrative post included in the study.
Socio-demographic data. In total, 113 people participated in the study. Twenty-six SSIs were conducted with 16 healthcare providers and 10 nharrubes, and 11 FGDs were organized involving 43 community leaders and 44 TBAs from the different administrative posts included in the study. shows the distribution of the participants per target group and data collection tool. Socio-demographic characteristics of the 26 SSI participants are summarized in . The median age was 53 years (interquartile range [IQR]: 35–62), 62% were female, and 23% of respondents had no formal education. Among the respondents, 35% were farmers, and the vast majority of nharrubes were also traditional healers. describes the characteristics of FGD participants. The median age was 48 years (IQR: 34–57). Most FGD participants were female (74%), 51% of them were TBAs, 18% did not have a formal education, and 32% had a primary education level. Only 2% of the participants had attended higher education. Experiences related to the death of a child. In general, child mortality was characterized as a sad, striking, and unexpected event. Numerous participants from both SSIs and FGDs mentioned that child deaths are treated in a special way and that they are accompanied by specific itineraries and ceremonies, especially when it comes to stillbirths. When asked about specific issues related to stillbirths, all participants emphasized that a stillborn child is not considered to have existed as a human being. Participants recounted that stillborn children are perceived to be children who were “returned to God.” Therefore, some nharrubes explained that the ceremonies for a stillborn child cannot follow the same rituals as those for the death of an adult because the stillborn child is not considered a human being who has lived. For example, one man stated that “a child who was born dead, did not have a life like us, so it cannot be buried normally as we do with an adult person. We don’t need to use a box, we just bury it” (nharrube, male, Namuinho, SSI). Some participants reported that local norms dictate that the body of a stillborn child cannot be taken to the family home so as to not infect the mother with the evil spirits of death. Thus, small and restricted ceremonies, different from the regular ceremonies of a deceased adult, are held in the home of family members other than the parents of the stillborn. When a child is born dead [they] cannot enter in the family home, this is because, you cannot join a dead child with their mother without undergoing a purification treatment so that she [the mother] overcomes the loss, gets pregnant and in the future gives birth to another healthy baby. (traditional birth attendant, female, field note) The above quote illustrates the essence of the link between the meaning of a stillbirth and the perceived consequences for the mother and her future pregnancies, because the requirements to contain such consequences involve the separation between the dead body and the newly purified body of the mother. According to participants, the mother’s purification consists, first of all, of ensuring that she does not see the baby’s body being buried and later, with prayer and bathing, using specific herbs that serve to separate the living from the dead. This ritual serves to ensure that the baby has a blessed return and that the mother is purified so that she can re-conceive and give birth to a healthy child.
Experiences with the death of children were captured from the perspective of events occurring both within and outside health facilities. After the death of a child, regardless of the place of death, it is common for some community members and leaders to get involved. Participants described it as an event that is easily disseminated within the community, and neighbors mobilize in solidarity to support funeral ceremonies, which includes contributing to the expenses of transporting the body, feeding the family and other participants during funeral ceremonies, and buying a coffin. When a child dies everyone gets that sad feeling, then everyone goes to the neighbor’s house to provide solidarity and emotional support. (community leader, male, FGD) Even if the death occurs at the health facility, the child’s parents or caregivers are quickly notified; they in turn pass on the information to one of the community authorities, such as the neighborhood secretary, religious leaders, nharrubes, or TBAs, to lead or support the family in caring for the deceased body (depending on the norms of each family) before the funeral ceremonies start. Participants reported that some stillborn infants, particularly those belonging to families experiencing financial constraints, are left under the responsibility of the health facility, which in turn mobilizes resources to help dispose of the body in a common ditch because parents are not able to transfer the body back to their community. In these cases, the health facility requests the consent of either parent to conduct the disposal along with the other stillbirth cases. Between 1 and 3 days after the death of a person, whether child or adult, the funeral is held. The timing varies depending on the religion of the family of the deceased. Participants mentioned that it is common among Christian families to perform the funeral 2 or 3 days after death, whereas in Muslim families the funeral ceremonies must ideally take place before the end of the first 24 hours. For stillborn infants, regardless of religion, the body is buried within the first 24 hours. Also regardless of religion, in cases of death the presence of the nharrube is crucial. Some participants mentioned that in some families the ceremonies are guided by the nharrube and in others by a religious leader. In many cases a religious leader can also be a nharrube. In preparation for the funeral, several rituals take place: washing, purification, and blessing. For the washing, the nharrube is authorized to choose a “brave” family member to assist in the process. Purification and blessing take place after washing the body and are performed by a religious leader before burial. After washing, purification, and blessing, the body can no longer be touched by “strangers” (i.e., those who are not direct relatives). Additionally, nharrubes and community leaders frequently reported that after the funeral and purification ceremonies are over, children’s parents seek out healers to help them learn the causes of their child’s death. This action by the deceased child’s relatives to find out the COD is part of the itinerary to be followed during the time of mourning in some families and serves to end the tensions and accusations of witchcraft among the family members. Potential motivators for MITS implementation.
The factors analyzed in this study comprise features already in place at the individual or community level, mostly related to the aforementioned social and cultural norms surrounding the death of a child, which would act as drivers positively influencing the acceptance of MITS, namely parents’ desire to know the causes of child deaths and the possibility of acquitting the elderly of witchcraft accusations. Willingness to know the causes of child deaths. When asked about what would make MITS acceptable, the most frequently cited motivation was the desire participants expressed to know the causes of child deaths. This was repeatedly expressed through FGDs with community leaders and TBAs. Study participants reported that when a child dies, family and community members often do not know what the causes were, particularly if it is a sudden death. MITS, which participants considered an innovation, was seen as potentially helpful in providing clear answers about the COD to the parents of the deceased child: “…it is a good [thing] because from these analyses we’ll know what is killing our children, because here many children are dying without knowing what is the disease that killed them” (nharrube, female, SSI). In addition, and as mentioned earlier, TBAs alluded to the parents’ practice of seeking answers as to what caused the child’s death through consultation with traditional healers. In their opinion, MITS would address this desire of parents and family members by providing reliable information regarding the cause of the child’s death. Knowing the causes of a child’s death through MITS was also cited by TBAs as an initiative that can help the parents of the deceased child treat and prevent diseases that they themselves could have and that would be diagnosed through the MITS procedure. Healthcare providers were also motivated by the desire to know the real causes of the deaths of hospitalized children. Their particular concern relates to children who are admitted to hospital in a critical state and whom they cannot save or diagnose in time. Thus, they value MITS because they consider that it could help discover the COD in these cases. For us it will be a good thing because it will be a complementary service to help us understand the causes of those sudden deaths. (healthcare provider, male or female, SSI) Intention to acquit the elderly of witchcraft accusations. Another potential motivation for accepting MITS was the intention to acquit the elderly of witchcraft accusations when a child dies. Participants expressed that child deaths are often attributed to witchcraft and that the elderly are perceived to bewitch children and kill them to increase their own longevity. Participants reported that such charges are made against any healthy-looking elderly individual, especially those with experience as healers. With the results of these analyses at least they will stop saying that we the elderly have bewitched the children to die. (community leader, male, FGD) Paradoxically, despite the accusations of witchcraft resulting in children’s deaths, the elderly are recognized to play a significant role in the preservation of children’s lives and well-being. Established nharrubes, healers, and TBAs, who are mostly elderly people, do care for sick children as part of their role in the community in addition to caring for the bodies of the deceased children.
Some elders mentioned that, with the implementation of MITS, the diagnosis of COD for children will be clear and their role as knowledgeable elders in the community will be credible. Expected programmatic influencing factors. When asked about which factors would facilitate the acceptance of MITS, participants alluded to some requisites that would have to be built into the intervention, namely: the facilitation of transportation of bodies to the community, the dissemination of the intervention through community radio and health talks, and the involvement of leaders in dissemination activities. Provision of means to transport bodies. Healthcare providers stated that the availability of transport could contribute to the acceptance of MITS. Because many families are unable to transport the body of a dead child from the hospital to their home, as evidenced by the earlier statements referring to parents not claiming the bodies of their children if they die at the health facilities, healthcare providers believe that providing means to transport bodies back to the communities would establish confidence and motivation to accept MITS if a child’s death occurs. Of the patients we have received here, many are unable to rent a car to carry the coffin, so if the program could inform that it will help with transport if necessary it would be good, the population may have a motivation to accept it. (healthcare provider, female, SSI) Healthcare providers mentioned that providing transportation to transfer the bodies of deceased children from the health facility to their homes had acted as an additional motivation for potential participants to accept MITS. This act can be seen as support for some families in need. Dissemination of information through trusted channels. According to the participants, the dissemination of information through trusted channels, such as community associations or community health committees, and the adoption of appropriate community awareness activities will boost the acceptance of MITS by community members. For the community to accept, they must know that it is MITS and that it is being implemented. Therefore, you must make a lot of publicity in several campaigns and health lectures. (community leader, male, FGD) Additionally, community leaders mentioned that the information and campaigns disseminated by the community radio stations are commonly trusted and adhered to because most members of the community listen to the radio. Thus, if MITS is broadcast on local radio, community members will learn about the initiative, understand the purpose of the program, and easily accept that it is carried out on their children. If you use the radios you will gain a lot. This city is big, if you go from house to house, you will not be able to finish it, the best thing is to use the radio for everyone to find out about MITS. (community leader, male, FGD) Involvement of leaders in the dissemination of project information. Most participants believe that engaging all influential community leaders in the dissemination process can facilitate MITS acceptance. The influence that leaders have over members of the community allows their messages and recommendations to be easily followed and adhered to. Quelimane is under two overlapping powers; thus, many areas are influenced by central government–appointed leaders and, at the same time, by a local government comprised of elected leaders, with both governing the district and municipal territories simultaneously.
For this reason, participants mentioned the need to involve all influential community leaders in the dissemination of information related to the implementation of MITS, regardless of their political orientation. You must involve the leaders, these secretaries, and the regulators [community political leaders] to help as activists in the community. (community leader, male, FGD) A community leader explained that MITS will only be accepted if people feel safe while taking part in the intervention and that, according to him, this sense of safety can only be successfully conveyed by community leaders. Furthermore, there is a belief that if something goes wrong in communicating the message or misunderstandings arise, community leaders can help solve and clarify the problem in the community, thereby guaranteeing the re-establishment and maintenance of the desired sense of safety that community members feel when their community leaders are involved in the process. The people here will not accept any new project if they do not feel safe, for that, we are the persons who guarantee safety in them because we are influential here in the community. (community leader, male, FGD) Moreover, community leaders’ expression of reluctance regarding the implementation of MITS without their active involvement in the dissemination activities is grounded in the belief that, if they are not involved, other people will be called upon to fulfill this role in exchange for financial compensation. One community leader (male, FGD) explained: “Our population here will accept the message if it is disseminated by a healthcare provider or a community leader, but if it is an outside person who will make money we will not accept it because we as representatives of the community can divulge that information.” As described below, it is notable that the leaders’ requirements are related to possible benefits that they can receive in return for supporting the project in the dissemination of MITS in the community, but also to a sense of ownership of an intervention pertaining to their own community. Incentives for local leaders. Some of the above notions of leaders’ involvement were expressed by community leaders as a responsibility they held in making decisions on behalf of the community, to the extent of paralleling it to a job: “If we work we can make the parents of children in the community trust and accept MITS” (community leader, male, FGD). For them, the authority that has been entrusted to them must be reinforced at a time when communities need a representative to help them understand and trust the new interventions. Some leaders were more specific, stating that their involvement and support for the COMSA intervention should be rewarded with cash compensation or incentivized with the provision of support material, such as bicycles (for mobility), t-shirts (for identification and credibility in their genuine link to the intervention), and airtime for their mobile phones (to communicate with the study team and the members of the wider community). Other leaders suggested direct collaboration through the hiring of community members to work as COMSA activists in the community, implying some extra income for them. Potential barriers to MITS implementation. The main factors that may constitute barriers to the implementation of MITS were possible disagreements with Islamic religious practices, skepticism regarding the objectives of the intervention, and negative past experiences with health interventions.
Disagreement with Islamic religious practices. The norms surrounding post-mortem Islamic rituals constitute important factors influencing the potential refusal to perform MITS. According to these norms, in the case of the death of a person belonging to the Islamic faith, the aforementioned washing, purifying, and praying rituals must be performed within 24 hours of the death. This is to allow the spirit to be blessed and to reach heaven while clean. Furthermore, the body cannot be dissected or sutured, especially if Islamic religious leaders have already blessed the body. Additionally, if the death occurs in the hospital, it is not acceptable for the body to be exposed for long hours to the hospital environment, so as to minimize the spirit’s suffering. Further, according to participants, if a person belonging to the Islamic community dies, including stillbirths, it is only acceptable for the body to be handled by a person who is a family member and a Muslim, unless the person is a nharrube. Nharrubes, despite not being family members and not necessarily being Muslim, are highly respected and considered custodians of the rules around body handling in Quelimane. Regarding the implications for MITS implementation, many participants emphasized that the timing restrictions marking funeral rites could compromise the performance of MITS. One Islamic leader explained that: When we receive information that an adult person or a child has died, we are the first to arrive [in the family home] to make our prayers. Prayers help the spirit be in peace. (community leader, male, FGD) Participants noted that in some cases the performance of MITS will be compromised because it has the potential to interrupt or delay the washing and grooming of the body by the nharrube, which are considered vital post-mortem practices. As one participant explained: “We may even want this MITS, but we have to follow up with our [Islamic] religious norms to guarantee peace for the spirit” (community leader, male, FGD). Respondents who expressed difficulties in accepting MITS were mainly Muslims, but some were Christians belonging to non-Catholic churches, such as the Jehovah’s Witnesses. Most of these respondents had high levels of education, such as retired teachers, who are considered influential people within communities. Skepticism due to negative past experiences with health interventions. Community leaders mentioned that people are afraid of organ trafficking allegedly carried out by some entities, such as nongovernment organizations operating in the health field, and that this could lead to skepticism regarding the motivations behind the practice of MITS. These fears are based on rumors about the extraction of organs from deceased bodies at local health facilities. For example, they recalled an episode linked to a cholera outbreak that occurred in 2017, during which the body of a cholera victim who had died at the cholera treatment center was returned to relatives wrapped in a white sheet and with the nostrils and ears plugged with cotton buds, an image community members were not familiar with. This episode gave rise to negative reactions among relatives and community members, which culminated in the destruction of the cholera treatment center in Quelimane. It is a problem because they [relatives] may be suspicious that they [health professionals] want to take organs as other institutions have already been accused of. So people can refuse to perform these MITS.
(community leader, male, FGD) Some community leaders and nharrubes also claimed to have little confidence in health authorities because they believe that health institutions may intend to use the bodies of deceased children for obscure scientific experiments. It is normal to suspect because there is not much confidence in some organizations, especially those private ones that come here to make money. Even people do not trust anything. (community leader, male, FGD) They pointed out that this suspicion is what drives people not to leave the bodies of their deceased relatives for a long time in the hospital’s mortuary. Community leaders reported that some health interventions and programs previously implemented in these communities have failed because of low levels of involvement and engagement of leaders in the dissemination of information about the intervention. According to participants, MITS can easily be rejected if community leaders are not made responsible for conveying information about MITS to the population. Some study participants mentioned that, owing to a history of weak involvement of community leaders in the dissemination of new health interventions in Quelimane, some campaigns have been misinterpreted by the population, an example being the promotion of cervical cancer screening for women of reproductive age, which was erroneously interpreted as a campaign to promote birth control. Another community leader’s statement illustrates the point that their involvement in and support of the dissemination of a future MITS intervention stands out almost as a requirement: If you do not call on the leaders of the community here to work with you then this program will not move forward. (community leader, male, FGD)
In total, 113 people participated in the study. Twenty-six SSIs were conducted with 16 healthcare providers and 10 nharrubes, and 11 FGDs were organized involving 43 community leaders and 44 TBAs from the different administrative posts included in the study. shows the distribution of the participants per target group and data collection tool. Socio-demographic characteristics of the 26 SSI participants are summarized in . The median age was 53 years (interquartile range [IQR]: 35–62), 62% were female, and 23% of respondents had no formal education. Among the respondents, 35% were farmers, and the vast majority of nharrubes were also traditional healers. describes the characteristics of FGD participants. The median age was 48 years (IQR: 34–57). Most FGD participants were female (74%), 51% of them were TBAs, 18% did not have a formal education, and 32% had primary education level. Only 2% of the participants had attended higher education.
In general, child mortality was characterized as a sad, striking, and unexpected event. Numerous participants from both SSIs and FGDs mentioned that child deaths are treated in a special way and that they are accompanied by specific itineraries and ceremonies, especially when it comes to stillbirths. When inquired about specific issues related to stillbirths, all participants emphasized that a stillborn child is not considered to have existed as a human being. Participants recounted that stillborn children are perceived to be children who were “returned to God.” Therefore, some nharrubes have explained that the ceremonies for a stillborn child cannot be performed following the same rituals related to the death of an adult because the stillborn child is not considered a human being who has lived. For example, one man stated that, “a child who was born dead, did not have a life like us, so it cannot be buried normally as we do with an adult person. We don’t need to use a box, we just bury it” (nharrube, male, Namuinho, SSI). Some participants reported that local norms dictate that the body of a stillborn child cannot be taken to the family home so as to not infect the mother with the evil spirits of death. Thus, small and restricted ceremonies, different from the regular ceremonies of a deceased adult, are held in the home of family members other than the parents of the stillborn. When a child is born dead [they] cannot enter in the family home, this is because, you cannot join a dead child with their mother without undergoing a purification treatment so that she [the mother] overcomes the loss, gets pregnant and in the future gives birth to another healthy baby. (traditional birth attendant, female, field note) The above quote illustrates the essence of the link between the meaning of a stillbirth and the perceived consequences for the mother and her future pregnancies because the requirements to contain such consequences involve the separation between the dead body and the newly purified body of the mother. According to participants, the mother’s purification consists, first of all, of ensuring that she does not see the baby’s body being buried and later, with prayer and bathing, using specific herbs that serve to separate the living from the dead. This ritual serves to ensure that the baby has a blessed return and that the mother is purified so that she can re-conceive and give birth to a healthy child. Experiences with the death of children were captured from the perspective of events occurring both within and outside health facilities. After the death of a child, regardless of the place of death, it is common for some community members and leaders to get involved. Participants describe it as an event that is easily disseminated within the community, and neighbors mobilize in solidarity to support funeral ceremonies, which include contributing to the expenses of transporting the body, feeding the family and other participants during funeral ceremonies, and buying a coffin. When a child dies everyone gets that sad feeling, then everyone goes to the neighbor’s house to provide solidarity and emotional support. 
(community leader, male, FGD) Even if the death is at the health facility, the child’s parents or caregivers are quickly notified, who in turn pass on the information to one of the community authorities, such as the neighborhood secretary, religious leaders, nharrubes, or TBAs, to lead or support the family in caring for the deceased body (depending on the norms of each family) before the funeral ceremonies start. Participants reported that some stillborn infants, particularly those belonging to families experiencing financial constraints, are left under the responsibility of the health facility, which in turn mobilizes resources to help dispose of the body in a common ditch because parents are not able to transfer the body back to their community. In these cases, the health facility requests the consent of either parent to conduct the disposal along with the other stillbirth cases. Between 1 and 3 days after the death of a person, whether child or adult, the funeral is held. The timing varies depending on the religion of the family of the deceased. Participants mentioned that it is common among Christian families to perform the funeral 2 or 3 days after death, whereas in Muslim families the funeral ceremonies must ideally take place before the end of the first 24 hours. For stillborn infants, regardless of religion, the body is buried within the first 24 hours. Also regardless of religion, in cases of death the presence of the nharrube is crucial. Some participants mentioned that in some families the ceremonies are guided by the nharrube and in others by a religious leader. In many cases a religious leader can also be a nharrube. In preparation for the funeral, several rituals take place: washing, purification, and blessing. For the washing, the nharrube is authorized to choose a “brave” family member to assist in the process. Purification and blessing take place after washing the body and are performed by a religious leader before burial. After washing, purification, and blessing, the body can no longer be touched by “strangers” (i.e., those who are not direct relatives). Additionally, nharrubes and community leaders frequently reported that after the funeral and purification ceremonies are over, children’s parents seek out healers to help them learn the causes of their child’s death. This action by the deceased child’s relatives to find out the COD is part of the itinerary to be followed during the time of mourning in some families and serves to end the tensions and accusations of witchcraft among the family members.
The factors analyzed in this study comprise the features already in place at the individual or community level, mostly related to the already mentioned social and cultural norms related to the death of a child that would act as drivers that could influence positively the acceptance of MITS, namely parents’ desire to know the causes of child deaths and the possibility of acquitting the elderly of witchcraft accusations. Willingness to know the causes of child deaths. When asked about what would make MITS acceptable, the most frequently cited motivation was the desire that participants expressed in knowing the causes of child deaths. This was repeatedly expressed through FGDs with community leaders and TBAs. Study participants reported that when a child dies, family and community members often do not know what the causes were, particularly if it is a sudden death. MITS, which is considered by participants as an innovation, was seen as potentially helpful in providing clear answers about the COD to the parents of the deceased child: “…it is a good [thing] because from these analyses we’ll know what is killing our children, because here many children are dying without knowing what is the disease that killed them” (nharrube, female, SSI). In addition, and as mentioned earlier, TBAs alluded to the parents’ practice of seeking answers as to what caused the child death, through consultation with traditional healers. In their opinion, MITS would address this desire of parents and family members by providing reliable information regarding the cause of the child’s death. Knowing the causes of a child’s death through MITS has also been cited by TBAs as an initiative that can help the parents of the deceased child treat and prevent diseases that they could have and that have been diagnosed by the MITS procedure. Healthcare providers were also motivated by the desire to know the real causes of the deaths of hospitalized children. Their particular concern relates to children that are admitted to hospital in a critical state and whom they cannot save or diagnose on time. Thus, they value MITS because they consider that it could help discover the COD in these cases. For us it will be a good thing because it will be a complementary service to help us understand the causes of those sudden deaths. (healthcare provider, male or female, SSI) Intention to acquit the elderly of witchcraft accusations. Another potential motivation for accepting MITS was the intention to acquit the elderly of witchcraft accusations when a child dies. Participants expressed that child deaths are often attributed to witchcraft and that the elderly are perceived to bewitch children and kill them to increase their longevity. Participants reported that the charges are made to any healthy-looking elderly individual, especially those with experience as healers. With the results of these analyses at least they will stop saying that we the elderly have bewitched the children to die. (community leader, male, FGD) Paradoxically to the accusations of witchcraft resulting in children’s deaths, the elderly are recognized to play a significant role in the preservation children’s lives and well-being. Established nharrubes, healers, and TBAs, who are mostly elderly people, do care for sick children as part of their role in the community in addition to caring for the bodies of the deceased children. 
Some elders mentioned that, with the implementation of MITS, the diagnosis of COD for children will be clear and their role as knowledgeable elders in the community will be credible.
When asked about which factors would facilitate the acceptance of MITS, participants alluded to some requisites that would have to be built into the intervention, namely: the facilitation of transportation of bodies to the community, the dissemination of the intervention through the community radio and health talks, and the involvement of leaders in dissemination activities.

Provision of means to transport bodies. Healthcare providers stated that the availability of transport could contribute to the acceptance of MITS. Because many families are unable to transport the body of a dead child from the hospital to their home, as evidenced by the earlier statements referring to parents not claiming the bodies of their children if they die at the health facilities, healthcare providers believe that providing means to transport bodies back to the communities would establish confidence and motivation to accept MITS if a child’s death occurs.

Of the patients we have received here, many are unable to rent a car to carry the coffin, so if the program could inform that it will help with transport if necessary it would be good, the population may have a motivation to accept it. (healthcare provider, female, SSI)

Healthcare providers mentioned that providing transportation to transfer the bodies of deceased children from the health facility to their homes had acted as an additional motivation for potential participants to accept MITS. This act can be seen as support for families in need.

Dissemination of information through trusted channels. According to the participants, the dissemination of information through trusted channels, such as community associations or community health committees, and the adoption of appropriate community awareness activities would boost the acceptance of MITS by community members.

For the community to accept, they must know that it is MITS and that it is being implemented. Therefore, you must make a lot of publicity in several campaigns and health lectures. (community leader, male, FGD)

Additionally, community leaders mentioned that the information and campaigns disseminated by community radio stations are commonly trusted and adhered to because most members of the community listen to the radio. Thus, if MITS is broadcast on local radio, community members will learn about the initiative, understand the purpose of the program, and more easily accept that it is carried out on their children.

If you use the radios you will gain a lot. This city is big, if you go from house to house, you will not be able to finish it, the best thing is to use the radio for everyone to find out about MITS. (community leader, male, FGD)

Involvement of leaders in the dissemination of project information. Most participants believed that engaging all influential community leaders in the dissemination process could facilitate MITS acceptance. The influence that leaders have over members of the community allows their messages and recommendations to be easily followed and adhered to. Quelimane is under two overlapping powers: many areas are influenced by central government–appointed leaders and, at the same time, by local government, comprised of elected leaders; the two govern the district and municipal territories simultaneously. For this reason, participants mentioned the need to involve all influential community leaders in the dissemination of information related to the implementation of MITS, regardless of their political orientation.

You must involve the leaders, these secretaries, and the regulators [community political leaders] to help as activists in the community. (community leader, male, FGD)

A community leader explained that MITS will only be accepted if people feel safe while taking part in the intervention and that, according to him, this sense of safety can only be successfully conveyed by community leaders. Furthermore, there is a belief that if something goes wrong in communicating the message or misunderstandings arise, community leaders can help solve and clarify the problem in the community, thereby guaranteeing the re-establishment and maintenance of the sense of safety that community members feel when their leaders are involved in the process.

The people here will not accept any new project if they do not feel safe, for that, we are the persons who guarantee safety in them because we are influential here in the community. (community leader, male, FGD)

Moreover, community leaders’ expressions of reluctance regarding the implementation of MITS without their active involvement in dissemination activities were grounded in the belief that, if they were not involved, other people would be called upon to fulfill this role in exchange for financial compensation. One community leader (male, FGD) explained: “Our population here will accept the message if it is disseminated by a healthcare provider or a community leader, but if it is an outside person who will make money we will not accept it because we as representatives of the community can divulge that information.” As described below, it is notable that the leaders’ requirements relate to possible benefits they could receive in return for supporting the dissemination of MITS in the community, but also to a sense of ownership of an intervention pertaining to their own community.

Incentives for local leaders. Some of the above notions of leaders’ involvement were expressed by community leaders as a responsibility they held to make decisions on behalf of the community, to the extent of paralleling it to a job: “If we work we can make the parents of children in the community trust and accept MITS” (community leader, male, FGD). For them, the authority that has been entrusted to them must be reinforced at a time when communities need a representative to help them understand and trust new interventions. Some leaders were more specific, stating that their involvement and support for the COMSA intervention should be rewarded with cash compensation or incentivized with the provision of support material, such as bicycles (for mobility), t-shirts (for identification and credibility in their genuine link to the intervention), and airtime for their mobile phones (to communicate with the study team and members of the wider community). Other leaders suggested direct collaboration through the hiring of community members to work as COMSA activists in the community, implying some extra income for them.
The main factors that may constitute barriers to the implementation of MITS were possible disagreements with Islamic religious practices, skepticism regarding the objectives of the intervention, and negative past experiences with health interventions.

Disagreement with Islamic religious practices. The norms surrounding post-mortem Islamic rituals constitute important factors influencing the potential refusal to perform MITS. In the case of the death of a person belonging to the Islamic faith, the aforementioned washing, purifying, and praying rituals must be performed within 24 hours of the death. This is to allow the spirit to be blessed and to reach heaven while clean. Furthermore, the body cannot be dissected or sutured, especially if Islamic religious leaders have already blessed the body. Additionally, if the death occurs in the hospital, it is not acceptable for the body to be exposed for long hours to the hospital environment, so as to minimize the spirit’s suffering. Further, according to participants, if a person belonging to the Islamic community dies, including stillbirths, it is only acceptable for the body to be manipulated by a person who is a family member and a Muslim, unless the person is a nharrube. Nharrubes, despite not being family members and not necessarily being Muslim, are highly respected and considered custodians of the rules around body manipulation in Quelimane. Regarding the implications for MITS implementation, many participants emphasized that the timing restrictions marking funeral rites could compromise the achievement of MITS. One Islamic leader explained:

When we receive information that an adult person or a child has died, we are the first to arrive [in the family home] to make our prayers. Prayers help the spirit be in peace. (community leader, male, FGD)

Participants noted that in some cases the achievement of MITS would be compromised because it has the potential to interrupt or delay the washing and grooming of the body by the nharrube, which are considered vital post-mortem practices. As one participant explained: “We may even want this MITS, but we have to follow up with our [Islamic] religious norms to guarantee peace for the spirit” (community leader, male, FGD). Respondents who expressed difficulties in accepting MITS were mainly Muslims, but some were Christians belonging to non-Catholic churches, such as the Jehovah’s Witnesses. Most of these participants had high levels of education, such as retired teachers, who are considered influential people within communities.

Skepticism due to negative past experiences with health interventions. Community leaders mentioned that people are afraid of organ trafficking allegedly carried out by some entities, such as nongovernmental organizations operating in the health field, and that this could lead to skepticism regarding the motivations behind the practice of MITS. These fears are based on rumors about the extraction of organs from deceased bodies at local health facilities. For example, participants recalled an episode linked to a cholera outbreak in 2017, during which the body of a cholera victim who had died at the cholera treatment center was returned to relatives wrapped in a white sheet and with the nostrils and ears plugged with cotton buds, an image community members were not familiar with. This episode gave rise to negative reactions among relatives and community members, which culminated in the destruction of the cholera treatment center in Quelimane.

It is a problem because they [relatives] may be suspicious that they [health professionals] want to take organs as other institutions have already been accused of. So people can refuse to perform these MITS. (community leader, male, FGD)

Some community leaders and nharrubes also claimed to have little confidence in health authorities because they believe that health institutions may intend to use the bodies of deceased children for obscure scientific experiments.

It is normal to suspect because there is not much confidence in some organizations, especially those private ones that come here to make money. Even people do not trust anything. (community leader, male, FGD)

They pointed out that this suspicion is what drives people not to leave the bodies of their deceased relatives for long in the hospital’s mortuary. Community leaders reported that some health interventions and programs previously implemented in these communities have failed because of low levels of involvement and engagement of leaders in the dissemination of information about the intervention. According to participants, MITS could easily be rejected if community leaders are not made responsible for conveying information about MITS to the population. Some study participants mentioned that, owing to a history of weak involvement of community leaders in the dissemination of new health interventions in Quelimane, some campaigns have been characterized as inappropriate by the population; one example is the promotion of cervical cancer screening for women of reproductive age, which was erroneously interpreted as a campaign to promote birth control. Another community leader’s statement illustrates the point that their involvement and support in the dissemination of a future MITS intervention stands out almost as a requirement:

If you do not call on the leaders of the community here to work with you then this program will not move forward. (community leader, male, FGD)
The implementation of any post-mortem procedure to determine the COD in a social setting that is unfamiliar with research studies, that has some of the poorest health indicators, and that carries a history of myths, rumors, and negative perceptions surrounding specific public health initiatives is fraught with challenges. It is thus evident that the success of the intervention requires an in-depth understanding of what is culturally and religiously acceptable and feasible. Similar to what has been reported in other settings, prior to the implementation of the MITS procedure, an assessment of the foreseen acceptability of child mortality surveillance was conducted to understand attitudes and perceptions surrounding death as well as possible barriers and facilitators to the implementation of the post-mortem procedure. This study provides a useful body of evidence on community issues potentially conditioning the implementation of MITS in Quelimane.

Our results suggest that, no differently from other settings, the death of an individual is a sensitive social event. The death of a child is marked by age-specific rituals that are rooted in local beliefs, and it is an event that has consequences for the parents (particularly the mother), the family, and the community. These consequences are tied to the specific rituals and itineraries that must be followed by the families of deceased children. According to the findings of this study, stillbirths are conceptualized as children who have returned to God before being fully established, and therefore before having reached the status of human beings in their full capacity, a phenomenon associated with spirits of death. Thus, there is a certain degree of exceptionalism in how stillbirths are conceived of and socially handled. Specifically, they are handled in accordance with distinct rites and without contact with the closest relatives, to guard the latter from further misfortune. These findings are not unique to Mozambique. Similar results were reported in a study conducted in Ethiopia describing stillbirths and newborns who die soon after birth as non-human; in that case they are also hidden from the community, and the mother assumes that the baby never existed. The religious requirements pertaining to a section of this community add to the complexities regarding the death event. The sensitivity of this event is also deeply related to the events that unfold at the community level, such as ceremonies and practices around death.

Our data also revealed that the economic constraints that mark the reality of funeral ceremonies in Quelimane play a crucial role in shaping bereavement. Community members and leaders often mobilize resources within their social networks to support funeral ceremonies and contribute to expenses. Yet, in many cases, the younger the child, the fewer the resources collected, leading families of very young children not to claim the body at the health facilities. Healthcare providers also reported this situation, stating that it would be more feasible to carry out MITS on stillbirths because these bodies are more likely to be left at the hospital. This situation may increase the likelihood of post-mortem investigation on stillbirths compared with other age groups, a finding that may be relevant for Mozambique and other countries with similar practices.
Although this can be considered a potential facilitator for the performance of MITS, it must be treated with caution because it raises ethical issues. Moreover, community awareness of the practice of MITS on “abandoned” bodies could potentially escalate into rumors of hidden research agendas, similar to the previous cholera-related experience in the same context, in which the body of a cholera victim who had died at the cholera treatment center was released wrapped in a white sheet and with the nostrils and ears plugged with cotton buds, leading relatives to believe that the organs had been extracted.

The findings of this study suggest that MITS would generally be accepted by parents who experience the loss of a child. This aligns with results from a qualitative study carried out in five sites in Africa and Asia, which found high hypothetical acceptability of MITS for COD determination. However, these results differ from the findings of a previous study that suggested low levels of hypothetical acceptability of post-mortem procedures to determine the COD, particularly in Muslim communities. The results from our study revealed several factors that constitute potential motivators for the acceptance of MITS, namely 1) parents’ willingness to know the cause of death to prevent future disease episodes in their children, 2) the desire to gain answers that explain sudden deaths, 3) the need to restore the reputation of elders, who are often blamed for child deaths in their communities, and 4) health professionals’ anticipation of MITS as a procedure that could help uncover the true COD of children admitted in a critical condition. In addition, the study findings anticipate other factors that, if introduced, could further enhance the acceptability of MITS once implemented. We refer to these as “programmatic factors,” and they include 1) the establishment of acceptable mechanisms of information, communication, and education and 2) logistical arrangements to incentivize leaders and community members alike.

Participants believe that, in the event of a child’s death, some relatives are willing to know the COD. By comparison, studies conducted with parents and family members of deceased children who were offered a MITS procedure showed that the experienced acceptability was driven by the opportunity to prevent further deaths of children in the family. Of note, this potential facilitator of the anticipated acceptability of MITS in Quelimane is to some extent influenced by certain cultural norms, as the findings of our study revealed that some community members already pursue the desire to know the COD through local practices, such as consultation with oracles through healers. This suggests that the introduction of MITS in this context could align with some of the existing practices surrounding the death of a child. Similar results were found in a study conducted in Tanzania and Chile, where after the death of a child or an adult their relatives seek to know the COD through trusted healers. However, this possible alignment should be interpreted with caution because the MITS intervention can be seen as a competitor to the traditional healers’ role in establishing the COD. In the present study, the willingness to know the COD was sometimes tied to the desire to discharge the elderly of accusations of witchcraft related to infant deaths, and community leaders therefore mentioned it as an important factor that could facilitate the acceptance of MITS.
In these cases, the accusation is influenced by some elderly persons’ past experience as healers and by their longevity. A study in Tanzania also showed that accusations of witchcraft weigh more heavily on the elderly, especially older women, who are often also healers. Similarly, it is important to consider that, in cases where the MITS procedure cannot provide clear results that satisfy individuals’ expectations, the communication of results could have unintended effects. One possible unintended effect is that the contrary outcome is achieved and the accusation of witchcraft is reinforced. This possibility was not discussed by participants, yet it has the potential to become a point of tension during future MITS implementation, especially considering the weight given to accusations of witchcraft.

More extensive and meaningful engagement with leaders and other influential people during community mobilization activities may enhance MITS acceptance. In Quelimane, where there are community leaders from different political parties, this approach may be particularly necessary for the successful implementation of MITS. Even so, it is also worth considering that the engagement of community leaders is sensitive to the nuances of their roles as respected actors who wield significant influence and as elders who are susceptible to witchcraft accusations, given the perceived association between child deaths and elders’ (presumed) desire for longevity.

Our study findings also reveal factors that constitute potential barriers to the acceptability of MITS. These can be organized into those that anticipate a clash between cultural norms and the procedures of the intervention (e.g., disagreement with religious practices) and those that anticipate the social consequences of a history of programmatic failures (e.g., poor community mobilization and foreseen skepticism toward the agenda behind the implementation of MITS). Some of these results are similar to those observed in a multicenter study carried out in Mozambique, Gabon, Kenya, Mali, and Pakistan, which provides evidence of local practices and socio-cultural and religious norms that regulate the manipulation of lifeless bodies and shows how these influence the hypothetical acceptance of MITS in those contexts. That study also described concerns about delays in the initiation of ceremonies and burials, highlighting potential barriers to accepting MITS. As reflected in our study findings, concerns about the disruption of burial timings are more pressing in the case of Islamic practices, where the timeframe between death and burial is narrower. For Muslims, the exposure of the body and the delay of burial are considered disrespectful to the spirits of the deceased. This result has been suggested in other studies, on hypothetical and experienced acceptability alike, conducted in Muslim countries. However, the implementation of MITS-based mortality surveillance programs has been shown to be feasible even in predominantly Muslim countries, such as Bangladesh and Mali.

A notable finding of this study is the role that mistrust may play in the anticipated acceptance of MITS, given the backdrop of organ and human blood trafficking rumors that drove the refusal of past healthcare programs in this context.
In Zambézia province, and in Quelimane city in particular, the rumors generated around organ and human blood trafficking are among the toughest challenges faced by local health programs. This result was also highlighted in a hypothetical acceptability study that referenced potential barriers to MITS acceptance related to the fear of organ and blood harvesting during the procedure. Studies conducted in places where MITS was already being implemented also found that mistrust of medical staff was an important barrier. This finding calls for substantial investment in activities such as rumor surveillance, which help to detect and trace the rumors circulating in the community and confront them with trustworthy information targeted at the appropriate audiences.

The level of involvement of the child’s parents, religious leaders, and nharrubes may be key to the success of the acceptability of MITS in Quelimane. Moreover, in line with other studies, our results show that the involvement of community structures and the consistent desire to know the COD can also influence the acceptability of MITS, even when this desire is fraught with beliefs that include differentiated practices, fear, anxiety, worry, despair, and sadness. Overall, this study contributes valuable knowledge to the implementation of child mortality surveillance in Quelimane, Mozambique.

Limitations. This study has a few important limitations that may have influenced the trustworthiness of the data. The first is that participants were presented with hypothetical scenarios: they answered what their reaction would be if they were approached with a request for consent to conduct MITS on their deceased children, or how they thought members of their communities would react to the same request. In this sense, the results must be interpreted as what participants anticipated rather than as what would de facto have happened. However, many of the anticipated barriers and facilitators to MITS found in this study coincide with those elucidated by studies conducted in real scenarios of MITS implementation, adding assurance of the relevance of our results for informing future MITS implementation in Quelimane. In this study, data collection was conducted by a team of CISM research assistants operating under Ministry of Health credentials. CISM and the Ministry of Health are seen as highly regarded entities that must be respected, which may also have influenced the information participants chose to convey to the researchers. Participants’ willingness to assist the Ministry of Health and CISM in the implementation of MITS, as well as their role as gatekeepers, together with the study’s snowball sampling strategy, may have had an impact on the overall profile of study participants. There were therefore both gains and limitations in obtaining data from privileged participants. Lastly, study participants were people with power and influence in the community or who held publicly important roles; the opinions of “ordinary” community members were therefore lacking, calling for further studies targeting this group.
The results of this study highlight important factors associated with the potential acceptance and refusal of MITS in Quelimane. Although some respondents considered post-mortem procedures complex because of religious and traditional norms, participants also expressed a willingness to accept MITS in order to know children’s COD. MITS was considered a positive innovation for determining the COD in children, yet participants remained skeptical about performing the procedure on lifeless bodies owing to tensions with religion and tradition, including the fear of delaying funeral practices, particularly Muslim ceremonies. Even so, among the negative and positive aspects identified as potentially influencing the acceptability of MITS, the participants’ desire to know the COD of children was the most recurrent factor associated with acceptability. Thus, the implementation of MITS in Quelimane should prioritize the involvement of a variety of influential community and religious leaders, transparency in the information provided to family members, and a truthful dialogue with the direct relatives of deceased children. The findings of this study could thus prove useful in optimizing childhood mortality surveillance using MITS to determine the COD in Quelimane.
Financial support: This study was funded by the Bill & Melinda Gates Foundation under the Grant OPP1126780 to Robert Breiman, subcontract SC00003286-S1, via CHAMPS Network. CISM is supported by the Government of Mozambique and the Spanish Agency for International Development Cooperation (AECID).