PMC10789234 (PMID: 37158194)
INTRODUCTION

Hiatal hernias can be classified into four types, the most common of which is the type I, or sliding, hiatal hernia. Type II hernias, otherwise known as paraesophageal hiatal hernias, do not have a sliding component, and the gastro-esophageal junction remains anatomically below the diaphragm. A type III hernia is a combination of types I and II (sometimes called a mixed hernia), whereas a type IV hernia contains the stomach and additional abdominal viscera in the hernia sac. 1, 2 Treatment options for large hernias, usually containing more than 50% of the stomach and with a significant paraesophageal component (types II, III, and IV), include conservative management or surgical intervention. One of the rare risks of the former, when large hernias are present, is strangulation or incarceration, which may ultimately lead to emergency paraesophageal hernia repair, with its poorer prognosis and outcomes. 1 Furthermore, and perhaps more importantly, large paraesophageal hernias risk continued enlargement over time with a possible increase in symptoms. Elective surgical options include minimally invasive or open repairs, which are often technically challenging in a typically elderly, co-morbid patient cohort. A wide range of insidious symptoms may be associated with paraesophageal hernias, such as weight loss, dysphagia, dyspnea, heartburn, chest pain, hoarseness of voice, early satiety, and anemia. 3–5 The prevalence of upper gastrointestinal symptoms in paraesophageal hernias provides the rationale for using health-related quality of life (HRQoL) questionnaires focusing on gastro-esophageal reflux disease (GORD) to assess patients before and after surgery. 6–8 As these questionnaires are focused on GORD symptoms alone, they do not take into consideration the other symptoms associated with paraesophageal hernias.
The symptoms that are not included in the current HRQoL tools are sometimes the very reason patients seek surgical intervention. 6–8 Additionally, the current screening questionnaires do not fully identify a patient's motivations for surgical repair, which often relate to their broad symptoms and the impact of these on quality of life. For those who undergo surgical repair, the symptom response following surgery has not been thoroughly studied. Given the lack of paraesophageal hernia screening tools, this study group devised a disease-specific questionnaire (the POST questionnaire) in four stages: first, a Steering Committee was formed; this was followed by a systematic review, an online scoping survey, and a Delphi consensus. The final stage consisted of two international patient workshops to assess the acceptability and usability of the tool. 9 The aim of this study is to assess the clinical utility of, and longitudinally validate, the paraesophageal hernia symptom tool (POST) 9, 10 for the clinical assessment of patients with type II to IV paraesophageal hernias. The study will test POST in patients before paraesophageal hernia repair to assess the need and threshold for surgery, and in patients before and after repair to assess the symptom response to surgery.
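The four-type classification described above lends itself to a simple lookup. The sketch below is purely illustrative: the type descriptions follow the text, but the names `HERNIA_TYPES` and `has_paraesophageal_component` are our own, not part of the POST protocol.

```python
# Illustrative encoding of the hiatal hernia classification described above.
# Descriptions follow the text; the identifiers are hypothetical.
HERNIA_TYPES = {
    "I": "sliding hernia",
    "II": "paraesophageal hernia; gastro-esophageal junction below the diaphragm",
    "III": "mixed hernia (combination of types I and II)",
    "IV": "stomach plus additional abdominal viscera in the hernia sac",
}

def has_paraesophageal_component(hernia_type: str) -> bool:
    """Types II-IV are the hernias the POST tool is designed to assess."""
    return hernia_type in {"II", "III", "IV"}
```

A type I sliding hernia is the one class excluded from the study, which matches the exclusion criteria given in Methods.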
METHODS

A total of 21 esophago-gastric units internationally will be invited to participate as participant identification centers (PICs). Given the caseload in each center, we aim to recruit approximately 500 patients. The centers will be asked to recruit patients being assessed for primary paraesophageal hernia repair over a 24-month study period as per the inclusion and exclusion criteria.

Inclusion criteria:
- Age over 18 years
- All type II–IV hernias with a paraesophageal component confirmed on computerized tomography (CT), endoscopy, or barium study
- Able to complete the POST questionnaire (hybrid model: patient-led or clinician-led depending on center preference)

Exclusion criteria:
- Unable to provide informed consent
- Previous paraesophageal hernia repair or esophago-gastric surgery
- Diagnosis of an esophago-gastric cancer
- Type I sliding hiatus hernia
- Emergency paraesophageal hernia at presentation/surgery

There will be two cohorts of patients fulfilling the study criteria: 1. patients undergoing surgical management of paraesophageal hernias and 2. patients managed conservatively (observational cohort). All patients will be followed up for 1 year. The surgical cohort will be followed up for a total of 5 years postoperatively to assess for symptomatic disease recurrence. Appendices 1 and 2 summarize the key time points for the study. When a patient consents to participate in the study, they will be asked to complete the following questionnaires: 1. a validated quality of life tool (GORD-HRQL), which provides a measure of current best practice for assessment of health-related quality of life (Appendix 3); 2. the POST questionnaire (Appendix 3); and 3. a satisfaction questionnaire regarding the use of POST for symptom assessment. Some centers may have long waiting times between the first clinical review and the date of surgery.
For those who are operated on within a year of their first clinical review, the baseline questionnaires listed above will be completed in clinic. For patients who are operated on more than a year after their first clinical review, the baseline questionnaires will be repeated within the 2 weeks prior to surgery to capture any change in symptoms. Additional investigations performed during this prolonged waiting time will also be recorded. To ensure patient inclusivity, the questionnaires will be translated into English, Italian, French, Spanish, and Dutch to cover the native languages at each study center. Patient demographics, clinical data, results of paraesophageal hernia investigations, and clinical outcome data from surgery, including recurrence over the 5-year follow-up period, will also be collected. There is accepted variation in the preoperative and postoperative investigations, including CT scans, barium swallow, endoscopy, and pH/manometry testing. All data will be stored on the REDCap online database. The POST platform will be made into an online web- or app-based tool using Qualtrics. This will allow patients to complete the questionnaire in a clinic environment, via telephone, or at home. An alternative paper version will also be made available for completion and correspondence via post. Given the international variation in how patients complete these questionnaires, some centers may offer clinician-led telephone or face-to-face completion of the questionnaire, accepting the potential reporting bias. If a center has long waiting times between the first clinical review and the date of surgery, the preoperative questionnaires can be completed closer to, or on, the day of surgery.
Those who undergo surgery to repair their paraesophageal hernias will be asked to complete these questionnaires again at the following postoperative time points: 4–6 weeks, 6 months, 12 months, and then annually for a total of 5 years postoperatively. The conservatively managed cohort will also be asked to repeat the questionnaires at 1 year of follow-up to assess for symptom progression and whether there was a change in the decision for surgery. Given the variation in follow-up protocols and investigations between different centers, each center will follow their standard practice and outline the details of this in their data. The two questionnaires (1. the validated quality of life tool (GORD-HRQL), 11 which provides a measure of current best practice for assessment of health-related quality of life, and 2. the POST questionnaire) will be completed at each time point. The satisfaction questionnaire regarding the use of POST for symptom assessment will only be used for the first year following the operation. Follow-up in the form of electronic, telephone, or paper-based questionnaires will be conducted by each center for the first year. Subsequent annual follow-up will be conducted either by the center or by the POST research team. All results will then be uploaded onto the REDCap database, either by the local center or by the study organizers.
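The questionnaire schedule described above can be summarized as a small lookup. This is a sketch of our own, not part of the published protocol: the function name and encoding are assumptions, the 4–6-week visit is approximated as 1.5 months, and we assume the satisfaction questionnaire applies only to the surgical cohort's first postoperative year.

```python
# Assumed encoding of the follow-up schedule described above (illustrative only).

# Postoperative time points for the surgical cohort, in months (4-6 weeks ~ 1.5).
SURGICAL_FOLLOW_UP_MONTHS = [1.5, 6, 12, 24, 36, 48, 60]

# The conservative cohort repeats questionnaires at 1 year only.
CONSERVATIVE_FOLLOW_UP_MONTHS = [12]

def questionnaires_due(cohort: str, month: float) -> list:
    """Return the questionnaires due at a given follow-up time point."""
    schedule = {
        "surgical": SURGICAL_FOLLOW_UP_MONTHS,
        "conservative": CONSERVATIVE_FOLLOW_UP_MONTHS,
    }
    if month not in schedule[cohort]:
        return []
    due = ["GORD-HRQL", "POST"]
    # Assumption: the satisfaction questionnaire is only collected in the
    # first postoperative year, i.e. for the surgical cohort up to 12 months.
    if cohort == "surgical" and month <= 12:
        due.append("satisfaction")
    return due
```

For example, a surgical patient at 6 months would be due all three questionnaires, while the same patient at 2 years would complete only the GORD-HRQL and POST.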
Nainika Menon and Nadia Guidozzi are joint first authors.

Summary

Large hiatus hernias with a significant paraesophageal component (types II–IV) have a range of insidious symptoms. Management of symptomatic hernias includes conservative treatment or surgery. Currently, there is no paraesophageal hernia disease-specific symptom questionnaire. As a result, many clinicians rely on the health-related quality of life questionnaires designed for gastro-esophageal reflux disease (GORD) to assess patients with hiatal hernias pre- and postoperatively. In view of this, a paraesophageal hernia symptom tool (POST) was designed. This POST questionnaire now requires validation and assessment of clinical utility. Twenty-one international sites will recruit patients with paraesophageal hernias to complete a series of questionnaires over a five-year period. There will be two cohorts of patients: patients with paraesophageal hernias undergoing surgery and patients managed conservatively. Patients are required to complete a validated GORD-HRQL, the POST questionnaire, and a satisfaction questionnaire preoperatively. The surgical cohort will also complete questionnaires postoperatively at 4–6 weeks, 6 months, 12 months, and then annually for a total of 5 years. Conservatively managed patients will repeat the questionnaires at 1 year. The first set of results will be released after 1 year, with complete data published after the 5-year follow-up. The main results of the study will be patients' acceptance of the POST tool, the clinical utility of the tool, assessment of the threshold for surgery, and patient symptom response to surgery. The study will validate the POST questionnaire and identify its relevance in the routine management of paraesophageal hernias.
OBJECTIVES

Global objective: to assess the clinical utility of the POST tool for the clinical assessment of patients with type II to IV paraesophageal hernias.

Specific objectives:
- Patients' acceptance of the POST tool
- Beta test POST in patients before paraesophageal hernia repair to assess the need and threshold for surgery
- Beta test POST in patients before and after paraesophageal hernia repair to assess the symptom response to surgery

SAMPLE SIZE

The sample size will depend on how many patients consent to be enrolled in the study from the 21 sites over the 24-month recruitment period. This decision was based on the variability in caseload between different centers. The predicted sample size is 500 patients.

ETHICS

Ethics approval is pending from the Research Ethics Committee (REC) and Health Research Authority (HRA) for the United Kingdom arm of the study. Ethics approval will be obtained from the appropriate local ethics department at each individual site involved in the study. The study will be conducted in accordance with the recommendations for physicians involved in research on human subjects adopted by the 18th World Medical Assembly, Helsinki 1964, and later revisions. No interim data will be analyzed. Data will only be analyzed at the end of the 24-month recruitment period and then at 5 years in order to review the long-term outcomes. All patients will complete a written consent form and consent to their email address, telephone number, and postal address being used for disseminating the questionnaires. The chief investigator will be responsible for data protection. Data will be stored in a secure environment under password protection to which study personnel will have exclusive access. Every effort will be made to keep patient information anonymous, and all data will be destroyed after 10 years.
DISSEMINATION AND DELIVERABLES

The findings of this study will be shared internationally through various modalities, including publication in a high-impact clinical journal and presentations at surgical and gastroenterology meetings. The first set of data will be released at the one-year time point. The validated tool will be shared with patients and surgeons with a view to implementing it in routine paraesophageal hernia management.

Collaborators: R. Aye, B. Louie (Swedish Cancer Institute and Medical Centre, Seattle, USA); R. Baigrie (Kingsbury Hospital and Groote Schuur Hospital, Cape Town, South Africa); L. Bonavina (University of Milan, Milan, Italy); G. Darling (Toronto General Hospital, Toronto, Canada); P.M. Fisichella (Northwestern University Feinberg School of Medicine, Chicago, USA); S. Jaume-Bottcher (University Hospital del Mar, Barcelona, Spain); J.C. Lipham (University of Southern California, Los Angeles, USA); W.S. Melvin (Montefiore Medical Centre, New York, USA); K. Nason (University of Massachusetts Chan Medical School, Springfield, USA); B. Oelschlager (University of Washington, Seattle, USA); F. Puccetti, R. Rosati (San Raffaele Hospital, Milan, Italy); J.S. Roth (University of Kentucky, Lexington, USA); P. Siersma (University Medical Centre Utrecht, Utrecht, The Netherlands); B. Smithers (University of Queensland, Woolloongabba, Australia); N. Soper (University of Arizona College of Medicine, Phoenix, USA); S. Thompson (Flinders University, Adelaide, Australia).
DATA AVAILABILITY

Data sharing is not applicable to this article, as no new data were created or analyzed for this protocol paper. However, the study will generate data supporting its findings, which will be available from the corresponding author, SRM, upon reasonable request.

Financial support: None. Potential competing interests: None.

Appendices. Appendix 1: Data collection flowchart
License: CC BY | Retracted: no | Last updated: 2024-01-16 23:47:16 | Citation: Dis Esophagus. 2023 May 8; 36(10):doad028 | Package file: oa_package/d8/ef/PMC10789234.tar.gz
PMC10789235 (PMID: 36942526)
INTRODUCTION

Gastroesophageal reflux disease (GERD) is the most common disease of the esophagus and affects about 10% of the western population. 1, 2 Treatment options for most patients include lifestyle changes and medical acid suppression therapy. However, despite maximal medical therapy, nearly 40% of patients can experience persistent symptoms, 3, 4 and up to 90% of patients fail medical management. 5 In these patients, laparoscopic fundoplication is considered the 'gold standard' surgical option. However, only 1% of patients who could potentially benefit from fundoplication undergo surgery, despite surgical enthusiasm and an evidence base dating back decades. Contributory factors include fear of the side effects following surgery, such as gas bloat 6 and an inability to belch or vomit, 7 as well as anatomic failure of the repair. Markar et al. 8 reported, on the basis of community data in the United Kingdom, that PPI usage had resumed in 59.4% of post-fundoplication patients within 10 years and that, in 9.4% of post-fundoplication patients, surgical reintervention was necessary. When magnetic sphincter augmentation (MSA) was successfully introduced in 2007, the original concept was that it would fit into the treatment options for GERD as a less invasive alternative to laparoscopic fundoplication. 9 It entails laparoscopic placement of titanium beads with a magnetic core around the lower esophageal sphincter (LES); it augments the physiological barrier and preserves gastric anatomy using a standardized and easily reproducible technique; and it is more easily reversed than fundoplication. Short-term outcomes initially demonstrated no intraoperative complications, 9, 10 in addition to an improvement in patient-reported health-related quality of life (HRQL) outcomes, which have been sustained at long-term follow-up.
11 A recent meta-analysis comparing MSA to fundoplication in 1099 patients again demonstrated the safety of the device and, importantly, showed that it is as effective as fundoplication in controlling symptoms of GERD 12 and that MSA may be associated with a reduced risk of gas bloating. Patients undergoing primary anti-reflux surgery are choosing the procedure to improve their symptoms, and therefore an improvement in symptoms and a reduction in HRQL scores should be the primary aim. The manufacturers of the LINX device used for MSA report that nearly 40,000 patients have now undergone sphincter augmentation worldwide, yet adoption has varied geographically. Regulatory approval was granted in the United Kingdom in 2012 (NICE IPG431), yet there has been only a single series reporting outcomes, in just 48 patients over 3 years. 13 The aim of this study was to report the largest series of patients undergoing MSA in the United Kingdom, with a particular focus on HRQL outcomes, antacid dependency, operative outcome measures, and patients with severe reflux.
METHODS

This was a single-center cohort study, drawn from a prospectively maintained database, assessing the effectiveness of MSA in the management of patients with GERD. All patients undergoing MSA implantation between 2012 and August 2020 were included. Patients were referred having failed medical management. Patients eligible for either a Nissen fundoplication or MSA were comprehensively consented; where both procedures were deemed surgically appropriate, the surgical approach was determined by patient choice. Patients were excluded only if they had previously undergone anti-reflux surgery. MSA was introduced within the standard clinical governance framework at this center. When it was initially introduced, the manufacturers and the FDA suggested several precautions for use, including hiatus hernias larger than 3 cm, 14, 15 Barrett's esophagus, severe GERD as reflected in grade C/D esophagitis, and poor esophageal motility, defined as a mean distal amplitude of <32 mmHg. These precautions were purely advisory and have subsequently been withdrawn. 16, 17 With experience, the authors have offered MSA to all patients as an alternative to fundoplication, except for those with type III hiatus hernias and ineffective motility as defined above. As part of the preoperative pathway, patients routinely received esophagogastroduodenoscopy (EGD) and pH and manometry testing; they were administered the GERD-HRQL and Reflux Symptom Index (RSI) questionnaires and were questioned regarding their use of antacids preoperatively and at 6 months, 1 year, 2 years, 3 years, and 5 years postoperatively where applicable. A total quality-of-life (QOL) score was created by adding the total scores of the GERD-HRQL and RSI questionnaires. At endoscopy, the size of the hiatus hernia, the presence of esophagitis, and the presence of Barrett's esophagus were determined.
Patients underwent pH testing either trans-nasally or via the Bravo wireless system. At manometry testing, the lower esophageal sphincter pressure and distal contractile integral were collated, in addition to a general comment about esophageal motility. The procedure was carried out laparoscopically as previously described; 10 all patients underwent a formal hiatal dissection and crural repair. The implant was sized using the bespoke sizing device and clinical assessment, with a minimum of two clicks above the 'popping' of the device. Continuous data are reported as the median and interquartile range (IQR), or the mean and standard deviation (SD). Patients served as their own controls and were compared to their preoperative results. Descriptive analysis, followed by paired t-tests, was performed using GraphPad Prism version 8.0.0 for Mac (GraphPad Software, San Diego, California, USA). A P-value of <0.05 was considered statistically significant.

Clinical outcomes. The primary outcome was the assessment of QOL in patients receiving MSA, with a particular focus on regurgitation, dysphagia, and gas bloating symptoms, in addition to assessing the use of antacids. Furthermore, QOL outcomes were interrogated in patients with severe reflux. The secondary outcomes were the assessment of short- and long-term outcomes associated with MSA.

QOL assessment tools. The GERD-HRQL is a tool employed to assess reflux symptoms and has been validated across many countries and in many languages. 18 It comprises an 11-question self-administered questionnaire; each question is scored from 0 to 5, giving a maximum total score of 50, and a total score of >15 is considered abnormal. 19 The RSI comprises nine questions, each scored from 0 to 5; it has been demonstrated to be effective in assessing patients with laryngopharyngeal reflux (LPR) disease both preoperatively and postoperatively, and a score of ≥13 is considered indicative of LPR.
20 Severe reflux. Severe reflux was defined as a DeMeester score of >50, a cut-off used in other comparable papers. 16
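The scoring rules and thresholds above (total QOL as the sum of the two instruments, GERD-HRQL > 15 abnormal, RSI ≥ 13 suggestive of LPR, DeMeester > 50 severe) can be sketched as follows. The thresholds come from the text; the function names are our own, not from the study.

```python
# Hypothetical sketch of the scoring rules described above.

def total_qol_score(gerd_hrql: int, rsi: int) -> int:
    """Total QOL score used in the study: sum of GERD-HRQL and RSI."""
    return gerd_hrql + rsi

def is_abnormal_gerd_hrql(score: int) -> bool:
    """A GERD-HRQL total score of >15 is considered abnormal."""
    return score > 15

def suggests_lpr(rsi: int) -> bool:
    """An RSI of >=13 is considered indicative of LPR."""
    return rsi >= 13

def is_severe_reflux(demeester: float) -> bool:
    """A DeMeester score of >50 was the study's cut-off for severe reflux."""
    return demeester > 50
```

For example, a patient with a GERD-HRQL of 31 and an RSI of 17 (close to the cohort's preoperative medians) would have a total QOL score of 48 and be flagged by both instruments.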
RESULTS

Over the 9-year study period, 202 patients underwent placement of the LINX device; 61% (n = 124) were male, and the mean age at procedure was 48 years (range: 18–80 years) (Table 1). Most patients had no significant comorbidities (n = 148); the most common comorbidity was hypertension, in 6% (n = 13), followed by respiratory conditions in 5% (n = 11); 47 (23%) patients had undergone previous abdominal surgery. Over the same time period, 73 patients underwent laparoscopic fundoplication, 54% (n = 39) of which were carried out in the first 3 years of the study period.

Preoperative. The mean duration of GERD symptoms was 18.4 years, and the mean duration of antacid use was 9.8 years. Dysphagia was present in 43% of patients (n = 87). The median hiatal hernia size was 2 cm at EGD (range: 0–8 cm). Patients reported a variety of symptoms; the most commonly reported symptom was 'retrosternal burning' in 60 patients (29.7%), followed by dyspepsia in 36 (17.8%) and by cough and retrosternal discomfort in 18 each (8.9%).

Physiology testing. On preoperative impedance testing, the median DeMeester score was 23.2 (IQR: 23.9–38.2), and in 13.4% (n = 27) the DeMeester score was ≥50. BRAVO capsules were utilized in 11 patients in lieu of trans-nasal pH impedance testing. The mean acid exposure time was 6.6% (SD: 8.5%) (normal per the Lyon consensus: <4% 21 ). The mean LES pressure was 18.07 mmHg (SD: 14.07 mmHg) (normal range: 10–45 mmHg), and the mean distal esophageal amplitude was 54.41 mmHg (SD: 26.21 mmHg).

Follow-up. Patients were routinely discharged from clinical care at 6 months following the operation if they were symptom-free, and they were contacted at 1 year, 2 years, 3 years, and 5 years postoperatively to obtain symptom scores. Median follow-up was 2 years (IQR: 1–3).
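The paired analysis described in Methods, where each patient serves as their own control, can be illustrated with a from-scratch paired t statistic. The study used GraphPad Prism; the sketch below is our own illustration, and the scores in it are invented, not study data.

```python
# Minimal sketch of a paired t-test: t = mean(d) / (sd(d) / sqrt(n))
# over the paired differences d = pre - post. Illustrative only.
import math
import statistics

def paired_t_statistic(pre, post):
    """Paired t statistic for pre- vs. postoperative scores."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Hypothetical total QOL scores for 8 patients, before and after surgery.
pre = [44, 52, 38, 61, 47, 55, 40, 49]
post = [10, 18, 7, 25, 12, 20, 9, 15]

t = paired_t_statistic(pre, post)
# With n - 1 = 7 degrees of freedom, |t| > 2.365 corresponds to P < 0.05
# (two-tailed), the significance threshold used in the study.
```

A consistent pre-to-post drop, as in these invented scores, yields a large positive t, which is the pattern the study reports for its QOL comparisons.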
All patients were eligible for follow-up at 1 year following the operation, and HRQL scores were obtained from 80% of patients (Table 2); 184 patients were eligible for follow-up at 2 years and 3 years following the operation, and data were available in 68% and 41% of patients, respectively; 88 patients were operated on ≥5 years ago, and in this population 38 provided HRQL scores (43%).

Primary outcomes. QOL scores. The GERD-HRQL and RSI scores were combined to give a total score; preoperatively, the median total QOL score was 44.5 (IQR: 32–63), the median GERD-HRQL score was 31 (IQR: 16–39), and the median RSI score was 17 (IQR: 9–25). The RSI was >13 in 62% of patients preoperatively (n = 125). There was a reduction in all scores from preoperative values to each time point, sustained at 5-year follow-up (Fig. 1). The median GERD-HRQL score at the latest follow-up was 2 (IQR: 0–9), a significant reduction compared to the preoperative value of 31 (P < 0.0001). The total QOL score was significantly lower at all time points compared to preoperatively; at 6-month follow-up, this was significant at P < 0.0001 using a paired t-test (95% CI: −35.96 to −29.34) (Fig. 2). Overall, there was a greater than 50% reduction in total scores compared to preoperative values at each time point (Supplementary Fig. 1).

Antacid use. Preoperatively, 84% (n = 149) required antacids to control symptoms; at immediate postoperative follow-up, this was reduced to 19% (n = 30), and the reduction was significant at all follow-up intervals (P < 0.0001) (Fig. 3).

Symptom breakdown. The quality-of-life questions relating to regurgitation, upper abdominal bloating, flatulence, and dysphagia (the sum of the scores of the questions 'Do you have difficulty swallowing?' and 'Do you have pain on swallowing?') were individually analyzed.

Abdominal bloating. The median preoperative score was 2 (IQR: 0–3), and this reduced to 0 at all postoperative time points ( Fig.
4A ). There was a statistically significant reduction in the scores at all time points compared to preoperatively.

Flatulence. The median preoperative score was 2 (IQR: 0–3), and this reduced to 0 at all postoperative time points (Fig. 4B). There was a statistically significant reduction in the scores at all time points compared to preoperatively.

Dysphagia. This was assessed using the two questions 'Do you have difficulty swallowing?' and 'Do you have painful swallowing?', each scored on a Likert scale of 0 (none) to 5 (severe). The median preoperative sum of the dysphagia scores was 1 (IQR: 0–4), and this reduced to 0 after 2 years of follow-up (Fig. 4C). There was a statistically significant reduction in the score compared to preoperative values at postoperative years 1, 2, and 3.

Regurgitation. The median preoperative score was 2 (IQR: 1–4), and this reduced to 0 at all postoperative time points (Fig. 4D). There was a statistically significant reduction in the score at all time points compared to preoperatively.

Secondary outcomes. Short term: procedure and recovery. All cases were carried out laparoscopically. In all cases, a circumferential distal esophageal dissection and crural repair were performed. The median device size was 14 beads (Table 3). There were no intraoperative complications. The median length of stay was 0.6 days; where patients stayed overnight, this was due to patient preference, as patients traveled significant distances to have the procedure carried out. There was one postoperative death, in a patient with undiagnosed cardiac disease who suffered a myocardial infarction. One patient was diagnosed with a deep vein thrombosis and subsequent small-volume pulmonary emboli. Four patients required re-admission to hospital postoperatively with fever, chest pain (two), and nausea secondary to transient gastroparesis, but none required operative intervention.
Long term. In the long term, 15 patients (7.43%) required dilatation of the gastroesophageal junction following insertion of the implant, of whom 2 required a second dilatation. Four patients (1.98%) underwent device explantation. Two devices were removed because of persistent dysphagia that did not improve with dilatations; one was removed due to unexplained back and abdominal pain without dysphagia; and one was removed due to persistent reflux symptoms, and this patient went on to have a Toupet fundoplication, which did not improve symptoms. One patient suffered disruption of the device due to manufacturing failure in an early device, and his symptoms were of recurrent GERD following implantation. Two patients had symptomatic recurrence of their hiatal hernia at 1-year follow-up (0.99%). No patient experienced erosion of the device.

Severe reflux: outcomes in patients with a DeMeester score >50. In the 27 patients with a DeMeester score >50, the mean DeMeester score was 90.02 (SD: 58.71). Twenty-four patients with a DeMeester score >50 at preoperative impedance testing had follow-up QOL data available (Fig. 5). The median preoperative GERD-HRQL score was 32 (IQR: 20–40), and this reduced to 0 at the most recent follow-up available (IQR: 0–5.75). The median preoperative RSI score was 15.5 (IQR: 8–23.75), and this reduced to 0 at the most recent follow-up (IQR: 0–11.5).
DISCUSSION

Insertion of the LINX device to surgically manage reflux is safe and improves patient-reported QOL outcomes. Patient-reported measures of regurgitation and gas bloat are reduced in the postoperative period, and this is sustained where 5-year follow-up is available. GERD-HRQL and RSI scores are reduced postoperatively even in patients with severe reflux. Patients were able to stop or reduce their use of antacids following MSA. Over a period of 5 years, good reflux control was maintained, establishing the durability of this technique, with a low long-term complication rate. In the latter 6 years of the study period at this center, only 35 patients opted to have a laparoscopic fundoplication, compared with 172 MSA procedures carried out.

Symptom control. Previously published data have demonstrated that the sphincter augmentation device has an improved postoperative symptom profile compared to traditional fundoplication; with MSA, there is reduced bloating, and patients retain the ability to belch. 12, 22, 23 Our results strongly support a reduction in bloating, abdominal distension, and flatulence compared with preoperative values in patients undergoing MSA; these symptoms can affect between 40% and 57% of patients undergoing fundoplication, as per the LOTUS trial. 6 A proposed mechanism for the lower rate of surgical re-intervention with MSA is the preservation of the ability to belch, enabling gastric decompression, 17 in contrast with patients undergoing fundoplication, who cannot vent the stomach, which may impact the crural repair in the early stages of recovery. The study by Warren et al. suggested that there was a higher rate of dysphagia in patients following implantation compared with fundoplication (44% vs. 32%). 24 The rate of dilatation within this cohort is low and in keeping with what has been reported in previous series.
25, 26 In their retrospective case series of 380 patients, Ayazi et al. reported that 31% of patients had required at least one dilatation; however, over time, this fell to 18%. 25 This reduction was attributed to key alterations in clinical practice, namely the re-introduction of a normal diet in the immediate postoperative period and a change in the sizing protocol for the device. 11 It has been our practice to emphasize the importance of swallowing normally; we explain to all our patients that the rationale for resuming a normal diet is to prevent the formation of a restrictive fibrous capsule around the implant 27 in the first weeks following surgery, in contrast with the instructions given to patients following fundoplication. To emphasize that there should be no mechanical cause for them to be unable to swallow, all patients are asked to eat a sandwich before they leave hospital, as the majority of procedures are performed as day cases. Commonly, patients encounter increasing dysphagia at 10–14 days following the operation and often require the support of a specialist nurse, who will encourage them to persist with their diet. As emphasized in Ayazi's study, in most patients dysphagia will resolve within 3 months. 25 Our practice has always been to combine a visual, albeit subjective, assessment during the sizing process with the use of the measuring device, and to oversize rather than undersize. Only three patients underwent implantation of a size 12 device, which has been demonstrated in other series to be associated with an increased risk of dysphagia. 11 These practices may be contributory factors in minimizing significant dysphagia in this series.

QOL improvements. Within our patient cohort, there was a significant reduction in GERD-HRQL scores and RSI scores at 5-year follow-up. This is in keeping with previous data, which have suggested that MSA is effective in reducing patient-reported QOL scores.
10 , 11 , 24 This is despite a patient population with more symptomatic reflux; in our cohort, the mean preoperative GERD-HRQL score was 29, compared with a mean preoperative score of 19 as reported by Ferrari et al., 11 and the symptom profile is further supported by the lengthy duration for which antacids had been taken preoperatively. Severe reflux In the 27 patients in whom the preoperative DeMeester scores were >50, there was an objective and sizable reduction in both QOL indices. This further supports the evidence that MSA is not only safe in patients with severe reflux but is also effective at controlling symptoms. 16 Antacid use Antacid use was reduced in patients undergoing MSA at all follow-up time points. This mirrors other series and fits with the evidence that MSA can improve DeMeester scores and normalize pH testing. 28 , 29 At 5-year follow-up, 64% of our patients were not taking antacids, which is supported by other studies, 30 and antacid use was lower than that reported in patients in the United Kingdom undergoing fundoplication. 8 Patient safety Previous papers have cited that the MSA is associated with an erosion risk of between 0.1% and 0.3%, 26 , 27 , 31 and all of these papers suggest a link between device size and the rate of erosion; the development of the laparoscopic sizing tool in 2013 has changed the way in which the device is fitted. Within our series, no patients have experienced erosion following implantation. No patients had to undergo a re-do procedure. The explantation rate of 1.4% in this series is lower than in previously reported series 11 , 32 and is lower than reported rates of revision in laparoscopic fundoplication of approximately 9%. 7 , 8 Lessons learnt Since its introduction, the surgical technique for MSA implantation has evolved. The original description of the technique did not include formal circumferential distal esophageal mobilization or crural repair. 
However, there is a long history of surgical learning going back to Allison, 33 which has reflected the importance of these maneuvers in preventing the recurrence of hiatal hernias and symptomatic reflux. Furthermore, when measured manometrically during surgery, crural repair and fundoplication appear to contribute equally to the pressure and length of the reconstructed LES mechanism. 34 Two studies have also reported better symptomatic outcomes in patients undergoing formal esophageal dissection and crural repair carried out with MSA compared with MSA alone. 35 , 36 The author’s approach has always been to include both esophageal dissection and mobilization, to ensure good infra-diaphragmatic esophageal length, and subsequent cruroplasty. It may be that this approach contributed to the low incidence of recurrent reflux symptoms and the re-intervention rate of <2% reported in this series. Finally, it has always been the author’s protocol to place the implant just above the gastroesophageal junction. Although, to our knowledge, there is no published evidence that the esophagus will naturally shorten following surgical mobilization, it seems sensible to assume that this may well be the case. Consequently, more cephalic implantation may increase the likelihood of proximal migration and hence the risk of recurrent reflux symptoms or dysphagia. Limitations A limitation is the lack of comparison with other surgical interventions such as laparoscopic fundoplication. While weight can be a common confounding variable with reflux, we did not monitor the patients’ weight before or after MSA. Data regarding the indication for postoperative use of antacids were unavailable, which is a common limitation of anti-reflux surgery studies. 
The main limitation of our study is the retrospective review of patient case notes and the difficulty in following up patients to obtain QOL scores after discharge from routine clinical care ( Table 2 ); this is in spite of making efforts to contact all patients who had incomplete QOL scores. Despite this, at 2-year follow-up, these data suggest that MSA provides symptom control equivalent to laparoscopic fundoplication 37 with a better postoperative symptom profile. 6 , 7 Conclusions Currently, there is no randomized controlled trial comparing MSA with fundoplication; however, matched retrospective analyses have demonstrated equipoise in GERD-HRQL scores and better postoperative results with regard to the ability to belch and bloating with MSA. 37 The data in our series support this. Further work must entail a randomized controlled trial comparing MSA with laparoscopic fundoplication in order to demonstrate efficacy, safety, and improvement in patient-reported QOL outcomes.
Abstract Surgical intervention for gastroesophageal reflux disease (GERD) has historically been limited to fundoplication. Magnetic sphincter augmentation (MSA) is a less invasive alternative that was introduced 15 years ago, and it may have a superior side-effect profile. To date, however, there has been just a single published study reporting outcomes in a UK population. This study reports quality-of-life (QOL) outcomes and antacid use in patients undergoing MSA, with a particular focus on postoperative symptoms and those with severe reflux. A single-center cohort study was carried out to assess the QOL outcomes and report long-term safety outcomes in patients undergoing MSA. GERD-health-related quality of life (GERD-HRQL) and Reflux Symptom Index (RSI) scores were collected preoperatively, immediately postoperatively, and at 1-, 2-, 3-, and 5-year follow-up time points. All patients underwent preoperative esophagogastroduodenoscopy, impedance, and manometry. Two hundred and two patients underwent laparoscopic MSA over 9 years. The median preoperative GERD-HRQL score was 31, and the median RSI score was 17. There was a reduction in all scores from preoperative values to each time point, which was sustained at 5-year follow-up; 13% of patients had a preoperative DeMeester score of >50, and their median preoperative GERD-HRQL and RSI scores were 32 and 15.5, respectively. These were reduced to 0 at the most recent follow-up. There was a significant reduction in antacid use at all postoperative time points. Postoperative dilatation was necessary in 7.4% of patients, and the device was removed in 1.4%. Erosion occurred in no patients. MSA is safe and effective at reducing symptom burden and improving QOL scores in patients with both esophageal and laryngopharyngeal symptoms, including those with severe reflux.
DATA AVAILABILITY STATEMENT The data that support the findings of this study are not openly available due to patient confidentiality and are available from the corresponding author upon reasonable request.
This paper is not based on a previous communication to a society or meeting. Specific author contributions: All authors were involved in conception and design of the study; in addition, all authors were involved in initially drafting the article, and all authors were equally involved in revising it critically for important intellectual content. All authors gave final approval of the version to be published. Financial support: None. Conflicts of interest: The authors declare that they have no conflict of interest.
Dis Esophagus. 2023 Mar 20; 36(10):doad014
PMC10789236
37236811
INTRODUCTION Outcomes in esophageal cancer are often related to inherently aggressive tumor biology but also to challenges in early diagnosis and difficulties in assessing tumor response to an increasingly broad range of treatment modalities. Given the lack of non-invasive biomarkers, there is a significant reliance on radiological assessment at various stages of the cancer treatment pathway—diagnosis, radiotherapy planning, assessment of treatment response, surveillance, and prognostication. The interpretation of radiological images is limited by human factors and subjective visual interpretation, and can often be time-consuming and inaccurate. With the advent of artificial intelligence (AI) in medical image interpretation, these limitations can potentially be overcome. Within medicine, AI refers to the use of a system to replicate human cognition in the comprehension, analysis, and presentation of medical data. This can be achieved using machine learning (ML), a specialized sub-field within AI that improves the performance of systems through repetition. 1 , 2 More recently, AI has been integrated into major imaging modalities to refine the way esophageal cancers are diagnosed and followed up. Machine learning models can be designed to train, validate, and test datasets for image interpretation. This is facilitated via the high-throughput extraction of large quantities of data from the images, known as radiomics. Radiomics provides both a qualitative and quantitative perspective on the spatial relationship between pixels and signal intensities, and additionally adds a layer of visual interpretation that may otherwise not be visible to the human eye. 3 , 4 This enables more consistent image interpretation with a focus on fine detail and patterns that may otherwise be missed. Within radiomics, textural analysis is the dominant area of interest in most research studies. 
Radiomics within imaging also offers the advantage of combining ML to obtain images; segment images into regions of interest (ROI) or volumes of interest (VOI); extract the necessary features to construct a model; and validate the models. Although there are reviews that discuss radiomics in esophageal cancer, they are specific to a cancer type or an imaging modality. 5–8 This systematic review and meta-analysis consolidates the evidence for the role of radiomics throughout the esophageal cancer therapeutic pathway.
METHODS A systematic review of studies evaluating the use of AI for diagnostic and treatment purposes in esophageal cancer was performed, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Search strategy and selection of studies The search methodology was defined according to the PRISMA guidelines. 9 A systematic literature search was carried out by two of the authors (NM and NG) using the Pubmed, MEDLINE, and Ovid EMBASE databases (date range: 1992 to 6 January 2023) with the following search terms combined using standard Boolean operators: ‘artificial intelligence’, ‘radiomics’, ‘machine learning’, ‘esophageal cancer’, and ‘oesophageal cancer’. Furthermore, the reference lists of included articles and review articles were hand searched for additional studies. Selection of studies: inclusion and exclusion criteria Two authors (NM and NG) performed the literature search, and any disagreements were resolved by the senior author (SRM). Titles and abstracts were screened, and irrelevant studies were excluded. Full text articles of the remaining studies were then retrieved and evaluated for inclusion. The inclusion criteria required articles to have reported on the use of radiomics in computed tomography (CT) and positron emission tomography (PET). English-language studies of esophageal cancers of all types were included, exploring the role of radiomics in the most commonly used imaging modalities, namely positron emission tomography with 2-deoxy-2-[fluorine-18] fluoro-D-glucose integrated with computed tomography ( 18 F-FDG PET/CT) and CT scans. A few studies reporting on the use of barium esophagograms were excluded, as this is less commonly used as a key imaging modality in cancer diagnosis. Studies that discussed the use of AI in esophageal cancer but not specifically for radiomics were excluded. Comparative cohort studies, non-randomized prospective studies, and randomized controlled trials (RCTs) were included. 
However, case series, case reports, narrative reviews, editorials, and conference abstracts; studies without comparison groups; studies with pediatric patients or fewer than five patients; and publications in a non-English language were excluded. Outcome measures The primary outcome measures were data related to the diagnostic accuracy of the imaging modality, including sensitivity, specificity, positive predictive value, and negative predictive value. We also collected other study data, including the year of publication, study design, sample size, country of study, type of patients, patient characteristics, outcome measures, and conclusions. Data charting was performed by two authors (NM and NG) and cross-validated by a third author (SC). The data charting forms were developed through study group co-design and tested by the team to capture relevant study data. Data extraction was undertaken using a single charting and audit approach organized in tabular form. The forms were then piloted on the first five studies to ensure the approach to data charting was consistent and in line with the research question and purpose. Based on these, we identified recurrent themes and data-points. Thereafter, a calibration exercise was performed on the next five studies. The results were discussed, and the data charting form was continuously updated in an iterative process to include key features not initially listed. Statistical analysis All statistical analyses were performed using STATA/SE, version 16.0 (StataCorp LLC, College Station, TX). The overall pooled estimates of sensitivity and specificity with their corresponding 95% confidence intervals (95% CI) were calculated using a random-effects model with the metandi command in STATA/SE. Sensitivity was defined as the proportion of patients with esophageal cancer correctly identified by AI, while specificity was defined as the proportion of patients without the disease correctly identified. 
Forest plots were used to visualize the variation of the diagnostic parameters effect size estimates with 95% CI and weights from the included studies.
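The pooling described above can be illustrated with a simplified sketch. The metandi command fits a bivariate random-effects model; the univariate DerSimonian–Laird approach below, applied to hypothetical 2 × 2 counts (not the counts of the included studies), conveys the same idea of pooling per-study sensitivity and specificity on the logit scale and quantifying heterogeneity with I².

```python
import math

# Hypothetical per-study 2x2 counts (TP, FP, FN, TN) -- illustrative only,
# not the counts of the studies included in this review.
studies = [(45, 8, 6, 30), (38, 10, 5, 41), (52, 12, 9, 33), (28, 6, 4, 22)]

def logit_pool(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the logit scale.

    Returns the back-transformed pooled proportion and the I^2 statistic (%)."""
    # 0.5 continuity correction guards against zero cells
    y = [math.log((e + 0.5) / (t - e + 0.5)) for e, t in zip(events, totals)]
    v = [1.0 / (e + 0.5) + 1.0 / (t - e + 0.5) for e, t in zip(events, totals)]
    w = [1.0 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0     # heterogeneity
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_re = [1.0 / (vi + tau2) for vi in v]                    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1.0 / (1.0 + math.exp(-pooled)), i2

sens, i2_sens = logit_pool([tp for tp, _, _, _ in studies],
                           [tp + fn for tp, _, fn, _ in studies])
spec, _ = logit_pool([tn for _, _, _, tn in studies],
                     [fp + tn for _, fp, _, tn in studies])
print(f"pooled sensitivity {sens:.1%}, specificity {spec:.1%}, I2 {i2_sens:.0f}%")
```

In practice, the bivariate model fitted by metandi additionally accounts for the correlation between sensitivity and specificity across studies, which this univariate sketch ignores.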
RESULTS Figure 1 describes the search methodology in detail. Fifty articles were included in this study, most of which were cohort studies (see Supplementary Tables for further detail). The results have been stratified into 18 F-FDG PET/CT scan and CT scan sections. The use of radiomics in CT scans Diagnosis The diagnosis of esophageal cancer at an early stage remains a challenge and shapes clinical management for patients. Four studies focused on identifying textural radiomics features and AI-based models to improve diagnostics. Wang et al. in 2017 developed a support-vector machine (SVM) model to identify lymph node metastases and found this to be better than standard CT interpretation (area under the curve [AUC] 0.887 vs. 0.705). 10 Similarly, Tan et al. studied 1576 radiomic features from pre-treatment CT scans of 230 esophageal squamous cell carcinoma (ESCC) patients—these radiomic features were superior to size-based image features in predicting lymph node metastasis (AUC 0.758 training set, 0.773 test set). 11 In 2021, Kawahara et al. used radiomics and machine learning to assess the degree of tumor differentiation from planning CT images of patients with locally advanced ESCC. 12 Thirteen radiomic features were identified to assist in discriminating between poorly differentiated and moderately/well differentiated tumors. The AI system had an accuracy of 85.4%, and a specificity and sensitivity of 88.6% and 80.0%, respectively. 12 A convolutional neural network (CNN)-based model (VGG16) was developed based on 457 esophageal cancer patients and tested on 46 esophageal cancer patients (44 ESCC, 2 esophageal adenocarcinoma [EAC]); it was shown to have superior accuracy (0.842 vs. 0.836/0.808) and specificity (0.900 vs. 0.790/0.760) compared with two radiologists, but a lower sensitivity (0.717 vs. 0.935/0.913). 13 Treatment response A total of 11 studies explored the role of radiomics in treatment response. 
Models Much like with 18 F-FDG PET/CT scans, a variety of CT radiomics-based models have been developed to predict treatment response. Hu et al. used six peritumoral and seven intratumoral radiomics features from pretreatment CT scans of patients with ESCC undergoing chemoradiation and noted that the best performance was achieved by combining intratumoral and peritumoral features, with an AUC of 0.852. 14 Hu et al. then went on to compare handcrafted radiomics, deep learning/CNN-based, and clinical models to assess complete pathological response of ESCC to chemoradiation. 15 Seven features were selected for the radiomics model, which achieved an AUC of 0.725 in the validation cohort; this was lower than the CNN model (ResNET50), which used 14 features and achieved an AUC of 0.805. 15 In further support of the use of CNN models, Li et al. performed a multicenter study to predict patient response to chemoradiation therapy. 16 Using a 3D-CNN, the model achieved a PPV of 100% in the validation cohort. 16 The model also used different radiotherapy regimens (large-field group and involved-field group) to predict treatment response and ultimately may be able to recommend individualized radiation treatment strategies on a per-patient level. 16 Furthermore, Jin et al. in 2019 studied 94 patients and identified that a combination of radiomic and dosimetric features on CT produced a model superior at predicting treatment response to individual features. 17 SVM and ANN models were used by Hou et al. to distinguish chemoradiation non-responders from responders in 49 patients using four principal methods: shape-based, histogram-based, texture-based, and transform-based. 18 Five radiomic features were identified that could successfully differentiate between chemoradiation responders and non-responders; these included Histogram2D_skewness, Histogram2D_kurtosis, GLSZM2D_LZE, Gabor2D_MSA-54, and Gabor2D_MSE-54. 18 Yang et al. 
identified further radiomic features selected by LASSO in patients with ESCC, with an AUC of 0.84–0.86 in the training cohort and 0.71–0.79 in the test cohort. 19 Similarly, Riyahi et al. created an SVM-LASSO model with a Jacobian map with an AUC of 0.94, although the cohort comprised only 20 patients. 20 Although Larue et al. similarly showed that radiomics models were superior to clinical models, Luo et al. demonstrated that a nomogram model that is a composite of radiomic features and clinical TNM staging was better. 21 , 22 Furthermore, the addition of 18 F-FDG PET imaging to CT scanning (as demonstrated in the section below) was good at predicting treatment response and loco-regional disease control after neoadjuvant chemoradiotherapy. 23 Pre-therapeutic CT scans have been used to predict the treatment response to immune-checkpoint inhibitors plus chemotherapy in 64 patients with advanced ESCC. 24 Five features and support vector machine algorithms were used to build two-dimensional and three-dimensional radiomic models. 24 The two-dimensional model outperformed the three-dimensional model in selecting which patients would benefit from this treatment strategy; the two-dimensional corrected model had an accuracy of 79.6% in the validation cohort. 24 Pooled analysis Five studies involving 625 patients provided sufficient data on true positive, true negative, false positive, and false negative rates for the calculation of sensitivity and specificity. All five studies assessed treatment response in patients with esophageal cancer using CT scans. The pooled sensitivity and specificity were 86.7% (81.4–90.7) and 76.1% (69.9–81.4), respectively, as visualized on the forest plot and summary receiver operating characteristic (ROC) curve ( Figs 2 and 3 ). There was evidence of significant heterogeneity between studies (I 2 = 64%). Survival Six studies focused on the use of radiomics in CT scans to predict survival in esophageal cancer. 
The reporting of the studies in this section meant that a meta-analysis could not be performed; therefore, a narrative review has been provided instead. Textural features such as zone distance variance GLDZM were significantly associated with overall survival. AI-based models combining CT and histopathological imaging can be used to predict survival. A single-center study of 153 patients with ESCC developed a combined model integrating CT and histopathology interpretation, which showed an improved c-index (0.694) compared with individual CT or histopathology AI models alone. 25 It was found that the features in histopathological images had a closer correlation with survival, with more accurate prediction than CT alone. 25 The algorithms were used successfully to predict 1–5 year mortality, with sensitivity and specificity of 78.1% and 84.7% at 1 year and 80.7% and 86.5% at 5 years, respectively. 40 Interestingly, histopathological features were found to be more useful at predicting survival than radiomic features alone. 25 Deep learning radiomics has followed on from handcrafted radiomics to further characterize tumor regions on CT scans in the prognostication of esophageal cancers after treatment. Handcrafted radiomics involves human-determined key tumor-related radiological features and regions of interest within the tumor, with AI introduced in the later stages to identify and consolidate these relevant features. More recently, deep learning radiomics integrates machine learning at an earlier stage, with superior outcomes compared with handcrafted radiomics, by learning specific radiological features and eliminating the human aspect of image interpretation altogether. 26 Much like Hu et al. noted for 18 F-FDG PET/CT scans, Wang et al. also noted that deep learning models were better at survival prognostication than handcrafted or clinical models. 
15 , 26 As shown in previous sections, composite models of clinical features, including morphological characteristics such as wall thickness, and radiomic features were good at predicting survival. 27 , 28 Other (radiotherapy planning, incidental esophageal cancers) The measured fields prior to radiotherapy are divided into the gross tumor volume (main bulk of the tumor) and the clinical target volume (surrounding subclinical malignant disease with likely microscopic tumor burden). Radiotherapy planning for patients with esophageal cancer is reliant on quantifying these boundaries on CT scans, a time-consuming and often difficult process with intra- and inter-user variability. This can be particularly challenging in the adjuvant setting, with the primary tumor resection and additional postoperative changes visible on the CT images. Two studies explored the role of radiomics in radiotherapy planning. The use of AI in measuring the gross tumor volume and clinical target volume can standardize the process, with results comparable to clinicians but in much less time. 29 The development of CNNs in radiomics has enabled an automated process for organ segmentation in targeted radiotherapy built on deep learning and, more recently, improved by a three-dimensional model. 30 In the adjuvant setting, an AI model built for estimating the clinical target volume averaged 25 seconds per patient. 31 Furthermore, Sui et al. studied the role of radiomics in identifying incidental, false-negative esophageal cancers. 32 They created a deep learning network to identify such lesions, which had a higher accuracy, sensitivity, and specificity than radiologists alone, but an even better result when used in combination with radiologists. 32 This is particularly relevant, as patients with early esophageal cancers are asymptomatic and therefore not always referred for endoscopy. 32 There is a promising future for the potential identification of incidental esophageal cancers using radiomics. 
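Several of the CT radiomics models above derive a sparse feature signature with LASSO. As an illustration only, the following sketch applies L1-penalized logistic regression (scikit-learn) to a synthetic feature matrix standing in for a radiomics dataset; all indices and values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomics matrix: 120 patients x 200 texture
# features, with treatment response driven by a handful of "true" features.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))
informative = [3, 17, 42, 99, 150]  # arbitrary, hypothetical indices
y = (X[:, informative].sum(axis=1) + rng.normal(size=120) > 0).astype(int)

# The L1 penalty drives most coefficients to exactly zero, leaving a sparse
# radiomic signature; C controls the strength of the penalty.
X_std = StandardScaler().fit_transform(X)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_[0])
print(f"{selected.size} of 200 features retained")
```

Real pipelines typically tune C by cross-validation and validate the resulting signature on a held-out or external cohort, as several of the studies above did with training and test sets.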
The use of radiomics in 18 F-FDG PET/CT scans 18 F-FDG PET/CT scans play an important role in the management pathway for esophageal cancer, both in the neoadjuvant setting for staging and treatment planning and in the adjuvant setting for assessing tumor response to treatment and predicting survival. Radiomics is increasingly being integrated into 18 F-FDG PET/CT scan interpretation, as it offers more detail and consistency than the human eye. Treatment response Textural features A total of 17 studies addressed the role of radiomics in 18 F-FDG PET/CT scans to predict treatment response. Standardized uptake values (SUV—SUV max , SUV peak , and SUV mean ) in 18 F-FDG PET/CT are the typical measures used to determine tissue metabolic activity. With the onset of AI, textural analysis biomarkers are being identified that correlate with tumor activity and therefore, indirectly, treatment response, and these have been suggested to be more sensitive than SUV alone. 33 , 34 A range of textural features identified in pre-treatment 18 F-FDG PET/CT have been implicated in predicting tumor response by differentiating responders and non-responders. These include homogeneity, 33 entropy, 33 size zone variability, 33 , 35 intensity variability, 33 , 35 metabolic tumor volume 35 /tumor volume, 36 total lesion glycolysis, 35 , 36 skewness, 37 inertia, 37 correlation, 37 cluster prominence, 37 run length, 38 size zone matrix, 38 short zone high gray emphasis, 38 and coarseness (as part of a CNN approach). 39 Risk prediction models Eight of the 17 studies discussed the development of AI-based models to predict tumor response based on textural and/or clinical parameters. Least absolute shrinkage and selection operator (LASSO) logistic regression models have been shown to be good at stratifying patients into high- and low-risk categories. 40 , 41 Paul et al. 
used a feature-based selection model, a Genetic Algorithm based on Random Forest, which was shown to be superior to other models at risk prediction. 42 Of note, Van Rossum et al. created a prediction model following their study of 217 patients with EAC, the largest study of its kind, and reported that it was difficult to truly base clinical decision making on such models, as associations were often found between textural features and treatment responders, but not to an extent that would confidently separate them from non-responders. 43 Other studies have shown that risk prediction models that combine textural features alongside histological subtype, Tumor/Node/Metastasis (TNM) staging, and tumor size can improve accuracy, specificity, and sensitivity. 34 , 44 , 45 Beukinga et al. showed that the addition of the biological tumor markers CD44 and HER2 to their radiomic model also improved prediction of treatment response. 46 Pooled analysis Seven studies involving 443 patients provided sufficient data on true positive, true negative, false positive, and false negative rates for the calculation of sensitivity and specificity. All seven studies assessed treatment response in patients with esophageal cancer using 18 F-FDG PET/CT scans. The pooled sensitivity and specificity were 86.5% (81.1–90.6) and 87.1% (78.0–92.8), respectively, as visualized on the forest plot and summary Receiver Operating Characteristic (ROC) curve ( Figs 4 and 5 ). There was evidence of significant heterogeneity between studies (I 2 = 72%). Survival Seven studies focused on radiomics in 18 F-FDG PET/CT scans as a means to predict progression-free survival (PFS) and/or overall survival. The reporting in these studies meant that a meta-analysis could not be performed; therefore, a narrative review is provided instead. A single-center study by Karahan Sen et al. 
evaluating machine learning in textural and metabolic analysis of 18 F-FDG PET/CT scans in 75 patients found that total lesion glycolysis and metabolic tumor volume were higher in the 1- and 5-year non-survivors when compared with survivors at the same time points. 47 Nakajo et al. noted that ESCC patients with a higher metabolic tumor volume, total lesion glycolysis, intensity variability, and size zone variability had shorter PFS and overall survival; however, none was an independent factor strongly influencing prognostication. 35 Paul et al. concurred with Nakajo et al., deeming metabolic tumor volume a good prognostic indicator, in addition to patient factors such as nutritional status and the World Health Organization (WHO) performance status. 42 Foley et al. developed a prognosis model after studying 403 esophageal cancer patients (of whom 237 had EAC). 48 Foley’s group developed an Automated Decision Tree Learning Algorithm for Advanced Segmentation-based prognostic model but reported difficulty in discriminating between the internal and external validation models. 48 , 49 They observed further radiomic features, such as histogram energy, and intensity features, such as kurtosis, in addition to total glycolysis volume, as independent features that significantly predict overall survival. 48 , 49 Xiong et al. specifically studied 30 unresectable esophageal cancers (all ESCC) and noted that wavelet radiomic features were especially helpful for prognostication. 45 Other (histological classification, identification of metastatic disease) One study discussed the role of radiomics in 18 F-FDG PET/CT scans to determine histological subtype and noted significant textural differences between EAC and ESCC, but with no clear correlative patterns from which to draw decisive conclusions. 47 A further study by Baiocco et al. 
reported that lower second-order SUV entropy combined with higher second-order apparent diffusion coefficient entropy was a good textural feature for identifying metastatic disease. 50
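The AUC values reported throughout these results have a concrete interpretation: the AUC equals the probability that a randomly chosen responder scores higher on the feature (or model output) than a randomly chosen non-responder, i.e. the Mann–Whitney statistic. A minimal sketch with made-up texture values:

```python
def auc(pos, neg):
    """AUC as the Mann-Whitney probability that a positive case outranks a
    negative one; ties count as half a win."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical entropy values for responders (pos) and non-responders (neg)
responders = [4.1, 3.8, 4.5, 4.0, 3.9]
non_responders = [3.2, 3.6, 3.0, 3.9]
print(f"AUC = {auc(responders, non_responders):.2f}")
```

An AUC of 0.5 corresponds to a feature with no discriminative value, which is why the models above are judged by how far their AUCs exceed that baseline.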
DISCUSSION This systematic review included a total of 49 articles, the majority of which were retrospective single-center studies, applying AI technology and radiomics to the diagnosis, treatment response, histological identification, and prognostication of esophageal malignancies. This study suggests the potential value of AI in the modern management of esophageal cancer, with the intent of improving outcomes and individualizing treatment strategies. As all patients with a malignancy will have some form of imaging, the potential breadth of AI-assisted technology in radiomics is vast, assisting in reducing inter-observer discrepancies, improving identification of subtle pathology, and even out-performing radiologists in certain parameters. More than this, AI in radiomics has proved advantageous in predicting survival outcomes at various stages of disease. Esophageal cancers have a variable pathological complete response to neoadjuvant treatment, ranging from 25% to 50%; therefore, it would be beneficial to predict the likely treatment outcome prior to initiation of therapy. 23 Comparing treatment response between the imaging modalities in our study, 18 F-FDG PET/CT imaging had a higher specificity of 87.1% than CT scans at 76.1%, but sensitivities were comparable between the two modalities at 86.5% and 86.7%, respectively. Rishi et al. achieved the best outcomes in identifying treatment response by using combined CT and PET imaging, achieving an AUC of 0.87. 23 Lymph node metastasis in esophageal malignancy is a highly important prognostic factor; therefore, accurate preoperative lymph node identification is vital for decision making. Tan et al. successfully used CT imaging to distinguish lymph node metastasis with an AUC of 0.773 and outperformed size criteria alone. 37 Wang et al. in 2017 also showed improved lymph node identification using SVM as opposed to the standard short-axis size of the largest lymph node on CT scan. 
10 Lymph node involvement correlates with survival prediction which is of clinical relevance. 10 Survival prediction in this review has largely been based on combining radiomics and tumor biology and/or TNM staging. Among other studies, Cui et al. successfully produced a machine learning model predicting PFS and overall survival in esophageal cancer patients using a combination of radiomics features and clinical features with the combined models displaying high performance with a C-index of 0.79 for PFS and 0.71 for OS for 3 year outcomes. 28 Radiomic features provide information about underlying tumor biology and behavior. The use of radiomics in conjunction with patient related factors can predict tumor phenotyping as well as response to treatment and prognosis. Many of the articles in this study did not report diagnostic sensitivities and specificities of their AI models, as such, a fully comprehensive pooled statistical analysis was not able to be conducted. Due to lacking statistical data, a direct comparison between 18 F-FDG PET/CT and CT imaging was also not achieved. The paucity of available data may reflect real-world applications of what 18 F-FDG PET/CT and CT imaging are usually used for. Multiple textural features and radiomic models have been listed in this review to aid in the management of esophageal cancers. Despite ample data, no clear comparisons have been made between the models to allow for accurate analysis in this study. Another identified limitation is that many studies were single centers with a small sample size, especially in the 18 F-FDG PET/CT cohort. As AI technology is a relatively modern aid to cancer management, this may be expected and highlights the need for larger multicenter trials before accepting AI models into routine management. 
Many of the studies are also from Asian institutions where ESCC is the predominant histological pathology; as such, our findings may need to be interpreted with caution when applied to an international cohort. Lastly, this paper has not effectively differentiated outcomes based on tumor histology, which may limit the application of our findings in real-world settings.
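The pooled sensitivity and specificity figures discussed in this review come from meta-analytic pooling of per-study accuracy data. As a rough illustration only (the review's own pooling method is not reproduced here, and bivariate models are the usual standard for diagnostic meta-analysis; all counts below are hypothetical), per-study sensitivities can be combined by inverse-variance weighting on the logit scale:

```python
import math

def pooled_sensitivity(counts):
    """Inverse-variance pooling of per-study sensitivities on the
    logit scale (a simplification of full bivariate diagnostic
    meta-analysis). counts: list of (true positives, false negatives)."""
    logits, weights = [], []
    for tp, fn in counts:
        p = (tp + 0.5) / (tp + fn + 1)            # continuity-corrected sensitivity
        var = 1 / (tp + 0.5) + 1 / (fn + 0.5)     # variance of the logit
        logits.append(math.log(p / (1 - p)))
        weights.append(1 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))            # back-transform to a proportion

# Three hypothetical studies' counts (responders correctly identified vs missed)
print(round(pooled_sensitivity([(45, 7), (60, 9), (38, 6)]), 3))
```

The same computation applied to (true negatives, false positives) pairs would yield a pooled specificity; a bivariate model additionally accounts for the correlation between the two.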
CONCLUSION This meta-analysis has identified key areas of AI use in the management of esophageal malignancies and highlighted gaps in the current literature. It is evident that machine learning will in future be integrated into patient management; however, further multicenter trials and comparisons of the various radiomics-based models are required prior to implementation.
Joint first authors
Summary Radiomics can interpret radiological images with more detail and in less time compared to the human eye. Some challenges in managing esophageal cancer can be addressed by incorporating radiomics into image interpretation, treatment planning, and predicting response and survival. This systematic review and meta-analysis provides a summary of the evidence of radiomics in esophageal cancer. The systematic review was carried out using Pubmed, MEDLINE, and Ovid EMBASE databases—articles describing radiomics in esophageal cancer were included. A meta-analysis was also performed; 50 studies were included. For the assessment of treatment response using 18 F-FDG PET/computed tomography (CT) scans, seven studies (443 patients) were included in the meta-analysis. The pooled sensitivity and specificity were 86.5% (81.1–90.6) and 87.1% (78.0–92.8). For the assessment of treatment response using CT scans, five studies (625 patients) were included in the meta-analysis, with a pooled sensitivity and specificity of 86.7% (81.4–90.7) and 76.1% (69.9–81.4). The remaining 37 studies formed the qualitative review, discussing radiomics in diagnosis, radiotherapy planning, and survival prediction. This review explores the wide-ranging possibilities of radiomics in esophageal cancer management. The sensitivities of 18 F-FDG PET/CT scans and CT scans are comparable, but 18 F-FDG PET/CT scans have improved specificity for AI-based prediction of treatment response. Models integrating clinical and radiomic features facilitate diagnosis and survival prediction. More research is required into comparing models and conducting large-scale studies to build a robust evidence base.
Supplementary Material
Dis Esophagus. 2023 May 26; 36(6):doad034
Introduction Periodontal disease (PD), a chronic inflammatory condition, is a major driver of tooth loss in older age and the sixth most prevalent non-communicable disease worldwide [ 1 ]. Dementia is the fifth leading cause of death globally and there are concerns that its prevalence could increase at an alarming rate as a result of the ageing population [ 2 ]. Observational evidence suggests that cognitive decline, as a precursor to dementia, is associated with having fewer teeth [ 3–6 ]. Recent evidence also suggests that there may be a reciprocal relationship between poor oral health and dementia [ 7 ]. Experimental studies have shown that chronic systemic inflammation may be linked to the onset of both dementia and PD [ 8 , 9 ], and there is also evidence of increased levels of inflammatory markers associated with periodontal pathogens in people with Alzheimer’s disease (AD) [ 10 ]. Understanding the factors that could influence this association is imperative in view of tailoring public health initiatives promoting oral health towards dementia prevention. Observational studies enable non-intrusive examination of exposures, outcomes and risk factors in the general population. Previous systematic reviews have sought to quantify the prevalence and risk of cognitive disorders in PD using observational studies; however, meta-analyses of effect sizes vary vastly and often conclude that further evidence is required to substantiate the findings [ 11–14 ]. One review suggested that combining cross-sectional and longitudinal studies in a meta-analysis caused around 16% of the heterogeneity [ 12 ]. Given their real-world setting, observational studies can be subject to several biases, including confounding and selection bias; it is therefore crucial to consider study factors when conducting systematic reviews and meta-analyses.
Furthermore, recent work has suggested there is a risk of overestimating the link between PD and cognitive disorders from spurious associations identified in cross-sectional research [ 16 ]. We previously demonstrated the utility of meta-regression in revealing the effect of study factors such as sex, PD classification and study region on risk estimates of cardiovascular disease [ 17 ]. A recent systematic review used similar methods to explore the effects of sample size, treatment during follow-up and bias rating in studies estimating the risk of adverse pregnancy outcomes and diabetes in people with PD [ 18 ]. As yet, no study has examined the effects of study characteristics on prevalence and risk estimates of cognitive disorders, specifically cognitive decline/impairment and dementia, in people with PD. The aim of the current investigation was to assess the study factors that could impact the association of cognitive disorders with PD. In order to pool the results of individual studies, meta-analysis was used to quantify the risk of dementia in PD populations and meta-regression was used to evaluate the impact of key risk factors.
Methods Study design—a systematic review of cross-sectional and longitudinal cohort studies that examine the prevalence and incidence of cognitive disorders in people with periodontitis. Search strategy and selection criteria The search string considered alternate terms incorporating several relevant keywords and Medical Subject Headings (MeSH). The final Boolean search string was: (periodon * OR tooth loss OR missing teeth) AND (dementia OR Alzheimer’s Disease OR cognitive * ) ( Supplementary Table 1 ). The search string was applied from database inception until 2 February 2022 to Medline, EMBASE and Cochrane databases to ensure retrieval of a broad scope of literature. Additional reference checking and ‘citation snowballing’ from key articles were also undertaken to maximise search sensitivity. Study inclusion criteria were the following: Cross-sectional or longitudinal retrospective/prospective cohort. Clinically diagnosed or self-reported PD. Clearly defined classification of dementia, AD and/or subtypes such as vascular dementia, or cognitive decline (including mild cognitive impairment). Diagnosis should be identified via appropriate disease classification codes, such as ICD-10 F00–F03, or clinical assessment using a validated assessment tool such as the MMSE or MoCA. Provides estimates for prevalence (cross-sectional) or incidence (longitudinal) of dementia or cognitive decline, and/or, when these are absent, raw numbers available for crude calculation. Peer-reviewed articles published in English. For full details of study selection, see Note S1 in the supplemental file. Quality assessment Quality assessment tools for observational studies can be contentious [ 19 ]; therefore, this review employed the Risk of Bias in Non-Randomised Studies of Interventions (ROBINS-I) tool recommended by Cochrane to determine the risk of bias in cohort and longitudinal observational studies [ 20 ].
Results from the risk of bias assessment were discussed with a second author and discrepancies resolved before finalising the ROBINS-I assessment table. The protocol for the present review was registered with PROSPERO before the study began (registration number: CRD42019154897). Statistical analysis Odds ratios (OR), hazard ratios (HR) and relative risks (RR) were used in different studies to quantify the risk of cognitive decline and dementia/AD. We examined cross-sectional and longitudinal studies separately. For cross-sectional studies that did not report an effect size, we used raw numbers of exposed/unexposed participants and cases to calculate a crude RR for pooling in meta-analysis. We converted ORs and HRs into RRs in order to maximise the number of included studies for meta-analysis [ 21 ]. Where possible, adjusted RRs were used in the meta-analysis, and adjustment for key confounders, such as smoking, gender and age, was screened for in each study. For inclusion in the meta-analysis, studies must have reported total population numbers for PD and non-PD cases, and RRs or converted RRs had to be available for synthesis and pooling. For precision, studies should also have a minimum of 30 participants in the exposed (PD) and unexposed groups; studies that reported fewer were included as part of a sensitivity analysis. Random-effects meta-analysis was performed for the prevalence or risk of cognitive decline or dementia according to the study type (cross-sectional or longitudinal). Subgroup analysis and meta-regression examined the impact of study and population factors such as age, smoking, PD classification (self-report or clinical), study region, sex and sample size. Study and population factors were selected according to previous literature and data availability. Average age (mean or median), smoking (population percentage) and sex (female population percentage) were treated as continuous variables in meta-regression.
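The OR-to-RR conversion described above can be performed with the widely used Zhang and Yu approximation, which requires the outcome risk in the unexposed group. A minimal sketch; the function name and example values are illustrative and not drawn from the review:

```python
def or_to_rr(odds_ratio, p0):
    """Approximate a relative risk from an odds ratio using the
    Zhang & Yu conversion, which needs p0, the outcome risk in the
    unexposed (non-PD) group."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# Hypothetical example: OR = 2.0 with a 20% baseline risk of cognitive decline
print(round(or_to_rr(2.0, 0.20), 3))  # 1.667 — the RR sits closer to 1 than the OR
```

Note that when the outcome is rare (p0 near 0) the RR approaches the OR, which is why the distinction matters most for common outcomes such as cognitive decline in older populations.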
Where age was reported in bands and the average was missing, the median value of the modal group was taken as the average. PD classification, study region and sample size were treated as categorical variables. Sample size categories were determined according to the range of sample sizes within included studies and to maximise study numbers. Variables included in the meta-regression were dependent on data availability from included studies. I² was used to measure study heterogeneity. Publication bias was illustrated with funnel plots and quantified by Egger’s test. Forest plots were used to visualise the pooled results from meta-analysis.
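The random-effects pooling and I² heterogeneity statistic described in this section follow standard DerSimonian–Laird methodology. The pure-Python sketch below shows the core computation with hypothetical study inputs; the review itself would have used a dedicated statistics package:

```python
import math

def random_effects_pool(rrs, cis):
    """DerSimonian-Laird random-effects pooling of relative risks.

    rrs: per-study RRs; cis: per-study (lower, upper) 95% CIs, from
    which each standard error on the log scale is recovered.
    Assumes at least two studies. Returns the pooled RR, its 95% CI
    and the I^2 statistic (%)."""
    y = [math.log(r) for r in rrs]                          # log effect sizes
    se = [(math.log(u) - math.log(l)) / (2 * 1.96) for l, u in cis]
    w = [1 / s ** 2 for s in se]                            # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                           # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]                # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q * 100) if q > 0 else 0.0
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi)), i2
```

For example, `random_effects_pool([1.2, 1.5, 1.1], [(1.0, 1.44), (1.1, 2.05), (0.9, 1.34)])` returns the pooled RR with its CI and I² for three made-up studies.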
Results The search strategy retrieved 2,146 studies, with 1,726 studies eligible for title and abstract screening following duplicate removal. After title and abstract screening and hand searches, 232 studies were eligible for full-text screening, with 49 studies eligible for review. The most common reason for exclusion was ineligible study design ( n = 63). Of the 49 included studies, 21 were cross-sectional and 28 were of longitudinal design; 11 and 11 cross-sectional studies, and 20 and 9 longitudinal studies, examined dementia and cognitive decline, respectively. Two studies examined both dementia and cognitive decline as outcomes [ 22 , 23 ]. One study was not eligible for meta-analysis due to missing raw data [ 24 ] and a further study was not eligible due to insufficient case numbers (number of cases in exposed = 0) [ 25 ]. One study was not eligible for meta-regression due to its across-region study population [ 22 ]. Seven studies were not eligible for meta-analysis as they reported fewer than 30 participants in the exposed/unexposed groups [ 23 , 26–31 ]. All included studies were published between 2007 and 2022 ( Figure S1 , Table 1 ). Most study populations were from Asia ( n = 18) and most utilised a clinical diagnosis of PD to define the exposure ( n = 22). Participants in eight dementia studies also received periodontal treatment during the follow-up as part of the study design. The median total follow-up time for longitudinal studies was 10 years (interquartile range [IQR] = 5–14 years) ( Table 1 ). Risk of bias assessment by the ROBINS-I checklist demonstrated that most included studies were at serious risk of bias due to confounding or selection biases ( n = 39; Table 2 ). Both cross-sectional and longitudinal cognitive decline studies had significant risk of publication bias, whereas this was not observed in dementia/AD studies ( Figures S2 and S3 ).
Cognitive decline Random-effects meta-analysis of cross-sectional studies showed that the prevalence of cognitive decline in people with PD was increased by 34% compared with those without PD (prevalence risk ratio [PRR] = 1.34, 95% confidence interval [CI] = 1.07–1.66; Figure 1 ). This outcome had moderate-to-high heterogeneity (I² = 66.8%; Figure 1 ). The risk of developing cognitive decline in longitudinal studies was 33% higher in people with PD than in those without (relative risk [RR] = 1.33, 95% CI = 1.13–1.55). Heterogeneity was high for longitudinal studies with this outcome (I² = 89.3%; Figure 2 ). Of note, a small study ( n = 35) that was not eligible for meta-analysis identified no cases of cognitive decline in people without PD in non-smoking older Japanese outpatients [ 25 ]. Cognitive decline—study factors Subgroup analysis of cross-sectional studies revealed an incremental increase in the prevalence of cognitive decline by PD severity and a reduction in heterogeneity (moderate—PRR = 1.21, 95% CI = 0.85–1.72, I² = 69.9%; severe—PRR = 1.35, 95% CI = 1.03–1.71, I² = 0%; Figure S4 ). The prevalence of cognitive decline was 20% higher in severe cases compared with moderate PD (PRR = 1.20, 95% CI = 0.78–1.85; Table 2 ). Prevalence estimates for cognitive decline were also impacted by study population ( Figure S4 ), but meta-regression indicated this was not significant (Asia—PRR = 0.75, 95% CI = 0.49–1.17; Table 2 ). Older participants with PD had a higher prevalence of cognitive decline compared with younger populations (younger—PRR = 1.17, 95% CI = 0.97–1.42; older—PRR = 2.06, 95% CI = 1.34–3.02; Figure S6 ). Meta-regression showed that for every 10-year increase in average age, there was a 20% increase in the prevalence of cognitive decline (PRR = 1.20, 95% CI = 1.11–1.29; Table 2 ).
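A "per 10-year" meta-regression estimate such as the one above is the exponentiated slope from a weighted regression of log effect sizes on a study-level covariate. The sketch below uses simple fixed-effect (inverse-variance) weighting and entirely hypothetical inputs; a full random-effects meta-regression would also include a between-study variance term:

```python
import math

def meta_regress(rrs, ses, x):
    """Weighted least-squares slope of log(RR) on one study-level
    covariate x, with inverse-variance weights (1/SE^2).
    Returns the exponentiated slope: the ratio of RRs per one-unit
    increase in x."""
    y = [math.log(r) for r in rrs]
    w = [1 / s ** 2 for s in ses]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return math.exp(sxy / sxx)

# Hypothetical studies with mean age expressed in decades:
ratio = meta_regress([1.10, 1.30, 1.58], [0.1, 0.1, 0.1], [6.5, 7.5, 8.5])
print(round(ratio, 2))  # 1.2 — roughly a 20% increase per decade of age
```

Expressing the covariate in decades rather than years makes the exponentiated slope directly interpretable as the effect of a 10-year difference in average age.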
For longitudinal studies, an incremental increase in the risk of cognitive decline by PD severity was also observed (moderate—RR = 1.14, 95% CI = 1.07–1.22; severe—RR = 1.25, 95% CI = 1.18–1.32; Figure S7 ). In fact, the risk of cognitive decline was 8% higher for those with severe PD compared with moderate cases (RR = 1.08, 95% CI = 0.84–1.38; Table 3 ). Furthermore, for every 10% increase in the proportion of females in the study population, there was a 34% increased risk of cognitive decline (RR = 1.34, 95% CI = 1.16–1.55; Table 3 ). The risk of cognitive decline in studies stratified by age was similar (younger—RR = 1.40, 95% CI = 1.01–1.94; older—RR = 1.36, 95% CI = 1.08–1.71; Figure S8 ). Compared with studies at moderate risk of bias, those at serious and critical risk reported 47 and 56% lower risks, respectively (serious—RR = 0.53, 95% CI = 0.31–0.92; critical—RR = 0.44, 95% CI = 0.24–0.82; Table 3 ). Meta-regression also showed that studies that utilised self-reported PD diagnosis reported 23% lower risks compared with clinical diagnosis (RR = 0.77, 95% CI = 0.65–0.91) and those with larger sample sizes reported lower risks compared with sample sizes of less than 1,000 participants (1,000–10,000—RR = 0.65, 95% CI = 0.54–0.79; 10,000–100,000—RR = 0.66, 95% CI = 0.53–0.82; Table 3 ). Dementia and AD Random-effects meta-analysis of cross-sectional studies showed that the overall prevalence of dementia/AD was 8% higher in people with PD (PRR = 1.08, 95% CI = 0.82–1.42), with high heterogeneity (I² = 98.2%; Figure 1 ). The incident risk of dementia/AD was also increased in people with PD in longitudinal studies (RR = 1.22, 95% CI = 1.14–1.31, I² = 92.6%; Figure 2 ). Dementia and AD—study factors An incremental increase in the prevalence of dementia/AD was observed from moderate PD (PRR = 0.98, 95% CI = 0.76–1.26) to severe cases (PRR = 1.44, 95% CI = 0.89–2.32) when compared with those without PD ( Figure S5 ).
In fact, the prevalence of dementia and AD in severe PD was over 2-fold higher than in moderate cases (PRR = 2.27, 95% CI = 0.93–5.53; Table 2 ). PD severity did not appear to have an impact on incident risk estimates of dementia and AD for longitudinal studies ( Figure S7 ), with risks in moderate and severe PD similarly increased compared with mild cases (moderate—RR = 1.05, 95% CI = 0.81–1.37; severe—RR = 1.03, 95% CI = 0.78–1.35; Table 3 ). When compared with clinical PD diagnosis, self-reported PD showed reduced risks of dementia and AD (RR = 0.86, 95% CI = 0.77–0.96; Table 3 ). The risk of dementia and AD appeared highest in populations from Europe (RR = 1.41, 95% CI = 0.89–2.22; Figure S8 ). Meta-regression revealed lower risks of dementia and AD in studies from Asia and North America compared with those from Europe (Asia—RR = 0.87, 95% CI = 0.60–1.26; North America—RR = 0.77, 95% CI = 0.53–1.12; Table 3 ). Sensitivity analysis Meta-analysis of longitudinal studies that reported periodontal treatment during the follow-up ( n = 8) revealed an increased risk of dementia in people with PD of a similar magnitude to the main analysis (RR = 1.30, 95% CI = 1.14–1.48; Figure S10 ). Meta-regression revealed a modest 6% increase in the risk of dementia and AD compared with studies that did not report periodontal treatment (RR = 1.06, 95% CI = 0.95–1.17; Table 3 ). Furthermore, including studies with fewer than 30 participants in the exposed/unexposed groups did not greatly impact the results of the meta-analysis of prevalence and risk of cognitive disorders in cross-sectional or longitudinal studies ( Figures S11 – S12 ).
Discussion In this systematic review, we examined 21 cross-sectional and 28 longitudinal studies reporting either the prevalence or risk of cognitive decline or dementia/AD. Overall, the prevalence and risk of cognitive decline were higher than those of dementia and AD in people with PD. Severe PD was associated with increased prevalence and risk of cognitive disorders. Meta-regression of study factors suggested that PD classification type, gender, age, study region and overall risk of bias may also contribute to the variation observed in effect size estimates of observational studies. The findings of this review align with previous systematic reviews that have found augmented risks of cognitive decline and dementia/AD in people with PD [ 11–14 ]. Contrariwise, a recent review concluded that the evidence regarding periodontal pathogens and AD onset is contentious and subject to bias, which may influence the robustness of previous findings [ 32 ]. Evidence shows that age is a risk factor for PD and for both cognitive decline and dementia/AD, with cognitive decline typically developing prior to a formal diagnosis of dementia/AD [ 33 , 34 ]. We found that the prevalence and risks for cognitive decline were higher than for dementia/AD. This supports the notion that signs of cognitive decline are early markers of subsequent neuro-degeneration and eventual dementia onset [ 35 ]; thus, cognitive decline is a more frequently diagnosed condition than dementia [ 36 ]. There may also be differences in the association of dementia with other disease subtypes. For example, the risk of vascular dementia increases 2-fold in people diagnosed with cardiovascular disease [ 37 ]. PD is linked to augmented risks of cardiovascular disease development [ 16 ], which could implicate vascular dementia development further along the disease trajectory. Further primary work is required to dissect the association of PD with specific subtypes of dementia.
Meta-regression has shown merit in exploring the impact of study factors on estimates for the risk of systemic diseases. Meta-regression and subgroup analyses by key study factors in the present review demonstrated reductions in statistical heterogeneity. We previously demonstrated using meta-regression that PD severity and male gender may increase estimates for the risk of cardiovascular disease [ 17 ]. The former finding aligns with the current systematic review, as we revealed that PD severity is incrementally associated with the prevalence of cognitive decline. We also showed that a higher proportion of females was associated with increased risks of cognitive disorders, though this could be reflective of the higher proportion of females than males with dementia [ 38 ]. People with self-reported PD had reportedly lower risks of cognitive disorders than those with a clinical classification. This contrasts with previous work suggesting that the classification of PD has no effect on the longitudinal risk of cardiovascular disease [ 16 ]. A possible explanation could be differences in the severity captured by self-reported responses. For example, previous oral health research in the UK Biobank has utilised responses ranging from bleeding gums (mild periodontitis/gingivitis) to loose teeth (indicative of severe periodontitis) [ 39 , 40 ]. It is possible that studies that utilise a self-reported classification such as bleeding gums, a noticeable sign of disease, may have a higher proportion of participants with mild/moderate PD, who may have a lower risk of developing cognitive disorders. These studies are also at risk of reporting bias and therefore the results may not be precise; however, there is evidence to suggest that self-reported tools for PD are accurate [ 41 ]. A recent systematic review with meta-regression also revealed that sample size and risk of bias can impact study estimates for the risks of adverse pregnancy outcomes and diabetes [ 18 ].
We found that sample size had a variable effect on estimates for cognitive disorders, whereas studies at serious risk of bias did not affect the association of PD with dementia/AD compared with those at moderate risk. Generally, studies rated at moderate risk of bias used methods such as inverse probability weighting to account for selection biases, and stratified random sampling [ 42–44 ]. Most studies were rated at serious risk of bias due to failing to address confounding and selection biases, and meta-regression of this factor may therefore be problematic; as such, there is a need for better quality primary research in the field and researchers should interpret the findings of systematic reviews with caution. This systematic review with meta-analysis is the first of its kind to assess, using meta-regression, the impact of study factors on effect size estimates for dementia and AD in PD; as such, the study has notable strengths. By including both cross-sectional and longitudinal studies, as well as two systemic disease outcomes (cognitive decline and dementia/AD), we were able to examine the associations with PD using a larger pool of included studies. The use of meta-regression enabled adjustments for several key factors of study design, including gender, PD severity, study region, age, risk of bias and sample size. This ensured a thorough exploration of effect sizes in association studies of PD and cognitive disorders. Our review is further strengthened through adherence to the PRISMA guidelines [ 45 ]. Although the primary aim of this review was to explore the causes of methodological heterogeneity through meta-regression, a limitation was the risk of bias present in the included studies due to selection and unmeasured confounding.
Given that the studies included in this review were of cross-sectional and longitudinal design, often using real-world datasets such as electronic health records, there remains opportunity for residual bias and statistical heterogeneity that cannot be adjusted for post hoc. Furthermore, the results of meta-regression depend on sufficient sample size, and we were not able to explore the influence of some study factors due to the absence of information in certain studies. Subgroup analyses demonstrated reductions in statistical heterogeneity, thereby advocating future homologous studies with transparent reporting to account for between-study variation. Furthermore, although we strove to account for classification bias by stratifying PD classification into self-reported versus clinical, the classification guidelines for both PD and cognitive disorders can change over time. True identification of these conditions is therefore dependent on the classification system used and the time of the study. Another limitation of this review is that we were not able to extract adjusted estimates of prevalence from all cross-sectional studies. As a result, these studies were at serious risk of confounding. Evidence suggests that PD is associated with multimorbidity [ 40 , 46 ]; multimorbidity is also linked to worse outcomes in older age, including dementia incidence [ 47 ]. We were not able to explore the effect of co-morbidities, and future work should account for multimorbidity and seek to make the necessary adjustments. Other confounding factors such as deprivation and socioeconomic status, known drivers of adverse health outcomes that may influence effect sizes, should also be explored further in future meta-regression studies. This study demonstrates the fragility of estimations of the association between PD and cognitive disorders, with study factors such as age, gender, study region and PD severity having a strong influence on prevalence and risk estimates.
The findings of this review contribute to the understanding of PD prognosis and highlight the necessity for improved quality and reporting of observational studies in the field. The clinical implication of these findings is that dental and medical professionals should be made aware of the possible association and make appropriate treatment/prevention arrangements for care. Given the strain on dental appointments following the COVID-19 pandemic, self-managed oral hygiene should also be encouraged to prevent progression to severe PD.
Conclusion The findings of this systematic review reveal that PD is more strongly associated with cognitive decline than dementia/AD. Meta-regression showed that some study factors may influence prevalence and risk estimates of cognitive disorders. More homologous observational evidence with clear adjustments for confounding and selection biases is required to determine the true direction of these associations. Specifically, future studies should utilise bias-reducing selection methods such as inverse probability weighting and random sampling of large and representative study populations with validated PD assessment tools to reduce the heterogeneity that is reflected in the current literature.
Abstract Aim The aim was to assess study factors that impact the association of cognitive disorders in people with periodontal disease (PD). Method Medline, EMBASE and Cochrane databases were searched until February 2022 using keywords and MeSH terms: (periodon * OR tooth loss OR missing teeth) AND (dementia OR Alzheimer’s Disease OR cognitive * ). Observational studies reporting the prevalence or risk of cognitive decline, dementia or Alzheimer’s disease (AD) in people with PD compared with healthy controls were included. Meta-analysis quantified the prevalence and risk (relative risk [RR]) of cognitive decline and dementia/AD. Meta-regression/subgroup analysis explored the impact of study factors including PD severity, PD classification type and gender. Results Overall, 39 studies were eligible for meta-analysis: 13 cross-sectional and 26 longitudinal studies. PD was associated with increased risks of cognitive disorders (cognitive decline—RR = 1.33, 95% CI = 1.13–1.55; dementia/AD—RR = 1.22, 95% CI = 1.14–1.31). The risk of cognitive decline increased with PD severity (moderate—RR = 1.14, 95% confidence interval [CI] = 1.07–1.22; severe—RR = 1.25, 95% CI = 1.18–1.32). For every 10% population increase in females, the risk of cognitive decline increased by 34% (RR = 1.34, 95% CI = 1.16–1.55). Self-reported PD showed a lower risk of cognitive disorders compared with clinical classification (cognitive decline—RR = 0.77, 95% CI = 0.65–0.91; dementia/AD—RR = 0.86, 95% CI = 0.77–0.96). Conclusion The prevalence and risk estimates of cognitive disorders in association with PD can be influenced by gender, the disease classification of PD and its severity. Further homologous evidence taking these study factors into consideration is needed to form robust conclusions.
Key Points A systematic review that explores study factors impacting the association of cognitive disorders in periodontal disease (PD). Study factors including PD severity, classification and sex influence prevalence and risk estimates of cognitive disorders. Further homologous evidence from observational studies is needed to form robust conclusions.
Supplementary Material
Data Availability All data generated or analysed during this study are included in this published article [and its supplementary information files]. Declaration of Sources of Funding H.L. was supported by the Frederick E. Hopper Scholarship at the University of Leeds. J.W. is supported by Barts Charity (MGU0504). The research is supported by the National Institute for Health Research (NIHR) infrastructure at Leeds. Declaration of Conflicts of Interest None.
Age Ageing. 2023 Feb 14; 52(2):afad015
Introduction In the year following discharge from an inpatient psychiatric admission, those with schizophrenia or bipolar disorder are twice as likely to die as the general population. Three-quarters of these excess deaths are due to medical disorders such as circulatory or respiratory disease [ 1 ]. Mortality rates are higher in older people with serious mental illness than in those without [ 2 ]. The inpatient admission therefore presents a ‘window of opportunity’ to reduce premature deaths [ 3 ]. The Royal College of Psychiatrists recommends that older adults with mental illness have access to specialist medical advice whilst on inpatient wards, and compares this with the importance of those in acute hospitals having access to liaison psychiatry [ 4 ]. There is evidence that consultant geriatricians reduce length of stay and costs in other specialties such as orthopaedics [ 5 ] and the emergency department [ 6 ]. More mental health services are integrating sessions with geriatricians or specialist General Practitioners (GPs), but there is limited evidence for the impact liaison geriatricians have on old age psychiatry wards. Goh et al. in 2016 [ 7 ] examined medical interventions in an older adults’ psychiatric ward in Australia following the introduction of a medical resident. They identified high numbers of medical co-morbidities in the inpatient population and found increased medical resident consultations after the introduction of the service, but no reduction in emergency transfers. This study aimed to examine the impact of having a geriatrician on our inpatient psychiatric wards across a range of outcomes. The outcomes measured were: emergency transfers, geriatrician consultations, other speciality consultations, length of stay, changes in non-psychiatric drugs and changes in discharge destination. Falls were measured by comparing falls data from the intervention and comparison periods.
Qualitatively, we interviewed medical staff to examine their views on this service development.
Methods Cambridge and Peterborough NHS Foundation Trust (CPFT) provides physical and mental healthcare to a population of just under a million people in the UK. There are two inpatient mental health services in the trust for older adults, for both functional and dementia patients over the age of 65. The total number of beds in CPFT North is 22 and in CPFT South 32. The geriatrician in the South of CPFT offered advice for 1 hour every fortnight. Due to COVID-19 pandemic restrictions, this was entirely through videoconferencing but had previously been in person. The geriatrician in the North offered a session (4 hours) to the ward, attending in person and offering support for audits and research. They saw patients individually, accompanied by a trainee, and then joined the ‘dementia’ ward round to discuss their findings with the team. Both geriatricians were happy to be contacted if support was needed between sessions. We performed a retrospective cohort service evaluation using data accessed from electronic health records. We compared a period of 6 months prior to and after the introduction of the liaison geriatrician service (informed by the methodology of Goh et al. 2016). Details of how the study periods were chosen are in Supplementary material S1 . For each patient, the following outcome metrics were extracted from the relevant section of their record: patient names, admission and discharge dates, unique identifiers, basic demographic information, diagnosis associated with admission, admission and discharge medications, co-morbidities on admission and discharge destinations. Length of stay (LOS) was calculated. Using search terms, emergency transfers, other speciality consultations and geriatrician consultations were identified. To capture the impact of the intervention on polypharmacy, the total number of non-psychiatric drug changes between admission and discharge was recorded, as both stopping and starting of medications may have been appropriate. 
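The two derived metrics described here, length of stay and total non-psychiatric drug changes, could be computed from the extracted fields roughly as follows (a minimal Python sketch; the dates and drug names are hypothetical, not taken from the records):

```python
from datetime import date

def length_of_stay(admission: date, discharge: date) -> int:
    """Length of stay (LOS) in whole days between admission and discharge."""
    return (discharge - admission).days

def drug_changes(admission_meds: set, discharge_meds: set) -> int:
    """Total non-psychiatric drug changes: drugs stopped plus drugs started,
    counted together because both may have been clinically appropriate."""
    stopped = admission_meds - discharge_meds
    started = discharge_meds - admission_meds
    return len(stopped) + len(started)

# Hypothetical patient record
los = length_of_stay(date(2019, 1, 10), date(2019, 3, 3))
changes = drug_changes({"ramipril", "aspirin", "simvastatin"},
                       {"ramipril", "apixaban"})
```

Counting stops and starts symmetrically matches the rationale given above, where deprescribing is as valid an intervention as prescribing.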
Each researcher extracting data completed a pilot group of cases that were cross-checked by other researchers to ensure reliability. The Charlson co-morbidity index (CCI) was calculated retrospectively based on the extracted data, as measures like frailty were not routinely collected on admission [ 8 ], and was used as a surrogate to test whether any differences between groups were due to other factors. The CCI, used previously in psychiatric patients [ 9 ], was chosen to capture the impact of co-morbidities. The total number of monthly falls was extracted from incident reports from the periods under review. Patient-reported outcome measures comprised the question ‘Overall, how was your experience of our service?’ from the NHS Friends and Family Test ( Supplementary material S7 ), performed as a survey at the end of each patient's stay. Finally, using the Personal Social Services Research Unit (PSSRU 2019) costs [ 10 ], we estimated the cost of the service and inpatient bed days between the comparator and intervention periods ( Supplementary material S10 ). Semi-structured interview and qualitative analysis We used purposive sampling to identify participants with knowledge of the intervention, covering clinicians across the range of specialities and levels of seniority affected. We invited all consultants and trainees (all levels, including general practice registrars on psychiatry rotations) practising on the old age wards in June 2020 or the 2 years prior to participate via email invitations. A total of 18 doctors agreed to participate from both services (out of 22, response rate 82%). The interviews were primarily conducted by paired interviewers from June 2020 to March 2021. Included were 18 qualified doctors: 3 consultant psychiatrists, 2 consultant geriatricians, 3 senior trainees, 6 core trainees and 4 general practice vocational training scheme trainees. Details of the interview questions are included in Supplementary material S2 . 
Some trainees had been on the ward both before and after the geriatricians commenced working there. Answers were recorded in writing by the non-interviewing researcher; interviews were not audio-recorded, to encourage free expression. Data were analysed using a framework approach, and two authors (J.R. and P.S.) collaboratively identified themes. The transcripts were reviewed to index phrases to themes, and the number of indexed phrases per theme was recorded. The four most frequently reported themes are reported per question. Ethical considerations Approval was gained from the Research and Development department at CPFT, who agreed this was a ‘service evaluation’ in accordance with the Health Research Authority Guidance and therefore did not require ethics approval. Statistical analysis Demographic, primary and secondary outcomes were compared using unpaired t-tests for normally distributed continuous variables and Mann–Whitney U tests for non-normal distributions. Proportions were compared using χ² tests as per a prespecified analysis plan (using IBM SPSS statistics [ 11 ] and JASP [ 12 ]). Further post hoc exploratory analysis (incidence rate ratios (IRRs) with R package fmsb [ 13 ], Poisson regression in SPSS) was performed to interrogate results for robustness.
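The prespecified and post hoc analyses described here can be sketched in Python with illustrative numbers (the evaluation itself used SPSS, JASP and the R package fmsb; all data and counts below are hypothetical):

```python
import math

import numpy as np
from scipy import stats

# Illustrative length-of-stay samples (days); not the study's data
comparator = np.array([90, 78, 120, 65, 150, 80, 95])
intervention = np.array([52, 60, 45, 70, 55, 48, 66])

# Non-normally distributed continuous outcome: Mann-Whitney U test
u, p_u = stats.mannwhitneyu(comparator, intervention, alternative="two-sided")

# Proportions (e.g. discharge destination): chi-squared test on a 2x2 table
table = np.array([[40, 62],   # comparator: home vs other
                  [55, 65]])  # intervention: home vs other
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Post hoc incidence rate ratio (exposure = liaison geriatrician period)
# with a Wald 95% CI on the log scale; event counts and bed-days are made up
def irr_with_ci(a, t1, b, t0, z=1.96):
    irr = (a / t1) / (b / t0)
    se = math.sqrt(1 / a + 1 / b)
    return irr, math.exp(math.log(irr) - z * se), math.exp(math.log(irr) + z * se)

irr, irr_lo, irr_hi = irr_with_ci(30, 6000, 40, 7000)  # 30 vs 40 transfers
```

The interval uses the standard large-sample approximation exp(ln IRR ± z·√(1/a + 1/b)).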
Results Retrospective cohort evaluation There were 222 admissions during the study period (102 prior to the intervention and 120 after). Continuous variables (age, length of stay), frequency counts (number of emergency transfers, geriatrician consultations, other speciality consultations, non-psychiatric drug changes) and CCI were non-normally distributed ( Supplementary material S3 ). There was no difference in age, sex, CCI or diagnostic group between the comparator and intervention periods ( Table 1 ). Reasons for a geriatrician consultation were diverse but predominantly cardiovascular, infections and electrolyte disturbances. The main reason for emergency transfer was falls, followed by suspected infections ( Supplementary materials S4 and S5 ). There was no significant difference in the primary outcome of emergency transfers (U = 5,856, P = 0.5; this remained true when transfers were categorised as avoidable, unavoidable or unclear, see Supplementary material S6 ). There was a significant increase in geriatrician consultations (U = 7,224, P = 0.003) and a decrease in specialty consultations (U = 4,465.5, P < 0.001), with small effect sizes. There was no significant difference in patient-reported outcomes ( Supplementary material S7 ) or non-psychiatric drug changes ( Table 2 ). Length of stay was significantly shorter in the intervention group compared with the comparator group ( Figure 1 ; median 78 vs 52 days, U = 4,664.5, P = 0.002), with a small effect size. Because a number of outliers were observed in the comparator group, this analysis was run again after removing all participants with a length of stay greater than twice the standard deviation (7 from the comparator and 1 from the intervention group), and it remained significant (U = 4,567.5, P = 0.016, effect size −0.192, 95% CI −0.337, −0.038). 
Exploring the reasons for length of stay with linear regression showed that dementia diagnosis and number of emergency transfers were the main predictors, and geriatricians were more likely to see those with longer lengths of stay ( Supplementary material S8 ). Those managed under the Mental Health Act or Mental Capacity Act had a prolonged length of stay, but this was due to increased numbers with a dementia diagnosis. Admission under the Mental Health Act or management under the Mental Capacity Act had no impact on the number of emergency transfers ( Supplementary material S8 ). To ensure effects were not driven by differences in length of stay, we calculated IRRs with the liaison geriatrician as the exposure and emergency transfers, geriatrician consultations and specialty consultations as the outcomes ( Table 3 ; R script in Supplementary material S9 ). These results showed the same pattern: no significant difference in emergency transfers (estimate = 0.89, 95% CI 0.62–1.23, P = 0.522), but an increased likelihood of geriatrician consultations (estimate = 3.13, 95% CI 1.94–5.03, P < 0.001) and a decreased likelihood of speciality consultations (estimate = 0.63, 95% CI 0.45–0.88, P < 0.001). The number of falls per month was reduced in the intervention period compared with the comparator period (t-test, t = 5.507, P = 0.006), but there was no reduction in falls leading to admission (t = −1.550, P = 0.123). There was also no significant difference in overall patient experience of the ward (U = 4,973, P = 0.645, see Supplementary material S7 ). In our economic analysis, the cost of the geriatrician service was estimated to be £126.45 per admission, while the cost of the average difference in length of stay was £14,417.66 per admission ( Supplementary material S10 ). Semi-structured interview The interviews highlighted that the main challenges of managing physical healthcare on an inpatient psychiatry ward were complexity and co-morbidity, polypharmacy and a lack of senior medical input. 
Whilst most doctors were fairly confident managing medical issues, psychiatric trainees felt less confident the further away they were from medical training. The primary role of a geriatrician was seen as discussing complex non-urgent medical issues, and they added an independent expert medical opinion, both on physical health and on shared areas like delirium and dementia. Geriatrician input was perceived to reduce unnecessary referrals and provide reassurance. Reducing acute admissions was commonly mentioned, although this contrasts with the quantitative data above. Psychiatrists and geriatricians each reported increased learning and appreciation of the other speciality. All of those surveyed would recommend the liaison geriatrician service, with all respondents either ‘reasonably’ or ‘very’ satisfied on a 5-point Likert scale. Full details of the interview questions and top themes are reported in the supplementary material ( Supplementary material S2 ).
Discussion This is the most detailed evaluation of the impact of a liaison geriatrician on older people's mental health services in the UK. There was no significant reduction in emergency transfers, but a significant reduction in length of stay was found. There were significantly increased numbers of geriatrician consultations and a significant reduction in speciality consultations. We did not find any difference in non-psychiatric drug prescriptions or any change in discharge destination. There was a reduction in falls, but not in falls leading to emergency transfer. There was no difference in overall patient-reported outcomes. Costs for LOS were lower in the intervention group. Our interpretation is that a liaison geriatrician led to more holistic continuity of care from a single expert source, rather than the more ad hoc contact of acute services. This may have contributed to a reduction in length of stay, but did not impact emergency transfers, drug prescriptions or discharge destination. The lack of effect may have been because the primary reasons for transfer were falls and infections (acute events), whilst the geriatrician focussed on long-term care. Transfers are the culmination of a complex series of events and may be either necessary or unavoidable. Other studies have found increased healthcare contact leads to more admissions [ 14 ]. Our study was underpowered to detect a small effect on emergency transfers. Falls were often the presenting complaint leading to admission, but these likely had multifactorial causes and were not the final discharge diagnosis. LOS was shown to be lower in the intervention group, although we are not sure of the causal relationship in our study. There are likely to be numerous unmeasured factors, such as the impacts and interaction of physical and mental health, social care constraints, and differences in how care is given in different teams, that may have changed between the comparator and intervention periods. 
Input from geriatricians was greatly appreciated by psychiatrists and was mainly around chronic health issues. The trainees liked the continuity of care offered, the pragmatic nature of the advice given and the multidisciplinary opinions provided by the same geriatrician. This survey led to the geriatrician in the south offering a weekly consultation rather than a fortnightly one. This evaluation's main strengths were its real-world sample and the pragmatic outcome measures utilised. The excellent response rate to the survey suggested enthusiasm among doctors for the role of geriatricians on the ward. Limitations include that patients were not randomised and that service evaluations may have limited generalisability to other services. Whilst we postulate that a liaison geriatrician increased geriatrician consultations, reduced other specialty consultations and contributed to reducing length of stay, we cannot claim to have proved causality. There were significant limitations to a number of outcome measures, as we were reliant on information collected at the time. This led to compromises such as the choice of CCI rather than a standardised frailty scale, the overall number of medication changes, and patients' overall satisfaction with the service rather than specifically with how their physical health was managed. Future prospective studies are needed to fill these gaps. We note that interviews and interpretation were performed by clinicians who worked alongside the service and thus were implicitly supportive of the intervention, which might have influenced answers or the interpretation of statements. The study focussed on the opinions of doctors but did not explore the views of patients, carers, nurses and allied health professionals, and this needs to be addressed in future studies.
Conclusions This evaluation shows that integrating geriatricians on older people's mental health wards is an acceptable, cost-effective and desirable intervention. Patients received more consultations with a regular geriatrician and fewer with other medical specialities. The intervention period saw a reduction in mean length of stay and in overall patient falls; however, unmeasured confounders will have contributed to these results. The value of geriatrician input lies not in preventing acute admissions (such as from falls and infections) but in managing co-morbidities and training other doctors.
Abstract Background Although liaison services in acute hospitals are now the norm, the reverse is not usually available for patients in mental health trusts. Following the introduction of support from geriatricians to older people's mental health inpatient wards, we wanted to see if this intervention was effective and acceptable to clinicians. Methods We performed a retrospective cohort service evaluation of the impact of a liaison geriatrician, using routinely collected data, and assessed acceptability among medical staff by semi-structured interview. Intervention Our service introduced regular sessions from consultant community geriatricians across older adults' psychiatric wards, including a mixture of videoconference and face-to-face input. Results There was no significant decrease in emergency transfers, but there was a significant reduction in length of stay, with an associated cost benefit for the service, after the introduction of a liaison geriatrician. There was a significant increase in geriatrician consultations and a decrease in consultations with other specialists. There was no change in discharge prescriptions or destination. There was a significant reduction in falls in the intervention arm but not in falls leading to emergency hospital admissions. Geriatricians gave confidence to psychiatrists of all grades to treat physical healthcare issues. Conclusions A liaison geriatrician service may be a component in reducing length of stay (although there are many others) and improving continuity of care, although it confers no impact on emergency transfers. The intervention was highly acceptable to clinicians.
Key Points There is an unmet need for providing physical healthcare to older adults who are psychiatric inpatients. We performed a retrospective cohort evaluation on the impact of a liaison geriatrician on psychiatric wards. A liaison geriatrician led to a reduction in length of stay and an improved continuity of care. There was no impact on emergency transfers to acute hospital. The intervention brings benefits to clinicians including confidence in managing complex cases. Supplementary Material
Declaration of Conflicts of Interest The two consultant geriatricians interviewed in the study were involved in the service evaluation and the preparation of the manuscript. Declaration of Sources of Funding None. Data Availability The data underlying this article cannot be shared publicly due to the restrictions on routinely collected electronic healthcare records. Deidentified data may be shared upon reasonable request to the authors; however, this will be dependent on meeting the requirements of CPFT governance.
CC BY
no
2024-01-16 23:47:16
Age Ageing. 2023 Sep 22; 52(9):afad184
oa_package/c0/c7/PMC10789238.tar.gz
PMC10789239
38224487
Introduction Methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-sensitive S. aureus (MSSA) are major pathogens associated with serious infections in both hospitals and communities worldwide ( Tong et al., 2015 ; Arias et al., 2018 ; Junie et al., 2018 ; Lakhundi and Zhang, 2018 ). In Latin American countries such as Brazil, Bolivia, and Chile, more than 50% of S. aureus isolates are already categorized as MRSA and can be considered resistant to most β-lactams ( Lee et al., 2018 ). Exotoxins produced by S. aureus can cause diarrhea (whether or not related to the use of antibiotics), gastroenteritis, or food poisoning ( Ortega et al., 2010 ; Pinchuk et al., 2010 ). This pathogen is also connected to severe illnesses such as Toxic Shock Syndrome (TSS) and Scalded Skin Syndrome (SSS), which are caused by the production of toxins regarded as superantigens ( Ladhani et al., 1999 ; Dinges et al., 2000 ). The group of superantigens includes the staphylococcal enterotoxins (SE), toxic shock syndrome toxin (TSST), and exfoliative toxins (ET) ( Portillo et al., 2013 ). For epidemiological purposes, it is crucial to identify the distribution of clinical isolates, and genotyping has emerged as a key tool in medical investigations to identify strain origin, clonal relatedness, and outbreak epidemiology ( Olive and Bean, 1999 ; Arias et al., 2018 ). Genotyping methods often involve applying different molecular techniques based on PCR, sequencing, or genomic macrorestriction ( Rodríguez-Noriega et al., 2010 ). By combining these methods, strains can be classified into different lineages, or clones. Some clones of S. aureus are known as epidemic clones, meaning they are descendants of the same ancestor and are widely distributed among different countries. 
Some of these lineages, such as the Brazilian Epidemic clone (BEC), the Pediatric clone (PC), and the Cordobes/Chilean clone, have already been identified as native to Latin America and are useful for describing the various genetic backgrounds of S. aureus ( Zurita et al., 2016 ). Since few data have been published in Brazil, particularly in the Northeast region, our goal was to investigate the exotoxin gene profile of MRSA and MSSA isolates from patients admitted to hospitals in Recife city, Pernambuco state, Brazil. Additionally, we aimed to evaluate the ability of coagulase gene typing ( coa -PCR) and ribosomal 16S-23S internal transcribed spacer typing (ITS-PCR) to distinguish isolates and clones from clinical settings, in comparison to techniques more frequently used for molecular epidemiology studies of S. aureus , such as pulsed-field gel electrophoresis (PFGE), multilocus sequence typing (MLST), spa typing, and Staphylococcal Cassette Chromosome mec (SCC mec ) genotyping.
Material and Methods Ethics statement The project was approved by the Oswaldo Cruz Foundation Health Research Ethics Committee, Aggeu Magalhães Institute, IAM/Fiocruz, Brazil (CEP: 0024.0.095.000-07). Bacterial isolates and DNA extraction In this investigation, 89 S. aureus isolates were obtained on spontaneous demand from hospitals in Recife that treat patients from different parts of Pernambuco ( Table S1 ). Prior to the present study, these isolates were typed using PFGE, MLST, spa , phenotypic identification of resistance to cefoxitin, and SCC mec genotyping ( Andrade-Figueiredo and Leal-Balbino, 2016 ). Genomic DNA was extracted using the automated NucliSens-easyMAG (bioMérieux, Durham, NC), and PCR for toxigenic genes and molecular typing methods, designed for the present work, were performed on a GeneAmp PCR System 9700 (Applied Biosystems, Foster City, CA). Detection of staphylococcal toxin genes Using the primers given in Table 1 , we examined the toxic shock syndrome toxin gene ( tst ), two exfoliative genes ( eta and etb ), and 14 staphylococcal enterotoxin (se) genes. Two multiplex PCRs were performed, one for the sea-see genes and another for the tst , eta and etb genes. Uniplex PCRs were designed for the seg to seo genes. Uniplex PCR assays were prepared to a final volume of 50 μL, containing 40 ng chromosomal DNA, 0.8 mM of deoxynucleotide triphosphates, 1X PCR buffer, 1.5 mM MgCl 2 , 1 U Taq DNA polymerase (Promega, Madison, WI, USA) and 20 μM of each oligonucleotide. Amplification conditions were as follows: 95 °C for 2 min, then 30 cycles of 95 °C for 1 min, 55 °C for 1 min and 72 °C for 1 min. Products were separated by electrophoresis through a 1.5% agarose gel. Multiplex assays were prepared to a final volume of 50 μL, containing 40 ng chromosomal DNA, 1 mM of deoxynucleotide triphosphates, 1X PCR buffer, 3 mM MgCl 2 , 1.5 U Taq DNA polymerase (Promega, Madison, WI, USA) and 20 μM of each oligonucleotide. 
Amplification consisted of an initial denaturation at 95 °C for 2 min, then 30 cycles of 95 °C for 1 min, 60 °C for 1 min and 72 °C for 2 min. Products were separated by electrophoresis through a 1.5% agarose gel. The following S. aureus strains were used as positive controls: FRI722 and FRIS6 (for the sea and seb genes, respectively), FRI361 ( sec , sed , seg , sei and sej genes), and FRIMN8 ( tst gene), provided by the Food Research Institute (Madison, Wisconsin, USA); 1SB and 3SB ( sek, sel and sem ), MRSA41 ( sen and seo ) and CR6 ( seh ) (Aggeu Magalhães Institute laboratory collection, Recife, PE, Brazil). Toxin profiling was done by converting the toxin gene data into a binary matrix; for each isolate, the concatenation of these data yielded a binary profile resembling a barcode sequence. Table S2 has a detailed listing of these data. coa -PCR Coagulase gene typing was performed as previously described ( Aarestrup et al., 1995 ). PCR assays were prepared to a final volume of 50 μL, containing 40 ng of chromosomal DNA, 0.8 mM of deoxynucleotide triphosphates, 1X PCR buffer, 2.5 mM MgCl 2 , 1 U Taq DNA polymerase (Promega, Madison, WI, USA), and 20 μM of each oligonucleotide. Amplification consisted of an initial denaturation at 95 °C for 2 min, then 30 cycles of 95 °C for 1 min, annealing at 55 °C for 1 min and extension at 72 °C for 1 min. Products were separated by electrophoresis through a 1.5% agarose gel. S. aureus strain ATCC 25923 was used as a positive control. ITS-PCR Amplification of the 16S-23S intergenic spacer region (ITS-PCR) employed a single pair of primers, as previously described ( Jensen et al., 1993 ). PCR assays were prepared to a final volume of 50 μL, containing 40 ng chromosomal DNA, 1 mM of deoxynucleotide triphosphates, 1X PCR buffer, 3 mM MgCl 2 , 1.5 U Taq DNA polymerase (Promega, Madison, WI, USA) and 20 μM of each oligonucleotide. 
Amplification consisted of 95 °C for 5 min, then 30 cycles of 95 °C for 1 min, 55 °C for 1 min and 72 °C for 2 min. Electrophoresis was performed using a 2% agarose gel. For quality assurance, S. aureus strains ATCC 33591 and ATCC 25923 were employed. Sequencing A random sample of PCR products from the toxigenic genes, ITS-PCR and coa -PCR, purified with the PureLink PCR purification kit (Invitrogen, Carlsbad, CA, USA), was chosen for DNA sequencing using the BigDye Terminator Kit v3.1 and an ABI 3730xl DNA analyzer (Applied Biosystems, Foster City, CA, USA), in order to confirm the specificity of the amplified genes. The nucleotide sequences obtained were compared with the S. aureus sequence database in GenBank through BLAST (http://www.ncbi.nlm.nih.gov). Discriminatory analysis To compare the discriminatory index of the typing methods, we used the formula described by Hunter and Gaston (1988 ): DI = 1 - [1 / N(N - 1)] Σ_{j=1}^{s} nj(nj - 1), where N is the total number of isolates in the population, s is the total number of different types, and nj is the number of isolates belonging to the j-th type. This formula gives the probability that two unrelated strains sampled from the population will be placed into different types. Statistical analysis We investigated associations between the presence of toxins and the genotypes identified by coagulase/ITS-PCR, as well as between the presence of toxins and resistance status (MRSA or MSSA). We used Pearson's correlation test in the Jamovi software, and only results with a p value of 0.05 or lower were considered statistically significant ( Jamovi, 2023 ).
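The Hunter and Gaston formula translates directly into code; a minimal Python sketch (the example type counts are illustrative, not the study's):

```python
def discriminatory_index(type_counts):
    """Hunter-Gaston discriminatory index:
    DI = 1 - [1 / (N(N - 1))] * sum(nj * (nj - 1)),
    the probability that two randomly chosen unrelated isolates
    from the population fall into different types."""
    n = sum(type_counts)
    return 1 - sum(nj * (nj - 1) for nj in type_counts) / (n * (n - 1))

# Example: 10 isolates split into three types of sizes 5, 3 and 2
di = discriminatory_index([5, 3, 2])
```

With all isolates in a single type the index is 0; with every isolate in its own type it is 1.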
Results Toxigenic profiles of Staphylococcus aureus isolates None of the 89 isolates carried sed , see, or etb , but all isolates were positive for at least three of the investigated genes. The most frequent toxin gene was seg , present in 88/89 (99%) isolates, followed by sem 81/89 (91%), seo 80/89 (90%), sen 78/89 (88%) and sei 74/89 (83%), all members of the enterotoxin gene cluster ( egc ). Six isolates (Sa08, Sa32, Sa80, Sa82, Sa87, and Sa88) were positive for the tst gene, and only one (Sa17) harbored eta . In one isolate (Sa87), from an ICU patient in Hospital 2, 12 of the 17 genes under investigation were found. Statistically, however, the only significant correlation was between the MSSA group and the classical enterotoxins: 19 MSSA isolates carried a total of 21 of these genes, while MRSA isolates carried only one ( sea , in isolate Sa86 - toxin profile 33) ( Table 2 ). All strains, regardless of susceptibility, carried at least two egc -related genes. A complete set of the egc , consisting of seg, sei, sem, sen , and seo , was found in 63/89 (71%), of which 22 are MRSA and 41 MSSA. Isolates were subdivided into 45 genetic profiles based on their toxigenic content, and these genotypes were called toxin profiles ( Table S2 ). The 31 MRSA isolates exhibited 16 profiles, while the 58 MSSA isolates were associated with 39, with both groups sharing 10 of these toxin profiles. Toxin profile 22 - representing a complete egc in addition to genes sej , sek, and sel - was the most prevalent genotype, occurring in 10 isolates (10/89 - 11%; 5 MRSA and 5 MSSA). For both groups, toxin profile 22 was also the most frequent, followed by profile 17, which had four representatives in each group and corresponds to a complete egc in addition to gene sek . 
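The binary "barcode" toxin profiling used to define these genotypes can be sketched as follows (the gene panel order and the isolate's gene content here are illustrative, a subset of the study's 17-gene panel):

```python
# Fixed panel order so every isolate's profile is directly comparable
GENE_PANEL = ["sea", "seb", "sec", "seg", "sei", "sem", "sen", "seo",
              "tst", "eta"]

def toxin_profile(genes_present):
    """Concatenate presence (1) / absence (0) over the panel
    into a barcode-like binary string."""
    return "".join("1" if g in genes_present else "0" for g in GENE_PANEL)

# Hypothetical isolate carrying a complete egc (seg, sei, sem, sen, seo)
profile = toxin_profile({"seg", "sei", "sem", "sen", "seo"})  # "0001111100"
```

Identical strings then correspond to the same toxin profile, so isolates can be grouped by exact string match.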
PCR-based typing methods Coagulase genotyping analysis revealed five coagulotypes, according to the size of the amplified segment, denominated C1 = ~730 bp (4 repeats), C2 = ~810 bp (5 repeats), C3 = ~890 bp (6 repeats), C4 = ~970 bp (7 repeats) and C5 = ~650 bp (3 repeats). S. aureus ATCC 25923 amplified a fragment of ~810 bp ( Figure 1 ). ITS-PCR reactions yielded 3 to 8 fragments with approximate sizes of 380 to 650 bp. Based on the amplification patterns, isolates were classified into 15 types, designated in this study as R1-R15 ( Figure 2 ). The combination of coa -PCR and ITS-PCR (C/R) analysis revealed 22 genotypes ( Table 3 ). The discriminatory index obtained by this combination was 0.84, similar to the indices obtained by MLST (0.86) and spa -typing (0.89) but lower than that obtained by PFGE (0.99). No statistically significant correlations were found between the 22 genotypes and the toxins under investigation. Twenty-six isolates (26/89 - 29%) exhibited the genotype C/R-9. Of these, 21 (21/26 - 80.8%) exhibited multilocus sequence type (ST) 5, among which 10 MRSA isolates are related to the Pediatric clone (PC, also called USA800), harboring the following molecular characteristics: ST5, spa types t002 (5 isolates), t267, t6787, spa non-typeable (3 isolates), SCC mec IV, and grouped into PFGE clusters C or D. Four MSSA isolates within ST5 (t002/t10548/t1277/t214) and 3 MSSA isolates ST1635/t002 are also related to PC and were grouped together into PFGE cluster F and within genotype C/R-9. One additional MRSA isolate (Sa01) was related to USA800/PC, even though with another ST (the newly described ST2381); it was grouped into PFGE cluster C and also classified as genotype C/R-9. 
One MRSA isolate (Sa81, ST105/t002-SCC mec II) and 1 MSSA isolate (Sa03, ST5/t2164) were individually related to the New York/Japan clone, or USA100 (ST5-SCC mec II), in a PFGE dendrogram previously obtained ( Andrade-Figueiredo and Leal-Balbino, 2016 ). The STs within genotype C/R-9 (ST105, ST1635, and ST2381) belong to the same clonal complex as the ST5 isolates (CC5), since they have six matching loci. Genotype C/R-16 consists of only one isolate, a MSSA with ST5 in PFGE cluster F, and is therefore also related to PC/USA800 ( Table 3 ). Genotypes C/R-9 and C/R-16 differ only in their ITS-PCR profiles. While R1 (the ITS-PCR profile for genotype C/R-9) has 5 bands, R9 (the ITS-PCR profile for genotype C/R-16) has 4, and 3 of these bands migrate at exactly the same height. We can therefore suggest that these two genotypes are related. The genotype C/R-4 was observed in 23 isolates (23/89 - 26%), of which 17 (17/23 - 74%) are MRSA ST239/t037-SCC mec III, related to the Brazilian epidemic clone (BEC); 13 of these isolates were grouped into PFGE clusters A (7 isolates) or B (6 isolates). Two MSSA isolates with the genotype C/R-4 were also ST239/t037. In addition, 3 MSSA isolates exhibiting ST333 (t084 [2 isolates], spa non-typeable) and 1 MSSA isolate, ST15 and spa non-typeable, were genotyped C/R-4 and grouped together into PFGE cluster G ( Table 3 ). The genotype C/R-11 was observed in 6 MSSA isolates (6/89 - 7%), of which 5 are ST30 (t318 [4 isolates] and t021 [1]) and 1 is ST285/t021; these are related to the Oceania Southwest Pacific clone (OSPC, also called USA1100 - ST30-SCC mec IV) and were grouped in PFGE cluster H. Similarly, the genotype C/R-12 was observed in 4 MSSA isolates (4/89 - 4%), of which 3 are ST30 (t433 [2 isolates] and t1001 [1]) and 1 (Sa4) is ST and PFGE non-typeable. These isolates are also related to the OSPC clone and were grouped into cluster H ( Table 3 ). 
ST30 and ST285 belong to the same clonal complex (CC30), as they have four or more similar MLST loci. The genotype C/R-3 was observed in 6 isolates (6/89 - 7%), harboring STs ST239, ST25, ST30, ST333, and ST71. Two of these isolates were related to international clones: Sa73, classified as ST239/t037-SCC mec III is related to the BEC clone (PFGE cluster B); and Sa65 (ST30) is related to OSPC (cluster H). All 5 MSSA isolates (5/89 - 6%) within ST1 exhibited the genotype C/R-6, of which 3 (ST1/t127) are related to the USA400 clone (ST1-SCC mec IV) and grouped into PFGE cluster E. Three isolates (3/89 - 3%) were genotype C/R-20, of which two (Sa72 [ST669/t359] and Sa49 [ST97/t267]) were also related to the USA400 clone and grouped into cluster E. The genotype C/R-10 was observed in two (2/89 - 2%) MSSA isolates that exhibit a new ST (ST2382) and spa type t189.
Discussion Staphylococcus aureus is responsible for a broad spectrum of diseases in humans due to its ability to express several virulence factors, including enterotoxins, toxic shock syndrome toxin, and exfoliative toxins. Among these, SEs are the major cause of staphylococcal food poisoning ( Argudin et al., 2010 ). Ferry et al. (2005) described that S. aureus strains that cause sepsis, with or without shock, harbor at least one superantigen-encoding gene. Several diseases, including infectious endocarditis and food poisoning, have already been linked to egc ( Johler et al., 2015 ; Stach et al., 2016 ). Different rates of toxigenic genes have been found in S. aureus clinical isolates from multiple countries, according to a number of investigations ( Song et al., 2016 ; De Carvalho et al., 2019 ). In our study, an elevated frequency of S. aureus clinical isolates carrying toxigenic genes was observed, especially isolates with a complete egc . All isolates related to the USA800/PC clone contained a complete egc , as well as other toxin genes, except for two isolates (Sa1 and Sa76), which exhibited only part of this cluster. According to Monecke et al. (2011) , CC5 isolates carry the enterotoxin gene cluster, although partial deletions have been observed. Few studies have examined the frequency of toxigenic genes in S. aureus in Brazil, particularly in the North and Northeast regions and especially in clinical specimens ( Vasconcelos et al., 2011 ). In one study, 14% of MRSA strains from a university general hospital in Recife during 2002-2003 were related to the USA800 clone and harbored egc . Additionally, approximately 70% of MRSA strains were related to BEC, and none of them had toxigenic genes ( De Miranda et al., 2007 ). Only the classic staphylococcal enterotoxins ( sea-see ) were statistically associated with the resistance status of the isolates, and MSSA had the highest frequency of these superantigens. 
According to earlier research, MRSA makes up no more than 10% of colonizing strains, which suggests that MSSA is very common in colonization and/or community-acquired infections ( Mehraj et al., 2016 ). The predominance of these genes in MSSA isolates raises concerns for community-associated infections, because the classical enterotoxins are strongly associated with food poisoning ( seb ), lethal sepsis ( sec ), and infective endocarditis ( sec ) ( Salgado-Pabón et al., 2013 ; Ahmad-Mansour et al., 2021 ). The eta gene was found in only one isolate, and the low rates of isolates carrying the eta and etb genes responsible for SSS are in accordance with other studies, which have also demonstrated the low frequency of these genes in S. aureus isolates ( Becker et al., 1998 , 2003 ). The tst gene, responsible for TSS, was observed in six isolates. Two of these were related to USA800 (MRSA Sa82 and MSSA Sa87), in agreement with the results of Durand et al. (2006) and Takano et al. (2008) , and one isolate was related to USA600 (Berlin clone, BC, ST45-SCC mec IV), in accordance with the observations of Tenover et al. (2008) and King et al. (2016) . Monecke et al. (2011) described that a similar CC45-MRSA-SCC mec IV strain, isolated in Australia, carries sec , sel , tst , and arginine catabolic mobile elements (ACME). Additionally, Portugal, Australia (WA MRSA-4), and Germany have all reported occasional cases of ST45-MRSA-SCC mec V; most such isolates harbor tst , sek , and seq ( Aires-de-Sousa et al., 2008 ; Monecke et al., 2011 ). The isolates from our study were from different clinical specimens. It is interesting to note that one isolate (MSSA Sa08, tst positive) was obtained from vaginal secretion. TSS was initially associated with the use of superabsorbent tampons in women with S. 
aureus tst producers in vaginal secretions; however, cases of non-menstrual TSS in the community and in hospitals later became prevalent ( Fitzgerald et al., 2001 ; Durand et al., 2006 ). In the current study, we found significant genetic diversity among S. aureus isolates, as well as a high frequency of toxigenic genes. Molecular typing techniques can be used to understand this diversity and how these strains are related. In circumstances where speed is required to identify a local outbreak and design containment plans, PCR-based genotyping approaches, such as coagulase gene typing and ITS-PCR, are fast and offer significant discriminatory power. According to Hunter and Gaston (1988) , a discriminatory index (DI) greater than 0.90 can be interpreted as reliable and is thus desirable. Although PFGE showed greater discriminatory power (DI 0.99), the combination of PCR-based typing methods, even with a DI of 0.84, proved to be a useful and inexpensive procedure for conducting epidemiological surveys of S. aureus on a local or regional scale. The coa -PCR analysis identified five different amplicons. The 3′ end coding region of the coa gene contains a series of repeating 81 bp DNA sequences that differ in the number of tandem repeats. Since this region exhibits polymorphism, it is useful for typing ( Aarestrup et al., 1995 ). We chose ITS-PCR as one of our genotyping methods in this study because of its practicality, cost-effectiveness, and alignment with our specific research objectives ( Liu et al., 2008 ). This technique has several advantages, such as simplicity, speed, and the ability to discern between closely related strains based on variations within the ITS region ( Boom et al., 1990 ; Gray et al., 2014 ). 
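The Hunter-Gaston discriminatory index cited above is Simpson's index of diversity applied to a typing partition. A minimal sketch of the calculation; the type sizes below are illustrative only, loosely following the genotype counts quoted in the Results:

```python
def discriminatory_index(group_sizes):
    """Hunter-Gaston discriminatory index (Simpson's index of diversity):
    DI = 1 - [1 / (N(N-1))] * sum(n_j * (n_j - 1)),
    where n_j is the number of isolates assigned to type j and N the total."""
    n_total = sum(group_sizes)
    if n_total < 2:
        raise ValueError("need at least two isolates")
    return 1 - sum(n * (n - 1) for n in group_sizes) / (n_total * (n_total - 1))

# Illustrative partition: 89 isolates over 22 types (9 larger types plus
# 13 singletons). The more evenly isolates spread, the closer DI gets to 1.
sizes = [26, 23, 6, 6, 5, 4, 3, 2, 1] + [1] * 13
print(round(discriminatory_index(sizes), 2))  # 0.84
```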
However, it is important to recognize that there are other genotyping techniques, such as ribotyping, that provide extensive information about ribosomal RNA sequences and rely on the sequencing or hybridization of ribosomal RNA segments ( Bouchet et al., 2008 ). ITS-PCR, on the other hand, focuses specifically on the amplification and analysis of a single genomic region, the ITS region of the ribosomal RNA operon. This targeted approach allows efficient screening of large collections of strains and can provide valuable information about the genetic diversity and lineage of strains based on variations within the ITS region ( Ryberg et al., 2011 ; Lian and Zhao, 2016 ). By employing ITS-PCR in conjunction with other molecular typing techniques, such as coagulase typing, PFGE, spa typing and SCC mec genotyping, we were able to obtain comprehensive information on the molecular epidemiology and clonal relationships of the S. aureus strains under investigation. In this study, C/R-9 was the most frequent genotype observed among isolates. All 26 isolates in C/R-9 were associated with clonal complex 5 (CC5), which groups the USA800/PC and USA100 clones. Thus, the results indicate that the C/R combination was able to distinguish the USA800/PC and USA100 clones from other strains. Only one isolate related to the USA800 clone (Sa28, ST5/newly described t10548, cluster F) exhibited a different genotype, C/R-16, but this genotype appears to be related to C/R-9. The BEC represents a multidrug-resistant lineage described in Brazil in 1992, linked to hospital-acquired infections (HA-MRSA), which was widely distributed in Brazil and later in other countries. However, during the first decade of the 2000s, there was an increase in “imported” clones, such as the Pediatric clone (PC), the New York/Japan clone, and other less common lineages ( Andrade et al., 2020 ). 
Since then, BEC prevalence appears to be decreasing, while infections by community-acquired (CA-MRSA) clones, such as PC and OSPC, have been steadily increasing. Complete BEC substitution has already been reported in some hospitals ( Chamon et al., 2017 ; Monteiro et al., 2019 ), and CA-MRSA clones appear to be becoming more common across Brazil ( Romero and Cunha, 2021 ). Historically, there have been distinctions between the epidemiologies of HA-MRSA and CA-MRSA, with CA-MRSA typically linked to more virulent infections and to patients with few or no risk factors; however, these distinctions are becoming less clear today ( Lee et al., 2018 ). The genotype C/R-4 grouped 17 of 19 (89.5%) BEC and related clones, including all five BEC MRSA isolates from the ICU (hospital 1), and distinguished these from other epidemic/pandemic clones such as USA800/PC, USA100, USA1100, USA400, and USA600. Only two BEC-related isolates exhibited a distinct C/R pattern. The genotype C/R-4 also grouped 3 isolates within ST333 (t084 [2 isolates], spa non-typeable [1], cluster G) and 1 ST15 isolate ( spa non-typeable, cluster G), both STs belonging to CC15. All isolates within ST1 (cluster E, CC1) were grouped into genotype C/R-6. Coagulotype and ITS-PCR analyses were capable of distinguishing these isolates from others also clustered into PFGE cluster E and related to USA400 that exhibited ST669 and ST97, both STs from CC97. The genotypes C/R-11 and C/R-12 grouped all isolates related to clone USA1100, except for isolate Sa65 (C/R-3). We observed a more robust correlation between coa -PCR/ITS-PCR and PFGE/MLST patterns in MRSA isolates. The C/R association allowed us to observe the clonal spread of MRSA and MSSA within the main hospital analyzed (hospital 1); the patients from whom these isolates were obtained were dispersed throughout various hospital wings. Additionally, isolates from hospital 1 were closely related to all four isolates from hospital 2. 
No single currently available genotyping technique is considered optimal for epidemiological studies. Each situation is unique, so the benefits and drawbacks of each technique should be assessed both separately and in combination in order to choose the best methodology for the targets and objectives of each study. By combining coagulotype and ITS-PCR analysis, which showed a relationship with PFGE and MLST genotypes as well as a weaker correlation with spa typing, we found high genetic diversity among the isolates in our study and observed clonal spread of MRSA and MSSA in hospital settings. It is important to emphasize that this association between techniques can be practical, quick, and affordable for initial epidemiological investigations in hospitals or local outbreaks, making it an interesting strategy for countries and institutions with fewer resources. We emphasize the need for further studies for epidemiological surveillance of MRSA and MSSA, given the changing epidemiology of S. aureus and the growing threat this pathogen poses to hospital and community environments.
Associate Editor: Augusto Schrank Conflict of Interest: The authors declare that there is no conflict of interest that could be perceived as prejudicial to the impartiality of the reported research. Abstract Staphylococcus aureus is a frequent cause of infections worldwide. Methicillin-resistant S. aureus (MRSA) is one of the main causes of Gram-positive infections, and methicillin-susceptible strains (MSSA) primarily colonize and infect community hosts. Multiple virulence factors are involved, with toxins playing a significant role in several diseases. In this study, we assessed the prevalence of toxin genes in 89 S. aureus clinical isolates (31 MRSA and 58 MSSA). We evaluated the discriminatory power of the association of internal transcribed spacer-PCR (ITS-PCR) and 3’-end coa gene PCR ( coa -PCR) when compared with other more commonly used and costly techniques. The isolates showed a high level of genetic diversity, and toxin genes were found in all the isolates. While most toxin classes displayed no statistically significant correlations and were equally distributed among isolates regardless of their resistance status, the classic enterotoxins ( sea-see ) showed a positive correlation with MSSA isolates. The combination of coa -PCR with ITS-PCR showed a discriminatory index of 0.84, discriminating 22 genotypes in agreement with data previously determined by PFGE and MLST. This association between the two PCR-based methods suggests that they can be useful for an initial molecular epidemiological investigation of S. aureus in hospitals, providing significant information while requiring fewer resources.
Acknowledgements This study was supported by research support from IAM/Fiocruz and INOVA/Fiocruz (VPPCB-008-FIO-18-2-81-30). In addition, the study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Brazil - Finance Code 001. Internet Resources The following online material is available for this article:
Genet Mol Biol.; 46(4):e20220321
Introduction ZAK (leucine zipper and sterile alpha motif-containing kinase, also known as MAP3K20 , MRK or MLTK ) is a MAP kinase kinase kinase (MAP3K) independently identified by four groups in the 2000s ( 1–4 ). Since then, diverse roles have been assigned to the two isoforms encoded by the ZAK gene, ZAKα and ZAKβ . A potential role for ZAK in muscle function was initially suggested by its association with the Kyphoscoliosis peptidase (KY) protein complex ( 5 ). The Ky gene was first identified in the mouse following identification of a muscular dystrophy-causing mutation ( 6 ). Subsequent protein interaction analyses revealed a putative Z-disc protein network that includes KY, ZAK, IGFN1 (immunoglobulin-like and fibronectin type III domain containing 1) and others ( 5 , 7 ). The key roles of KY and ZAK in muscle function are now evident from the identification of disruptive mutations in both genes causing congenital myopathy in rare human pedigrees ( 8–11 ). In the case of ZAK , two homozygous frameshift mutations and a homozygous nonsense mutation were identified in three consanguineous families from different ethnic backgrounds ( 11 ). These mutations were the first example of a direct association between a MAP3K and skeletal muscle weakness. In all affected individuals, fibre size variation, predominance of type I fibres and centralized nuclei are common pathological findings ( 11 ). Other evidence points to ZAK having a key physiological role in skeletal muscle. Phosphoproteomics analysis following a bout of maximal-intensity contractions identified ZAK as one of the most abundant phosphorylated proteins ( 12 ). In mice, ZAK deficiency causes pathology in skeletal muscle only ( 13 ). The ZAK mutations described in mice and humans cause loss of function of both isoforms, but the pathological observations can be attributed specifically to ZAKβ , as this is the only isoform expressed in skeletal muscle ( 11 ). 
ZAKα and ZAKβ share a common N-terminal serine/threonine kinase domain but distinctive C-terminal domains. The C-terminal domains have been shown to confer different functions on their respective isoforms. Thus, ZAKα functions in ribotoxic stress sensing through two ribosome-binding domains in its C-terminus that serve as a molecular sensing module ( 14 ), whereas ZAKβ has been shown to respond to mechanical perturbation of cytoskeletal stress fibres in U2OS cells ( 13 ). Under osmotic shock or cellular compression conditions, ZAKβ auto-phosphorylates, relocates to stress fibres through its unique C-terminal domain and leads to p38 activation ( 13 ). In vivo , repeated contractions of tibialis anterior (TA) and extensor digitorum longus muscles in mice lead to acute ZAKβ-dependent p38 activation ( 13 ). In humans, resistance exercise specifically has been shown to strongly activate ZAKβ ( 15 ). Therefore, ZAKβ appears to play a role in the early response to mechanical stress in skeletal muscle. The mechanisms by which ZAKβ senses mechanical stress, as well as the panoply of downstream ZAKβ targets, remain to be determined. Here, we first expanded the muscle histopathology data of Zak −/− mice to include challenges such as regeneration, overloading and ageing. To identify direct targets of ZAKβ, we performed a phosphoproteomics assay. Overrepresented phosphopeptides identified components of cell adhesion. Amongst these, the actin filament cross-linking protein Filamin C (FLNC) emerged as an interesting pathogenic marker. We identified hallmarks of impaired FLNC turnover and other myofibrillar myopathy markers in muscle sections from Zak −/− mice and in the biopsy of a ZAK-deficient patient.
Materials and Methods Cell culture Murine C2C12 myoblasts were cultured in growth medium: Dulbecco’s Modified Eagle’s Medium (DMEM, Gibco; S41966-029) supplemented with 10% fetal bovine serum (Gibco; S10270) and penicillin/streptomycin (P/S, Gibco; S15140 (100 U/ml)). Cells were maintained in a humidified incubator at 37°C with 5% CO 2 . C2C12 myoblasts were differentiated into myotubes by replacing the growth medium with differentiation medium: DMEM + 2% horse serum (HS, Gibco; 26 050) + P/S. Medium was replaced daily and cells were left to differentiate for 7 days. For transfection, cells were seeded in either 6-well or 24-well plates in growth medium. Transfections were performed according to the manufacturer’s guidelines when cells reached 80% confluency. The TransIT-X2 (Mirus Bio; MIR 6000) transfection reagent was used for the transfection of both C2C12 and COS7 cells. After 48 h, cells were fixed, clonally selected or processed for differentiation analyses. For fixation, transfected cells were washed twice with pre-warmed 1x PBS and fixed with either 4% PFA for 10 min or ice-cold acetone:methanol (1:1) for 20 min. After fixation, cells were washed twice with 1x PBS and then with ddH 2 O, ready for immunofluorescence analyses. Western blot For protein extraction, cells were placed on ice and washed twice in ice-cold 1x PBS. Cells were lysed by the addition of minimal RIPA buffer (Sigma; R0278) + protease inhibitor cocktail (PI, Sigma; P8340) on ice. Lysed cells were subjected to five 5-s pulses of sonication with a 1-min rest on ice between pulses. The cell lysate was centrifuged at 15 000 g for 15 min at 4°C. The supernatant was collected and stored for processing. Samples were mixed with 4x NuPAGE LDS buffer (Invitrogen; NP0008) and 10x NuPAGE Reducing Agent (Invitrogen; NP004), boiled for 10 min, resolved by SDS-PAGE and transferred onto nitrocellulose membranes. 
Membranes were blocked in PBS-T (1x PBS supplemented with 0.2% v/v Tween-20) + 5% milk, followed by overnight (ON) incubation with primary antibodies. The following antibodies were used for western blot in this study: 1:2500 rabbit anti-ZAK (Proteintech, 14945-1-AP), 1:500 mouse anti-sarcomeric α-actinin (Abcam, EA-53) and 1:20 000 anti-GAPDH-HRP (Sigma, G9295), incubated in blocking buffer ON at 4°C. Membranes were then washed in PBS-T three times and incubated for 1 h with HRP-conjugated secondary antibodies: anti-rabbit IgG (Santa Cruz Biotechnology, sc-2030, 1:10 000) and anti-mouse IgG (Santa Cruz Biotechnology, sc-2005, 1:10 000), followed by a further three PBS-T washes. Detection was performed using Immobilon Western Chemiluminescent HRP Substrate (Millipore, P09718). Bands were detected using autoradiography film (Santa Cruz Biotechnology, sc-210696) and developed using an Xograph film developer. Generation of ZAK KO C2C12 cell lines via CRISPR/Cas9 Pre-designed vectors were purchased from VectorBuilder. A ZAKβ CRISPR/Cas9 KO plasmid (VectorBuilder, VB191031-4552uqf) was obtained for CRISPR/Cas9 targeting. Specific target sites within exons 2 and 3 of ZAKβ were identified and targeted using two gRNAs: • gRNA1 = GGAGTGTGTATCGAGCCAAA • gRNA2 = CCAACTATGGCATCGTCAC The SacII-linearized ZAKβ CRISPR/Cas9 KO vector (VB191031-4552uqf) was transfected into WT C2C12 cells using the TransIT-X2 system as described above. After 48 h, the growth medium was replaced with growth medium + neomycin (1.5 mg/ml). Transfected cells were then maintained at 37°C in a humidified 5% CO 2 incubator. Medium was changed every 2 days until the formation of resistant colonies. Resistant colonies were re-seeded in a 96-well plate via serial dilution to identify individual clones. Cells were observed, and wells in which single colonies formed were selected for expansion, differentiation and analysis. Surviving clones were sequenced, and the nature of the mutation was determined. 
Immunofluorescence Snap-frozen muscle tissue was first sectioned using a cryostat to a thickness of 10 μm, and slides were stored at −80°C. Slides were removed from −80°C and left to thaw at RT until condensation had evaporated. Blocking was performed in 4% bovine serum albumin (BSA) in 1x PBS. Sections were incubated with the following primary antibodies ON at 4°C: 1:100 anti-MyHC1 (A4.840, DSHB), 1:100 anti-MyHC2a (SC-71, DSHB), 1:150 anti-FLNC (RR90, gift from Peter Van der Ven), 1:150 anti-BAG3 (10599-1-AP, Proteintech) and 1:80 anti-MYOT (RS034, Novocastra) in blocking solution. Sections were subjected to three 5-min washes in 1x PBS and incubated with the following secondary antibodies at RT for 1 h: 1:150 goat anti-mouse IgG-FITC (F9006, Sigma), 1:150 goat anti-mouse IgM-TRITC (SAB3701196, Sigma), 1:150 goat anti-mouse IgA-FITC (F9006, Sigma), goat anti-rabbit IgG-alexafluor594 (R37117, Invitrogen) and goat anti-mouse IgG-alexafluor594 (R37121, Invitrogen). Sections were again subjected to three 5-min washes in 1x PBS. Slides were mounted with Mowiol mounting medium with DAPI. A coverslip was added and compressed, avoiding the formation of air bubbles. Fixed cells were permeabilized by incubation with 4% BSA in 1x PBS supplemented with 0.1% Triton X-100 for 30 min. This was followed by blocking with 4% BSA in 1x PBS for 1 h and ON incubation with 1:150 anti-α-actinin (EA-53, Sigma) at 4°C. After three PBS washes, cells were incubated with goat anti-mouse IgG-alexafluor594 (R37121, Invitrogen) for 2 h. Cells were further washed three times with 1x PBS, followed by a final wash with ddH 2 O. Cells cultured on coverslips were mounted onto slides with Mowiol + DAPI. For cells cultured directly on plastic, the bottom of the well was excised using specialized laser equipment and mounted with a coverslip using Mowiol + DAPI. 
BaCl 2 treatment BaCl 2 was selected as the injury method because levels of inflammatory cells return to baseline more quickly than with notexin or freezing injury models ( 50 ). WT and Zak −/− mice (8 weeks) were anaesthetized with 2% isoflurane. To injure the leg, IM injections of barium chloride (BaCl 2 ; 1.2% in sterile saline, 30 μl) were delivered to the tibialis under general anaesthesia. The tibialis muscle from the contralateral leg remained uninjured. Five replicates were performed per condition. Injections were performed on control and Zak −/− mice, and three uninjured mice were used as non-injured controls. After the procedure, mice were left to recover for 4, 7, 12 or 28 days post-injury (dpi). Mice were euthanized by cervical dislocation, and the injured tibialis was isolated and snap-frozen in isopentane for sectioning. Cross sections of tibialis were obtained at proximal, central and distal levels along the long axis of the muscle, H&E stained and imaged as described. Synergistic ablation WT and Zak −/− mice (30 weeks) were anaesthetized with 2% isoflurane. To mechanically overload the soleus muscles, the distal third of the gastrocnemius muscle was removed from one leg, leaving the soleus muscle intact. The gastrocnemius muscle from the contralateral leg remained intact and acted as a control. After the procedure, mice were individually housed for 2 weeks. Immediately after the surgical procedure and 6, 12, 18 and 24 h after surgery, each mouse was injected with analgesics (buprenorphine/Temgesic, 0.05–0.1 mg/kg). In addition, mice were injected with carprofen immediately after surgery (while still anaesthetized) and 24 h after surgery. After 14 days, mice were euthanized by cervical dislocation and the soleus (sham and overloaded) muscles were isolated, embedded in OCT compound, frozen in liquid nitrogen-cooled isopentane and stored at −80°C for further analysis. 
Skeletal preparation Zak −/− and WT mice were sacrificed at 28 weeks old and fixed in 10% neutral buffered formalin (Sigma-Aldrich, HT501128) for 48 h. Samples were rinsed with ddH 2 O and left for 24 h with gentle shaking. Both samples were post-fixed in 70% ethanol for 5 days, with the ethanol solution refreshed daily. Following post-fixation, specimens were dissected; this involved removal of the skin, eyes, visceral organs and the adipose tissue between the scapulae and behind the neck. Specimens were placed in two changes of 95% ethanol ON at RT to dehydrate and further fix the remaining tissue. The ethanol solution was replaced with 100% acetone for a further 2 days to fix the specimens and remove excess adipose tissue. Specimens were stained with Alcian blue staining solution (0.02% (w/v) Alcian blue 8GX (Thermo Fisher; 15432949), 70% EtOH, 30% glacial acetic acid) for 4 days. Samples were washed with ethanol/glacial acetic acid (7:3) for 1 h and fixed in 100% ethanol ON at RT. Specimens were rinsed in ddH 2 O for 3 days before being incubated with 1% trypsin (in ddH 2 O containing 30% sodium borate) for 24 h. Digested specimens were stained with Alizarin red solution (0.005% (w/v) Alizarin red (Alfa Aesar; 11 319 707) in 0.5% KOH (w/v)) for 48 h. Specimens were transferred to 1% KOH clearing solution for 2 days, followed by decreasing gradients of 1% KOH to glycerol (3:1, 1:1, 1:3 and 100% glycerol, 2 days per step) to further clear the specimens. Specimens were stored in 100% glycerol prior to imaging. Immunohistochemistry Snap-frozen muscle tissue was first sectioned using a cryostat at 12 μm, and slides were stored at −80°C. Slides were removed from −80°C and left to thaw at RT until condensation had evaporated. Slides were fixed in acetone for 10 s and then incubated in Gill’s haematoxylin for 2 min. Slides were then washed in running tap water for 1 min. 
Sections were incubated in Scott’s water for 1 min and subsequently washed again in tap water for 1 min. Afterwards, the sections were incubated with eosin for 45 s and then washed in tap water for a further 1 min. Sections were dehydrated first in 70% ethanol for 1 min, then in 100% ethanol for 1 min, and finally in HistoClear (National Diagnostics) for an additional 1 min. Slides were mounted using DPX mounting medium. Electron microscopy Mouse soleus muscle was obtained from five Zak −/− mice and three controls. Each soleus was fixed individually for 20 min in fixative (2% glutaraldehyde, 2% formaldehyde in 0.1 M sodium phosphate buffer, pH 7.4), followed by further cross-sectional dissection into smaller blocks. All blocks were processed together by the Imaging and Cytometry team within the University of York Technology Facility as follows. Muscle samples were fixed for a further 1 h at ambient temperature in 2% glutaraldehyde, 2% formaldehyde fixative and then washed with buffer (3x 15 min) before secondary fixation in 1% osmium tetroxide (in buffer, 1 h on ice). Samples were subsequently dehydrated through a graded ethanol series before infiltration and embedding in Spurr resin (Taab Laboratories, S0-24D). Sections (~70 nm) were collected on 200 mesh copper TEM grids and post-stained with uranyl acetate (2% (w/v) in 50% ethanol) and lead citrate ( 48 ) before viewing using an FEI Tecnai 12 G2 fitted with a CCD camera. Scratch wound assay WT and ZAK KO C2C12 cell lines (KOD9 and KOC10) were grown to 100% confluency in a six-well plate, with three replicates per cell line. A 200 μl pipette tip was used to scrape a single vertical line through the cells on the growth surface (0 h). Cells were imaged every 3 h for a total of 24 h using an Evos XL Core microscope (Thermo Fisher, AMEX1000). For quantification of wound closure, three images per time point were taken and averaged. 
To quantify wound closure, the remaining cell-free area was measured and subtracted from the initial wound measurement at 0 h. The average distance travelled at each time point was then calculated relative to the initial wound distance at 0 h. A replicate of this experiment was carried out using live cell imaging (Livecyte, Phasefocus), using the manufacturer’s automated software to generate images with a 10× objective for 30 h at 1-h intervals from cells maintained in an environmental chamber at 37°C with 5% CO 2 and 95% humidity. Single-cell tracking quantifications were generated using the Livecyte integrated analysis suite. Phosphoproteomics Sample preparation for MS analysis A phosphoproteomic assay (adapted from ( 49 )) was used to identify the immediate downstream targets of ZAKβ, whereby recombinant ZAKβ was added to a kinase-dead skeletal muscle lysate and the resulting phosphopeptides were detected by LC–MS/MS. TA muscle from WT C57Bl/6 mice was collected and snap-frozen in liquid nitrogen-cooled isopentane. Protein was extracted using the protocol stated above. Skeletal muscle lysate was kept on ice and used fresh for the remainder of processing. Muscle extracts were treated with either FSBA (Sigma; F9128, 5 mM) dissolved in dimethyl sulphoxide (DMSO), or DMSO as a control, to irreversibly inhibit all endogenous kinases. Skeletal muscle lysate treated with FSBA was desalted using Millipore Amicon ultrafiltration columns with a 3 kDa molecular weight cutoff (Merck; UFC500324) to remove all unreacted FSBA. When using the ultrafiltration columns, samples were diluted 1:10 with Nonidet P-40 buffer (10 mM Tris-HCl, 10 mM NaCl, 3 mM MgCl 2 , 0.5% Nonidet P-40). 
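The scratch-assay closure calculation described above reduces to comparing the cell-free area at each time point with the 0 h wound area. A minimal sketch; the area values are hypothetical and in arbitrary units:

```python
def percent_closure(area_t0, area_t):
    """Percent wound closure at time t from the cell-free (void) area:
    closure = (A0 - At) / A0 * 100. Areas in arbitrary units (e.g. px^2)."""
    return (area_t0 - area_t) / area_t0 * 100.0

# Hypothetical measurements of the cell-free area from three images per
# time point (as in the text), averaged before computing closure.
areas_0h = [1000.0, 980.0, 1020.0]
areas_12h = [420.0, 400.0, 380.0]
a0 = sum(areas_0h) / len(areas_0h)     # 1000.0
a12 = sum(areas_12h) / len(areas_12h)  # 400.0
print(percent_closure(a0, a12))        # 60.0
```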
Following desalting, samples were diluted 1:5 with 5x kinase assay buffer (50 mM Tris (pH 7.2), 125 mM β-glycerophosphate, 250 mM KOH, 10 mM EGTA, 1.25 mM Na 3 VO 4 , 50 mM MgCl 2 and 2 mM DTT, in ddH 2 O) and assayed for 3 h along with recombinant active ZAKβ (5 μg; obtained from Drs Hilary McLauchlan and James Hastie at the MRC PPU, University of Dundee) and ATP (Sigma, A26209) or [γ-32P]ATP (10 μCi; Perkin Elmer, NEG002H250UC). For 1D SDS-PAGE analysis of skeletal muscle lysate treated with FSBA, [γ-32P]ATP and recombinant ZAKβ protein, the reactions were stopped using 4x NuPAGE LDS buffer and 10x NuPAGE Reducing Agent. Samples were electrophoresed on a 10% polyacrylamide gel at 200 V for 1 h or until the dye had run off the gel. Protein loading was assessed using SafeBlue (NBS Biologicals, NBS-SB1L). X-ray film (Insight Biotechnology, sc-201696) was placed over the gel encased in acetate and exposed for 5 days before being developed. LC–MS/MS analysis For LC–MS/MS, proteins were precipitated using 4x ice-cold acetone and left at −20°C overnight. Samples were then pelleted for 10 min at 15 000 g . Once the supernatant was removed, the pellet was left to air-dry. The pellet was resuspended in urea lysis buffer (100 mM Tris (pH 7.4), 8 M urea, 1 mM Na 3 VO 4 , 2.5 mM Na 4 P 2 O 7 and 1 mM β-glycerophosphate) using sonication. DTT reducing solution (19.25 mg/ml DTT in ultra-pure water) was added at 0.0364x the sample volume, and samples were incubated at 55°C for 30 min. The samples were then cooled to 4°C for 10 min. IAM alkylating solution (19 mg/ml iodoacetamide in ultra-pure water) was added at 0.1x the sample volume and incubated in the dark at RT for 15 min. Samples were then diluted 4x with 50 mM ammonium bicarbonate, and 1:50 (protease:protein, m:m) protease (Trypsin and Lys-C mix) was added. Samples were incubated ON at 37°C. For peptide desalting, ON samples were acidified in 0.1% TFA. C18-E cartridges (Phenomenex) were loaded with 100% acetonitrile. 
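The 1:5 addition of 5x buffer above is an instance of the standard C1V1 = C2V2 dilution calculation; a small sketch, with hypothetical volumes:

```python
def stock_volume(v_final, c_stock_x, c_final_x=1.0):
    """Volume of an N-x stock to add so the final mix is at c_final_x strength:
    v_stock = v_final * c_final_x / c_stock_x (classic C1V1 = C2V2)."""
    return v_final * c_final_x / c_stock_x

# e.g. a 1:5 addition of 5x kinase assay buffer: for a hypothetical 100 ul
# final volume, add 20 ul of 5x stock (and 80 ul sample) to reach 1x.
print(stock_volume(100.0, 5.0))  # 20.0
```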
The cartridge was then conditioned with 2 ml of 80% acetonitrile/0.1% TFA. Equilibration of the cartridge was performed using 2 ml of 0.1% TFA. Samples were loaded onto the cartridge using moderate pressure and flow rate. The column was washed with 2× 0.25 ml of 0.1% TFA. Phosphopeptides were eluted with 2× 0.2 ml of 80% acetonitrile/0.1% TFA and collected into an Eppendorf tube. Samples were subsequently dried in a SpeedVac using moderate heat. LC–MS/MS was performed over a 60-min acquisition with elution from a 50 cm EasyNano PepMap column onto an Orbitrap Fusion Tribrid mass spectrometer using a Waters mClass nanoUPLC. LC–MS chromatograms were imported into PEAKSX-Studio for peak picking and peptide identification. Data were searched against the mouse subset of the UniProt database and filtered to a 1% false discovery rate for identification. Complete mass spectrometry data sets and proteomic identifications are available to download from MassIVE (MSV000090935) (doi: 10.25345/C5TB0Z08V ) and ProteomeXchange (PXD037832). Raw data processing The full complement of phosphopeptides identified in the FSBA phosphoproteomics screen was a combination of high-confidence phosphopeptides present in all five biological replicates and lower-confidence phosphopeptides identified in fewer than five replicates. The complete dataset was processed in R (v4.0.3) to produce a list of significant phosphopeptides, which was necessary to identify potential targets of ZAKβ for further processing. The phosphoproteomic assay alone yielded a list of 114 S/T phosphopeptides from 48 individual proteins. In silico prediction of ZAKβ substrates The phosphosite sequences obtained from our phosphoproteomic screen were used to predict further potential substrates of ZAKβ. 
A ZAKβ PSSM was generated by submitting an alignment of the 114 15-mer phosphopeptide sequences to the PSSMSearch website (http://slim.icr.ac.uk/pssmsearch/) using the default settings. The PSSM was then cross-referenced against the entire mouse proteome within the PSSMSearch website to identify further putative ZAKβ substrates. From this analysis, 1449 phosphopeptides from 1200 different proteins were identified. The resultant list was cross-referenced against a mouse skeletal muscle proteome for KEGG analyses using GProfiler (https://biit.cs.ut.ee/gprofiler/gost) or interactome analysis using STRING (https://string-db.org/). The list of putative ZAKβ targets from a site-specific peptide array was obtained from Johnson et al. (27). The raw data were filtered to include proteins within the top-ranking categories for ZAKβ activity. Disease enrichment and interaction analyses were performed using STRING.

Generation of fish line

Guide strand RNA targeting Zakβ in zebrafish was designed using 'CHOPCHOP' (https://chopchop.cbu.uib.no/) from exon 2 of the Zakβ gene. Two gene-specific 20–22 nt sequences were selected, each including a protospacer adjacent motif: (i) GCAGCTAATACGACTCACTATA GGA AAGGGATCTGAACGAAACGTTTTAGAGCTAGAAATA; (ii) GCAGCTAATACGACTCACTATA GGT CCCACAGGATAAAGAAGGTTTTAGAGCTAGAAATA. In addition to the target sequence, a T7 promoter region was incorporated into these forward primers. Templates were generated by PCR using the gene-specific and common reverse primers with the high-fidelity Phusion polymerase (Thermo Fisher), and guide strand RNA was synthesized using the MEGAshortscript T7 transcription kit (Ambion). 300 pg of the two gene-specific sgRNAs together with 1 ng Cas9 protein in a 1 nl volume was injected into zebrafish embryos (AB WT) at the 1–4 cell stage.
F0 zebrafish were raised to sexual maturity and outcrossed with WT AB zebrafish. F1s were genotyped by PCR of the relevant genomic DNA region, heterozygotes were identified by heteroduplex analysis, and the specific disruption was identified by sequencing after cloning the PCR product into a plasmid. These F1s were backcrossed to AB before selecting F2 heterozygotes, which were outcrossed again before raising a line of heterozygotes with a known mutation. Homozygous Zakβ zebrafish were generated by in-crossing heterozygotes and genotyping the offspring by fin-clipping at maturity.

Methodology for swimming analysis

Key swimming attributes were compared between Zakβ−/− zebrafish and their siblings. Zakβ+/− and Zakβ−/− fish were raised in the same tank. The zebrafish were transferred to a recording tank and left to acclimatize for 10 min before a 5-min video recording. A 30-s recording from the first half and a 30-s recording from the second half were processed using ShadowFish, and the tracked frame coordinates were processed for key swimming attributes using MATLAB: (a) distance travelled (cm); (b) velocity (cm/s); (c) percentage of time moving; (d) mean length of time for each swimming episode (s); (e) mean tail beat frequency (hertz); (f–h) mean tail beat frequency at low, medium and high speeds, respectively; (i) mean tail bend amplitude; and (j–l) mean tail bend amplitude at low, medium and high speeds, respectively. Each data point is an individual recording; the bar charts and error bars are mean ± standard error of the mean. Recordings were taken from twelve WT and eight Zakβ+/− zebrafish, providing 24 data points for WT and 16 data points for Zakβ+/−. In the case of 18-month-old fish, a ROUT test was performed to eliminate outliers. Recordings were taken from six WT and eight Zakβ−/− zebrafish, providing 12 data points for WT and 16 data points for Zakβ−/− zebrafish.
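For illustration, attributes (a)–(c) can be computed from the tracked frame coordinates as in the following Python sketch (the original pipeline used ShadowFish and MATLAB; the frame rate and the per-frame movement threshold here are assumed values, not those of the original analysis):

```python
import math

# Sketch of swimming attributes (a)-(c) from tracked frame coordinates.
# The original pipeline used ShadowFish and MATLAB; the frame rate and
# the per-frame movement threshold below are assumed values.
FRAME_RATE = 30.0        # frames per second (assumption)
MOVING_THRESHOLD = 0.05  # cm moved per frame to count as "moving" (assumption)

def swim_metrics(coords):
    """coords: list of (x, y) fish positions in cm, one per video frame."""
    steps = [math.dist(a, b) for a, b in zip(coords, coords[1:])]
    distance = sum(steps)                          # (a) distance travelled (cm)
    velocity = distance * FRAME_RATE / len(steps)  # (b) mean velocity (cm/s)
    moving = sum(1 for s in steps if s > MOVING_THRESHOLD)
    pct_moving = 100.0 * moving / len(steps)       # (c) percentage of time moving
    return distance, velocity, pct_moving

track = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (0.2, 0.0)]
d, v, p = swim_metrics(track)
print(f"{d:.2f} cm, {v:.2f} cm/s, {p:.1f}% moving")  # -> 0.20 cm, 2.00 cm/s, 66.7% moving
```

Episode-based attributes such as (d) would additionally require grouping consecutive "moving" frames into episodes.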
The ROUT test eliminated two data points for WT for distance travelled and two data points for mean velocity. For 35-month-old fish, recordings were taken from four WT and five Zakβ−/− zebrafish that had been kept in separate tanks, providing 8 data points for WT and 10 data points for Zakβ−/− zebrafish.

Mice

The Zak−/− mouse line was generated as described in Nordgaard et al. (13). Mice were housed in individually ventilated cages at the animal facility at the University of York. Research was approved and monitored by the UK Home Office. All regulated animal procedures were carried out according to Project License constraints (PEF3478B3) and Home Office guidelines and regulations. Mice were also housed at the animal facility of the Department of Experimental Medicine at the University of Copenhagen, and the research was monitored by the Institutional Animal Care and Use Committee. All of the mouse work was performed in compliance with Danish and European regulations. Animal experiments were approved by the Danish Experimental Animal Inspectorate. Mice were kept on a 12-h light/12-h dark cycle in ventilated cages at RT and fed regular rodent chow.

Patient

The patient originated from France, and sample collection was performed with written informed consent from the patient and participating family members, according to the Declaration of Helsinki.

Statistics

To measure the effect of loss of function of ZAK on soleus and TA muscle regeneration in vivo, 10× magnification images were captured using a Leica brightfield microscope. The number of fibres displaying centralized nuclei was expressed as a percentage of the total number of myofibres to create one single data point per image. The percentage of centralized nuclei was determined from full coverage of soleus (n = 3) and TA muscles (n = 6). An average of the percentage of fibres with centralized nuclei was calculated across three mice per condition.
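The centralized-nuclei quantification described above (one data point per image, averaged per mouse and then across the three mice per condition) can be sketched as follows; Python is used for illustration and all fibre counts are invented:

```python
# Sketch of the centralized-nuclei quantification: each image yields one
# data point (% of myofibres with centralized nuclei), per-image points
# are averaged per mouse, and mouse means are averaged across the three
# mice per condition. All counts below are invented for illustration.

def pct_central(centralized, total):
    return 100.0 * centralized / total

def condition_mean(per_mouse_images):
    """per_mouse_images: one list per mouse of (centralized, total)
    fibre counts, one tuple per image."""
    mouse_means = []
    for images in per_mouse_images:
        points = [pct_central(c, t) for c, t in images]
        mouse_means.append(sum(points) / len(points))
    return sum(mouse_means) / len(mouse_means)

mice = [
    [(12, 200), (8, 160)],   # mouse 1: 6.0 % and 5.0 %
    [(15, 300), (9, 180)],   # mouse 2: 5.0 % and 5.0 %
    [(20, 250), (12, 200)],  # mouse 3: 8.0 % and 6.0 %
]
print(round(condition_mean(mice), 2))  # -> 5.83
```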
Soleus muscle fibre size was determined from complete coverage of the soleus muscle (n = 3). TA fibre size was determined from a minimum sample of 2000 fibres from sections taken at the same level (n = 6). Data were compared using either the Student's t-test or a one-way ANOVA followed by a Bonferroni post hoc test. P-values < 0.05 were deemed significant. For fibre typing of soleus muscle sections, relative numbers of type I and type IIa fibres were determined manually by counting the total number of fibres expressing each myosin isoform. The fibre cross-sectional area was calculated as an average of three individual muscles. To prevent regional fibre size variation in the TA muscle from confounding comparisons, a scale factor (average area of electroporated:average area of non-electroporated) was obtained from at least three electroporated regions within the muscle, which were then averaged to produce a single data point per muscle. For morphological analyses of H&E-stained sections, 10× magnification images were captured using the Leica brightfield microscope (DM2500), camera and imaging software (SPOT Insight FireWire; Diagnostic Instruments). Centralized nuclei were counted manually. The number of fibres displaying centralized nuclei was expressed as a percentage of the total number of myofibres to create one single data point per muscle. An average of the percentage of fibres with centralized nuclei was calculated across three mice per genotype. Data were compared using a one-way ANOVA followed by a Tukey's HSD post hoc test. P-values < 0.05 were deemed significant. To measure the effect of loss of function of ZAK on TA muscle regeneration in vivo in response to BaCl2-induced injury, 10× magnification images were captured from 28 dpi muscles using a Leica brightfield microscope. Fibres displaying centralized nuclei were used as a proxy for injured muscle fibres undergoing regeneration (50).
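The electroporation scale-factor normalization described above can be sketched as follows (Python for illustration; the fibre areas are invented):

```python
# Sketch of the electroporation scale-factor correction: within each TA
# muscle, the mean fibre area of each electroporated region is divided by
# that of its matched non-electroporated region, and at least three
# regional ratios are averaged into one data point per muscle. The fibre
# areas (um^2) below are invented for illustration.

def mean(values):
    return sum(values) / len(values)

def muscle_scale_factor(regions):
    """regions: list of (electroporated_areas, non_electroporated_areas)
    pairs, one per region; at least three regions are required."""
    assert len(regions) >= 3, "need at least three regions per muscle"
    ratios = [mean(e) / mean(c) for e, c in regions]
    return mean(ratios)

regions = [
    ([1800.0, 2200.0], [2000.0, 2000.0]),  # ratio 1.00
    ([1500.0, 1500.0], [2000.0, 2000.0]),  # ratio 0.75
    ([1750.0, 2250.0], [1600.0, 2400.0]),  # ratio 1.00
]
print(round(muscle_scale_factor(regions), 4))  # -> 0.9167
```

Averaging ratios rather than pooling raw areas keeps each region's local comparison intact, which is the point of the correction.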
TA fibre size at 28 days post-injury was determined from a measurement of all centronucleated fibres (minimum of 750 fibres per section) from sections taken at the same level (n = 4). Data were compared using Student's t-test. A P-value < 0.05 was deemed significant.

Conflict of interest statement. None declared.
Results

Pathology severity is gender-biased and increases with age

Scoliosis and developmental delays are features of ZAK-deficient patients (11). To test this in mice, we looked at skeletal preparations from adult mice. The distribution of bone and cartilage did not reveal any difference in Zak−/− mice compared with control (Fig. 1A), indicating that scoliosis and other skeletal anomalies are not a feature of ZAK deficiency in mice. To check for any gender bias, which has been occasionally reported in muscular dystrophy (16, 17), we extended the morphological analysis of slow and fast muscle sections done previously (13) to include males and females (Fig. 1B and C). To also test the effect of age, we performed the analysis at 8 and 22 weeks. We observed a significant increase in the abundance of type I slow fibres in Zak−/− female mice when compared with their male counterparts (Fig. 1C). H&E-stained sections confirmed centralized nuclei in both males and females in the soleus (Fig. 1D and E), with Zak−/− males showing higher numbers of regenerating fibres with central nuclei than females in this tissue. Moreover, there was a clear effect of age, with mice showing approximately three times more centrally nucleated fibres at 22 weeks than at 8 weeks (Fig. 1E). Neither fibre atrophy nor central nucleation was identified in the TA muscle from 8-week-old Zak−/− or control mice (Supplementary Material, Fig. S1). Therefore, to test whether ageing also exposes changes in the pathology-free TA muscle, we looked at a later time point. The results showed a significant increase in centrally nucleated fibres in the TA muscle of 14-month-old males and females. Altogether, these results indicate that age is a pathology driver in Zak−/− mice.
Regeneration capacity is slightly decreased in Zak−/− mice

A role for ZAKβ in muscle regeneration would be consistent with the variable muscle fibre size described in sections from biopsies of human patients (11). To test this, we first analysed the expression of ZAKα and ZAKβ through in vitro differentiation of C2C12 myoblasts (Fig. 2). The results showed that while ZAKβ expression is maintained throughout, ZAKα quickly declines, as anticipated, in differentiation medium (Fig. 2A). We then generated two ZAK-deficient C2C12 lines by CRISPR/Cas9 mutagenesis, hereafter referred to as D9 and C10. Total absence of ZAK in the two independently derived C2C12 clones was confirmed by western blot (Supplementary Material, Fig. S2A and B). The presence of different alleles with disruptive frameshifts was confirmed in both cell lines (Supplementary Material, Fig. S3). As the expression of ZAK overlaps the transition from proliferation to differentiation of C2C12 myoblasts, we tested the fusion and differentiation indices in the ZAK KO cell lines and the parental C2C12 myoblasts (Fig. 2B–D). Quantifications from the D9, C10 and parental lines showed a significant reduction in fusion and differentiation indices in both D9 and C10 cell lines. Moreover, these defects were partially rescued by the reintroduction of ZAKβ into both cell lines through transient transfection (Fig. 2B–D). We have previously observed in vitro fusion defect phenotypes in cell lines rendered deficient for the ZAK-associated proteins IGFN1 (5) and COBL (18). The implication of the actin nucleating protein COBL in these interactions prompted the hypothesis that the fusion defects in the ZAK-deficient cell lines could be caused by a lack of COBL actin nucleation activity, since the requirement of actin remodelling for fusion has been extensively established (e.g. (19–21)). We therefore tested whether ZAKβ modulates the activity of COBL.
Surprisingly, both the D9 and C10 cell lines show no detectable COBL on western blots (Fig. 3A), suggesting that COBL is either stabilized by ZAKβ or that the COBL gene is a downstream transcriptional target of ZAKβ. To test COBL activity, we used the ability of COBL to form ruffles in COS7 cells as a readout (Fig. 3B). Ruffles can be identified by COBL concentrating in phalloidin-rich aggregates (18). The results indicate that in cells cotransfected with COBL and ZAKβ constructs, ruffle formation is inhibited (Fig. 3C). Moreover, this inhibition is dependent on the kinase activity of ZAK. Thus, the ZAK inhibitor PLX4720 (22) abolishes the ZAK inhibitory effect on COBL, and formation of ruffles is again observed in these cotransfected cells (Fig. 3C and D). Although the effect of ZAKβ on COBL appears clear in COS7 cells, it is unlikely that a cell fusion defect plays a major role in muscle development in vivo, given that Zak−/− mice develop normally and are overtly indistinguishable from wild-type (WT) littermates (13). Therefore, to test whether the in vitro observations are relevant to muscle regeneration, we induced acute focal regeneration in vivo by intramuscular (IM) injection of BaCl2 into the tibialis of Zak−/− and control mice. We chose 8-week-old mice and a pathology-free muscle to ensure a robust response to the BaCl2 treatment. Representative H&E images of mid-level sections are shown in Fig. 3E. The results showed an initial phase of acute degeneration and incipient regeneration, indicated by the large areas of small centrally nucleated fibres at 4 days post-injection (dpi), followed by a regenerative process from 7 to 28 dpi. Larger fibres lacking central nuclei were evident from 12 dpi (Fig. 3E), but there were no significant differences between samples from Zak−/− and control mice at 12 or 28 dpi (Supplementary Material, Fig. S4). Centronucleated fibres progressively increased in size during the regeneration period (Fig. 3E).
As an indirect measure of regeneration capacity, we measured the cross-sectional area of centrally nucleated fibres from the whole tibialis (Fig. 3F). The results showed a small but significant reduction in the cross-sectional area in samples from Zak−/− mice at 28 dpi. We interpret this as the loss of ZAKβ causing a slight delay in muscle fibre regeneration (control mean: 2182.42 μm2; Zak−/− mean: 1875.47 μm2; n = 5), as there is no difference in fibre cross-sectional area or internal nuclei between genotypes in samples from uninjured young mice (Figs 1G and 3F). It therefore appears that loss of ZAKβ has a small but measurable effect on muscle regeneration capacity, independently of confounding factors such as age or pathology that could have influenced this test. However, since COBL appears dispensable for muscle function (23), it seems unlikely that the ZAK/COBL interplay plays a significant role in the pathogenic mechanisms of ZAK deficiency. Therefore, to identify relevant targets of ZAKβ kinase activity in skeletal muscle, we next performed a phosphoproteomic screen.

Phosphoproteomics and bioinformatics identify likely targets and associated disease

A phosphoproteomics study was undertaken using mouse skeletal muscle extracts and recombinantly produced ZAKβ. Muscle extracts were first treated with 5′-4-fluorosulphonylbenzoyladenosine (FSBA) to irreversibly inhibit all endogenous kinases. Then, active ZAKβ was added to half the samples (treated group), whereas the other half remained untreated (control group) (Fig. 4A; see Materials and Methods for details). In a quality control assay, the FSBA treatment proved effective at inhibiting all endogenous kinases in the muscle lysate, whereas recombinant ZAKβ was able to actively phosphorylate substrates in the FSBA-treated lysate (Fig. 4B).
Enriched S/T phosphopeptides were then analysed by LC–MS/MS, and a list of 114 S/T phosphopeptides from 48 individual proteins was obtained (Supplementary Material, Table S1). Pathway analyses of this short list (Supplementary Material, Table S1) failed to identify any specific biological process, complex or subcellular compartment (data not shown), but we noted the presence of SYNPO2 (Synaptopodin 2), a protein that interacts with FLNC at the Z-disc (24), and PDLIM5 (PDZ and LIM domain 5), a scaffolding protein that tethers protein kinase D1 at the Z-disc (25, 26). To identify further putative ZAKβ targets that may not have been present in the soluble fraction of the muscle lysate, a position-specific scoring matrix (PSSM) was generated from the 15-mer phosphopeptide sequences identified by LC–MS/MS. The list of direct phosphopeptides was loaded into the PSSMSearch website (http://slim.icr.ac.uk/pssmsearch/) to generate a PSSM as well as a 15-mer consensus sequence (Fig. 4C). The PSSM was then cross-referenced with the online mouse proteome, and 1200 potential ZAKβ targets were identified (Supplementary Material, Table S2). The predicted targets were then subjected to pathway analysis (Fig. 4C; full description in Supplementary Material, Table S3), which identified proteins involved in extracellular matrix–receptor interaction, focal adhesion and cell adhesion as the top enriched KEGG categories (Fig. 4C). The predicted targets included the Z-disc-associated proteins FLNC, IGFN1, PDLIM5, MYOZ1 (Myozenin 1) and dystrophin. In a recent study, the optimal substrate specificity for the majority of the human Ser/Thr kinome was experimentally determined by a positional scanning peptide array analysis combined with a computational approach (27). For ZAK, the top-ranked phosphorylation targets identify myofibrillar myopathy as an associated disease (Supplementary Material, Fig. S5A and Supplementary Material, Table S4).
Of the 1555 proteins ranked in the top 10 for ZAK phosphorylation (27), 222 were also identified in our study, including proteins associated with myofibrillar myopathies such as FLNC, BAG3 (BCL2-binding athanogene 3) and TTN (Titin) (Supplementary Material, Fig. S5B and Supplementary Material, Table S5). Recently generated RNAseq data from Zak−/− and control mice (accession number PRJNA816072, (13)) also identified extracellular matrix–receptor interaction and focal adhesion as the top pathways among the differentially expressed genes (DEGs) obtained from the pathology-free TA muscle (Supplementary Material, Fig. S6 and Supplementary Material, Table S6). These same pathways also appear to be highly enriched in pathological soleus muscle (Supplementary Material, Fig. S6 and Supplementary Material, Table S7). In addition, enrichment of the same top KEGG pathways was obtained from the list of DEGs from microarray analysis of ZAK-deficient patient muscle biopsies (11). Thus, the results from the phosphoproteomics as well as the transcriptional profiles from Zak−/− mice and patients suggest that focal adhesion components are both substrates and downstream transcriptional targets of ZAKβ. Table 1 summarizes the list of genes contributing to focal adhesion for each of the above studies, in which filamins, integrins, collagens and thrombospondins appear as common hits. A role for ZAKβ in cell adhesion in muscle would be consistent with the reported relocalization of ZAKβ to stress fibres emanating from focal adhesions after an acute compression stimulus (13). As a generic test of cell adhesion and migration, we performed a wound healing assay on the ZAK-deficient and control cell lines (Supplementary Material, Fig. S5A). For this, the distance travelled by the migrating cell sheet after creating a gap in the cell monolayer was measured at regular intervals over a 24-h period.
The results consistently showed smaller distances of sheet migration in the D9 and C10 cell lines compared with the C2C12 control (Supplementary Material, Fig. S5B). To account for the effect of cell proliferation rates, the assay was repeated using a live-cell imaging microscope that allowed individual cell tracking and cell division time measurements. Cell migratory speed was calculated as the total distance travelled divided by time over a 30-h period. The results showed that D9 and C10 migratory speeds are significantly lower than that of the C2C12 control (Supplementary Material, Fig. S5C), whereas cell division times for C2C12, D9 and C10 were 20.6, 27.4 and 22.3 h, respectively. We therefore conclude that the ZAK-deficient clones D9 and C10 have lower migration speeds than the C2C12 parental cell line. From the table of putative ZAKβ targets (Supplementary Material, Table S3), Filamin C (FLNC) attracted our attention because of its role in cell adhesion through its interactions with the transmembrane receptor β1-integrin (28, 29) and components of the dystrophin–glycoprotein complex (28). Moreover, SYNPO2 was identified as a direct phosphorylation target of ZAKβ at S540 (Supplementary Material, Table S1). Since SYNPO2 is involved in the turnover of FLNC through the chaperone-assisted selective autophagy (CASA) mechanism (30), we decided to investigate FLNC expression both in the mouse KO and in the biopsy of a ZAK-deficient patient.

FLNC turnover is disrupted in some fibres from ZAK-deficient muscle

We studied FLNC expression on soleus sections of 8-week-old mice. Both males and females show a high percentage of fibres with strong anti-FLNC antibody reactivity (Fig. 6A). FLNC appears highly concentrated in some fibres from the Zak−/− mouse, which are not detected in control sections (Fig. 6B). The percentage of fibres in the soleus showing this high immunoreactivity is significantly reduced at 22 weeks in Zak−/− males and females (Fig. 6B).
We identified BAG3, the core cochaperone in the CASA mechanism (30, 31), showing a similar distribution in the same FLNC-positive fibres in the Zak−/− soleus sections (Fig. 6C). Interestingly, neither Flnc nor Bag3 transcript levels are differentially expressed in the mouse soleus RNAseq data (Supplementary Material, Fig. S7), suggesting that the increased signal on immunofluorescence arises because the CASA pathway is unable to cope with FLNC turnover in some fibres. Since CASA is a tension-elicited pathway (31), we decided to test whether mechanical stress would exacerbate these observations, as predicted. For this, we tested the ability of ZAK-deficient soleus muscle to respond to chronic overload. We applied synergistic ablation to 30-week-old female mice, surgically removing the gastrocnemius muscle from one of the hind limbs. We then quantified the fibres showing abnormal reactivity to FLNC antibodies. The results showed a marked and significant difference in the overloaded muscle in Zak−/− compared with control (Fig. 6D and E). Thus, while the control showed up to 3% reactive fibres in the overloaded muscle, up from <1% in the sham leg, the soleus from Zak−/− mice showed an average of 15% reactive fibres in the overloaded muscle compared with 3% in the sham. We also confirmed the presence of myotilin, another myofibrillar myopathy marker (32), in the FLNC-positive fibres (Supplementary Material, Fig. S8). Thus, despite the progressive adaptive changes that the soleus goes through, described in Figure 1, it remains more susceptible to mechanical stress in Zak−/− mice compared with age-matched controls. Intriguingly, the percentage of fibres in the soleus showing aberrant distribution of FLNC is significantly reduced at 22 weeks in males and females (Fig. 6B), indicating that the age-dependent adaptive changes in the soleus allow this tonic muscle to cope better with mechanical loading.
As expected, spared TA muscles did not show FLNC-immunoreactive fibres at 8 weeks (Supplementary Material, Fig. S9). To test the relevance of the findings above in the ultra-rare recessive human condition associated with ZAK deficiency, we analysed a biopsy sample from a previously described congenital myopathy patient (11). This patient carries a truncating variant of the ZAK gene in homozygosity (c.490_491delAT, p.Met164fs*24, using reference sequence NM_133646) (11). We identified a small number of fibres that showed very high reactivity with FLNC antibodies in strong accumulations (Fig. 7A). Serial sections and double immunofluorescence showed that the same fibres stained by BAG3 and FLNC were also positive for myotilin (Fig. 7B and C). Thus, although the majority of the muscle remains apparently normal, the expression of FLNC and other markers suggests that ZAK deficiency shows features of a mild myofibrillar myopathy. To look for evidence of this at an ultrastructural level, we performed electron microscopy of soleus muscle from 8-week-old animals (Fig. 8). Overall, soleus samples from Zak−/− mice (n = 3) presented a normal ultrastructure, indistinguishable from control muscles. Granulofilamentous material or nemaline rods described in myofibrillar myopathy (e.g. (33)) were not found. There was limited evidence of myofibrillar and Z-disc disorganization (Fig. 8B), which was not detected in the age- and sex-matched controls (Fig. 8A). Moreover, large vacuoles containing membranous material were observed, likely reflecting disrupted autophagy in isolated fibres (Fig. 8C). We did not attempt to quantify these observations, as sampling of the muscle using this technique was deemed insufficient.

Aged ZAKβ KO zebrafish show reduced locomotor performance

We next tested the effect of loss of ZAKβ in an independent animal model. In zebrafish, Zakα and Zakβ are encoded by different genes.
We generated a homozygous loss-of-function mutant line by CRISPR/Cas9 mutagenesis of exon 2 of the Zakβ gene (ENSDARG00000044615.8), following the strategy described in Materials and Methods. The mutation is a 33 bp insertion with an in-frame stop codon upstream of the kinase domain (Supplementary Material, Fig. S10). We then performed a number of locomotor tests using the experimental platform previously described (34) in conjunction with in-house developed software to track the movement of individual zebrafish. Four parameters were measured: distance travelled, percentage of time spent moving, swimming episode duration and mean velocity. These parameters were measured in independent cohorts of 6- and 18-month-old fish. The results summarized in Figure 9 show that no significant differences were observed in 6-month-old fish, whereas the 18-month-old Zakβ−/− fish showed a trend towards lower distance travelled and mean velocity, and a significantly lower percentage of time spent moving and mean active swimming episode duration compared with controls. To confirm this trend, we tested 35-month-old Zakβ−/− fish against same-age WT controls that had been kept in different tanks. The results show that the differences with controls are exacerbated by age, with all parameters measured showing lower performance for the 35-month-old Zakβ−/− fish, noting that the results for these particular groups do not account for any potential variability introduced by the tank environment. The 35-month-old zebrafish were also used for ultrastructural analysis of skeletal muscle (Fig. 10). Broad views of electron microscopy images showed no differences from controls (Fig. 10A), with both genotypes showing well-defined sarcomeres, mitochondria and nuclei. However, similar to the observations in mice, sporadic fibres showing apparent dissolution of the filamentous material were detected only in the samples from Zakβ−/− fish (Fig. 10B).
Discussion

Loss of function of Zak causes a mild disease in mice compared with the congenital myopathy condition described in humans (11, 35). Zak−/− mice do not show any overt difference from controls, including in behavioural tests (13). Common findings such as fibre atrophy and predominance of slow type fibres nonetheless suggest that analysis of the mouse pathogenesis can shed light on the human disease. Within the hindlimb, the tonically active soleus shows pathology in young mice, whereas the tibialis and other muscles are spared. Age is a driver of pathology. Thus, in old mice, the proportion of centrally nucleated fibres increases in the soleus, whereas the spared tibialis begins to show similar changes. There is also a progressive shift with age in the soleus to type I fibres expressing the slow myosin heavy-chain subunit. This fibre type conversion is likely adaptive, as ageing females show more type I fibres and less pathology than age-matched males, suggesting that the more fatigue-resistant profile of the female soleus helps ameliorate the pathology. In conclusion, in the mouse, tonically active muscles appear more susceptible to the loss of ZAKβ, consistent with contraction stimuli being a trigger of ZAKβ signalling (13). ZAKα and ZAKβ are expressed in proliferating C2C12 myoblasts and at the start of myoblast fusion. This prompted an in vitro test of myoblast fusion and differentiation, which showed a fusion defect in C2C12-derived ZAK KO cell lines. We hypothesized that disruption of COBL actin nucleation activity in the ZAK KO cell lines underlay the fusion defect. Unexpectedly, expression of COBL was blunted in two independently generated ZAK-deficient clones. We could not confirm COBL expression in vivo because antibodies performed poorly on muscle extracts, but a critical role for ZAK or COBL in myoblast fusion in vivo can be ruled out since both ZAK and COBL KO mice show normal muscle development (13, 23).
Moreover, ZAKβ is not a key player in muscle fibre regeneration of adult muscle either, as in Zak−/− mice there is only a small effect on the speed of regeneration following injury, and full regeneration is eventually achieved. This small delayed-regeneration phenotype could be exacerbated at older time points, since regeneration capacity declines with age (36). Therefore, ZAKβ function appears critical for muscle function but not for muscle development or regeneration. To identify other ZAKβ targets that may contribute to the pathogenesis, we performed a phosphoproteomic screen. The phosphoproteomics assay as well as transcriptional profiles in patients (11) and mice (13) identified components of cell/focal adhesion as the most represented, which prompted the hypothesis that ZAKβ may regulate the turnover of cell adhesion factors. To test this hypothesis, we decided to focus on FLNC for several reasons. FLNC plays an important role linking the sarcolemma to the extracellular matrix (28, 29) and interacts with SYNPO2, a direct substrate of recombinant ZAKβ and a component of the CASA protein turnover mechanism (30). Moreover, we note that Zak−/− mice show similar but much less severe pathology than that of Ky−/− mice (37, 38). For example, in Ky−/− mice the shift to type I fibres in the soleus is extreme and, within 3 months, they make up 100% of this tissue (6), with FLNC showing strong mislocalization patterns (39). In Zak−/− mice, we confirmed the presence of fibres showing aberrant accumulation of FLNC in the soleus in males and females. Interestingly, as the soleus shifted towards type I fibres with age, the proportion of fibres with irregular FLNC expression decreased, with females showing less aberrant expression than males. Since FLNC is a well-characterized client of CASA (31), we interpret our results as deficient turnover of FLNC through the CASA mechanism (30).
Under this hypothesis, in the absence of ZAKβ, costameric/transmembrane complexes are less able to cope with mechanical stress, overwhelming the CASA mechanism and leading to the accumulation of damaged FLNC in some fibres. Accumulation of the core CASA cochaperone BAG3 (31) in the same fibres that show irregular FLNC expression supports this notion. Furthermore, increased endogenous loading by partial ablation of the gastrocnemius in the hind limb caused a dramatic increase in the number of fibres showing irregular FLNC expression in the Zak−/− soleus, indicating that ZAKβ signalling is necessary for a normal physiological response to sustained overloading. Despite the inherent limitations of the material, similar FLNC/BAG3-immunoreactive fibres were also detected in muscle sections from a ZAK-deficient patient biopsy. Moreover, myotilin, a FLNC partner at the Z-disc (40) that underlies myofibrillar myopathy (32), accumulates in the same FLNC-positive fibres in mice and humans. It thus appears that ZAKβ signalling is required for the turnover of certain proteins, and those identified here, FLNC, BAG3 and myotilin, are implicated in myofibrillar myopathies (41). Electron microscopy views confirmed normal sarcomeric ultrastructure in soleus sections from mutant and control mice. Rare examples of Z-disc disorganization or the presence of large vacuoles with dislocated membranous content were detected only in samples from Zak−/− mice. Of note, rimmed vacuoles, autophagic in nature and often reported in myofibrillar myopathies (41), were also previously detected in biopsies from ZAK-deficient patients (11). We generated Zakβ−/− zebrafish to test for the presence of the above features in an independent model. Electron microscopy showed normal preservation of organelles and sarcomeric organization, with sporadic evidence of myofibrillar disorganization in samples from 35-month-old fish, which was not found in age-matched controls.
In all tests performed, Zakβ −/− zebrafish showed no phenotype at 6 months, but locomotor performance decreased with age, indicating that ZAKβ signalling also contributes to preserving normal muscle function in zebrafish. In summary, whereas ZAKβ loss of function in humans causes measurable muscle defects from birth ( 11 ), most phenotypic observations in smaller animal models appear with age, and the ultrastructural hallmarks of myofibrillar myopathy, namely Z-disc streaming, granulofilamentous/filamentous sarcoplasmic inclusions or large autophagic vacuoles ( 42 , 43 ), are absent or only marginally present in the animal models. In the majority of myofibrillar myopathies, the mutant protein is a major component of the proteomic content of the aggregates it appears to cause ( 44 ). This is not a plausible mechanism for ZAK deficiency, as the loss-of-function ZAK mutations reported so far result in total absence of the protein ( 11 , 13 ). However, recessive loss-of-function mutations in the KY gene also result in mislocalized FLNC expression in patients ( 8 , 9 ) and mice ( 39 ), indicating that formation of aggregates driven by the mutant protein is not a universal mechanism in myofibrillar myopathies ( 45 ). The accumulation of mislocalized FLNC and BAG3 in highly reactive fibres reported here may potentially trap these proteins away from the Z-disc. Sequestration of FLNC or BAG3 has been proposed as a mechanism for myofibrillar myopathy whereby their accumulation depletes them from the Z-disc, compromising function and structural integrity in these fibres ( 46 , 47 ). Our data suggest that a similar mechanism may contribute to the muscle pathology of ZAK deficiency in mice and humans.
Abstract The ZAK gene encodes two functionally distinct kinases, ZAKα and ZAKβ. Homozygous loss-of-function mutations affecting both isoforms cause a congenital muscle disease. ZAKβ is the only isoform expressed in skeletal muscle and is activated by muscle contraction and cellular compression. The ZAKβ substrates in skeletal muscle and the mechanism whereby ZAKβ senses mechanical stress remain to be determined. To gain insights into the pathogenic mechanism, we exploited ZAK-deficient cell lines, zebrafish, mice and a human biopsy. ZAK-deficient mice and zebrafish show a mild phenotype. In mice, comparative histopathology data from regeneration, overloading, ageing and sex conditions indicate that while age and activity are drivers of the pathology, ZAKβ appears to have a marginal role in myoblast fusion in vitro or muscle regeneration in vivo. The presence of SYNPO2, BAG3 and Filamin C (FLNC) in a phosphoproteomics assay and extended analyses suggested a role for ZAKβ in the turnover of FLNC. Immunofluorescence analysis of muscle sections from mice and a human biopsy showed evidence of FLNC and BAG3 accumulations as well as other myofibrillar myopathy markers. Moreover, endogenous overloading of skeletal muscle exacerbated the presence of fibres with FLNC accumulations in mice, indicating that ZAKβ signalling is necessary for an adaptive turnover of FLNC that allows for the normal physiological response to sustained mechanical stress. We suggest that accumulation of mislocalized FLNC and BAG3 in highly immunoreactive fibres contributes to the pathogenic mechanism of ZAK deficiency.
Supplementary Material
Acknowledgements We would like to thank Peter F.M. van der Ven for the generous sharing of the FLNC antibody RR90. We would like to thank the Bioscience Technology Facility at the University of York for their expert assistance with the phosphoproteomics (Adam Dowle), electron microscopy (Clare Steele-King) and quantitative phase imaging (Karen Hogg). Data Availability The data that support the findings of this study are available upon reasonable request. Funding This work was partially funded by the Muscular Dystrophy Campaign (Grant No. 17GRO-PG12-0148). A.R. and A.S. hold BBSRC White Rose PhD Studentships (BB/M011151/1). Work in the Bekker-Jensen laboratory was supported by grants from The Novo Nordisk Foundation (NNF21OC0071475), The Nordea Foundation and the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Program (grant agreement 863911—PHYRIST).
CC BY
no
2024-01-16 23:47:16
Hum Mol Genet. 2023 Jul 10; 32(17):2751-2770
oa_package/4c/76/PMC10789240.tar.gz
PMC10789241
38226366
INTRODUCTION M–C σ bonds are one of the core features of organometallic complexes. As special organometallics, metallaaromatics can be defined as aromatic complexes that have one or more metal atoms in their aromatic ring [ 1–9 ]. Metallaaromatics mainly consist of metallabenzenes [ 10–16 ], metallabenzynes [ 17–20 ], heterometallaaromatics [ 21–24 ], dianion metalloles [ 25 , 26 ], spiro metalloles [ 27–29 ] and carbolong complexes [ 30 , 31 ], all containing at least one M–C σ bond. Specifically, carbolong complexes, which include metallapentalynes, metallapentalenes and their derivatives, have no fewer than three M–C σ bonds. The name ‘carbolong’ comes from the fact that both metallapentalynes and metallapentalenes contain a long carbon chain (≥7C) coordinated to a bridgehead metal; interestingly, the three M–C σ bonds within the metallapentalyne and metallapentalene rings lie in the same plane. The first carbolong complexes were metallapentalynes containing three coplanar M–C σ bonds (Fig. 1 , I ), which were reported by our group in 2013 [ 32 ]. Thereafter, many other carbolong complexes that also contain three coplanar M–C σ bonds, such as metallapentalenes with skeletal structure I [ 33 , 34 ] and their derivatives with II [ 35 ] and III [ 36 ] frameworks, were discovered (Fig. 1 ). In addition, some carbolong complexes with four coplanar M–C σ bonds, for instance metallapentalene derivatives with IV [ 37 , 38 ] and V [ 39 ] structures, have also been reported (Fig. 1 ). These structurally unique complexes exhibit interesting properties and have been applied in several areas [ 30 ]. Continuing our interest in carbolong chemistry, we aimed to prepare complexes with more coplanar M–C σ bonds. Herein, we report the preparation and characterization of two types of osmium complexes containing five coplanar M–C σ bonds. The difference between the two types is the size and order of the rings in their structures. 
Density functional theory (DFT) calculations were performed to characterize the bonding in these unique structures. The successful construction of these structures may be attributed to the use of a carbon chain as a rigid and polydentate ligand (Fig. 1 ) and the conjugation effect between sp 2 carbons and the metal center, which maintains the M–C σ bonds in a single plane and prevents elimination of the two neighboring carbons that bind to the metal. These organometallics were found to be stable at temperatures up to 100°C in moisture or air.
MATERIALS AND METHODS Detailed materials and methods are available in the Supplementary data .
RESULTS AND DISCUSSION Design, synthesis, characterization and stability of complexes with five coplanar Os–C σ bonds Complex 1 (Fig. 2A ), which has been reported to be capable of further reaction to form a more highly conjugated system [ 35 , 36 , 39 ], was chosen as the starting material. We used AgBF 4 to remove the chloride ligand from the osmium center of complex 1 , resulting in the isolation of complex 2 (Fig. 2A ), whose structure was confirmed by X-ray crystallographic analysis ( Fig. S1 ). Subsequently, materials with unsaturated carbons were used to construct rigid and conjugated systems, so that the corresponding M–C σ bonds were confined in a plane to improve their stability. Complex 2 was treated first with ethoxyethyne (HC≡COEt) and then with excess neutral alumina to absorb the acid that was generated, producing complex 3a . The structure of 3a is denoted [5554], indicating the sizes of the rings, recorded in the direction from C1 to C12. An η 3 -coordinated intermediate ( 5a ) with the skeleton structure III shown in Fig. 1 was also isolated in the absence of Al 2 O 3 . Treatment of 5a with a base such as Al 2 O 3 was found to yield 3a reversibly (Fig. 2A ). Complex 2 was also treated with 2-methyl-1-buten-3-yne (HC≡CCMe=CH 2 ), producing complex 4a , whose structure is denoted [5545] (Fig. 2A ). Similar structures with different substituents ( 3b, 4b–4d ) could also be obtained from different substrates (see details of the synthesis in the Supplementary Materials ). To investigate the structures of 3a and 4a , X-ray crystallographic analysis was performed (Fig. 2B and 2C ). The skeletons of both of these structures contain one metal center and a multidentate hydrocarbon chain. The mean deviations from the least-squares plane of C1–C12 and Os were determined to be 0.160 Å in 3a and 0.036 Å in 4a , indicating good planarity of both compounds. 
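The mean deviation from a least-squares plane quoted above is a standard planarity metric. As a rough illustration only (a hypothetical helper, not the crystallographic software used by the authors), it can be computed from Cartesian coordinates by an SVD plane fit:

```python
import numpy as np

def mean_plane_deviation(coords):
    """Mean absolute deviation of a set of atoms from their
    least-squares plane (e.g. C1-C12 and Os).

    coords: (N, 3) array of Cartesian coordinates in angstroms.
    The best-fit plane passes through the centroid; its normal is
    the right singular vector with the smallest singular value.
    """
    pts = np.asarray(coords, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Rows of vt are orthonormal principal directions of the point cloud
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                 # direction of least variance
    deviations = centered @ normal  # signed distances to the plane
    return float(np.abs(deviations).mean())
```

For a perfectly planar skeleton this returns zero; values of a few hundredths of an angstrom, as reported for 4a, indicate near-ideal planarity.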
The bond lengths of the five M–C σ bonds in 3a and 4a were determined, and are shown in Fig. 2A . The Os–C12 bonds in both 3a and 4a were found to be longer than the other Os–C bonds because the C12 atom is sp 3 hybridized, whereas carbons C1–C11 are all sp 2 hybridized and conjugated with osmium. In addition to the planarity of the skeletons, the X-ray data show five coplanar M–C bonds in each structure. The intermediate 5a was also characterized by X-ray diffraction and its structure is shown in Fig. 2D . It is a non-planar structure because it contains an η 3 -coordination fragment. In the X-ray crystallographic data, the average distances between H1 and the protons on C12 (two H12 atoms) were determined to be 2.635 Å in 3a and 2.394 Å in 4a , which prompted us to consider the possibility of adding one more Os–C σ bond in the plane. In 3a and 4a , the carbon chain ligands composed of 12 carbons form five coplanar σ bonds with the metal, which significantly diminishes the space available to the substituents on the terminal carbons C1 and C12, raising the possibility of repulsion between these substituents. The heteronuclear multiple bond correlation (HMBC) spectra of 3a and 4a ( Figs S2 and S3 , respectively) reveal strong interactions between C1 and H12 and between C12 and H1, which may be a result of the crowding of these atom pairs. That these strong correlations arise from repulsive forces was further confirmed by analysis of the noncovalent interactions (NCIs) derived from the DFT calculations ( Figs S4 and S5 ) [ 40 ]. This suggests a steric limit on the number of carbons in the equatorial plane around the osmium center, which may not support the introduction of one extra equatorial Os–C σ bond in these two systems. Thermal stability studies of 3a and 4a were then performed to investigate the stabilities of their skeletons ( Table S1 ). 
These experiments showed that 4a is stable in air for 1 month at room temperature (purity >95%) and that 3a shows no detectable decomposition even after more than 6 months. Complex 3a was also found to be stable in air for at least 1 day at 100°C in the solid state, with purity >95%. The stability of 3a and 4a can be attributed to their structures: the conjugated and rigid polydentate systems assume stable configurations, in which five covalent M–C bonds tightly connect the metal and the organic moiety. In addition, chemical reagent tolerance tests were also performed on 3a and 4a . Complex 3a can tolerate extreme conditions and reagents such as sodium (Na), oxidants such as hydrogen peroxide (H 2 O 2 ) and bases such as sodium hydride (NaH). In acidic media, however, 3a is converted to 5a . This conversion is reversible, and 3a can be recovered after removal of the acid (Fig. 2A ). In contrast, compound 4a appears to be less capable of tolerating a harsh chemical environment than 3a . This may be partly due to the strain in the four-membered ring of 4a . On the other hand, the stability of 3a and 4a may also be associated with their aromaticity [ 41 ]. The aromaticity of the skeletons of both 3a and 4a was investigated by determining the nucleus-independent chemical shift (NICS) [ 42 ] values and displaying the anisotropy of the induced current density (ACID) ( Figs S6–S8 ) [ 43 ]. The results show that 3a has aromatic character, mainly in the two five-membered rings (Os, C1–C7), providing an extra contribution to its stability. As for 4a , however, its two five-membered rings (Os, C1–C7) are not aromatic; in addition, the anti-aromatic four-membered ring may decrease its stability. This may explain the difference in stability between 3a and 4a . Theoretical studies on the bonding situation of the Os–C bonds DFT calculations were carried out to reveal the bonding situation of the metal and the carbon atoms. 
Using the NBO 7.0 software package, the Wiberg bond indices (WBIs) [ 44 , 45 ] of the M–C bonds were determined in 3′, 4′ and 5′ , three simplified skeletons of 3a, 4a and 5a (Fig. 3A ). Relatively large WBI values indicate the presence of M–C σ bonds. Small WBI values, such as those for C10, C11 and C12 in 5′ , denote weak bonds and indicate η 3 -ligand coordination to osmium. To further describe the M–C σ bonds, the Pipek-Mezey localized molecular orbitals (PM-LMOs) [ 46 ] were determined using the Multiwfn software package [ 47 ]. This analysis shows five M–C σ bonds on the equatorial plane of 3′ and 4′ (Fig. 3B and 3C ), while in 5′ , only three such bonds are seen (Fig. 3D ). Two other PM-LMOs on C10–C12 together show a four-electron η 3 -ligand coordinated to the osmium center in 5′ . For comparison, a similar analysis was performed for II-Os′ as the simplified model for the reported complexes II-Os (Fig. 1 and Fig. 3E , left) [ 35 ]. Two LMOs of II-Os′ were identified as engaging in a combination of π donation and π back-donation (Fig. 3E , right). A natural resonance theory (NRT) analysis [ 48 ] was performed to determine the proportion of each resonance structure of II-Os′ . There are several resonance possibilities for C1 to C10; however, to focus on the fragment of interest, C11, C12 and Os, these resonance structures were combined and described using dashed bonds as shown in Fig. 3F . The results show that resonance structures of the form II-Os-b′ (53.3%) are dominant relative to those of the form II-Os-a′ (46.7%). These results suggest that the C11–C12 moiety can be viewed mainly as a π -ligand, which is further supported by the C10–C11 (1.362(8) Å) and C11–C12 (1.394(8) Å) bond lengths of one example of II-Os (R = Me) in the crystal structure [ 35 ]. 
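For orientation, the Wiberg bond index used above reduces, in a Löwdin-orthogonalized AO basis, to the sum of squared density-matrix elements coupling basis functions on the two atoms, WBI(A,B) = Σ_{μ∈A, ν∈B} D_{μν}². A minimal sketch of this textbook definition (illustrative only; this is not the NBO 7.0 implementation, and the toy density matrix below is an assumption):

```python
import numpy as np

def wiberg_bond_index(density, atom_of_ao, a, b):
    """Wiberg bond index between atoms a and b.

    density    : (n, n) AO density matrix in a Löwdin-orthogonalized basis.
    atom_of_ao : length-n sequence mapping each AO index to its atom.
    """
    atom_of_ao = np.asarray(atom_of_ao)
    # Sum of squared density-matrix elements coupling AOs on a with AOs on b
    block = density[np.ix_(atom_of_ao == a, atom_of_ao == b)]
    return float(np.sum(block ** 2))

# Toy example: an H2-like system with one AO per atom and a doubly
# occupied bonding orbital c = (1, 1)/sqrt(2), so D = 2 * c c^T.
c = np.array([1.0, 1.0]) / np.sqrt(2.0)
density_h2 = 2.0 * np.outer(c, c)
print(wiberg_bond_index(density_h2, [0, 1], 0, 1))  # ~1.0, a single bond
```

Values near 1 correspond to single bonds and near 2 to double bonds, which is how the large Os–C7 indices discussed below are read.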
On the basis of these results, it was concluded that in our previously reported complexes II-Os there are mainly three coplanar Os–C σ bonds, and the interactions of C11 and C12 with Os are primarily π in character, although the σ interactions cannot be ignored. Therefore, the skeleton of complexes II-Os is fundamentally different from those of 3 and 4 . Additional DFT computational analyses were performed with both numerical and visual methods to investigate the σ character of the M–C bonds in 3′ and 4′ . Usually, bonds can be divided into two main types: ionic and covalent ( σ, π , etc.). To investigate the covalent and ionic character of the Os–C bonds in these complexes, the Hirshfeld charges of the atoms were first calculated with Multiwfn. The charges on osmium in 3′ and 4′ are 0.007 and 0.002, respectively, nearly zero, indicating a non-ionic form. Natural bond order (NBO) analyses were then performed (Fig. 4A ) [ 44 ], with 5′ also used as a reference. In 3′ and 4′ , five large Os–C bond orders were observed, indicating strong bonding forces between carbon and osmium. All of the bonds in 3′, 4′ and 5′ except for the Os–C7 bonds had a predominantly covalent character, indicating that the M–C bonds are dominated by a σ -covalent rather than an ionic component. The large NBO values of Os–C7 in 3′, 4′ and 5′ indicate that the Os–C7 bonds are double bonds, containing not only σ but also π character. The partially ionic NBO values of Os–C7 may arise because its π electrons participate in the delocalization of the rings, resulting in less covalent character. The NBO values of Os–C10 and Os–C12 in skeleton 5′ are lower than those in 3′ because the bonds in the former are η 3 -coordination bonds rather than σ bonds. This further corroborates the line drawings in Fig. 4A . 
Electron localization function (ELF) [ 49 ] and localized orbital locator (LOL) [ 50 ] analyses of both 3′ and 4′ were also performed, and the results are shown here as visual evidence (Fig. 4B and 4C ). The red regions in the white circles reveal the electron localization between the carbons and osmium, further confirming the existence of five M–C σ bonds in each structure. Furthermore, a simple comparison of the color gradients in the ELF graphs shows that the electron densities of the M–C σ bonds are at nearly the same level as those of the C–C bonds, suggesting comparable stability of the M–C bonds. Thus, the results of these DFT calculations provide strong evidence for the existence of five coplanar M–C σ bonds in the [5554] and [5545] structures.
CONCLUSIONS To summarize, we have described the synthesis and characterization of two kinds of complexes with five coplanar M–C σ bonds. To keep the bonds stable and fix them in a single plane, we developed structures that were expected to have both rigidity and conjugation. As a result, these complexes were found to be exceptionally stable. The existence of the five coplanar M–C σ bonds was supported by both experimental and computational data. Note that the M–C bonds are not purely σ in character; some also have a π component, giving rise to a large conjugated system. In this research we not only synthesized several new structures, but also extended the number of coplanar M–C σ bonds to five.
ABSTRACT The σ bond is an important concept in chemistry, and the metal–carbon (M–C) σ bond in particular is a central feature of organometallic chemistry. Synthesis of stable complexes with five coplanar M–C σ bonds is challenging. Here, we describe the synthesis of two different types of stable complexes with five coplanar M–C σ bonds, and examine the stability of such complexes, which use rigid conjugated carbon chains to chelate the metal center. Density functional theory (DFT) calculations show that the M–C σ bonds in these complexes have a primarily covalent character. Besides their σ nature, there is also a π -conjugation component between the metal center and the carbons, which causes delocalization. This work extends the number of coplanar M–C σ bonds to five. Two different types of all-carbon-surrounded transition metal complexes with five coplanar M–C σ bonds were produced, and they show good stability in air and moisture.
Supplementary Material
ACKNOWLEDGMENTS We thank Prof. Xueming Yang and Prof. Jun Li for their kind advice. We also thank Dr. Jingxuan Zhang for the discussion. FUNDING This work was supported by the National Natural Science Foundation of China (21931002, 92156021, 22071098 and 22101123), Shenzhen Science and Technology Innovation Committee (JCYJ20200109140812302 and JCYJ20210324105013035), Guangdong Provincial Key Laboratory of Catalysis (2020B121201002), Guangdong Grants (2021ZT09C064), Introduction of Major Talent Projects in Guangdong Province (2019CX01C079), and Financial Support for Outstanding Talents Training Fund in Shenzhen. AUTHOR CONTRIBUTIONS H.X. devised the project. H.Z., D.C. and H.X. supervised the experimental study. Y.H., M.L. and Z.L performed the experimental work. Y.H. performed the computational work. Y.H., D.C. and H.X. wrote the paper and prepared the Supplementary Materials with the input from all authors. All authors discussed the results in detail and commented on the manuscript. Conflict of interest statement . None declared.
CC BY
no
2024-01-16 23:47:16
Natl Sci Rev. 2023 Dec 20; 10(12):nwad325
oa_package/29/0b/PMC10789241.tar.gz
PMC10789242
38224489
Introduction The genus Paenibacillus comprises Gram-positive, facultatively anaerobic, endospore-forming bacteria. They occur throughout nature, e.g., in water, soil, the rhizosphere, and insect larvae. While P. larvae causes American foulbrood in honeybee larvae ( Daisley et al., 2023 ), Paenibacillus sp. str. FPU-7 is known for its chitinolytic activity on insoluble shrimp chitin flakes ( Itoh, 2021 ). Several Paenibacillus species belong to the microbiomes of agriculturally important crops, such as the diazotrophic plant growth-promoting Paenibacillus polymyxa that is used as an inoculant in agriculture ( Beneduzi et al., 2012 ; Souza et al., 2015 ; Langendries and Goormachtig, 2021 ). This review focuses on the diazotrophic plant growth-promoting Paenibacillus sonchi genomovar Riograndensis strain SBR5T. Although the organism displays important growth-promoting traits, its full characterization remains unfinished. For the characterization and engineering of this non-model bacterium, genetic tools and omics databases had to be developed. Examples of the technological developments and the insights gained are given, as well as a perspective on how this knowledge can be leveraged to deepen our understanding of P. sonchi with regard to its physiology, genetics, evolution, and application as a plant growth-promoting rhizobacterium (PGPR). Analyses performed on P. sonchi can also serve as a guide for work involving bacteria isolated from the environment that are difficult to handle.
Conclusion As demonstrated in this review, much has been learned about P. sonchi genomovar Riograndensis SBR5T, showing how strains isolated from the environment can be explored through various methodologies. On the one hand, it has been studied with regard to the regulation and evolution of nitrogen fixation systems, including the alternative Fe-only nitrogenase. On the other hand, genetic tools including CRISPRi have been developed to perform functional gene analysis and applied to gain insight into its physiology, such as vitamin biosynthesis. Genome, pangenome, and transcriptome analyses laid the foundation to explore the large genome of SBR5T as a rich source of biological functions, highlighting its potential to provide environmental and economic benefits.
Associate Editor: Lavínia Schüler-Faccini
Conflict of Interest: The authors declare that there is no conflict of interest that could be perceived as prejudicial to the impartiality of the reported research.
Abstract
Paenibacillus sonchi genomovar Riograndensis SBR5T is a plant growth-promoting rhizobacterium (PGPR) isolated in the Brazilian state of Rio Grande do Sul from the rhizosphere of Triticum aestivum. It fixes nitrogen, produces siderophores as well as the phytohormone indole-3-acetic acid, solubilizes phosphate, and displays antagonist activity against Listeria monocytogenes and Pectobacterium carotovorum. Comprehensive omics analysis and the development of genetic tools are key to characterizing and engineering such non-model microorganisms. Therefore, the complete genome of SBR5T was sequenced and shown to encode 6,705 proteins, 87 tRNAs, and 27 rRNAs, and it enabled a landscape transcriptome analysis that unveiled conserved transcriptional and translational patterns and characterized operon structures and riboswitches. The pangenome of the P. sonchi species is open with a stable core pangenome. At the same time, the analysis of genes coding for nitrogenases revealed that the trait of nitrogen fixation is sparse within the Paenibacillaceae family and that the presence of the Fe-only nitrogenase in the P. sonchi group was exclusive to SBR5T. The development of genetic tools for SBR5T enabled genetic transformation, plasmid construction for constitutive and inducible gene expression, and gene repression using the CRISPRi system. Altogether, the work with P. sonchi can guide the study of non-model bacteria with economic potential.
P. sonchi genomovar Riograndensis SBR5T: isolation, (pan)genome and RNA landscape
Paenibacillus strain SBR5T was isolated in the Brazilian state of Rio Grande do Sul from the rhizosphere of Triticum aestivum by the laboratory of Luciane Passaglia at UFRGS and named P. riograndensis ( Beneduzi et al., 2008 ; Beneduzi et al., 2010 ). Later, using genome-based metrics and phylogenetic analyses, the strain was shown to be a genomovar of Paenibacillus sonchi X19-5T ( Sant’Anna et al., 2017 ), which had been described shortly before strain SBR5T ( Hong et al., 2009 ). The physiological characteristics of this facultatively anaerobic, endospore-forming bacterium comprise menaquinone MK-7 as the major respiratory quinone, anteiso-C15:0 as the major fatty acid, utilization of starch, and production of dihydroxyacetone and catalase ( Beneduzi et al., 2010 ). P. sonchi genomovar Riograndensis SBR5T is a PGPR and was demonstrated to improve the growth of wheat under greenhouse conditions ( Beneduzi et al., 2008 ; Campos et al., 2015 ). Its plant growth-promoting activities are associated with its ability to fix nitrogen, solubilize phosphate, and produce siderophores and the phytohormone indole-3-acetic acid. It also displays antagonist activity against Listeria monocytogenes and Pectobacterium carotovorum ( Bach et al., 2016b ). After the isolation of a new bacterium, the determination of its complete genome sequence provides the first systems-level insight into its lifestyle and the basis for further omics-based technologies for systems-level characterization and engineering. A draft genome sequence of SBR5T, comprising 2,276 contigs, indicated the absence of plasmids, a GC content of 55.1%, and 7,467 open reading frames ( Beneduzi et al., 2011 ). Later, the complete genome sequence was obtained by sequencing two shotgun paired-end and mate-pair libraries, followed by joining all contigs, closing gaps, and resolving SNPs in repetitive regions ( Brito et al., 2015 ).
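Basic assembly metrics such as the GC content and contig statistics reported for the draft and complete genomes are straightforward to recompute from raw sequences. A minimal Python sketch, using invented toy contigs rather than the actual SBR5T assembly:

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def n50(contig_lengths):
    """Assembly N50: the length L such that contigs of length >= L
    together cover at least half of the total assembly."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

# Toy contigs standing in for a draft assembly (hypothetical sequences).
contigs = ["ATGCGC", "GGCCGGCC", "ATATAT", "GCGCGCGCGC"]
print(round(gc_content("".join(contigs)), 3))   # GC fraction of the assembly
print(n50([len(c) for c in contigs]))           # N50 of the toy contig set
```

On real data, the same two functions would be applied to the contig sequences parsed from a FASTA file; the published values (GC content 55.1%, 2,276 contigs) come from the cited studies, not from this sketch.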
While the GC content was confirmed, the complete genome sequence was 523,056 bp larger than the previous draft. The additional sequences were not clustered but scattered over the whole circular chromosome of 7,893,056 bp ( Brito et al., 2015 ). The complete genome of SBR5T contained 6,705 protein-coding genes and genes for 87 tRNAs and 27 rRNAs. While several gene annotations already offered interesting glimpses into the physiological potential, e.g., genes encoding different types of nitrogenases, catabolic enzymes for the utilization of sugars, biosynthesis of vitamins, and antibiotic resistance, the most valuable asset of the complete genome sequence of P. sonchi genomovar Riograndensis SBR5T is that it provides a foundation for all systems-level analyses of this species ( Brito et al., 2015 ). To gain insight into the evolution of the genus Paenibacillus at the genome level, the pangenome repertoire of P. sonchi was analyzed using not only the complete genome sequence of the genomovar Riograndensis SBR5T ( Brito et al., 2015 ), but also those of the genomovars Oryzarum CAR114, Oryzarum CAS34 and Sonchi LMG_24727 ( Ribeiro et al., 2022 ). The genomovars shared 39% of all genes (4,365 of 11,205 genes). Next, a pangenome analysis of one of the five phylogenetically isolated clades of diazotrophic Paenibacillus genomes was performed, since it comprised most Paenibacillus genomes harboring alternative nitrogenase genes (83 genomes passed the quality criteria), including that of P. sonchi genomovar Riograndensis SBR5T. About 1,100 genes make up this core pangenome ( Ribeiro et al., 2022 ). The pangenome was determined to be open, since the pangenome repertoire continued to expand as more genomes were incorporated, whereas the core pangenome stabilized. This is characteristic of bacteria living in multiple or complex niches and in complex communities with larger effective population sizes ( Ribeiro et al., 2022 ).
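The open-pangenome argument rests on a simple accumulation computation: as genomes are added, the union of gene families (the pangenome) keeps growing, while their intersection (the core) shrinks and then stabilizes. A toy Python sketch of this bookkeeping (the genome labels and gene families are invented for illustration, not the published data):

```python
# Per-genome gene-family sets; real analyses would use orthologous clusters.
genomes = {
    "gv_Riograndensis": {"nifH", "nifD", "nifK", "anfH", "anfD", "glnR"},
    "gv_Oryzarum_CAR114": {"nifH", "nifD", "nifK", "glnR", "fecE"},
    "gv_Oryzarum_CAS34": {"nifH", "nifD", "nifK", "glnR", "opuAA"},
    "gv_Sonchi": {"nifH", "nifD", "nifK", "glnR", "treA"},
}

pan, core = set(), None
for name, families in genomes.items():
    pan |= families  # union: the pangenome grows with each added genome
    core = families if core is None else core & families  # intersection: the core stabilizes
    print(f"{name}: pan={len(pan)} core={len(core)}")
```

An "open" pangenome corresponds to the pan size still increasing at the last genomes added, while the core size stays flat, which is the pattern reported for P. sonchi.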
The pangenome analysis helped to characterize the occurrence of different types of nitrogenases in P. sonchi genomovar Riograndensis (see below). Furthermore, the genome of P. sonchi genomovar Riograndensis SBR5T served as the basis for a comprehensive RNAseq analysis that led to the determination of its RNA landscape ( Brito et al., 2017a ). This type of analysis requires exposing the studied organism to varied growth conditions in order to obtain RNA samples containing a wide diversity of expressed genes. To achieve such diversity, the bacterium was grown under 15 different conditions varying temperature, pH, and nutrient supply, and the resulting RNA material was isolated and pooled in equal amounts for sequencing. Two different library generation methods were applied: the first covered the whole transcriptome of SBR5T, while the second was prepared by isolating only its primary transcripts with unaltered 5′-triphosphate ends. The native 5′-triphosphate ends are obtained by RNA hydrolysis followed by treatment with a terminator 5′-phosphate-dependent exonuclease that digests all RNAs having a 5′-monophosphate end but not those having a 5′-triphosphate end ( Pfeifer-Sancar et al., 2013 ). This kind of library enabled a plethora of genetic characterization possibilities for SBR5T, such as the identification of conserved sequences of ribosome binding sites (RBS) and translation start motifs, transcription start sites (TSS), and the elements within the 5′ untranslated regions (5'UTRs) of genes, including cis-regulatory structures. In combination, the two libraries helped to identify novel transcripts, quantify transcript abundance, and detect operon structures in SBR5T. A total of 1,268 TSS were identified as belonging to the 5'UTRs of annotated genes, and 1,082 belonged to novel transcripts.
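Assigning a TSS to the 5'UTR of an annotated gene essentially amounts to subtracting coordinates and checking the distance to the start codon. A minimal sketch with invented forward-strand coordinates (not the published SBR5T positions):

```python
# Hypothetical TSS positions and annotated start-codon positions
# for four genes on the forward strand (coordinates are invented).
tss = {"thiC": 1200, "spo0A": 5040, "fecE": 9100, "ydjJ": 12000}
start_codon = {"thiC": 1330, "spo0A": 5075, "fecE": 9140, "ydjJ": 12030}

# 5'UTR length = distance from the TSS to the start codon.
utr_len = {g: start_codon[g] - tss[g] for g in tss}

# Long 5'UTRs (>100 bp) are candidates for cis-regulatory elements
# such as riboswitches.
long_utrs = [g for g, n in utr_len.items() if n > 100]
print(utr_len)
print(long_utrs)
```

A real analysis would additionally handle the reverse strand (where the subtraction is inverted) and discard TSS that fall inside the coding sequence.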
Most 5'UTRs were 25 to 50 bp long, and those larger than 100 bp (209 of the 1,268 analyzed) revealed a conserved aGGaGg RBS motif as well as ttgaca and TAtaaT for the -35 and -10 hexamer motifs, respectively ( Brito et al., 2017a ). The operon analysis unveiled a majority of monocistronic transcripts and a total of 622 operons and 248 sub-operons. The description of these operon structures helped to understand the gene expression processes inherent to some PGPR traits in SBR5T, for example, its siderophore transport. Moreover, cis-regulatory RNA elements were identified using the Infernal tool ( Nawrocki and Eddy, 2013 ), which resulted in the identification of the riboswitches present in the transcriptome of SBR5T; further functional experiments showed a thiamine pyrophosphate (TPP) riboswitch interfering with the translational machinery within the thiamine biosynthesis pathway of this bacterium (see below) ( Brito et al., 2017a ). Hence, the RNA landscape analysis provided a valuable basis for the development of genetic tools and omics-based characterizations such as differential RNAseq analysis of this bacterium. Figure 1 presents an overview of all studies performed with P. sonchi strain SBR5T to contextualize how the analysis steps can be used together and how they complement each other. Systems-level analyses of the proteome or the metabolome are still lacking but will be immensely valuable for the further analysis of P. sonchi.
P. sonchi genomovar Riograndensis SBR5T as a PGPR
Nitrogen fixation
Biological nitrogen fixation (BNF), performed by diazotrophic organisms, can be considered the most important plant growth-promoting trait. The biosynthesis of reduced nitrogen from the inert dinitrogen gas of the Earth's atmosphere by BNF is catalyzed by nitrogenase enzymes. This reaction (EC 1.18.6.1) is very energy-demanding and requires 16 ATP and 4 reduced ferredoxins per fixed molecule of dinitrogen ( Rubio and Ludden, 2008 ).
Nitrogenase enzymes are composed of two main components, dinitrogenase reductase and dinitrogenase. Owing to their metal ion content, these components are named the iron (Fe) protein and the iron-molybdenum (FeMo) protein, respectively. In the catalytic cycle of nitrogen fixation, the dinitrogenase directly reduces dinitrogen, whereas the dinitrogenase reductase provides the electrons for the reduction and energy through ATP hydrolysis ( Hoffman et al., 2009 ). The FeMo-nitrogenase subunits are encoded in the operons nifBHDKENXhesAnifV and nifENX. In some diazotrophic bacteria, two additional nitrogenase enzymes can also be found: V-nitrogenase (EC 1.18.6.2) and Fe-only nitrogenase, which contain FeV and FeFe co-factors, respectively ( Hu and Ribbe, 2015 ). The structural components of V-nitrogenase and Fe-only nitrogenase are encoded by the vnfHDGK and anfHDGK operons, respectively, including an additional delta chain (encoded by the vnfG and anfG genes, respectively) ( Mus et al., 2018 ). Beyond the structural genes of the three types of nitrogenase enzymes, accessory genes responsible for co-factor cluster synthesis, nitrogenase assembly, electron transfer, and gene regulation are known. In P. sonchi SBR5T, besides the conventional FeMo-nitrogenase, a Fe-only nitrogenase was identified and demonstrated to be functional in the absence of Mo ( Fernandes et al., 2014 ). The conventional nitrogenase (FeMo-nitrogenase) genes are regulated by GlnR in P. sonchi ( Fernandes et al., 2017 ). This transcriptional regulatory protein represses transcription of the nif operon at high nitrogen status ( Fernandes et al., 2017 ). GlnR binding to its target DNA sequences was demonstrated by surface plasmon resonance spectroscopy. Importantly, the link to the nitrogen status of the cell was revealed, since GlnR-DNA affinity was greatly enhanced when GlnR was bound by glutamine synthetase, which occurs when the latter is feedback-inhibited by binding glutamine.
In addition, the energy status is monitored, since complex formation between GlnR and glutamine synthetase depends on the ATP and AMP levels within the cell. The complex binds to multiple operator sites, forming a loop for strong repression of its target genes ( Fernandes et al., 2017 ). While it was demonstrated that under Mo-limiting conditions the alternative Fe-only nitrogenase genes of P. sonchi are transcribed and catalytic activity of the Fe-only nitrogenase enzyme could be measured ( Fernandes et al., 2014 ), this type of regulation does not involve GlnR. The underlying regulatory mechanisms of the nitrogen- and Mo-dependent regulation of the Fe-only nitrogenase remain elusive. The evolution of the alternative nitrogenases has been studied in some detail in the Paenibacillaceae family by bioinformatics analysis ( Ribeiro et al., 2022 ). Of 930 genomes in the Paenibacillaceae family, 160 were identified as putative diazotrophic genomes; thus, 17% of these species are expected to fix nitrogen. Of these, only a subset making up 2.5% of all Paenibacillaceae genomes possess genes for the alternative Fe- or V-nitrogenase enzymes ( Ribeiro et al., 2022 ). Genomes encoding the Fe-only nitrogenase shared two operons, nifEN and anfHDGK, and belonged to the three genera Gorillibacterium, Fontibacillus, and Paenibacillus; in the latter, nifX is additionally present in the operon containing the nifEN genes. The species phylogeny of the Paenibacillaceae separated the diazotrophs into five clades, one of which contained all occurrences of strains harboring alternative nitrogenases in the Paenibacillus genus. It was proposed that the Fe-only nitrogenase was acquired by the ancestral lineage of the genera Fontibacillus, Gorillibacterium, and Paenibacillus via horizontal gene transfer ( Ribeiro et al., 2022 ). Later, gene transfers and gene losses shaped the evolution of the alternative nitrogenases in these groups, and accessory genes may have coevolved.
Taken together, the trait of nitrogen fixation is sparse within the Paenibacillaceae, and the presence of the Fe-only nitrogenase in the P. sonchi group was exclusive to the genomovar Riograndensis ( Ribeiro et al., 2022 ). Thus, the insight gained into the Fe-only nitrogenase in the P. sonchi group revealed the importance of establishing omics-based analyses to further our understanding of the evolution of rare metabolic traits in non-model microorganisms.
Iron acquisition
Besides nitrogen fixation, the production of siderophores is highly relevant for promoting plant growth. Siderophores complex iron ions and enable bacteria to access iron under iron-limiting conditions. An RNAseq analysis of P. sonchi in response to iron depletion revealed that RNA levels were increased for 71 genes and decreased for 79 genes ( Sperb et al., 2016 ). Besides gene expression changes related more generally to impaired growth, such as sporulation and DNA protection genes, the gene fecE encoding a Fe3+ siderophore transporter was upregulated. Indeed, the FecE transporter also showed high expression in the SBR5T landscape transcriptome ( Brito et al., 2017a ). However, although it was demonstrated that SBR5T can produce siderophores ( Beneduzi et al., 2010 ), genes related to siderophore biosynthesis were not induced under iron starvation conditions ( Sperb et al., 2016 ).
Phosphate solubilization
Phosphorus (P) is an essential nutrient for plant development. Since inorganic phosphate for use as fertilizer is currently extracted from P-rich rock and its world supply is finite, P solubilization by PGPR is an important consideration. About half of the roughly 4.5 million tons of P used as fertilizer are lost through soil immobilization and surface runoff ( Granada et al., 2018 ). The authors calculated that PGPR may provide as much as 0.8 million tons of P to plants, thus reducing the requirement for inorganic phosphate as fertilizer by one-third ( Granada et al., 2018 ). P.
sonchi can solubilize hydroxyapatite, generating 1 mM free P from this insoluble P source under shake flask conditions. Differential gene expression analysis compared growth with P provided as soluble P (NaH2PO4) or as insoluble P (hydroxyapatite). The RNAseq analysis determined that the RNA levels of 68 genes were increased during growth with insoluble P, while 100 genes were down-regulated. To obtain these results, the sequence reads were first trimmed to a minimal length of 35 bp using Trimmomatic version 0.33 ( Bolger et al., 2014 ) and then mapped to the SBR5 reference genome ( Brito et al., 2015 ) using the software Bowtie ( Langmead, 2010 ). The software ReadXplorer ( Hilker et al., 2014 ) was then used to visualize the mapped reads and to perform the differential gene expression analysis. The statistical method DEseq ( Anders and Huber, 2010 ) was employed to analyze the resulting RNAseq data. To be classified as differentially expressed, a gene had to show a change in expression level higher than 30 and a P-value equal to or less than 0.05. Quantification of metabolites in the culture broth revealed higher concentrations of the osmolytes proline, trehalose, and glycine betaine during growth with insoluble P as compared to soluble P ( Brito et al., 2020a ). Moreover, both intracellular and extracellular osmolality levels were higher under hydroxyapatite cultivation than under soluble P cultivation. Accordingly, RNA levels were increased for the glycine betaine uptake genes opuAA and opuAB as well as for the biosynthesis of proline ( proC ) and trehalose ( treA ) ( Brito et al., 2020a ). In the same study, a promoter-reporter gene fusion assay showed that the promoter of the first gene in the glycine betaine transporter operon, opuAA, was induced only under insoluble P cultivation, in contrast to the soluble P condition.
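The final filtering step of such a differential expression analysis, applying a change-in-expression threshold and a significance cutoff to per-gene statistics, can be sketched as follows (the gene names, changes, and P-values are toy numbers, not the published DEseq output):

```python
# gene: (expression change, P-value) — invented illustration data.
results = {
    "opuAA": (55.0, 0.01),
    "proC":  (42.0, 0.04),
    "odhA":  (-38.0, 0.02),   # down-regulated example
    "fliC":  (25.0, 0.001),   # change too small to pass the cutoff
    "treA":  (33.0, 0.20),    # not statistically significant
}

def differentially_expressed(change, pvalue, min_change=30, alpha=0.05):
    """Apply the thresholds described in the text: absolute change in
    expression level above 30 and a P-value of at most 0.05."""
    return abs(change) > min_change and pvalue <= alpha

de_genes = sorted(g for g, (c, p) in results.items()
                  if differentially_expressed(c, p))
print(de_genes)
```

In practice these per-gene statistics come from the DEseq output table; the sketch only illustrates how the two cutoffs jointly decide which genes are reported as differentially expressed.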
This reinforced the idea that P. sonchi SBR5 invests in osmoprotection when solubilizing phosphates. Growth with insoluble P was also characterized by reduced RNA levels of the TCA cycle genes odhAB and reduced secretion of TCA cycle-derived organic acids. Indeed, under insoluble P conditions, the specific activity of the SBR5T odhAB-encoded enzyme 2-oxoglutarate dehydrogenase was completely abolished, whereas it remained active in the presence of soluble P. By contrast, P. sonchi secreted gluconic acid and acetic acid to solubilize insoluble P, indicating the secretion of organic acids as the main strategy used by this organism for phosphate solubilization. Furthermore, the expression of motility genes was reduced and that of thiamine biosynthesis genes was increased during growth with insoluble P ( Brito et al., 2020a ). This comprehensive analysis is the first step and basis for gaining deeper insight into P solubilization by P. sonchi and has the potential to further improve its use as a crop inoculant with regard to the provision of soluble P to plants.
Other traits
The analysis of P. sonchi regarding its plant growth-promoting traits of nitrogen fixation, siderophore production, and phosphate solubilization needs to be complemented by further characterizations, e.g., with respect to the production of plant hormones and antibiotics, but also to define its response to interaction with plants or pathogens. Moreover, most analyses performed to date combined RNAseq analysis with physiological characterization, but did not yet include functional analysis of genes by gene overexpression (gain-of-function) or repression/deletion (loss-of-function).
Genetic tools for functional gene analysis in P. sonchi
Transformation
Genetic engineering for gain- and loss-of-function analysis is important to study genotype-phenotype relations; thus, genetic transformation has to be established, which often poses a challenge for non-model microorganisms. P.
sonchi is difficult to transform by electroporation ( Bach et al., 2016a ), but a rarely used method to transform bacteria was shown to be applicable to this bacterium, namely physical permeation by magnesium amino-clays ( Brito et al., 2017b ). This method differs from chemical, electro-, biolistic, or sonic transformation, as it relies on the Yoshida effect ( Yoshida et al., 2001 ). A colloidal solution of a nanosized acicular material is applied to exogenous DNA and bacterial cells to rapidly increase the frictional coefficient. Upon penetration of the resulting large complex into bacterial cells, exogenous DNA is taken up into the cells ( Figure 2a ) ( Yoshida and Sato, 2009 ). Transformation of P. sonchi was demonstrated by plasmid isolation and re-transformation as well as by the heterologous production of three different fluorescent reporter proteins (Crimson, GFPuv, and mCherry). The protocols and plasmids could be transferred to P. polymyxa DSM365 ( Brito et al., 2017b ).
Plasmids for constitutive and inducible gene expression
A suite of cloning and expression vectors with different modes of replication, as well as for constitutive or inducible gene expression, was developed for functional gene characterization and stable genetic engineering. Two compatible plasmids were designed for P. sonchi, one based on the Staphylococcus aureus rolling circle-replicating vector pNW33N (pC194), the other based on the theta-replicating pHCMC04 (pBS72) from Bacillus subtilis. The latter supports higher genetic stability, while the former provides a higher gene dosage. Both were shuttle vectors for Escherichia coli, the preferred DNA cloning host ( Brito et al., 2017b ). To enable constitutive expression, three promoters were chosen, as they were known to be well expressed in other bacteria: the endogenous promoters of the EF-Tu gene tuf, of the glyceraldehyde 3-phosphate dehydrogenase gene gap, and of the pyruvate kinase gene pyk.
These promoters were fused to the promoter-less gfpUV gene, cloned into the rolling circle-replicating vector, and used to transform P. sonchi genomovar Riograndensis SBR5T and P. polymyxa DSM365. Although the absolute GfpUV fluorescence values differed, in both Paenibacillus species the rank order of the promoter strengths was Pgap > Ptuf > Ppyk ( Brito et al., 2017b ). To enable inducible gene expression, two systems were used: the xylose-inducible XylR regulation system from Bacillus megaterium (PxylA) and the mannitol-inducible systems from B. subtilis and Bacillus methanolicus (PmtlA). The xylose-inducible system was shown to work well with both the theta-replicating and the rolling circle-replicating vectors. The mannitol-inducible system was functional with the respective promoters from P. sonchi and B. methanolicus, but not with that from B. subtilis ( Brito et al., 2017b ). The compatible vectors pRM2-gfpUV and pTX-crimson, with mannitol- and xylose-inducible systems, respectively, were used consecutively to transform P. sonchi genomovar Riograndensis SBR5T, and the recombinant SBR5(pRM2-gfpUV)(pTX-crimson) was cultivated either without inducers, with 50 mM xylose alone, with 50 mM mannitol alone, or with a mixture of 50 mM xylose and 50 mM mannitol ( Figure 2b ). As expected, in the presence of only one inducer, single-fluorescence-positive cells were observed by flow cytometry, while double-fluorescence-positive cells were only observed in the presence of both inducers ( Figure 2b ) ( Brito et al., 2017b ). The system could be transferred to P. polymyxa DSM365 ( Brito et al., 2017b ). As an example of a metabolic engineering application, the biotin-auxotrophic P. sonchi genomovar Riograndensis SBR5T was converted into a biotin prototroph when the biotin biosynthesis operon bioWAFDBI from B. subtilis was expressed using the mannitol-inducible expression system.
The recombinant SBR5(pRM2-bioWAFDBI) grew stably over seven serial transfers in biotin-free medium when gene expression was induced by mannitol, but not when uninduced ( Figure 3 ) ( Brito et al., 2017b ).
CRISPRi
To obtain valuable insights into gene function in P. sonchi genomovar Riograndensis SBR5T, the available genetic toolbox was expanded beyond gene expression. The established genetic tool was based on CRISPR (clustered regularly interspaced short palindromic repeats) technology. Many review papers on synthetic biology note that CRISPR technology has revolutionized the field by providing a precise and efficient way to regulate and modify microbial metabolism ( Cleto et al., 2016 ; Kim et al., 2017 ; Wang et al., 2019 ; Schultenkämper et al., 2020 ; Zhan et al., 2020 ). In this way, CRISPR technology allows the targeted manipulation of metabolic pathways in PGPR for gain- or loss-of-function characterization of plant growth-promoting features. CRISPR is a highly precise gene editing technology that allows modifications of DNA sequences. It uses a single guide RNA (sgRNA) complexed with a Cas enzyme to target specific genes and to cut and edit the DNA at that location. Based on the same technology, CRISPR interference (CRISPRi) is a tool that uses an endonuclease-deactivated Cas enzyme (dCas) that binds to DNA and blocks gene expression without cleaving the DNA sequence ( Figure 2d ). Furthermore, when dCas is fused with a transcriptional activator, dual-mode control becomes possible, either activating (CRISPRa) or repressing gene expression ( Figure 2e ) ( Schultenkämper et al., 2020 ). Besides P. sonchi, other Paenibacillus species have been the subject of several studies involving the aforementioned variants of CRISPR technology. Rütering et al. (2017) successfully created a pUB110-derived CRISPR-Cas9 vector system for genome editing in P. polymyxa DSM 365.
The system was applied to elucidate and increase the biosynthesis of a wide range of exopolysaccharides in that organism ( Rütering et al., 2017 ; Schilling et al., 2022 ; Schilling et al., 2023 ). Furthermore, the same system was adapted in two approaches: homology-directed repair in large gene clusters targeted by the Cas9-sgRNA system, and the combined use of multiple sgRNAs for multiplexed gene deletions and insertions in P. polymyxa ( Meliawati et al., 2022 ). The first approach resulted in the deletion of gene clusters related to exopolysaccharide and antibiotic production (12-41 kb), while multiplexed deletion was achieved with more than 80% efficiency. While CRISPR, CRISPRi, and CRISPRa use the same basic components, they have different applications. CRISPR is typically used in metabolic engineering to introduce permanent genetic changes such as deletions or SNPs, while CRISPRi and CRISPRa are commonly used to study and characterize gene function by knockdown and activation, respectively. A novel CRISPRa technology based on dCas12a linked to the transcription activator (TA) SoxS was developed for multiplexed activation of gene expression in P. polymyxa ( Schilling et al., 2020 ). Gene activation is achieved by RNA polymerase recruitment upstream of the targeted gene. The same system allows gene repression simultaneously with activation when additional sgRNAs target genomic regions within open reading frames, blocking transcription elongation ( Figure 2e ) ( Schilling et al., 2020 ). In P. sonchi, a pNW33N plasmid-based CRISPRi tool developed for B. methanolicus by Schultenkämper et al. (2019) was used as the basis for establishing gene repression ( Brito et al., 2020b ). The CRISPRi system used a dCas9-sgRNA complex to target the endogenous sporulation genes spo0A and yaaT and the sorbitol dehydrogenase gene ydjJ. Using CRISPRi-based spo0A and yaaT repression, it was observed that sporulation decreased, while biofilm formation increased in P. sonchi.
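For dCas9-based CRISPRi, target selection essentially means finding an NGG PAM in or near the target gene and taking the 20 nt immediately 5′ of it as the protospacer to encode in the sgRNA. A Python sketch of this search on an invented sequence (not the real spo0A, yaaT, or ydjJ loci, and ignoring the reverse strand and off-target checks a real design would include):

```python
import re

def find_protospacers(seq, spacer_len=20):
    """Return (protospacer, PAM, PAM position) for every NGG PAM
    that has enough upstream sequence for a full-length spacer.
    Uses a zero-width lookahead so overlapping PAMs are also found."""
    hits = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        i = m.start()  # position of the N in NGG
        if i >= spacer_len:
            hits.append((seq[i - spacer_len:i], seq[i:i + 3], i))
    return hits

# Toy 5' region of a hypothetical target gene.
target = "ATGCTTAAAGGTCCTGATCAAATTCGCTAAGGTTTACAGG"
for spacer, pam, pos in find_protospacers(target):
    print(pos, pam, spacer)
```

For repression, candidates close to the transcription or translation start are usually preferred, since dCas9 bound there blocks initiation rather than merely slowing elongation.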
The repression of ydjJ resulted in a decreased specific activity of sorbitol dehydrogenase in crude cell extracts and reduced biomass formation from sorbitol in shake flask cultivation ( Brito et al., 2020b ). While the chosen gene targets served to demonstrate the repression of regulatory genes (with low expression) as well as metabolic genes (with high expression) by CRISPRi technology, the experiments also demonstrated the functional roles of spo0A, yaaT, and ydjJ in sporulation, biofilm formation, and carbon source utilization. Notably, the genetic tools for gene expression developed for Paenibacillus species could readily be transferred between P. sonchi and P. polymyxa ( Brito et al., 2017b ). Therefore, it is very plausible to explore and adapt the CRISPR systems developed for P. polymyxa for use in future P. sonchi research.
Genetically encoded biosensor
Monitoring intracellular metabolite concentrations in single bacterial cells is challenging, but genetically encoded biosensors have been developed that provide ample opportunity to determine the physiologically relevant intracellular concentration ranges and to study intracellular metabolite concentrations at the single-cell level, e.g., by cytometry. Genetically encoded biosensors are typically based on a transcriptional regulator protein that senses the metabolite as an inducer or coactivator and controls a fluorescent reporter gene via its target promoter. A recently developed class of genetically encoded biosensors is based on riboswitches. The aforementioned RNA landscape analysis of P. sonchi identified cis-regulatory elements in the 5'UTRs, among them a TPP-dependent riboswitch as part of the thiamine biosynthesis gene thiC ( Brito et al., 2017a ).
The 5'UTR of thiC, including the TPP riboswitch, was cloned between a constitutive promoter and the open reading frame of gfpUV, such that transcription of gfpUV was expected to be constitutive, while translation initiation was expected to respond to different intracellular TPP concentrations. The functionality was shown when GFPuv fluorescence quantified by flow cytometry was low in the presence of externally added thiamine, while it was about fourfold higher in the absence of added thiamine ( Figure 3 ) ( Brito et al., 2017a ). Thus, the biosensor based on the TPP riboswitch can be used to monitor intracellular TPP concentrations. The RNAseq analysis comparing growth with soluble and insoluble P revealed that the RNA levels of thiamine biosynthesis genes were increased when only insoluble P was available ( Brito et al., 2020a ). The TPP biosensor was then used to verify that intracellular TPP levels indeed differed. The TPP biosensor fluorescence was high during growth with soluble P, but low during growth with insoluble P, which indicated that thiamine biosynthesis occurred during growth with insoluble P ( Brito et al., 2020a ).
Learning from P. sonchi: a source of enzymes
Biotechnological production by fermentation, whole-cell transformation, enzyme catalysis, or bio-chemo-catalysis requires specific enzymes that operate in isolation, in cascades, or in (synthetic) metabolic pathways. Here, one of the central quests is the identification of the specific enzyme. Nature provides a large source of enzymes, which has been complemented by enzyme evolution to enable new-to-nature chemistry (Nobel Prize in Chemistry, 2018, to Frances Arnold). To find new or better microbial enzymes in nature, microbes with large genomes are key, since these do not rely on a minimal gene set for survival but are equipped with a large metabolic potential to cope with complex and changing habitats. In this respect, P.
sonchi genomovar Riograndensis is a rich source of biological functions, since it thrives in a varied habitat and thus has a large genome with 6,705 protein-coding genes ( Brito et al., 2015 ). Most of the genes have been annotated, but only a few have been characterized by genetic or biochemical means. Nonetheless, the genome sequence is a treasure trove for recombinant applications in which P. sonchi is the donor and other bacteria such as E. coli or Corynebacterium glutamicum are the acceptors. P. sonchi showed activity for acid phosphatase, C4 esterase, α-glucosidase, and β-galactosidase and was shown to degrade glycerol, malate, raffinose, N-acetylglucosamine, mannitol, mannose, arabinose, and starch ( Beneduzi et al., 2010 ; Siddiqi et al., 2017 ). Three genes for putatively starch-degrading enzymes have been heterologously expressed in C. glutamicum, a bacterium used in the biotech industry for amino acid production at the million-tons-per-year scale ( Wendisch, 2020 ). C. glutamicum can utilize several carbon sources but has to be engineered to enable access to others ( Wendisch et al., 2016 ). While the species description of P. sonchi showed that it can hydrolyze starch ( Beneduzi et al., 2010 ), C. glutamicum cannot. Expression of the P. sonchi genes PRIO_3240 and PRIO_423, coding for putative α-amylase and α-amylopullulanase enzymes, in the starch-negative C. glutamicum enabled the recombinant to grow on starch (Brito, Walter, and Wendisch, unpublished), as shown also for the amylase gene of Streptomyces griseus ( Sgobba et al., 2018 ). More generally, other Paenibacillus species may extend the application potential of biogenic polymers and their uses. For example, P. xylanilyticus is a mesophilic, facultatively anaerobic, xylanolytic, and cellulolytic bacterium and, in keeping with its habitat, a rich source of polysaccharide-degrading enzymes.
Cellulose degradation involves lytic polysaccharide monooxygenases ( Ito et al., 2023 ); xylans are degraded by xylosidases ( Ito et al. , 2022 ), chitin by chitinases, chitosanases and β- N -acetylhexosaminidases, alginate by alginate lyases ( Itoh, 2021 ), and pectin by pectin methylesterases and lyases ( Zhong et al., 2021 ). In this respect, the full potential of P. sonchi is yet to be tapped. P. sonchi genomovar Riograndensis was also the source for genes of dipicolinic acid (DPA) biosynthesis ( Figure 4a ). In nature, DPA occurs in endospores of Gram-positive bacteria, in particular anaerobic Clostridium and aerobic Bacillus species. Sporulation initiates under adverse conditions, and DPA biosynthesis is involved in this process. In the endospore, DPA chelates calcium ions and accumulates to about 10% of the weight of the B. subtilis endospore to prevent DNA denaturation and mediate heat resistance of the endospore ( Schwardmann et al., 2022 ). Although DPA has not been quantified in P. sonchi , the genes encoding the subunits of the enzyme dipicolinate synthase from this bacterium were heterologously expressed in C. glutamicum strains overproducing precursors of L-lysine to establish the first de novo production of DPA by C. glutamicum ( Schwardmann et al. , 2022 ) ( Figure 4b ). The L-lysine biosynthesis intermediate 4-hydroxy-tetrahydrodipicolinate (HTPA) was converted by dipicolinate synthase DpaAB via dehydration and oxidation to the heterocyclic aromatic dicarboxylic acid DPA. Since heterologous expression of the P. sonchi genes was required for this enzymatic activity, they were demonstrated to code for functional dipicolinate synthase. DPA is non-toxic, easily biodegradable and heat-stable, and has many technical applications: metal chelation, use as an antimicrobial and antioxidant, production of pyridines and piperidines, stabilization of peroxides, and service as the monomeric precursor of biopolymers. Since C.
glutamicum was engineered to utilize various second-generation feedstocks, the production of DPA using the recombinant C. glutamicum strain with the P. sonchi dpaAB genes could be based on renewable carbon sources such as the pentose sugars xylose and arabinose and was shown to operate up to the 2.5 L bioreactor scale ( Schwardmann et al. , 2022 ).
Acknowledgments LFB acknowledges support as a fellow of the Ciência sem Fronteiras program of Brazil (Science Without Borders program). LMPP and VFW gratefully acknowledge funding by FAPERGS. The funding agencies were not involved in the design of the study, collection, analysis, and interpretation of data and in writing the manuscript.
Genet Mol Biol.; 46(3 Suppl 1):e20230115
Introduction Fragile X syndrome (FXS) is the most common inherited single-gene disorder causing a range of developmental problems. FXS is typically characterized by mild to severe cognitive dysfunctions with associated mood, social and behavioural challenges including autism spectrum disorder, attention-deficit hyperactivity disorder and aggression. 1 FXS is caused by a full mutation of the fragile X messenger ribonucleoprotein 1 ( FMR1 ) gene, which arises from the hypermethylation of a cytosine–guanine–guanine trinucleotide repeat expansion. 2 A full mutation consists of >200 repeats, resulting in epigenetic silencing of FMR1 leading to loss of expression. 1 FMR1 encodes fragile X messenger ribonucleoprotein (FMRP), an RNA-binding protein involved in the regulation of the editing, translation and transport of neuronal mRNAs. 3 FMRP associates with thousands of mRNA targets in the brain, which in turn affects a wide range of neuronal processes and functions. 4-6 Some of these mRNA targets include a large fraction of the pre- and postsynaptic proteome including ion channels which regulate cellular excitability, as well as transcription factors and chromatin-modifying proteins that can affect the genetic and proteomic content of cells. FMRP also interacts directly with both voltage- and ligand-gated ion channels and, in so doing, can manipulate neuronal excitability. 7 Due to the ubiquitous expression of FMRP and its ability to regulate a large portion of the neuronal proteome, it is not surprising that loss of this protein has far-reaching consequences. The broad spectrum of symptoms associated with FXS, ranging from behaviours to cognition, shows the important role FMRP plays in neuronal development, functioning and network formation. FXS patients also display considerable clinical and genetic heterogeneity, which manifests in a wide spectrum of behavioural phenotypes among patients.
8 This heterogeneity is thought to stem from the heterogeneous genetic background of patients as well as the existence of mosaicism of FMR1 methylation, which results in a differential expression of FMR1 across the brain. 9 It is therefore not surprising that despite several potential targets being uncovered and trialled in the clinic, 10 no disease-modifying therapy currently exists that is able to address the multiple symptoms in FXS patients. It is, therefore, reasonable to suggest that a multitargeted polypharmacological approach would be best suited to address the broad range of symptoms in this heterogeneous patient population. Here, we propose a combination of two drugs, ibudilast and gaboxadol, from different classes for the treatment of FXS. This combination was first identified using Healx’s data-driven drug discovery platform as a candidate that may effectively rescue a complementary set of phenotypes, based upon the combined action against putative targets and other information our platform had gathered about the drugs and the disease. Ibudilast is a broad-spectrum phosphodiesterase (PDE) inhibitor, with a preference for PDE3, PDE4, PDE10 and PDE11 11 and has been shown to have several beneficial effects in the brain. 12-14 The therapeutic benefits of PDE inhibition in FXS patients were demonstrated with BPN14770, a PDE4D inhibitor, which significantly improved the cognitive performance in this patient population, 15 who have reduced levels of cyclic adenosine monophosphate (cAMP). 16 THIP/gaboxadol is a GABA A receptor agonist with a preference for extrasynaptic GABA A receptors containing α4β3 and δ subunits. 17 , 18 Loss of GABAergic signalling in FXS results in excitatory and inhibitory imbalance, which contributes to the pathophysiology of FXS. 19 This mechanism is known as the GABAergic hypothesis, which is supported by the fact that individuals with FXS have reduced GABA A receptor availability 20 as well as altered GABA transport and synthesis.
21-23 Gaboxadol was found to be safe and well tolerated in FXS patients following a Phase 2a clinical assessment. In addition to this, gaboxadol also demonstrated an initial efficacy signal based on clinician- and caregiver-rated end-points that assessed behaviours such as hyperactivity, irritability, stereotypy and anxiety. 24 Here, we demonstrate that monotherapy treatment with ibudilast or gaboxadol was able to rescue distinct phenotypes in Fmr1 knockout (KO) mice. Gaboxadol was highly efficacious in rescuing behaviours typically associated with FXS, such as aggression, anxiety, hyperactivity and stereotypy, while ibudilast effectively reversed cognitive deficits in Fmr1 KO mice. Importantly, we demonstrate that ibudilast and gaboxadol co-treatment was able to rescue more phenotypes, including behaviours and cognitive deficits, in Fmr1 KO mice than either drug achieved as a monotherapy. This polypharmacological approach of targeting multiple pathways linked to FXS pathophysiology could allow for multiple phenotypes to be treated in a clinical population, which displays a wide and diverse spectrum of symptoms.
Materials and methods Computational prediction methods Given that FXS patients exhibit multiple and diverse symptoms, a drug combination treatment approach is attractive, in particular if the prospective combination is able to address multiple symptoms. However, identifying the most appropriate combination of approved drugs in existence can be challenging. For example, considering a combination of two potential drugs from the ∼4000 that are approved worldwide at a single dose would require ∼8 000 000 laboratory experimental tests, a prohibitively large task if performed manually. Moreover, the majority of combinations would likely be non-efficacious or even toxic. At Healx, we have implemented a suite of combination prediction algorithms that consider multiple data sets and treatment hypotheses to help solve this challenge. The combination of ibudilast and gaboxadol described here was discovered after being predicted by two of our methods that constitute part of the Healx data-driven drug discovery platform: Combination Gene Expression Matching (CGEM) and Target Optimisation (TargOpt; Supplementary Fig. 1A and B ). The CGEM method used a connectivity-mapping scoring system that selects drug combinations for maximal reversal of the differentially expressed genes of the disease. 25 The compound gene expression data were obtained from the CMap LINCS gene expression resource 26 ( https://clue.io/data/CMap2020#LINCS2020 ; accessed October 30, 2023). The FXS differentially expressed genes used for this analysis were generated from the Gene Expression Omnibus (GEO) data set GSE62721 27 and prepared as described previously. 28 The TargOpt method used an algorithm that selects drug combinations by simultaneously maximizing the number of on-target effects and minimizing the number of off-target effects for a set of disease-related targets.
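The combinatorial arithmetic above, and a TargOpt-style on-/off-target trade-off, can be sketched in a few lines of Python. The scoring rule and the target sets below are illustrative assumptions for exposition only, not the platform's actual implementation or curated target data.

```python
from math import comb

# ~4000 approved drugs taken pairwise at a single dose:
n_drugs = 4000
print(comb(n_drugs, 2))  # 7998000 candidate pairs, ~8 million

# Toy TargOpt-style score (assumption: +1 per disease target hit,
# -1 per off-target hit; the real platform weighs far more evidence).
def combo_score(drug_a_targets, drug_b_targets, disease_targets):
    hit = drug_a_targets | drug_b_targets          # union of targets engaged
    on = len(hit & disease_targets)                # on-target effects
    off = len(hit - disease_targets)               # off-target effects
    return on - off

# Hypothetical target sets, for illustration only.
ibudilast = {"PDE3", "PDE4", "PDE10", "PDE11"}
gaboxadol = {"GABRA4", "GABRB3", "GABRD"}
fxs_targets = {"PDE4", "GABRA4", "GABRB3", "GABRD", "GRM5"}

print(combo_score(ibudilast, gaboxadol, fxs_targets))  # 4 on, 3 off -> 1
```

The point of the sketch is the shape of the objective: a pair of drugs with complementary mechanisms covers more disease targets jointly than either alone, which is exactly the rationale given for pairing a PDE inhibitor with a GABA A agonist.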
29 The drug and disease targets were obtained via curation and the Healx data-driven drug discovery platform. We coupled the predictions from these approaches with a number of annotations that assist a human expert in prioritizing a subset of predictions for experimental validation by taking into consideration the possibility of synergy and adverse drug interactions. Animals Fmr1 knockout 2 ( Fmr1 KO) mice were generated by deletion of the promoter and first exon of Fmr1 . 30 The Fmr1 KO is both protein and mRNA null. In this study, we used Fmr1 KO and wild-type (WT) littermates generated on a C57BL/6J background and repeatedly backcrossed onto a C57BL/6J background for more than eight generations. Animal housing The Fmr1 KO mice were housed four per cage in groups of the same genotype in a temperature- and humidity-controlled room with a 12-h light–dark cycle (lights on 7 a.m.–7 p.m.). Mice were housed in commercial plastic cages (40 × 23 × 12 cm) with Aspen bedding and without environmental enrichment on a ventilated rack system. Food and water were available ad libitum , except during test sessions. Testing was conducted during the light phase on male Fmr1 KO mice and their WT littermates. All experiments were conducted by experimenters who were blind to genotype and drug treatment. Experiments were conducted in line with the requirements of the United Kingdom Animals (Scientific Procedures) Act, 1986. WT animals were assigned to the WT vehicle group, while each KO mouse was assigned to one of the KO treatment groups using randomization. At the end of the study, animals were sacrificed by cervical dislocation. Dosing and behavioural assays Fmr1 KO and WT littermate mice, whether receiving acute or chronic treatment, were injected intraperitoneally (i.p.) with vehicle [10% DMSO in 90% (20% Captisol in saline)] or gaboxadol or ibudilast or gaboxadol and ibudilast in combination. All drugs were formulated and dosed in the vehicle solution.
Administration volumes were 3.85 mL/kg, such that an adult mouse weighing 26 g received a 0.1 mL injection volume. For all test articles, the volume to be administered was based on each mouse’s body weight. Treatment groups remained the same over the course of behavioural testing. Mice receiving acute treatment were dosed 30 min prior to behaviour testing (refer to Table 1 for dosing regimen). Mice receiving acute dosing were dosed with vehicle on the days they did not receive drug to ensure consistency of stress and handling with the chronically dosed mice. Behaviour tests were separated by a minimum of 3 days, during which the mice receiving acute dosing were dosed with vehicle solution. For chronic dosing, mice were dosed daily for 2 weeks before any behavioural testing (refer to Table 2 for dosing regimen). For combination dosing, each drug/vehicle was administered separately on either the left or right lower abdominal quadrant via the i.p. route. Following the 2 weeks of pretreatment, all mice continued their dosing regimen, outlined in Table 2 , until the completion of all the behavioural phenotyping. On the day of behavioural testing, mice were dosed 30 min before the start of the behavioural assay. All behaviour tests were separated by a minimum of 3 days, during which time dosing continued. Each behavioural test was performed between 8 a.m. and 4 p.m. Mice were dosed in the housing room (30 min prior to testing) and then brought to the experimental room to acclimate for 20 min before testing. Animals were tested in only one behavioural task on each experimental day, and each additional behavioural test was separated by at least 3 days. Prior to each test, a mouse that was not included in the study was placed in the experimental apparatus for 3 min. Then, this non-study animal was removed, and the apparatus was cleaned with moist and dry tissues before placing a study mouse into the apparatus. 
The aim was to create a low but constant background mouse odour for all experimental subjects. Experimenters were blinded to mouse genotype and treatment throughout all behavioural tests and data analysis. Open-field hyperactivity An open-field apparatus was used to test hyperactivity and habituation to a novel environment, in which decreased exploration as a function of repeated exposure to the same environment may be an index of memory. Each mouse was exposed individually to the open field in one session corresponding to 30-min posttest article administration. The open-field assay was performed using an automated system including a Noldus activity monitor chamber with the associated EthoVision software (Noldus Information Technology Inc., Leesburg, VA, USA). A mouse was placed into a corner square facing the wall and horizontal locomotor activity, measured as distance travelled in centimetres (cm) by the number of squares entered with the whole body, was recorded for 30 min. Self-grooming (stereotypy) Stereotypy is measured by an increase in repetitive self-grooming behaviour in the Fmr1 KO mice. After the 30-min drug pretreatment time elapsed, a mouse was individually placed in an empty VersaMax activity monitor chamber. Following an initial 10-min habituation phase, self-grooming was measured for 10 min using an automated system with the associated VersaDat software (AccuScan Instruments, Columbus, OH, USA). Novelty-suppressed feeding or hyponeophagia The novelty-suppressed feeding test, in which a highly palatable but novel liquid food was available for consumption in a novel environment, measured the latency to consume a defined amount of the novel food as an index of anxiety-like behaviour. Mice were food restricted overnight and tested the next morning. Twenty minutes prior to the test, each mouse was individually placed into a temporary holding cage to prevent social transmission of food preferences. 
Testing was conducted in a chamber (30 cm length × 30 cm width × 5 cm height) with three white walls and a fourth wall of transparent plastic to allow observation of the mouse. A food well (1.2 cm diameter, 0.9 cm height) was glued to the white Perspex base of the test chamber. An individual mouse was placed into the chamber facing away from the food well containing sweetened condensed milk diluted 50:50 with water. The latency from placement in the test chamber to the start of a proper drinking bout, defined as drinking continuously for 3 s, was measured. Mice that did not drink the novel food during the 5-min test received the maximal latency score. Aggression Offensive aggressive behaviour was measured as the number of mounts or the latency to the first attack of an unfamiliar conspecific. Mounting is a dominance behaviour that consists of attempts to climb on top of another animal. The test chamber was an empty commercial plastic cage (40 × 23 × 12 cm) with a Perspex lid to facilitate viewing of the subjects. An experimental mouse and a novel, WT mouse (with no prior contact with the test mouse) were placed in the cage simultaneously for a 3-min test. The total number of mounts was recorded from above with a light-sensitive video camera using the Noldus EthoVision XT system (Noldus Information Technology Inc., Leesburg, VA, USA). Novel object recognition Recognition memory of a familiar object compared to a novel object was assessed by the novel object recognition (NOR) task. A Plexiglas box (26 cm length × 20 cm width × 16 cm height) and two unique objects (4–6 cm diameter × 2–6 cm height), each in duplicate, were used. Mice were habituated individually to the experimental environment by allowing them to freely explore the box, which was empty, for 20 min per day for two consecutive days before testing. The test involved two consecutive trials, each 5 min in duration. 
For trial one, two identical objects were placed in the box, and the mouse was allowed to freely explore the objects for 5 min. These objects served as the familiar (f) objects. For trial two, one familiar object (f) was replaced with one novel object (n), and the mouse was allowed 5 min to explore. Object exploration was defined as the mouse sniffing or touching the object with its nose, vibrissa, mouth or forepaws. Time spent near or standing on top of the objects without interacting with the object was not counted as exploration. During the trial, a mouse was required to explore the objects for a minimum of 3 s for that individual animal to be included in the data analysis. For the test trial, the time spent exploring the novel object and the time spent exploring the familiar object were recorded for each mouse. Data were reported as the discrimination index (D2 score). The D2 score was calculated as follows: D2 score = (Time spent exploring novel object − time spent exploring familiar object)/(Total time spent exploring novel and familiar objects). Social recognition In the three-chambered social novelty task, a subject mouse was evaluated for its preference to explore a novel versus a familiar social stimulus mouse, defined as the time spent in the chamber with the novel mouse versus the chamber with the familiar mouse. The apparatus was a rectangular three-chambered box, in which each chamber measured 20 cm (length) × 40.5 cm (width) × 22 cm (height). Dividing walls were made from clear Perspex, with openings (10 cm width × 5 cm height) that allowed access into each chamber. The apparatus was lit from below (10 lx). The test involved three consecutive phases: habituation, sociability and social novelty. During the habituation phase, an individual test mouse was placed in the middle chamber and allowed to freely explore all three chambers, which were empty, for 5 min.
Then, the mouse was placed in an opaque holding cage for 3 min, while the apparatus was prepared for the sociability phase. During the sociability phase, the mouse was allowed to freely explore all three chambers, in which one side chamber contained an unfamiliar mouse (stranger one, with no prior contact with the test mouse) and the other side chamber was empty, for 10 min. The stranger mouse was enclosed in a circular wire cage (11 cm in height, bottom diameter of 10.5 cm and bars spaced 1 cm apart; Galaxy Cup, Spectrum Diversified Designs, Inc., Streetsboro, OH, USA), which allowed nose-to-nose contact through the bars. Animals serving as strangers were male mice previously habituated to placement in the cage for 10 min prior to testing. Then, the test mouse was placed in a holding cage for 3 min, while the apparatus was prepared for the social novelty phase. During the social novelty phase, the mouse was allowed to freely explore all three chambers, in which one side chamber still contained the familiar mouse (f) and the other side chamber now contained a novel mouse (n), for 10 min. The novel mouse was enclosed in a wire cage identical to that enclosing the familiar mouse. For each phase of the test, the amount of time spent in each chamber was recorded. An entry was defined as all four paws in one chamber. Data were reported as the discrimination index (D2 score). The D2 score was calculated as follows: D2 score = (Time spent exploring novel mouse − time spent exploring familiar mouse)/(Total time spent exploring novel and familiar mice). Object location The Object–Location Memory task is useful for assessing cognitive deficits in transgenic strains of mice and for evaluating novel chemical entities for their effect on cognition. Testing occurs in an open-field arena, to which the animals are first habituated. The next day, two objects of similar material but different shapes are introduced to the arena.
They are spaced roughly equidistant from each other with space in the middle for introducing the subject. In the trial, the animal is allowed to explore the arena with the two objects, and shortly thereafter, the animal again encounters the two objects, except that one of them has switched positions. The trials are recorded using a camera (Noldus) mounted above the arena and scored for the percentage preference for the object in the new location using EthoVision (Noldus). Data were reported as the discrimination index (D2 score). The D2 score was calculated as follows: D2 score = (Time spent exploring novel location − time spent exploring familiar location)/(Total time spent exploring novel and familiar locations). Contextual fear conditioning Contextual fear conditioning (cFC) was performed as previously described. 31 The mice were placed individually in the contextual chamber (Coulbourn Instruments) and allowed to move freely for 2 min before a mild foot shock (0.7 mA, 2 s) was delivered. The mouse remained in the chamber for 1 min and was returned to its home cage. Twenty-four hours later, the trained mouse was re-introduced to the contextual chamber for 2 min, during which its freezing behaviour (i.e. immobility) was scored. Statistical analysis Data were analysed using GraphPad Prism (GraphPad Software, LLC, version 8.3.0, San Diego, CA, USA). Parametric data were analysed using one-way ANOVA followed by post hoc comparisons with Dunnett’s or Sidak’s multiple comparison tests where appropriate. Non-parametric data were analysed using the Kruskal–Wallis one-way ANOVA followed by Dunn’s multiple comparison test. An effect was considered significant if P < 0.05.
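Across NOR, social recognition and object location, the discrimination index follows one and the same formula; a minimal helper written directly from the Methods definition (function and variable names are mine):

```python
def d2_score(novel_s: float, familiar_s: float) -> float:
    """Discrimination index: (novel - familiar) / (novel + familiar).

    Ranges from -1 (only the familiar stimulus explored) through 0
    (no preference) to +1 (only the novel stimulus explored).
    """
    total = novel_s + familiar_s
    if total == 0:
        raise ValueError("no exploration recorded")
    return (novel_s - familiar_s) / total

# A mouse exploring the novel object for 30 s and the familiar one for 10 s:
print(d2_score(30, 10))  # 0.5
```

An intact animal shows a positive D2 score, while equal exploration of both stimuli, as reported for the Fmr1 KO vehicle groups, yields a score near zero.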
Results Gaboxadol dose range finding studies Other groups have reported that a single-dose treatment of gaboxadol significantly reversed several phenotypes in preclinical FXS mouse models, including auditory startle response, hyperactivity, stereotypy and aggression. 23 , 32-34 This drug class has, however, previously demonstrated pharmacoresistance and loss of efficacy following chronic dosing in other disorders. 35 , 36 In addition to this, several mGluR5 negative allosteric modulators have also shown treatment resistance following chronic dosing in FXS mice. This tolerance is thought to be a potential factor behind their lack of success in the clinic. 37 To ensure repeated gaboxadol dosing did not affect its potency in FXS mice, we compared the efficacy, through behavioural phenotyping, of acute to chronic dosing. For chronic dosing, mice were dosed for 2 weeks prior to behaviour testing and dosing was maintained until all behaviour experiments were complete. The mice receiving the acute treatment were dosed 30 min prior to any behaviour testing with a 3-day washout between all behaviour assays. Mice receiving acute treatment were dosed with vehicle between washout periods to maintain consistency of handling. Doses were selected based on previously reported efficacy in FXS mouse models for gaboxadol. 33 Mice received 0.15, 0.5 or 1.5 mg/kg gaboxadol according to the Table 1 dosing regimen. Vehicle-treated Fmr1 KO mice displayed a hyperactive phenotype, travelling a significantly greater distance in the open field ( Fig. 1A ) compared to WT mice. Acute and chronic treatment with 0.5 mg/kg gaboxadol significantly reduced locomotor activity back to levels observed in the WT vehicle group ( Fig. 1A ). Chronic, but not acute, treatment with 1.5 mg/kg gaboxadol was also efficacious in reducing hyperactivity ( Fig. 1A ). The Fmr1 KO vehicle–treated mice were significantly more aggressive in comparison to WT vehicle–treated mice, as measured by mounting episodes ( Fig. 1B ).
Mounting is seen as a sign of dominance between two male mice and as such can be interpreted as a sign of aggression. 33 Acute treatment with 0.5 mg/kg gaboxadol was efficacious in reducing the aggressive phenotype in Fmr1 KO mice. Chronic doses of 0.5 or 1.5 mg/kg were also efficacious in reducing the aggressive phenotype ( Fig. 1B ). The Fmr1 KO vehicle–treated mice also displayed an increase in stereotypy, assessed by repetitive self-grooming ( Fig. 1C ). Both acute and chronic doses of 0.5 mg/kg gaboxadol were able to significantly reduce stereotypy ( Fig. 1C ) in Fmr1 KO mice back to levels observed in WT vehicle–treated mice. Similarly, chronic 1.5 mg/kg gaboxadol also significantly reduced self-grooming ( Fig. 1C ) in Fmr1 KO mice. Cognition was assessed using NOR, which tests an animal’s ability to differentiate between a familiar object and a novel object. If the test animal is able to differentiate between the two objects, it will naturally show a preference for investigating the novel object and, as a result, will have a higher D2 score. The Fmr1 KO vehicle–treated mice could not recall interacting with the familiar object, and as a result, the time spent investigating both objects was equivalent ( Supplementary Fig. 2 ), resulting in a low D2 score. Neither acute nor chronic doses of gaboxadol were able to reverse the cognitive deficit in Fmr1 KO mice for the NOR task ( Fig. 1D ) or the object location (OL) assay ( Supplementary Fig. 3 ). The difference in the dose efficacy profile between the acute and chronic treatment regimens is not uncommon for this drug class. 38 A number of factors are proposed to contribute to altered sensitivity of GABA receptor modulators following chronic dosing, such as altered GABA receptor surface expression, changes in subunit expression, altered receptor coupling and modified intracellular signalling.
39 Although the exact process driving the efficacy of the chronic 1.5 mg/kg dose warrants further investigation, it does highlight how chronic dosing with this drug class can affect dose selection. This underlines the importance of comparing acute to chronic dosing in a disease-relevant model when performing efficacy-based dose selection studies. Importantly, the acute and chronic 0.5 mg/kg doses showed similar efficacy for all behaviours measured, implying that there was no loss of potency due to tolerance following chronic dosing of gaboxadol in Fmr1 KO mice. As this class of drug has the potential to induce sedation, we needed to ensure that the reduced hyperactivity, stereotypy and aggression in Fmr1 KO mice, following chronic dosing, were not due to sedative effects. For this experiment, WT mice were chronically dosed with 0.5 or 1.5 mg/kg of gaboxadol for 14 days before subjecting them to an open-field test, to measure activity, and self-grooming, to assess natural behaviour. A vehicle and a 0.5 mg/kg gaboxadol Fmr1 KO–treated group were included as internal controls to monitor consistency of behavioural end-points and pharmacological efficacy between studies. Consistent with previous studies ( Fig. 1A and C ), Fmr1 KO vehicle–treated mice were significantly more hyperactive ( Fig. 2A ) and showed increased stereotypy ( Fig. 2B ) compared to WT vehicle–treated mice. As demonstrated previously, 0.5 mg/kg chronically dosed gaboxadol significantly reduced the hyperactivity and stereotypy back to levels observed in the WT vehicle–treated mice. The WT mice dosed with chronic gaboxadol, 0.5 or 1.5 mg/kg, showed no signs of lethargy or reduced activity throughout the 14 days and showed an activity level comparable to vehicle-dosed WT mice on the final day of dosing. Similarly, the grooming behaviour was comparable to that of the WT vehicle–treated mice.
From this, it can be concluded that the efficacy observed in the Fmr1 KO mice following 0.5 or 1.5 mg/kg chronically dosed gaboxadol is not a result of sedation. In addition to this, no adverse effects were reported in WT or Fmr1 KO mice following chronic or acute dosing of gaboxadol. Ibudilast dose range finding studies Both cAMP and cyclic guanosine monophosphate (cGMP) are essential to support long- and short-term memory consolidation, and PDE4 inhibition has been shown to improve learning and memory in FXS patients, by increasing cAMP availability. 15 Ibudilast is a broad-spectrum PDE inhibitor that allows for the maintenance of both cAMP and cGMP levels. This makes ibudilast a promising potential therapeutic for FXS. To identify the lowest efficacious dose of ibudilast, Fmr1 KO mice were dosed with 3 or 6 mg/kg ibudilast for 14 days prior to phenotyping. As previously demonstrated, the Fmr1 KO vehicle–treated mice were unable to perform in the NOR task due to a reduced cognitive capacity. Ibudilast treatment was, however, able to reverse this cognitive deficit dose-dependently, with the 6 mg/kg group improving the D2 score in Fmr1 KO mice. The 3 mg/kg ibudilast–treated group was unable to reverse the cognitive deficit in the NOR assay, giving a comparable D2 score to that of the Fmr1 KO vehicle–treated group ( Fig. 3A ). The partition test is a measure of social recognition (SR) that works on the same principle as NOR, except in this instance the test animals need to differentiate between a novel and a familiar mouse. The Fmr1 KO vehicle–treated mice showed no preference for either the novel or the familiar mouse, as demonstrated by their low D2 score. The 6 mg/kg ibudilast–treated group was able to reverse the SR deficit in Fmr1 KO mice back to comparable levels observed for the WT vehicle group. The 3 mg/kg-treated group showed no efficacy in Fmr1 KO mice in the SR test ( Fig. 3B ).
Attack latency measures the time it takes for a mouse to attack a resident intruder and serves as a measure of aggression. The Fmr1 KO vehicle–treated group displayed an aggressive phenotype, as demonstrated by the significantly reduced latency to attack an intruder mouse. Neither the 3 nor the 6 mg/kg ibudilast–dosed groups were able to reduce this aggressive phenotype ( Fig. 3C ). Hyponeophagia measures anxiety through the introduction of novel food in a novel environment. The Fmr1 KO vehicle–treated mice showed an increased latency to eat novel food compared to the WT vehicle–treated group. The 3 mg/kg ibudilast dose was not efficacious for reducing hyponeophagia; however, the 6 mg/kg dose showed a subtle but significant decrease in latency ( Fig. 3D ). The 6 mg/kg ibudilast dose showed limited efficacy for normalizing behaviours such as aggression and hyponeophagia; however, this dose was effective for improving cognition in Fmr1 KO mice as assessed by both NOR and the SR test. In contrast to this, gaboxadol was highly efficacious for normalizing several behaviours but showed no efficacy for improving cognition in Fmr1 KO mice. On this basis, it was decided to test the efficacy of gaboxadol and ibudilast in combination and to assess whether the co-treatment of these drugs could improve the cognitive deficits as well as the behavioural phenotypes in Fmr1 KO mice. Ibudilast and gaboxadol improve behavioural phenotypes of Fmr1 KO mice The efficacious doses of gaboxadol, 0.5 and 1.5 mg/kg, along with the efficacious ibudilast dose, 6 mg/kg, were selected to test the efficacy of combination treatments. To further explore the ibudilast 6 mg/kg dose, it was decided to compare once-daily dosing (QD) with twice-daily dosing (BID). Table 2 outlines the study design for comparing ibudilast or gaboxadol monotherapy to their combination. All drugs were dosed for 14 days prior to any behaviour testing, and dosing continued until the conclusion of all behaviour work.
These animals were then subjected to a 14-day washout period during which the drug treatment was withdrawn, and behavioural assessments (open field and self-grooming) were performed 14 days after the last dose of drug or vehicle to monitor the potential for sustained disease modification. Monotherapy gaboxadol, at both 0.5 and 1.5 mg/kg, was highly efficacious in reversing a number of behaviours typically associated with FXS ( Fig. 4A–D ). Monotherapy gaboxadol, 0.5 and 1.5 mg/kg, significantly normalized the hyperactivity in Fmr1 KO mice to levels observed in WT vehicle–treated mice ( Fig. 4A ). Monotherapy ibudilast, 6 mg/kg BID, also significantly reduced hyperactivity in Fmr1 KO mice, however not to levels observed for the WT vehicle–treated mice. The 6 mg/kg QD ibudilast dose showed no efficacy in the open field, indicating that twice daily dosing of ibudilast is more efficacious than once daily dosing for this behaviour test. From these data, we can conclude that gaboxadol was more effective in reducing hyperactivity in the Fmr1 KO mouse than BID ibudilast ( Fig. 4A ). All the combination treatments of gaboxadol (0.5 or 1.5 mg/kg) with ibudilast (QD or BID) were highly effective at reducing hyperactivity in Fmr1 KO mice back to levels observed in WT vehicle–treated mice. Despite ibudilast BID monotherapy showing partial efficacy and ibudilast QD monotherapy demonstrating no efficacy in the open field, combining these doses with either 0.5 or 1.5 mg/kg gaboxadol significantly reverted the hyperactivity in Fmr1 KO mice to WT levels ( Fig. 4A ). Repetitive self-grooming was used as a measurement of stereotypy. Both monotherapy doses of gaboxadol significantly reduced the repetitive behaviour in Fmr1 KO mice to levels observed in WT vehicle–treated mice ( Fig. 4B ). Ibudilast BID also significantly reduced self-grooming in the Fmr1 KO mice, however not to levels observed in the WT group ( P < 0.001 versus WT vehicle).
Ibudilast QD did not demonstrate efficacy for stereotypy in Fmr1 KO mice. Despite this, all combinations of gaboxadol (0.5 or 1.5 mg/kg) and ibudilast (QD or BID) were able to fully revert the repetitive grooming phenotype in Fmr1 KO mice to WT vehicle levels ( Fig. 4B ). Gaboxadol monotherapy (0.5 or 1.5 mg/kg) was able to significantly reduce the aggressive phenotype in Fmr1 KO mice to levels observed in the WT group ( Fig. 4C ). Neither QD nor BID ibudilast showed any efficacy at ameliorating the aggression in Fmr1 KO mice. Despite this, all combinations of gaboxadol (0.5 or 1.5 mg/kg) and ibudilast (QD or BID) significantly reduced the aggressive phenotype in Fmr1 KO mice to levels observed in the WT group ( Fig. 4C ). Monotherapy gaboxadol treatment (0.5 or 1.5 mg/kg) significantly reduced anxiety, as measured by hyponeophagia, in Fmr1 KO mice to levels observed in WT mice ( Fig. 4D ). Ibudilast QD displayed a subtle but significant amelioration of anxiety in Fmr1 KO mice; however, this reduction was not to levels observed for WT mice ( P < 0.0001 versus WT vehicle). Ibudilast BID showed no efficacy for reducing anxiety in Fmr1 KO mice. All combinations of gaboxadol (0.5 or 1.5 mg/kg) and ibudilast (BID or QD) significantly rescued the anxious phenotype in Fmr1 KO mice to levels observed in the WT group. Importantly, no negative interactions, in terms of efficacy, were observed with any ibudilast–gaboxadol combination treatments for the behaviours measured. Cognitive integrity was assessed using four separate cognitive assays: NOR ( Fig. 5A ), OL ( Fig. 5B ), SR ( Fig. 5C ) and cFC ( Fig. 5D ). Each of these cognitive tasks engages distinct brain regions and connections, with some degree of overlap. The Fmr1 KO vehicle–treated mice were significantly impaired in all the cognitive assays in comparison to WT mice. Monotherapy gaboxadol treatment was unable to improve the performance of the Fmr1 KO mice in any of the cognitive tasks ( Fig. 5A–D ).
Contrary to this, monotherapy ibudilast (BID or QD) was able to completely reverse the cognitive deficit in Fmr1 KO mice, as measured in the NOR, OL and SR assays, back to levels observed in the WT animals ( Fig. 5A–C ). This efficacy in cognition was maintained when ibudilast (QD or BID) was combined with gaboxadol (0.5 or 1.5 mg/kg). Ibudilast BID was also able to reverse the cognitive deficit in Fmr1 KO mice observed in the cFC task back to levels observed in the WT mice. Ibudilast QD treatment was not efficacious in the cFC task, and this lack of efficacy was maintained when combined with gaboxadol ( Fig. 5D ). Surprisingly, the potency observed for ibudilast BID monotherapy was significantly reduced when combined with gaboxadol 0.5 mg/kg ( P < 0.01 versus ibudilast BID) or 1.5 mg/kg ( P < 0.0001 versus ibudilast BID) for the cFC task ( Fig. 5D ). These combinations, however, still showed a significant improvement in cFC-associated learning and memory in comparison to the Fmr1 KO vehicle–treated mice ( P = 0.0058 for ibudilast BID + 0.5 mg/kg or P = 0.001 for ibudilast BID + 1.5 mg/kg).

Ibudilast and gaboxadol maintain efficacy for Fmr1 KO behavioural phenotypes following washout

Mice that followed the dosing regimen outlined in Table 2 were subjected to a 14-day washout period during which the animals received no drug. Following the 14-day washout period, mice were subjected to open-field and self-grooming assessment. Following drug washout, all treatments, monotherapy and combinations, maintained their efficacy for significantly reducing hyperactivity and stereotypy in Fmr1 KO mice, although with reduced potency ( Fig. 6A and B ).
Discussion

We present here the combination of ibudilast and gaboxadol for the treatment of FXS. By simultaneously targeting pathways that are dysregulated in FXS, we are able to rescue more phenotypes than can be achieved with a monotherapy treatment. Altered cAMP metabolism and reduced inhibitory GABA modulation have been proposed to be key pathophysiological pathways in FXS, with each contributing to different, and sometimes overlapping, symptoms in the clinical population. Much of ibudilast's efficacy is targeted towards reversing the cognitive deficits in Fmr1 KO mice, particularly for the BID group, where cognition was improved for all the cognitive assays tested: NOR, OL, SR and cFC. We can speculate that the beneficial effects of ibudilast span the brain, as each of these cognitive assays targets different brain regions and connections. Consistent with this, PDE4D inhibition has already been shown to improve cognition in this patient population. 15 Ibudilast potentially has advantages over selective PDE4 inhibitors due to its broad selectivity profile against PDE3, PDE4, PDE10 and PDE11. 11 Despite this broad selectivity profile, ibudilast is more selective for PDE4 and PDE10, with IC 50 values in the lower micromolar range, compared to PDE3 and PDE11, where IC 50 values are in the 10 μM range. 11 This implies that ibudilast's PDE efficacy likely derives from its inhibition of PDE4 and PDE10. Reduced cAMP levels in FXS patient cells were first identified by Berry-Kravis et al. 16 decades ago. Although PDE4 is a minor target of FMRP, several preclinical and clinical studies have supported PDE4 inhibition as a viable target in FXS. The initial findings supporting PDE4 as a therapeutic target in FXS come from work in Drosophila , later in mice 40-42 and most recently in a Phase 2 clinical trial. 15 PDE10 is, however, a target of FMRP 43 and, as a result, its levels are elevated in FXS.
PDE10 is highly expressed in medium spiny neurons (MSNs) of the striatum, 44 , 45 which modulate the input and processing of cortical information by the basal ganglia circuit. 46 , 47 The basal ganglia are responsible for motor control, motor learning, executive functions, behaviours and emotions 48 and are dysregulated in FXS, 49-51 leading to impaired executive function skills such as working memory, attention and inhibitory control. 52 Beneficial effects of PDE10 inhibition have been demonstrated in Fmr1 KO mice by normalizing EEG-recorded chirp ITPC. 53 This suggests that PDE10 inhibition reduces auditory hypersensitivity, a debilitating condition that can lead to language delays, social anxiety and stereotypy, 54 all common symptoms in FXS patients. Ibudilast has the potential to improve auditory hypersensitivity and reverse neural deficits within the basal ganglia through PDE10 modulation. Similarly, PDE2, another FMRP target, has also shown promise as a potential therapeutic target in a mouse model of FXS. 55 Ibudilast is, however, not selective against PDE2. PDE10 modulates cAMP and cGMP, both of which are essential for axonal, neurite and dendritic growth, maintenance and maturation. 56 The importance of these cyclic nucleotides was demonstrated when cAMP improved dendritic spine morphology in a mouse model of FXS. 42 Dense immature neuronal spines are a hallmark of FXS that contributes to cognitive impairment. Another factor affecting cognition is reduced levels of glutamate in the hippocampus 57 and cortex 58 of FXS mouse models. However, glutamate levels could be restored by increasing cGMP levels, which promotes presynaptic glutamate release. 59 PDE inhibition may also protect against overactive mGluR5 signalling, another hallmark of FXS, which leads to long-term depression and eventual synapse loss.
37 Protein kinase A (PKA), when activated by cAMP, directly binds to, phosphorylates and inhibits both isoforms of glycogen synthase kinase 3 (GSK3), which is located downstream of the mGluR5 signalling pathway. 60 Inhibition of GSK3 has been shown to reverse the effects of overactive mGluR5 signalling and improve cognition in FXS mouse models. 37 Ibudilast also has potent anti-inflammatory properties, demonstrated by its ability to reduce proinflammatory cytokines and reactive oxygen species (ROS) in microglial cells. 61 , 62 In addition, ibudilast is able to bind to and inhibit toll-like receptor 4 (TLR4). 63 The astrocyte-secreted factor tenascin C (TNC), which is an endogenous ligand of TLR4, has been found to be elevated in FXS astrocytes, leading to increased extracellular interleukin-6 (IL-6) levels. 64 Elevated IL-6 increases excitatory synapse formation while impairing the development of inhibitory synapses. 65 , 66 This disruption of the excitatory/inhibitory balance is a key feature of FXS, leading to changes in network synchrony. 67 Ibudilast could reduce IL-6 levels by inhibiting TNC-dependent TLR4 activation. Ibudilast is also able to protect against ROS, 68 which could prove favourable for FXS, as preclinical models have elevated lipid peroxidation and protein oxidation in the brain, caused by ROS. Elevated ROS in FXS are a result of increased activity of the nicotinamide adenine dinucleotide phosphate (NADPH) oxidase and deficits in the ROS-scavenging glutathione system. 69 In contrast to ibudilast, gaboxadol was unable to improve cognition in Fmr1 KO mice in any of the cognitive assays tested. We did demonstrate that gaboxadol effectively normalized behaviours such as hyperactivity, aggression, stereotypy and anxiety in Fmr1 KO mice.
Many of these behaviours are exhibited by FXS patients and may be a result of decreased GABAergic function, 19 an excitatory/inhibitory imbalance brought about by reduced GABA A receptor availability and alterations in GABA transport, 21 synthesis and release. 20 , 22 , 23 Several ion channels are also differentially expressed in FXS, which contributes to the excitatory/inhibitory imbalance, leading to an altered neuronal resting membrane potential that ultimately affects neuronal development, network connections and interactions. 7 Two independent groups have demonstrated gaboxadol's efficacy in FXS. 23 , 32 , 33 Each of these groups used different KO mouse models, Fmr1 KO1 and Fmr1 KO2, bred on different background strains, which added to the complexity and heterogeneity of the validation. Gaboxadol has also recently demonstrated efficacy in a Phase 2a clinical trial in FXS patients based on clinician- and caregiver-rated end-points, which assessed behaviours such as hyperactivity, irritability, stereotypy and anxiety. 24 The improvements for these clinical end-points mirror our findings for gaboxadol monotherapy, as well as previously published work. 33 Gaboxadol is highly selective for the extrasynaptic GABA A receptor delta subunit, levels of which are significantly reduced in FXS. 70 , 71 Dysregulated expression of the delta subunit can impact behaviour, as was observed when it was selectively knocked out of cerebellar granule cells of WT mice. The KO mice were hyperactive, displayed stress-related behaviours, were anxious and were socially withdrawn. 72 These mice display a phenotype not dissimilar to that observed for FXS mouse models, which highlights the influence the delta subunit has on behavioural outcomes. In addition, a gene dosage effect was evident, with homozygous KO mice displaying a more pronounced phenotype than heterozygous KO mice.
Importantly, certain behaviours could be reversed in heterozygous KO mice following gaboxadol treatment, 72 which gives some rationale for targeting this subunit in FXS and potentially other neurodevelopmental disorders (NDDs). The proposed therapeutic effects gaboxadol and ibudilast may have on FXS pathophysiology are illustrated in Fig. 7 . Our results demonstrate that by pharmacologically targeting two independent pathophysiological pathways in FXS, using two drugs with different mechanisms of action (MoAs), specific phenotypes can be treated. We also demonstrate that when these two drugs are administered as a combination treatment, we are able to rescue both the cognitive deficits and the behavioural abnormalities in Fmr1 KO mice. Each combinatorial treatment significantly improved hyperactivity, stereotypy and hyponeophagia. The combination treatments were also able to significantly reduce aggression and improve cognition in Fmr1 KO mice as assessed by the NOR, OL and SR assays. Only the combinations of ibudilast at 6 mg/kg BID with gaboxadol at 0.5 or 1.5 mg/kg significantly improved spatially related contextual memory in the cFC assay, although with a reduced response or potency in comparison to monotherapy ibudilast 6 mg/kg BID treatment. This reduced potency is thought to be the result of the anxiolytic effects of gaboxadol, which is able to reduce hyperexcitability in the amygdala of an FXS mouse model. 23 In addition, it has been demonstrated that WT mice dosed with gaboxadol show a similar reduced response in the fear conditioning assay in comparison to vehicle controls due to the anxiolytic effect of gaboxadol. 32 Importantly, both monotherapy and combination treatments maintained their efficacy following chronic dosing and showed no signs of adverse effects or pharmacoresistance.
In addition, these drugs maintained efficacy, although with some reduced potency, following a 14-day washout, which indicates a degree of disease modification. Both ibudilast and gaboxadol have been shown to be safe and well tolerated in patients, in particular gaboxadol, which showed a good safety profile in FXS patients. 24 In addition, the combination of ibudilast and gaboxadol was well tolerated in the mouse. Animals receiving the combination were able to perform the behaviour assays to the same level as WT vehicle animals, showing they were not distressed as a result of the combination treatment. However, tolerability of the combination will need to be confirmed in a second species and/or in an appropriately phased clinical study. In summary, this polypharmacological approach of targeting different pathophysiological pathways allows a larger number of symptoms to be addressed in this heterogeneous patient population and could revolutionize drug discovery going forward, particularly for other NDDs.
Abstract

Fragile X syndrome is a neurodevelopmental disorder caused by silencing of the fragile X messenger ribonucleoprotein 1 ( FMR1 ) gene. Patients display a wide spectrum of symptoms ranging from intellectual and learning disabilities to behavioural challenges including autism spectrum disorder. In addition, patients also display a diversity of symptoms due to mosaicism. These factors make fragile X syndrome a difficult syndrome to manage and suggest that a single targeted therapeutic approach cannot address all the symptoms. To this end, we utilized Healx's data-driven drug discovery platform to identify a treatment strategy to address the wide range of diverse symptoms among patients. Computational methods identified the combination of ibudilast and gaboxadol as a treatment for several pathophysiological targets that could potentially reverse multiple symptoms associated with fragile X syndrome. Ibudilast is an approved broad-spectrum phosphodiesterase inhibitor, selective for both phosphodiesterase 4 and phosphodiesterase 10, and has been demonstrated to have several beneficial effects in the brain. Gaboxadol is a GABA A receptor agonist, selective for the delta subunit, which has previously displayed encouraging results in a fragile X syndrome clinical trial. Alterations in GABA and cyclic adenosine monophosphate metabolism have long been associated with the pathophysiology of fragile X syndrome; however, targeting both pathways simultaneously has never been investigated. Both drugs have a good safety and tolerability profile in the clinic, making them attractive candidates for repurposing. We set out to explore whether the combination of ibudilast and gaboxadol could demonstrate therapeutic efficacy in a fragile X syndrome mouse model.
We found that daily treatment with ibudilast significantly enhanced the ability of fragile X syndrome mice to perform a number of different cognitive assays while gaboxadol treatment improved behaviours such as hyperactivity, aggression, stereotypy and anxiety. Importantly, when ibudilast and gaboxadol were co-administered, the cognitive deficits as well as the aforementioned behaviours were rescued. Moreover, this combination treatment showed no evidence of tolerance, and no adverse effects were reported following chronic dosing. This work demonstrates for the first time that by targeting multiple pathways, with a combination treatment, we were able to rescue more phenotypes in a fragile X syndrome mouse model than either ibudilast or gaboxadol could achieve as monotherapies. This combination treatment approach holds promise for addressing the wide spectrum of diverse symptoms in this heterogeneous patient population and may have therapeutic potential for idiopathic autism.

Chadwick et al. utilized computational methods to identify two drugs to treat fragile X syndrome. As monotherapies, one of the drugs improved cognition and the other normalized autism spectrum disorder–like behaviours in a mouse model of fragile X syndrome. Combined treatment with both drugs reversed cognitive and behavioural deficits.

Graphical Abstract
Supplementary material

Supplementary material is available at Brain Communications online.
Funding

This research was funded by Healx Ltd.

Competing interests

The authors report no competing interests.

Data availability

The data supporting the results of this study are available upon reasonable request to the corresponding author.
CC BY
Brain Commun. 2024 Jan 15; 6(1):fcad353
PMC10789244
38224488
Introduction

Maize ( Zea mays L.) is a very important cereal crop cultivated globally because it is used as a source of food, feed, and fuel ( Scott and Emery, 2016 ; Choudhary et al., 2020 ). Abiotic stresses, including salinity, heat, cold, drought, and waterlogging, seriously affect maize growth and development, thereby influencing the final grain quality and yield ( Peng et al., 2022 ). Waterlogging stress significantly decreases maize yields in tropical and subtropical regions ( Du et al., 2017 ; Yao, 2021 ). In recent years, global warming has resulted in frequent extreme weather events worldwide; these events have exacerbated the detrimental effects of waterlogging stress on maize ( Pan et al., 2021 ). In areas where maize is extensively cultivated, heavy rainfall occurring over a short period can result in waterlogged soils, which can severely damage maize seedlings ( Osman et al., 2013 ). Therefore, identifying waterlogging-responsive genes and elucidating the mechanisms underlying maize responses to waterlogging stress are essential for developing new waterlogging-tolerant maize varieties ( Zaidi et al., 2004 ; Qiu et al., 2007 ; Arora et al., 2017 ). Plants have evolved various strategies to withstand waterlogging stress, including morphological changes, chemical changes (e.g., redox reactions), and hormonal changes ( Zhang et al., 2017 ). When plants are waterlogged, they undergo morphological changes that enable them to absorb oxygen and compensate for the energy loss caused by metabolic disruptions. The main morphological changes are the rapid elongation of the apical meristem tissue, the formation of adventitious roots (ARs) or other aeration tissues, barriers to radial oxygen loss, and the formation of air films in the upper cuticle ( Hattori et al., 2009 ; Pedersen et al., 2009 ; Yamauchi et al., 2017 ; Pan et al., 2021 ).
Through these morphological changes, plants can promote air exchange and the absorption of nutrients and water, which can stabilize the metabolic cycle and allow plants to grow normally ( Steffens and Rasmussen, 2016 ; Qi et al., 2019 ). Under waterlogging stress conditions, reactive oxygen species (ROS) contents in plants are balanced via the regulation of antioxidant enzyme systems and other active antioxidants, which helps to reduce the damage caused by stress ( Zhang et al., 2007 ; Doupis et al., 2017 ). Waterlogging leads to hypoxia in plant cells, which increases intracellular ROS levels, especially hydrogen peroxide (H 2 O 2 ), leading to cell death and plant senescence ( Bailey-Serres and Chang, 2005 ; Mary et al., 2012 ; Pucciariello et al., 2012 ). Nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (NOX), which is primarily responsible for ROS production when plants are exposed to hypoxic conditions, plays a significant role in ROS-mediated signal transduction in plants ( Pan et al., 2021 ). Waterlogging stress induces the expression of the NOX-related gene AtRbohD in Arabidopsis , which positively regulates the production of H 2 O 2 and enhances the tolerance of Arabidopsis to waterlogging stress ( Yang CY et al., 2015 ; Sun et al., 2018 ). In a previous study in which several maize varieties were treated with waterlogging stress, the waterlogging-tolerant varieties had increased peroxidase (POD), superoxide dismutase (SOD), and catalase (CAT) activities ( Li et al., 2018 ). Similarly, a comparison of cucumber varieties exposed to waterlogging stress revealed that POD, SOD, and CAT activities are lower in waterlogging-sensitive plants than in waterlogging-tolerant plants ( Li, 2007 ). According to the findings of these earlier studies, plants that are relatively resistant to waterlogging stress typically have highly active antioxidant enzymes and ROS scavengers.
Plant hormones, such as ethylene (ETH) and abscisic acid (ABA), are critical for plant responses to waterlogging stress ( Yang and Choi, 2006 ; Bashar, 2018 ; Qi et al., 2019 , 2020 ; Hu et al., 2020 ). For example, the Arabidopsis response to hypoxic conditions involves the regulated expression of the ETH response factor (ERF) gene ERF73/HRE1 ( Hess et al., 2011 ; Yang, 2014 ). In maize, ZmEREB180 encodes a positive regulator of AR formation and ROS levels; the overexpression of ZmEREB180 enhances survival during prolonged periods of waterlogging stress ( Yu et al., 2019 ). Additionally, ABA is a crucial regulator of the plant water potential and stomatal opening, especially under waterlogged conditions ( Kim et al., 2021 ). When soybean hypocotyls are waterlogged, the ABA concentration decreases quickly and the secondary aerenchyma appears after 72 h, but the application of exogenous ABA inhibits the development of aerenchyma cells, implying that ABA influences root aerenchyma development ( Shimamura et al., 2014 ). The identification of waterlogging-responsive genes is important for creating novel waterlogging-tolerant maize varieties. The new maize variety An'nong 876 has several excellent characteristics, including resistance to multiple stresses (e.g., drought and heat) and high yields. In this study, a comparison of cmh15 (the paternal parent of An'nong 876) and CM37 (the maternal parent of An'nong 876) seedlings exposed to waterlogging stress indicated that cmh15 is more tolerant to waterlogging than CM37. The gene expression profiles of cmh15 under the waterlogging treatment were investigated via transcriptome sequencing, and some key genes responsive to waterlogging were screened. The candidate genes identified in this study may be useful for the molecular breeding of waterlogging-tolerant maize as well as for future studies conducted to clarify the mechanism mediating the maize response to waterlogging stress.
Material and Methods

Plant materials and waterlogging treatment

The seeds of the cmh15 and CM37 inbred lines were provided by Professors Qing Ma and Beijiu Cheng. The seeds were sown in a greenhouse with a 16-h light (28 °C)/8-h dark (23 °C) photoperiod. At the three-leaf stage, the seedlings underwent the waterlogging treatment by adding water until the water level was 2-3 cm above the soil surface. The control seedlings were watered normally. The third leaf was collected from the waterlogging-treated and control seedlings 6 days later. The leaves were immediately frozen in liquid nitrogen and stored at −80 °C for the subsequent RNA isolation. For the transcriptome sequencing analysis, three biological replicates were prepared for the control group (CKM-1, CKM-2, and CKM-3) and the waterlogging treatment group (WM-1, WM-2, and WM-3).

Measurement of physiological and morphological indicators

Morphological indicators, including plant height, root length, fresh weight, and dry weight, were analyzed for the plants in the treatment and control groups, and the differences between the two groups were determined ( Qiu et al., 2007 ). After the 6-day waterlogging treatment, the middle part of the third leaf was collected from the waterlogging-treated and control seedlings to examine the accumulation of H 2 O 2 using a diaminobenzidine (DAB) chromogenic staining kit according to the manufacturer's instructions (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).

Construction of cDNA libraries and RNA sequencing

The TRIzol Reagent Mini Kit (Qiagen China Co., Ltd., Shanghai, China) was used to extract total RNA from each leaf sample. The total RNA samples were quantified and the quality was assessed using the Agilent 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA) and the NanoDrop spectrophotometer (Thermo Fisher Scientific Inc.).
The cDNA libraries were prepared using 1 μg total RNA according to the manufacturer's protocol, which involved several key steps, including mRNA fragmentation, cDNA synthesis, adapter ligation, PCR amplification, and purification. The constructed cDNA libraries with various indices were sequenced using the Illumina HiSeq system (Illumina, San Diego, CA, USA).

Sequence assembly and data analysis

To acquire high-quality clean data, the raw data were processed using Cutadapt (v1.9.1), which removed adapters, sequences shorter than 75 bp, and low-quality sequences (Q < 20) from the 5′ and 3′ ends of the reads ( Martin, 2011 ). The clean reads were aligned to the maize B73 reference genome (RefGen_v4) using HISAT2 (v2.0.1) ( Kim et al., 2015 ). In addition, HTSeq (v0.6.1) was used to calculate the fragments per kilobase of exon per million mapped fragments (FPKM) value for each transcript ( Anders et al., 2015 ). The DESeq2 (v1.6.3) Bioconductor package was used for the differential expression analysis ( Anders and Huber, 2010 ; Anders and Huber, 2012 ; Love et al., 2014 ). The differentially expressed genes (DEGs) between the control and waterlogging treatment groups were identified using the following criteria: |log 2 (FC)| ≥ 1 and adjusted p value ≤ 0.05. The key differentially expressed transcription factors (TFs) were identified according to the following criterion: |log 2 (FC)| > 2.

Validation of RNA sequencing data by quantitative real-time PCR

Eight DEGs were selected for the quantitative real-time PCR (qRT-PCR) analysis to verify the accuracy of the RNA sequencing (RNA-seq) data. First-strand cDNA was synthesized from RNA using the PrimeScript RT reagent Kit with gDNA Eraser (TaKaRa, China). Primer Premier 5 (v5.0) was used to design gene-specific primers ( Table S1 ). The maize GAPDH gene (accession number: NM_001111943.1) served as an internal control for normalizing gene expression levels.
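The DEG thresholds stated above (|log 2 (FC)| ≥ 1 and adjusted p value ≤ 0.05) amount to a simple filter over DESeq2-style output. A minimal sketch in Python; the gene names and values are hypothetical illustrations, not data from this study:

```python
def is_deg(log2_fc: float, padj: float) -> bool:
    """Apply the stated DEG thresholds: |log2(FC)| >= 1 and adjusted p <= 0.05."""
    return abs(log2_fc) >= 1.0 and padj <= 0.05

def classify(results):
    """Split (gene, log2FC, padj) records into up- and down-regulated DEG lists."""
    up, down = [], []
    for gene, log2_fc, padj in results:
        if not is_deg(log2_fc, padj):
            continue
        (up if log2_fc > 0 else down).append(gene)
    return up, down

# Hypothetical records for illustration:
records = [
    ("geneA",  2.3, 0.001),  # passes both thresholds: up-regulated DEG
    ("geneB", -1.4, 0.010),  # passes both thresholds: down-regulated DEG
    ("geneC",  0.6, 0.001),  # fold change below threshold: not a DEG
    ("geneD", -3.0, 0.200),  # adjusted p too large: not a DEG
]
up, down = classify(records)
print(up, down)  # ['geneA'] ['geneB']
```

The stricter TF criterion (|log 2 (FC)| > 2) is the same filter with a larger fold-change cutoff.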
The qRT-PCR analysis was performed as previously described ( Zhao et al., 2019 ) and the 2 −ΔΔCt method was used to calculate relative expression levels ( Livak and Schmittgen, 2001 ).

Gene Ontology and pathway enrichment analyses

The DEGs were annotated via a Gene Ontology (GO) analysis using GOSeq (v1.34.1) ( Harris et al., 2004 ), covering the three main functional categories (i.e., biological process, molecular function, and cellular component). The Kyoto Encyclopedia of Genes and Genomes (KEGG) database was used for the pathway enrichment analysis of the identified DEGs ( Kanehisa and Goto, 2000 ).
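The 2 −ΔΔCt calculation referenced above (Livak and Schmittgen, 2001) normalizes the target gene's Ct to the reference gene (here GAPDH ) in each sample, then compares treated and control samples. A minimal sketch; the Ct values are illustrative:

```python
def ddct_relative_expression(ct_target_treat: float, ct_ref_treat: float,
                             ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """2^-ddCt relative expression of a target gene in a treated sample
    versus a control sample, each normalized to a reference gene."""
    delta_treat = ct_target_treat - ct_ref_treat  # dCt in treated sample
    delta_ctrl = ct_target_ctrl - ct_ref_ctrl     # dCt in control sample
    return 2.0 ** -(delta_treat - delta_ctrl)     # 2^-(ddCt)

# Target crosses threshold 2 cycles earlier (relative to the reference)
# in the treated sample, i.e. roughly 4-fold up-regulation:
print(ddct_relative_expression(22.0, 18.0, 24.0, 18.0))  # 4.0
```

A value of 1.0 means no change relative to the control; values above 1 indicate up-regulation under waterlogging.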
Results

Characteristics of cmh15 and CM37 under waterlogged conditions

The maize cmh15 and CM37 seedlings exposed to waterlogging stress at the three-leaf stage were examined. The results indicated that the growth of cmh15 and CM37 was significantly affected under waterlogging stress; however, compared with cmh15, the CM37 seedlings exhibited yellowing and wilting leaves, and their first leaves were severely yellowed ( Figure 1a and b ). Analysis of the plant height, root length, root fresh weight, root dry weight, shoot fresh weight, and shoot dry weight indicated that the biomass loss due to waterlogging was less for cmh15 than for CM37 ( Figures 1c-h , 2a ). In the DAB staining analysis of H 2 O 2 accumulation in leaves, the CM37 leaves were more intensely stained than the cmh15 leaves, suggesting that more H 2 O 2 accumulated in CM37 than in cmh15 ( Figure 2b ). Accordingly, the cmh15 seedlings appeared to be more tolerant to waterlogging stress than the CM37 seedlings.

RNA-seq analysis of cmh15

Because cmh15 was more tolerant to waterlogging stress than CM37, the RNA-seq analysis was performed using the control and waterlogging treatment groups of cmh15 to detect significant DEGs, which may include key genes involved in the response to waterlogging stress. Six samples from the cmh15 control (CKM1-CKM3) and waterlogging treatment (WM1-WM3) groups were used to construct cDNA libraries. The RNA-seq analysis of each cDNA library yielded 39.54-45.60 million raw reads. For the six libraries, 258,361,476 clean reads were retained after the raw reads were filtered for quality. Approximately 72.82%-74.58% of the clean reads were uniquely mapped to the maize B73 reference genome ( Table 1 ). The heatmap clustering results indicated that the three biological replicates for the control and treatment groups were clustered together ( Figure 3a ).
The principal component analysis of the six samples revealed the high correlation between the replicates of each group ( Figure S1 ). Thus, the RNA-seq data were highly reproducible.

Identification of DEGs in the response to waterlogging stress

On the basis of the statistical analysis of the expressed genes, the most common FPKM values were 3-15, whereas the least common FPKM values were > 60 ( Table S2 ). According to the comparison of the FPKM values of all expressed genes between the control and waterlogging treatment groups, a slight overall increase in expression was observed in the waterlogging treatment group relative to the control group, indicative of the waterlogging-induced expression of some genes ( Figure S2 ). The DEGs were screened and subjected to a cluster analysis. The three biological replicates for each group were clustered together ( Figure 3b ). In total, 2,189 DEGs were identified between the control and waterlogging treatment groups, including 1,359 down-regulated genes and 830 up-regulated genes ( Figure 3c ). Four up-regulated DEGs and four down-regulated DEGs were selected for the qRT-PCR analysis. The correlation between the qRT-PCR and RNA-seq data ( R 2 = 0.871) reflected the reliability of the RNA-seq results ( Figure 4 ).

Enrichment analysis of the DEGs

To investigate the biological roles of the DEGs responsive to waterlogging stress, the 2,189 DEGs between the control and waterlogging treatment groups were functionally annotated using a GO enrichment analysis. Figure 5 shows the 30 most significantly enriched GO terms. Within the molecular function category, iron ion binding (GO:0005506) and heme binding (GO:0009055) were mainly enriched. In the biological process category, oxidation-reduction process (GO:0055114) and response to cold (GO:0009409) were the main enriched GO terms. Within the cellular component category, integral component of membrane (GO:0016021) and chloroplast (GO:0009507) were mainly enriched.
Some of the DEGs annotated with these terms may play a vital role in the response of maize to waterlogging stress. For example, among the genes annotated with oxidation-reduction process (GO:0055114), Zm00001d020686 (acco2) is important for the final step of the ETH biosynthesis pathway (Ning et al., 2021). The KEGG analysis of these DEGs identified 121 enriched pathways. Figure 6 shows the 30 most significantly enriched pathways. Four pathways, namely biosynthesis of amino acids (ko01230), metabolic pathways (ko01100), biosynthesis of secondary metabolites (ko01110), and carbon metabolism (ko01200), were enriched with the highest numbers of DEGs. Additionally, other pathways, such as glycolysis/gluconeogenesis (ko00010), may be closely related to the waterlogging stress response of maize.

Analysis of key DEGs encoding TFs

A total of 155 TFs from 37 TF families were identified among the DEGs. The families with the most TFs were MYB, G2-like, and bZIP. On the basis of the RNA-seq analysis, 36 of the 155 TFs were identified as key differentially expressed members (|log2(FC)| > 2). The cluster analysis of these 36 TFs showed that eight TFs had up-regulated expression levels in response to the waterlogging treatment, whereas the expression levels of the other 28 TFs were down-regulated (Figure 7). Because few genes responsive to waterlogging stress have been reported in maize, the homologs of these TFs in Arabidopsis and rice were analyzed, suggesting that some of these TFs may be important for regulating maize responses to abiotic stress and hormones. For example, a previous study showed that the overexpression of OMTN6, the rice homolog of Zm00001d024268, negatively affects the drought resistance of rice at the reproductive stage (Fang et al., 2014).
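The GO and KEGG enrichment tests used above are typically based on the hypergeometric distribution. A minimal, dependency-free sketch follows; the gene counts are invented for illustration and are not the study's numbers:

```python
from math import comb

def hypergeom_enrich_p(N, K, n, k):
    """Upper-tail hypergeometric probability P(X >= k): the chance of seeing
    at least k term-annotated genes when n DEGs are drawn from N annotated
    genes, K of which carry the GO/KEGG term. This upper tail is the usual
    enrichment p-value (before multiple-testing correction)."""
    total = comb(N, n)
    upper = min(K, n)
    return sum(comb(K, i) * comb(N - K, n - i) for i in range(k, upper + 1)) / total

# Invented toy numbers: 2,000 annotated genes, 40 carrying the term,
# 300 DEGs of which 15 carry the term.
p = hypergeom_enrich_p(2000, 40, 300, 15)
```

In practice a tool such as clusterProfiler or goatools performs this test for every term and then applies a false discovery rate correction across terms.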
Discussion

Waterlogging is one of the important factors limiting global maize production. In the summer maize planting area of the Huang-Huai-Hai region, which accounts for more than one-third of the entire maize planting area in China, more than two-thirds of the annual precipitation is concentrated in the summer maize growing period (Wang et al., 2022). Excessive rainfall may result in waterlogged soils, which can seriously affect summer maize yield and quality (Ren et al., 2015). There has been some research on the mechanisms mediating waterlogging tolerance (Pan et al., 2021). Plants respond to waterlogging stress through multiple mechanisms, one of which involves changes in the expression of many genes; thus, the identification of the key responsive genes can provide an important foundation for research on these mechanisms. Previous studies have indicated that during the first two days of waterlogging, the waterlogging tolerance coefficient (WTC) decreases slowly, then declines sharply from days 4 to 6, and decreases only slightly from days 8 to 12 (Liu et al., 2010). Accordingly, a period of 6-10 days is commonly employed for waterlogging stress treatments (Du et al., 2016; Ren et al., 2017; Huang et al., 2022). In addition, different germplasm backgrounds respond differently to waterlogging stress. Therefore, following these studies, we chose 6 days as the waterlogging treatment time in our experiments, which is sufficient to reveal the differences in the responses of cmh15 and CM37. The results showed that the CM37 leaves were more wilted and yellowed than those of cmh15 in the waterlogging groups (Figure 1a and 1b). The analysis of indicators such as plant height indicated that the biomass loss under the waterlogging treatment was greater for CM37 than for cmh15 (Figures 1 and 2a).
When the accumulation of H2O2 in plant leaves was analyzed via DAB staining, we found that CM37 accumulated more H2O2 than cmh15 after the waterlogging treatment (Figure 2b). On the basis of these phenotypic and physiological indicators, we concluded that cmh15 is more tolerant to waterlogging stress than CM37. The control and waterlogging treatment groups of cmh15 were included in the RNA-seq analysis performed in this study, and 1,359 down-regulated and 830 up-regulated DEGs were identified. The main enriched GO terms for the DEGs were oxidation-reduction process (GO:0055114) and integral component of membrane (GO:0016021). Furthermore, the DEGs were enriched in several GO terms related to photosynthesis, such as photosynthesis, light harvesting (GO:0009765), photosystem I reaction center (GO:0009538), and photosystem II (GO:0009523). According to previous studies, waterlogging can inhibit the activity of photosynthesis-related enzymes (Voesenek et al., 2006; Wu and Yang, 2016). The KEGG pathway enrichment analysis indicated that metabolic pathways (ko01100) were highly enriched among the DEGs in the cmh15 waterlogging treatment group (Figure 6). In a previous study, an RNA-seq analysis of the waterlogging-tolerant maize line 'Suwan-2' revealed that metabolic pathways were significantly enriched, which may be related to waterlogging tolerance (Yao, 2021). Furthermore, the KEGG pathway analysis indicated that glycolysis/gluconeogenesis (ko00010) was an enriched pathway among the DEGs. An earlier study determined that plants exposed to hypoxia due to waterlogging can continue to produce energy to a certain extent through glycolysis and ethanol fermentation (Pan et al., 2021). Therefore, these findings suggest that these terms and pathways play important roles in the response to waterlogging stress. Transcription factors play a vital role in plant responses to abiotic stresses and hormones (Wang et al., 2018).
ZmEREB180 encodes a TF that regulates waterlogging tolerance in maize seedlings by enhancing AR formation and antioxidant levels (Yu et al., 2019). In the RNA-seq analysis, 155 differentially expressed TFs were identified, of which 36 had significant changes in their expression levels in response to waterlogging stress (|log2FC| > 2). The analysis of these TFs and their homologs in Arabidopsis and rice revealed the importance of some of them in responses to abiotic stress and hormones. Because ETH diffuses slowly in water, waterlogging stress results in the accumulation of ETH in plant tissues, which induces the expression of genes involved in the response to waterlogging stress (Voesenek and Bailey-Serres, 2015). Furthermore, ETH can regulate the formation of the plant aerenchyma and ARs, while also controlling the elongation of branches to cope with waterlogging stress. Earlier research showed that the homolog of Zm00001d003451 (EIL5) in rice (OsEIL6) affects ETH signal transduction in rice plants (Mao et al., 2006; Yang C et al., 2015). Because of its ability to regulate the plant water potential, ABA is considered another key hormone in the waterlogging stress response (Pan et al., 2021). The homologs of Zm00001d032923 (HSF30) and Zm00001d044975 (c1) in rice and Arabidopsis are involved in ABA signal transduction (Huang et al., 2016; Wang et al., 2020). Therefore, the TFs encoded by these genes may have important regulatory roles in plant responses to waterlogging stress. In conclusion, the maize response to waterlogging stress involves many complex biological processes. The findings of this study are important for breeding waterlogging-tolerant maize varieties, while also serving as a basis for future research on the mechanisms of the response to waterlogging stress.
These authors contributed equally to this work. Associate Editor: Hong Luo Conflict of Interest: The authors declare that there is no conflict of interest that could be perceived as prejudicial to the impartiality of the reported research.

Abstract

Waterlogging stress is an important abiotic stress that adversely affects maize growth and yield. The mechanism regulating the early stage of the maize response to waterlogging stress is largely unknown. In this study, CM37 and cmh15 seedlings were subjected to waterlogging stress and then examined in terms of their physiological changes. The results indicated that the inbred line cmh15 is more tolerant to waterlogging stress and less susceptible to peroxide-based damage than CM37. The RNA sequencing analysis identified 1,359 down-regulated genes and 830 up-regulated genes in the waterlogging-treated cmh15 plants (relative to the corresponding control levels). The Gene Ontology analysis of the differentially expressed genes (DEGs) identified several important terms that may play important roles in the response to waterlogging stress. Moreover, enriched Kyoto Encyclopedia of Genes and Genomes pathways were also identified for the DEGs. Furthermore, the substantial changes in the expression of 36 key transcription factors may be closely related to the maize response to waterlogging stress. This study offers important insights into the mechanism regulating maize tolerance to waterlogging stress and provides an important foundation for future research.
Acknowledgements

This work was supported by the Anhui Province University Natural Science Research Project (2022AH040123) and the Science and Technology Major Project of Anhui Province (2022e03020008). The following online material is available for this article:
Genet Mol Biol.; 46(4):e20230026
Introduction

Molecular dynamics (MD) simulations of proteins are an invaluable tool in many branches of the life sciences, from biophysics, biochemistry, and structural biology to molecular biology and more. With uses in academia as well as industry, some noted applications include drug discovery in the pharmaceutical sciences (1), discovering and creating improved protein variants (2), gaining biological insights (3), etc. To gather the most insight from simulated systems, various methods of analyzing MD simulations of proteins have been developed over the years. Visual analysis of the simulated system is conducted by observing the simulation throughout its course, and is then complemented with quantitative analyses. Some of the most common statistical analyses include root mean square deviation (RMSD), radius of gyration (Rgyr), root mean square fluctuations (RMSF), and more. Given the complexity of simulated systems, robust, fast, and easy-to-interpret methods of analysis are of great value and significance both to the researchers conducting them and to the audience interpreting them. Intuitive and unambiguous visualizations of quantitative analyses can simplify the distinction of important results from the ever-present large data noise, minimize human error, and facilitate the communication of information and the scientific discourse of research findings. Presented here is a novel method of representing protein molecular dynamics simulations, which are complex multidimensional systems, as two-dimensional heatmaps of proteins' backbone movements. This approach, referred to as trajectory maps, offers intuitive visualization of simulation courses, direct conclusive comparison of multiple simulation courses, plotting of the movements of specific regions of proteins during the simulation, and more.
Materials and methods

Trajectory maps

The foundation of trajectory maps is the movement of amino-acid residues from their reference positions, referred to as shifts. Shifts, defined as the Euclidean distance of the center of mass of a residue's backbone in time t from its position at a reference time t_ref, are shown in a matrix for every residue and every frame of the simulation. The expression used for calculating shifts is shown in equation 1. A shift s is calculated for every residue r in time t against the reference time t_ref, which is taken as the first frame of the simulation:

s_r(t) = sqrt( (x_r(t) - x_r(t_ref))^2 + (y_r(t) - y_r(t_ref))^2 + (z_r(t) - z_r(t_ref))^2 )    (1)

Coordinates x, y and z are those of the center of mass of a residue's backbone atoms: Cα, C, O and N. A matrix of shifts is created and its values are color coded. The resulting heatmap that represents the MD trajectory is referred to in the text as a trajectory map. Before subjecting a trajectory to the trajectory map analysis, frames have to be aligned so that rotation and/or translation of the whole system is removed. This can be achieved either in the simulation setup or in the trajectory processing, by aligning frames using the trjconv command in GROMACS or the align command in AMBER. Optimal performance of trajectory maps is achieved on trajectories containing between 500 and 1000 frames, so reducing the number of frames from the original trajectory is recommended in order to obtain the clearest, most readable, and easiest-to-interpret trajectory map. In the given examples, shifts were calculated using the center of mass of the backbone atoms, but any other points could be used instead, e.g. only the positions of the α-carbon atoms of the backbone or the center of mass of the whole amino-acid residue. Using centers of mass (as opposed to Cα positions) results in a better resolution of the map's z axis (representing shifts) because in-residue vibrations are diminished in magnitude.
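The shift calculation of equation 1 translates directly into a few lines of NumPy. The array shapes and the synthetic coordinates below are assumptions for illustration (TrajMap.py itself builds the centers of mass with MDTraj):

```python
import numpy as np

def shift_matrix(com, t_ref=0):
    """com: (n_frames, n_residues, 3) array of per-residue backbone centers
    of mass from an aligned trajectory. Returns an (n_residues, n_frames)
    matrix of Euclidean shifts from frame t_ref, i.e.
    s_r(t) = ||com[t, r] - com[t_ref, r]||."""
    diff = com - com[t_ref]                # broadcast the reference frame
    return np.linalg.norm(diff, axis=2).T  # residues as rows, frames as columns

# Tiny synthetic trajectory: 3 frames, 2 residues (values are invented)
com = np.zeros((3, 2, 3))
com[1, 0] = [3.0, 4.0, 0.0]  # residue 0 moves 5 units in frame 1
com[2, 1] = [0.0, 0.0, 2.0]  # residue 1 moves 2 units in frame 2
shifts = shift_matrix(com)   # shape (2, 3); shifts[0, 1] == 5.0
```

Color coding this matrix (e.g. with Matplotlib's imshow) then yields the trajectory map described above.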
Furthermore, the reference from which shifts are calculated is taken as the first frame of the simulation, t0, but this can also be modified. By taking the previous timestep as the reference (the shift from t_i-1 to t_i), a map can be obtained as well. A map of 'previous step' shifts is independent of conformational changes and only shows the amount of fluctuation in the protein. Since it depends on the previous step of the trajectory, the choice of trajectory stride is critical. At the moment, this feature is deemed less useful than taking the first step of a simulation as the reference. For these reasons, neither of these two features is available in the main Python program provided here, but they can be accessed by manually modifying the source code. While several approaches similar to this one exist in the literature, notably references (4-6) and a feature in a plugin of the program VMD (7), the method presented here was developed independently by the authors as an original idea. Furthermore, to the best of the authors' knowledge, no previous work has applied such an approach in the way, or to the extent, described in this article.

Implementation of trajectory maps

Implementation of trajectory maps is achieved through TrajMap.py, an easy-to-use open-source Python-based script. TrajMap.py (TM) depends on four Python libraries: NumPy (8), Pandas (9), Matplotlib (10), and MDTraj (11), all of which can easily be installed using pip. The script is ready to be used as-is; on Windows operating systems it is recommended to use it through the Anaconda distribution of Python. On Linux operating systems, TM can be used in two ways: (i) as a terminal application, by manually entering inputs, and (ii) with Bash scripts containing pre-written inputs.
Usage through Bash scripts is recommended as it is faster and easier than manual typing, and it allows for upscaling and bulk processing, reducing the room for human error. Both ways of usage require virtually no knowledge of Python or Bash, and user-friendly guidelines are provided for both approaches. Features of TM include: (i) creating a trajectory map from a simulation, (ii) creating a shift graph of a defined region of a protein, (iii) calculating the average of two or three trajectory maps, and (iv) calculating the difference of two trajectory maps (of single simulations or of simulation averages). The workflow centers around converting trajectories and topologies (.xtc + .gro; .xtc + .pdb; .nc + .prmtop; etc.) into a matrix of shifts that is saved as a .csv file. That constitutes the first step: preprocessing. In the second step, making the map, the .csv matrix is loaded and a map is created from it using the inputted parameters. The reason for this two-step approach is that preprocessing is the longest step, taking around 5 min for a 300-residue, 500-frame simulation, whereas creating the map is significantly faster but may require multiple iterations to fine-tune the range of the z-axis color scale, the axis ticks, etc. The workflow scheme is provided in Figure 1. TrajMap.py comes as the TrajMap kit, with the main script TM.py, premade Bash scripts (a preprocessing script and a map-making script), and a 'test kit' folder. The latter includes small (under 50 MB) test trajectories and a Bash script that runs the program and all its features with the test trajectories to make sure all modules are imported and working properly. Extensive documentation is provided as well, with detailed instructions on how to install the dependencies and use the module. Alongside the TM program, the TrajMap_local library is provided; it can be used as a locally imported Python module. Full access to the functions is thereby available, and they can be used as from any other Python library.
With sufficient knowledge of Python, axis names, figure sizes, colormaps, reference times, and everything else can be modified.
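The two-step workflow described above (preprocess once into a cached .csv shift matrix, then iterate quickly on the map) can be mimicked with NumPy alone. The file name and the random placeholder matrix below are assumptions for illustration, not part of TrajMap.py:

```python
import os
import tempfile
import numpy as np

# Step 1 (slow, run once): compute the shift matrix and cache it as .csv.
# A random placeholder stands in for a real (residues x frames) shift matrix.
shifts = np.random.default_rng(0).random((50, 100))
path = os.path.join(tempfile.gettempdir(), "shifts.csv")
np.savetxt(path, shifts, delimiter=",")

# Step 2 (fast, run repeatedly): reload the cached matrix and fine-tune the
# map, e.g. plt.imshow(loaded, aspect="auto", vmin=0, vmax=loaded.max()).
loaded = np.loadtxt(path, delimiter=",")
assert np.allclose(loaded, shifts)  # the .csv round trip preserves the matrix
```

Caching the expensive step means the color-scale range and axis ticks can be tuned over many quick iterations without reprocessing the trajectory.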
Results

Trajectory maps

Trajectory maps show each residue's backbone shift (defined as a Euclidean distance) from its starting position, in each frame of a simulation. On the x axis are the frames of the trajectory (simulation time), on the y axis are the residues (residue number), and on the z axis is the color scale representing the magnitude of the shift in each frame from the backbone's position in the first frame. Consequently, trajectory maps show the location, time, and magnitude of every movement of a protein's backbone during the simulation. This intuitive visualization of the full course of a simulation can greatly aid in understanding and interpreting it, especially in the early stages of analysis. Having a roadmap of conformational events can facilitate and speed up the visual analysis of the studied trajectories, as well as indicate future courses of analysis. Furthermore, it allows multiple simulations to be compared in a conclusive and intuitive way. Validation of the method is provided through three case studies in which trajectory maps were applied to previously investigated systems.

Case study 1: differentiating stability of simulated systems

The authors of a study titled 'CATANA: an online modelling environment for proteins and nucleic acid nanostructures' (12) tested their modelling tool by comparing simulations starting from structures built with it against structures obtained by other common means (usually starting from the crystal structure). One of the simulated proteins, the transcription activator-like effector (TAL) (13), was simulated in a complex with a DNA sequence built with CATANA and compared against a simulation of a TAL complex built with a crystal-structure DNA sequence (Figure 2A and B). When compared, the trajectory maps for the two simulations show the difference in stability, indicating that the TAL complex with CATANA-built DNA is more stable.
Those results are further confirmed by the RMSD and RMSF graphs, two commonly used analyses (Figure 2B and C). Compared to RMSD and RMSF graphs, trajectory maps reveal additional detail in the form of the starting times, locations, and magnitudes of even temporary conformational events. Conclusions drawn from the trajectory maps are in accordance with the authors' conclusion that the CATANA structure built using AlphaFold leads to a more stable protein complex than a complex built from the crystal structure using another homology modelling tool (4). Additionally, trajectory maps reveal the regions of greatest instability and the time frames in which it occurs. Insights gathered in this way can be further explored through other means, such as visual analysis.

Case study 2: comparing structural dynamics of multiple systems

Multiple simulations can easily be compared by subtracting their trajectory maps (e.g. A - B for simulations A and B). Shifts stronger in A yield positive values and shifts stronger in B yield negative values, while similar shifts stay close to zero. Coloring the resulting difference map with a colormap divergent around a value (e.g. blue-white-red) makes positive values (shifts stronger in A) red, regions of similar shifts white, and regions with shifts stronger in B blue. Additionally, multiple simulations can be averaged and the averages subtracted (e.g. subtracting the average of duplicates A and B: [A + A′ - B - B′] / 2). From trajectory maps, shift graphs can be created by plotting the shift in time of a single residue as a two-dimensional graph. The same can be done for trajectory difference maps as well. Furthermore, the average of a region can be plotted to show the shift in time of a portion of a protein (e.g. a whole helix, a whole domain, etc.). That is analogous to looking at a desired chunk of a trajectory map along the y axis, so the trajectory map's color-scale z axis becomes the y axis of the newly constructed graph.
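The map arithmetic described above (averaging replicates, subtracting conditions, and collapsing a band of residues into a shift graph) is plain array algebra. A sketch on synthetic maps, with all values invented for illustration:

```python
import numpy as np

def difference_map(maps_a, maps_b):
    """Subtract the mean map of condition B from the mean map of condition A.
    Positive values: shifts stronger in A; negative: stronger in B."""
    return np.mean(maps_a, axis=0) - np.mean(maps_b, axis=0)

def region_shift_graph(traj_map, start, stop):
    """Average shift over residues [start, stop) in every frame,
    e.g. over a whole helix or a DNA-binding region."""
    return traj_map[start:stop].mean(axis=0)

# Synthetic duplicates: condition A shifts twice as strongly as condition B
a = [np.full((10, 20), 2.0), np.full((10, 20), 2.0)]
b = [np.full((10, 20), 1.0), np.full((10, 20), 1.0)]
diff = difference_map(a, b)             # uniformly +1.0: "red" on a divergent map
trace = region_shift_graph(diff, 3, 7)  # flat line at 1.0 for the chosen band
```

Plotting diff with a divergent colormap centered at zero then reproduces the blue-white-red presentation described in the text.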
In a study titled 'Structural dynamics of the Bacillus subtilis MntR transcription factor is locked by Mn2+ binding' (14), Jelić Matošević et al. studied the role of manganese ions in the structural dynamics of the MntR protein, a homodimeric transcription factor from Bacillus subtilis. From that study, two simulations of the holo protein (in a complex with Mn2+ ions) (in the referenced article, structures with PDB codes 2F5F and 2F5C) (15) and two simulations of the apo protein (in the referenced article, structures with PDB codes 2HYG and 2HYF) (16) were used for testing trajectory maps. Trajectory maps were generated from the four simulations, and by subtracting the average of the holo simulations from the average of the apo simulations, the differences between the systems were shown (Figure 3A). From there, each band representing a difference between the simulations can be described in the context of the protein. Blue regions represent shifts stronger in the holo form of the protein, while red regions represent shifts stronger in the apo form. White indicates shifts that were of the same magnitude in both forms. A region identified as important was a DNA-binding region that includes residues 30-40 and 171-181 (black arrows on the y axis of Figure 3A) (these are the same chain regions, because the homodimeric chains were numbered 1-141 and 142-282). The average shifts of regions 30-40 and 171-181 were plotted for the difference map of the averages (Figure 3B) and for the individual simulations (Figure 3C and E). The trajectory difference map shows that for regions 30-40 and 171-181 the shifts are stronger in the apo form. That is quantified in panels B-D, which all confirm that the respective regions lose stability in the apo form. Additionally, the four-simulation difference of shifts (panel B) demonstrates similar behavior of residues 30-40 and 171-181, which is according to expectations, as those are the same regions of the two homodimeric chains.
The conclusion is that the holo form, with bound manganese ions, has the mentioned regions more stable and with fewer fluctuations than the apo form, in which the fluctuations are more pronounced. The quantified results are visually shown in panel E, where it is visible that the orange apo form has more disordered fluctuations than the blue holo form, although that conclusion is less clear than from the shift graphs. Conclusions drawn from the trajectory map analysis support the authors' conclusion that the binding of Mn2+ ions reduces the conformational space of the protein and locks the orientation of the DNA-binding helices (14). Results obtained with trajectory maps are further confirmed by the authors' PCA analysis of the proteins' backbones, and by the other means utilized in the original article (which can be further quantified and given intuition and context by the trajectory map analysis). Again, the same conclusions were obtained by the authors, but with much more effort in visualization and the usage of a variety of different analysis tools. In contrast, the trajectory difference map directly pointed out these regions and would have significantly reduced the effort and the time required from the authors to obtain and convey the same results (had it been available at the time of the study). Additionally, shift graphs reveal that the observed effects alter only the fluctuations of the protein structure without causing any significant conformational change; the same was concluded by the authors of the study (14). In the case of a stable conformational change, the shifts would instead increase and keep fluctuating around some value corresponding to a new stable conformation; this is shown in Figure 4 in the following section, Case study 3, which features an actual change in conformation and not only an increase in fluctuations.
Case study 3: quantifying conformational changes

In an ongoing study, a variant of the enzyme horseradish peroxidase (HRP) (17) with several known mutations (a recombinant type) is being studied. The recombinant type of HRP (mHRP) was simulated with classical molecular dynamics. Visual analysis showed a change in a helix region of the enzyme near the active site (residues 142-155, helix E). A trajectory map was generated, and a notable band corresponding to that region confirmed the magnitude of the change (Figure 4A, white arrow). A shift graph of the region was plotted, thereby quantifying the results (Figure 4B). Furthermore, the trajectory map showed the existence of two more notable bands corresponding to conformational changes near the helix E region (on the trajectory map, a band at residues ∼250), which later in the study aided the characterization of the mechanism by which the change happens. In another instance of an mHRP simulation, the mentioned conformational change was once again observed. To show its impact on the structure of HRP, a trajectory map was created and an RMSD graph was superposed on it (Figure 5). A jump in the RMSD graph at the time frame in which the helix E conformational change begins confirms and quantifies its overall impact on the structure. With an RMSD graph superposed on the map, the impact of individual conformational events on the overall stability of the enzyme can be seen and quantified, as shown in this example. Further in the study, HRP was simulated at four increasing temperatures in the range from 300 K to 353 K, for both the recombinant and the wild type (resulting in two batches of four simulations). Trajectory maps were generated, and the average across the four temperatures for the recombinant type was subtracted from the temperature average of the wild type.
The resulting difference map (Figure 6) showed all the differences in the conformational events of the backbones of the wild and recombinant types, adjusted for the increasing temperatures. To aid the characterization, secondary structures from the literature were annotated on the y axis. From there, it was easy to assign and characterize on the enzyme each individual band corresponding to a difference between the simulations (negative values showed shifts stronger in the recombinant type, while positive values showed shifts stronger in the wild type). From that, the effects of the mutations on the structural dynamical properties of the enzyme, as related to an increase in temperature, were studied and interpreted. This constituted a fully conclusive overview and comparison of the backbones' conformational events in two batches of four simulations (eight simulations in total).
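Superposing an RMSD trace on a trajectory map, as in Figure 5, only requires the standard per-frame RMSD computed on the same aligned coordinates. A minimal NumPy sketch follows; the coordinates are synthetic, and alignment is assumed to have been done beforehand, as for the maps:

```python
import numpy as np

def rmsd_per_frame(coords, ref=None):
    """coords: (n_frames, n_atoms, 3) aligned coordinates.
    Returns the per-frame RMSD from a reference frame (default: first frame):
    RMSD(t) = sqrt(mean over atoms of ||r_i(t) - r_i(ref)||^2)."""
    if ref is None:
        ref = coords[0]
    diff = coords - ref
    return np.sqrt((diff ** 2).sum(axis=2).mean(axis=1))

# 2 frames, 2 atoms: the second frame displaces both atoms by 1 unit in x
coords = np.zeros((2, 2, 3))
coords[1, :, 0] = 1.0
rmsd = rmsd_per_frame(coords)  # array([0.0, 1.0])
```

Because the RMSD trace and the trajectory map share the same frame axis, the trace can simply be plotted over the heatmap on a secondary y axis.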
Discussion

Trajectory maps, a novel method of visualizing and analyzing protein molecular dynamics simulations, were presented. With trajectory maps it is possible: (i) to visualize the full course of a simulation in an elegant and intuitive way, (ii) to directly compare the courses of multiple simulations or multiple simulation sets, (iii) to visualize the behavior of a residue or a protein region during the simulation, (iv) to distinguish between significant conformational changes and increases in fluctuations, and (v) to complement established methods of analysis, such as RMSD and RMSF. The elegant and intuitive manner in which trajectory maps convey the full course of a simulation, handle multiple variant simulations, and visualize shifts of regions (in addition to other tools arising from this approach) could greatly benefit many fields associated with protein molecular dynamics simulations. Several of the benefits researchers could gain from applying trajectory maps to analyze molecular dynamics trajectories were demonstrated on three previously investigated systems. Backbone movements and conformational changes of regions are shown as horizontal bands on a heatmap that represents the trajectory. Distinguishing the start times of individual conformational changes as bands on a heatmap can aid in interpreting the results of other analyses such as RMSD, Rgyr, and others. By comparing an RMSD graph with a trajectory map, it is possible to associate RMSD peaks with the individual conformational changes that caused them. Trajectory maps provide a way to distinguish between conformational changes that lead to a stable different conformation of a region and destabilizing changes that result in an increase in fluctuations, which complements the RMSF analysis (where this distinction can be ambiguous). Furthermore, a useful feature of trajectory maps is their ability to showcase the order in which conformational changes occur.
By examining the starting times of bands corresponding to shifts of regions involved in a mechanism, it is simple to deduce the order in which these changes occur, which can aid in characterizing the mechanism of a conformational change. Another powerful feature is the ability to plot the average shift of a defined region throughout time, in the form of a 2D shift graph. The resulting shift graph can be used to show the changing position of a structural element, e.g. a helix, throughout the simulation, as proof of a conformational change or a proposed mechanism. This is analogous to looking at a defined slice of a trajectory map along the y axis, so the color-coded z axis corresponding to the magnitude of the shift becomes the y axis of the constructed graph. Since heatmaps are color-coded matrices, it is possible to perform calculations with them. Several heatmaps can be averaged to show the average course of, e.g., a triplicate simulation. By subtracting two matrices (e.g. apo - holo, recombinant - wild, etc.), all the differences in their courses and conformational changes are concisely and conclusively shown. The resulting trajectory difference map is color-coded with a divergent colormap to highlight the differences (a divergent colormap diverges around a value into both positive and negative values, e.g. blues for negative values, whites for values near zero, and reds for positive values). TrajMap.py, an easy-to-use open-source Python-based script created for generating trajectory maps, was provided, together with clear and simple instructions for its usage. The provided content includes the main script, Bash scripts for easy and fast usage on a larger scale, and documentation with detailed descriptions of its workings. Trajectory maps are a simulation analysis tool that is simple to use and provides results that are easy to comprehend and interpret.
In order to demonstrate the advantages of trajectory maps over similar tools, Case studies 1 and 2 were also analysed with other tools, and a comparison of the obtained results is provided in the Supplementary Data (Supplementary Figures S1-S4). The advantage of trajectory maps over similar tools in the presentation of the obtained results can be seen in both cases. Furthermore, the skills required to obtain and present such results (Supplementary Figures S2a, S2b and S4b in the Supplementary Data) are significantly more demanding in the case of the other trajectory analysis tools than in the case of trajectory maps. In addition, it is worth noting that such usage, as presented here for comparison purposes, is not mentioned in the documentation of the tools we compared to trajectory maps and, to the best of our knowledge, has not previously been performed in that way in the literature. The best example of the impact that trajectory maps could have in the field of MD simulations is seen in Case study 3. In that case, subtle conformational changes during eight simulations were identified and, more importantly, the impact of each conformational change on the overall stability of the enzyme was quantified. Without trajectory maps, the same analysis would likely be impossible. With the insight we present in this paper it might be possible, but it would require a deep understanding of the mathematical background of this and similar methods of analysis, as well as advanced usage of the tools we compared to trajectory maps. It would certainly be demanding and time-consuming. Therefore, trajectory maps present an opportunity for even a beginner in the field to perform such comprehensive and powerful analyses in a simple and straightforward way.
Abstract Molecular dynamics simulations generate trajectories that depict a system's evolution in time and are analyzed visually and quantitatively. Commonly conducted analyses include RMSD, R gyr , RMSF, and more. However, these methods are all limited by their strictly statistical nature. Here we present trajectory maps, a novel method to analyze and visualize the course of protein simulations intuitively and conclusively. By plotting the movements of the protein backbone during the simulation as a heatmap, trajectory maps provide a new tool to directly visualize protein behavior over time, compare multiple simulations, and complement established methods. A user-friendly Python application developed for this purpose is presented, alongside detailed documentation for easy usage and implementation. The method is validated on three case studies. Considering its benefits, trajectory maps are expected to find broad application in obtaining and communicating meaningful results of protein molecular dynamics simulations in many associated fields such as biochemistry, structural biology, pharmaceutical research, etc. Graphical Abstract
Supplementary Material
Acknowledgements We acknowledge Croatian Science Foundation, grant number IP-2020-02-3446. We are grateful to Zoe Jelić Matošević and Tana Tandarić for valuable technical support. Data availability The source code for TrajMap.py is open source and fully available on Zenodo at DOI: 10.5281/zenodo.10428488 ( https://zenodo.org/doi/10.5281/zenodo.10428488 ). The Zenodo repository serves as a permanent backup of the original GitHub repository cited in this publication ( https://github.com/matkozic/TrajMap ) with a permanent DOI. Supplementary data Supplementary Data are available at NARGAB Online. Funding Croatian Science Foundation project "Manganese metallosensors" [IP-2020-02-3446]. Conflict of interest statement . The authors declare no conflicts of interest.
NAR Genom Bioinform. 2024 Jan 15; 6(1):lqad114
The massive integration of silicon-based transistors into modern electronic devices greatly increases power consumption and heat generation/dissipation [ 1 ]. In this regard, transistors based on atomically thin graphene, which exhibit high electric and thermoelectric conductivities with negligible contact resistance, wide-range gate-tunability and high compatibility with existing processing technologies, have emerged as promising alternatives in the post-Moore era [ 2 , 3 ]. Owing to its gapless band dispersion, however, graphene-based transistors show a limited on/off switching ratio (<100). Various methods such as doping [ 4 ], substrate engineering [ 5 ] and dimensional reduction [ 6 ] have been proposed, but almost none of them reaches a satisfying on/off ratio without sacrificing carrier mobility. Further graphene functionalization, dedicated device architectures and even external field regulation are still required. Writing in National Science Review , a team led by Prof. Yanpeng Liu from Nanjing University of Aeronautics and Astronautics overcame this trade-off between on/off ratio and carrier mobility (Fig. 1 ). They proposed a novel device architecture based on a graphene/black phosphorus (BP) heterostructure that forms nano-buckles on demand under local current annealing. The as-fabricated graphene-based transistors show outstanding on/off ratios >10 3 at room temperature while preserving intrinsic ultrahigh carrier mobility. Within the framework of coupled electro-thermo-mechanical fields at the atomic scale, Prof. Liu and his coworkers first prepared heavily lattice-mismatched heterostructures, for instance a hexagonal graphene monolayer on orthorhombic BP flakes, and selectively applied a direct current to thermally anneal the graphene on puckered BP between two electrodes.
Upon global annealing, graphene/BP is likely to form a rotation-angle-dependent Moiré superlattice and pseudomagnetic field, accompanied by electron localization within circumscribed graphene landscapes [ 8 ]. Here, the annealed graphene/BP region buckles locally at the nanoscale, with a ridge-like spatial structure (behaving as an electron-reflective interface) formed at the intersection with the adjacent un-annealed region. This electrical interface blocks the in-plane current flow and gives rise to the off-state of the logic transistors. For the on-state, the authors instead leverage a simple back-gate to make the bottom BP layer conductive, thus generating dual-channel propagation through graphene and BP. In this manner, logic operation of the graphene transistor is readily realized by low-power gate control, even at room temperature and under intense magnetic fields. Via electrostatic gating, the graphene-based transistor simultaneously achieves an on/off ratio >10 3 and a high mobility of ∼8000 cm 2 V −1 s −1 at room temperature, fulfilling the low-power criterion suggested by the International Roadmap for Devices and Systems [ 9 ]. This breakthrough not only sheds light on the control of two-dimensional electrons through nanoscale multi-field coupling, but also establishes a paradigm for the development of graphene-based transistors, edging us one step closer to their ultimate commercialization.
Conflict of interest statement . None declared.
Natl Sci Rev. 2024 Jan 3; 11(2):nwad316
INTRODUCTION Solar-driven CO 2 conversion into CO, CH 4 , CH 3 OH and other products through photocatalysis offers a sustainable approach to synchronously alleviating the energy crisis and achieving net-zero CO 2 emissions [ 1–5 ]. However, since the lifetime of photogenerated electrons is usually on the order of sub-picoseconds to seconds [ 6 , 7 ], the photocatalytic reaction stops rapidly once illumination ends. The asynchrony between solar energy supply and utilization demand, affected by day length and weather, is a great obstacle to the practical application of CO 2 photoreduction [ 8 , 9 ]. As such, it is of great significance to develop a method for decoupling CO 2 reduction from the solar energy supply toward the goal of round-the-clock, all-weather CO 2 conversion. Natural photosynthesis offers insight into the decoupling of light and dark reactions. The conversion of CO 2 into carbohydrates by natural photosynthesis in green plants is divided into two steps: the widely known light reaction and dark reaction. During the light reaction, chloroplasts synthesize reducing equivalents, which are used to produce reduced nicotinamide adenine dinucleotide phosphate (NADPH) and adenosine triphosphate (ATP) (Fig. 1a ) [ 10 ]. In the dark reaction, with the help of NADPH and ATP, CO 2 is converted stepwise into carbohydrates. Pioneering studies have proposed multistep solar-driven hydrogen production. In a typical example, Amthor et al. used a covalent photosensitizer–polyoxometalate dyad to store photogenerated electrons in the polyoxometalate under visible-light irradiation [ 11 ]. Subsequently, hydrogen can be released in an on-demand manner by adding a proton donor to the dyad solution. In another case, Lau et al. reported the formation of ‘blue radical’ species within the cyanamide-functionalized polymeric network of heptazine units under solar irradiation [ 12 ].
By adding a Pt cocatalyst, the ‘blue radical’ species can give off the trapped electrons in the dark to release H 2 . While delayed, on-demand solar hydrogen production has been achieved in several seminal works [ 11–13 ], it remains a challenge to decouple the light and dark reactions for CO 2 conversion, particularly toward hydrocarbon fuel production, as a process involving multielectron, hydrogen-coupled reduction is substantially more complex than H 2 production. To make this work for such a complex process, both the photogenerated electrons and the hydrogen atoms have to be efficiently stored in a well-designed material under light irradiation, so that the CO 2 reduction process can then be triggered spontaneously upon their release in the dark. In principle, the reduction of CO 2 to CH 4 through a proton/electron pathway undergoes a negative change in Gibbs energy (ΔG° = –113 kJ mol –1 ) ( Supplementary Fig. S1 ) [ 14 ]. It is thus feasible to achieve sustainable CO 2 methanation in the dark, decoupled from the solar irradiation that supplies the protons and electrons. To decouple the light reaction and dark reaction processes, the catalyst has to meet stringent requirements. First, under solar illumination, photocatalytic water splitting should occur to produce hydrogen atoms and electrons for energy storage. Second, the catalyst should possess suitable sites for storing the hydrogen atoms and electrons. Third, driving forces should be present within the catalyst to facilitate the storage of electrons and hydrogen atoms under illumination as well as their subsequent release in the dark, enabling the reaction to take place spontaneously. Moreover, the structural changes during the storage and release of electrons and hydrogen atoms should be reversible to ensure cyclic stability. Among various catalytic materials, hexagonal tungsten oxide (h-WO 3 )-based photocatalysts can perfectly meet these specific requirements.
Specifically, the energy band position of engineered h-WO 3 can meet the needs of photocatalytic CO 2 conversion and water splitting to produce hydrogen atoms and electrons [ 15 , 16 ], despite its relatively weak reduction capacity. In addition, h-WO 3 has a rich channel structure [ 17 ], providing abundant sites for hydrogen storage. Furthermore, in the presence of suitable cocatalysts (e.g. Pt, Pd, Cu), the hydrogen spillover process can be realized, which allows a fraction of the hydrogen atoms to be transferred to the storage sites instead of being released in the form of H 2 [ 18 ]. In principle, the processes of storing and releasing H atoms in h-WO 3 are accompanied by a reversible transformation between W 5+ and W 6+ that maintains charge balance and, as such, no structural damage to h-WO 3 occurs [ 19 ]. Taken together, modified h-WO 3 is an ideal model material for demonstrating the concept of decoupling the light reaction and dark reaction processes for all-weather CO 2 utilization. Here, we report a well-designed material model for decoupling the light reaction and dark reaction to realize CO 2 utilization in the dark. As illustrated in Fig. 1b , Pt-loaded h-WO 3 (denoted Pt/h-WO 3 ) is employed as a model catalyst to demonstrate our concept. In the light reaction, light irradiation of the catalyst initiates photocatalytic water splitting, which in turn stores some of the electrons and H atoms in the h-WO 3 . Upon the end of illumination, the stored electrons and H atoms are spontaneously released from the h-WO 3 to trigger the catalytic reduction of CO 2 in the dark. By leveraging the advantages of decoupling the light and dark reactions, this work opens a new avenue for sustainable round-the-clock, all-weather CO 2 utilization.
METHODS Materials preparation To prepare the h-WO 3 nanorods, 0.99 g of Na 2 WO 4 ·2H 2 O and 1.19 g of NaHSO 4 ·H 2 O were dissolved in 40 mL of deionized water with constant stirring. After stirring for 1 h, the mixture solution was transferred into a 100-mL autoclave and heated in an oven at 180°C for 24 h. After the reaction, the precipitates were collected by centrifugation and washed with deionized water and ethanol several times. The obtained powder was dried in a vacuum at 70°C overnight (denoted h-WO 3 ). Pt-modified h-WO 3 was obtained by in situ reduction of Pt by low-valence W 5+ . Briefly, 0.2 g of h-WO 3 and 100 mL of water were added into a 200-mL quartz closed reactor and then ultrasonically treated for 30 min to form a uniform, light-white suspension. After the mixture was purged with Ar for 1 h, a 500-W xenon lamp was used as the light source to irradiate the solution for 1 h. The color of the mixture changed to light blue after 1 h of illumination, indicating that reduced W 5+ had formed. Then 10 mL of H 2 PtCl 6 ·xH 2 O (2 mg/mL) was dropped into the obtained solution with constant stirring. After stirring for 1 min, the sample was separated by centrifugation, washed three times alternately with deionized water and ethanol, and dried in a vacuum at 70°C overnight (denoted Pt/h-WO 3 ). Catalytic CO 2 reduction measurement The CO 2 reduction performance of the as-obtained samples was measured in a home-made cylindrical quartz closed reactor (200 mL). First, 10 mg of catalyst and 60 mL of deionized water were added into the reactor and then ultrasonically dispersed for 30 min. Second, the air in the reactor was removed with a vacuum pump for 1 h. Simulated sunlight from a 300-W Xe arc lamp was then irradiated onto the sample through a quartz window for a given period of time.
After illumination, a certain amount of high-purity CO 2 (99.999%) was injected into the reactor to keep the pressure equal to the atmosphere. Finally, the reactor was placed in the dark for the CO 2 reduction reaction. The products were detected by gas chromatography (7890B, Agilent) equipped with a flame ionization detector and a thermal conductivity detector. The reactor was maintained at 20°C by circulating water throughout the reaction. The AQE was calculated as AQE (%) = (Number of evolved CH 4 molecules × 8)/(Number of incident photons) × 100%. In the isotope labeling experiments, 13 CO 2 was used instead of 12 CO 2 , or D 2 O instead of H 2 O, as a reactant, and the product (CH 4 or CD 4 ) was detected by gas chromatography-mass spectrometry (7890A and 5975C, Agilent).
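The AQE formula quoted above can be written as a small function. A sketch follows; the function name, the power-and-time parameterization of the incident photon count, and the use of CODATA constants are our assumptions, not code from the study:

```python
# Physical constants (CODATA values)
AVOGADRO = 6.02214076e23     # molecules per mole
PLANCK = 6.62607015e-34      # J s
LIGHT_SPEED = 2.99792458e8   # m/s

def apparent_quantum_efficiency(n_ch4_mol, wavelength_nm, power_w, time_s):
    """AQE (%) for CO2 -> CH4, which consumes 8 electrons per CH4 molecule.

    n_ch4_mol     -- evolved CH4 in moles
    wavelength_nm -- monochromatic illumination wavelength in nm
    power_w       -- incident light power in watts (assumed parameterization)
    time_s        -- illumination time in seconds
    """
    photon_energy = PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)  # J/photon
    n_photons = power_w * time_s / photon_energy
    n_electrons = n_ch4_mol * AVOGADRO * 8
    return 100.0 * n_electrons / n_photons
```

The AQE scales linearly with the amount of CH 4 evolved and inversely with the incident photon dose, which is why shorter wavelengths (higher-energy photons, fewer photons per watt) can yield a higher AQE at fixed power.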
RESULTS AND DISCUSSION Material synthesis and characterization Pt/h-WO 3 was prepared by hydrothermal synthesis of h-WO 3 followed by in situ deposition of Pt, as illustrated in Fig. 2a . The synthesized sample exhibits a nanorod-like structure with diameters between 300 and 500 nm ( Supplementary Figs S2 and S3 ). The X-ray diffraction (XRD) patterns show that the synthesized sample contains h-WO 3 (PDF#75-2187) (Fig. 2b ). The hexagonal structure is based on an arrangement of corner-sharing WO 6 octahedra in (WO 6 ) 6 wheels, which are stacked along the c -axis to yield hexagonal tunnels ( Supplementary Fig. S4 ) [ 19 ]. The existence of the hexagonal tunnels was proven by atomic-resolution aberration-corrected high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) (Fig. 2c ), consistent with the N 2 sorption isotherm results ( Supplementary Fig. S5 ). According to previous reports [ 19 , 20 ], the tunnel structure of Pt/h-WO 3 is conducive to species embedding and dissociation, which is beneficial for the storage of hydrogen species. In our synthesis, Pt modification was achieved by in situ reduction of Pt 4+ with the photogenerated electrons stored in h-WO 3 . The successful addition of Pt to the h-WO 3 nanorods is demonstrated by high-resolution Pt 4f X-ray photoelectron spectroscopy (XPS) ( Supplementary Fig. S6 ), and the Pt loading of Pt/h-WO 3 is 0.11 wt% as measured by inductively coupled plasma-mass spectrometry (ICP–MS). Note that no peaks corresponding to Pt species are found in the XRD patterns, suggesting that the Pt atoms are not in a crystalline form. To look into the form of the Pt atoms, diffuse reflectance Fourier transform infrared (FTIR) spectroscopy was employed to examine the sample using CO as a probe molecule. As shown in Fig.
2d , Pt/h-WO 3 displays a CO adsorption peak at 2119 cm −1 , which is different from the adsorption of CO on Pt nanoparticles (2080 cm −1 ) ( Supplementary Fig. S7 ), indicating that Pt was mainly loaded on the surface of Pt/h-WO 3 in the form of highly dispersed atoms [ 21 ]. Moreover, the corresponding energy-dispersive spectroscopy mapping for Pt/h-WO 3 confirmed that the Pt atoms are homogeneously distributed on the Pt/h-WO 3 catalyst ( Supplementary Fig. S8 ). Upon the addition of Pt, the absorption of Pt/h-WO 3 in the visible range is slightly reduced ( Supplementary Fig. S9 ), which is mainly caused by the decrease in W 5+ content in the sample [ 15 ]. Further characterization shows that Pt can effectively accept electrons from the conduction band of the h-WO 3 carrier so that the photogenerated charges are efficiently separated and in turn localized on the catalyst surface ( Supplementary Figs S10–S13 ). The above characterizations prove that we have successfully introduced Pt to the surface of h-WO 3 , which can effectively improve the utilization rate of photogenerated charges. Another consideration for integrating h-WO 3 with Pt is to potentially utilize hydrogen spillover for harnessing hydrogen storage into h-WO 3 . The migration of H atoms from Pt to the tungsten trioxide was reported by Khoobiar as early as 1964 [ 22 ]. To this end, we studied the hydrogen spillover phenomenon over Pt/h-WO 3 by using H 2 temperature programmed reduction (H 2 -TPR). As shown in Fig. 2e , the three reduction peaks all move to a low temperature after Pt is deposited onto the h-WO 3 , suggesting the hydrogen spillover from Pt sites to the h-WO 3 carrier [ 18 ]. Moreover, previous reports revealed that water could significantly increase the diffusion rate of the reducing species from Pt to tungsten trioxide [ 23 ]. 
In our case, once light irradiation produces H atoms from water with the photogenerated electrons, it would be feasible to insert H atoms into the tunnel-structured h-WO 3 carrier in aqueous solution. Catalytic CO 2 reduction performance Having proposed that the synthesized material may store electrons and hydrogen atoms, we are now in a position to test whether the stored energy can be used to reduce CO 2 under dark conditions. To this end, we analysed the band structure of Pt/h-WO 3 and found that it can meet the demands of CO 2 reduction ( Supplementary Fig. S9 ). The CO 2 reduction performance of the catalysts was then investigated using a home-made reactor ( Supplementary Fig. S14 ). During the measurements, the catalysts were charged in pure water under simulated solar illumination for 10 min, followed by a CO 2 reduction reaction under dark conditions. After light illumination, a certain amount of O 2 and H 2 was detected ( Supplementary Fig. S15 ), indicating that photocatalytic water splitting had occurred. The ratio of detected H 2 production to O 2 production was dramatically less than the stoichiometric ratio of water splitting (2 : 1), indicating that a fraction of the H atoms had been stored [ 24 ]. Given the storage of H atoms during the 10 min of light illumination, high-purity CO 2 was then injected into the reactor for the reduction reaction in the dark. As shown in Fig. 3a , after 10 min of light illumination, the catalytic CO 2 reduction reaction continued for 10 days after the light was turned off, and the yield of CH 4 reached 51.6 μmol/g. This CH 4 yield is equivalent to 309.6 μmol/g per hour of illumination, a fairly high rate for pure water systems compared with most existing reports ( Supplementary Table S1 ). No other carbon-containing products were detected except for a small amount of CO ( Supplementary Fig. S16 ).
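The stored-hydrogen estimate implied by the sub-stoichiometric H 2 : O 2 ratio can be sketched as a back-of-envelope calculation. The helper function and the example numbers below are hypothetical illustrations, not the paper's measured data:

```python
def stored_h_atoms(n_h2_umol, n_o2_umol):
    """Estimate stored hydrogen (umol of H atoms) from the deficit of the
    detected H2 : O2 ratio relative to the 2 : 1 water-splitting stoichiometry.

    Each O2 corresponds to 4 H atoms produced (2 H2O -> O2 + 4 H);
    each detected H2 accounts for 2 of those H atoms; the rest are stored.
    """
    return 4.0 * n_o2_umol - 2.0 * n_h2_umol

# A hypothetical sub-stoichiometric ratio of 1.2 : 1 would imply
# roughly 1.6 umol of H atoms stored per umol of O2 evolved:
deficit = stored_h_atoms(1.2, 1.0)
```

At the exact 2 : 1 stoichiometric ratio the estimate is zero, i.e. nothing is stored; any ratio below 2 : 1 implies a positive stored-hydrogen budget.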
As Pt is an excellent cocatalyst for hydrogen generation, H 2 was detected in the products after the reaction ( Supplementary Fig. S17 ). Even without CO 2 , H 2 can be released under dark conditions ( Supplementary Fig. S18 ). To suppress H 2 evolution, we further increased the pressure of the reaction system. As a result, the mass transfer of CO 2 was improved while H 2 evolution was effectively suppressed [ 25 ], and the yield of CH 4 further increased from 51.6 to 72.0 μmol/g ( Supplementary Fig. S19 ). In addition, we found that the products can be tuned by modifying the cocatalyst: for example, replacing Pt with Cu shifted the main product to H 2 ( Supplementary Fig. S20 ). In comparison, when CO 2 was replaced by high-purity Ar in the dark reaction process, only a negligible amount of CH 4 was detected (0.21 μmol/g), indicating that CH 4 is predominantly generated from CO 2 in the dark reaction [ 26 ]. The results of the 13 C isotope labeling experiment confirm that the CH 4 in the product indeed originated from CO 2 reduction rather than from carbon impurities (Fig. 3b ). In addition, the reaction product changed to CD 4 upon replacing H 2 O with D 2 O, proving that the hydrogen in the product came from water. From the above results, we can safely conclude that decoupling the light and dark reactions indeed achieves CO 2 reduction in the dark. To verify that the reduction products originated from the energy stored in the catalyst, we carried out a series of control experiments. First, CO 2 was introduced directly into the reactor in the absence of light irradiation, and no carbon-containing products were generated, indicating that the driving force for CO 2 reduction comes from light. We then explored the influence of the illumination time on the amount of CH 4 generated. As shown in Fig.
3c , CH 4 production increases as the illumination time before the dark reaction is prolonged, suggesting that CO 2 conversion depends on the number of photons. However, when the illumination time was extended beyond 10 min, the increase in CH 4 yield was very limited, indicating that saturation is reached after about 10 min of illumination. To further prove that the energy is stored in the catalyst, CO 2 reduction tests were carried out with different catalyst dosages. As shown in Fig. 3d , CH 4 production in the dark reaction gradually increased with the amount of catalyst used, suggesting that the driving force for CO 2 reduction is stored in the catalyst during illumination. When the catalyst dosage was increased by two orders of magnitude, the production rate of CH 4 per unit mass of catalyst was almost unchanged ( Supplementary Fig. S21 ), indicating that the catalyst holds promise for large-scale application. In addition, we expressed the efficiency of energy storage in terms of the apparent quantum efficiency (AQE). The efficiency of photon-to-chemical energy conversion under different monochromatic light wavelengths is summarized in Fig. 3e . The AQE of 1.13% at 320 nm demonstrates that short-wavelength light excites the catalyst. These results demonstrate that the reduction of CO 2 under dark conditions is triggered by the light energy stored in the catalyst. Catalyst stability is another critical factor that largely determines whether a catalyst can be used in practice. The cyclic stability test indicates that the recycled catalyst retains ∼98.7% of its original activity after four runs (Fig. 3f ). The stability of the catalyst was also proven by comparing the XRD patterns, FTIR spectra and XPS spectra of the catalyst after the reaction with those of the fresh one ( Supplementary Fig. S22 ).
The Pt content after cycle testing was measured by ICP–MS, revealing that the Pt content (0.11 wt%) did not change during the reaction, which demonstrates the good stability of the material. In addition, the concentration of oxygen vacancies did not change significantly over the whole reaction, indicating that the structure of the catalyst remained stable during the reaction ( Supplementary Fig. S23 ). In all, our designed Pt/h-WO 3 can decouple the light and dark reaction processes to achieve CO 2 reduction under dark conditions with high recyclability, indicating that such a working mechanism meets the preliminary requirements for sustainable all-weather CO 2 conversion. Mechanism of decoupled CO 2 conversion process The demonstration that light-pretreated Pt/h-WO 3 can sustain CO 2 conversion under dark conditions urges us to decode the mechanism of the decoupled light and dark reaction processes through systematic investigations. As such, we extensively examined the mechanisms of energy storage under light irradiation and energy release in the dark. During the whole reaction process, the most distinct phenomenon was the significant color change of the catalyst. The color of the catalyst changed from gray-white to light blue during illumination but slowly returned to its original color after the dark reaction ( Supplementary Fig. S24 ). This phenomenon is related to changes in light absorption in the visible range. Visible or near-infrared light can induce polaron transitions, i.e. the hopping of polarons from W 5+ to nearby W 6+ positions [ 15 ], resulting in light absorption. In our case, W 5+ was generated during illumination and consumed in the dark reaction, altering the light absorption. To look into the generation and consumption of W 5+ , electron paramagnetic resonance (EPR) spectroscopy was employed to examine the catalyst after the light and dark reactions.
The EPR signal of W 5+ is characterized by an axial g-tensor with g values of 1.909 and 1.880 [ 19 , 27 ]. Under simulated solar light, the EPR signal of W 5+ appeared and reached its maximal intensity after 10 min (Fig. 4a ). After turning off the illumination, the intensity of the W 5+ EPR signal slowly decreased and finally returned to the original state (i.e. that prior to illumination) after 10 days (Fig. 4b ). The consistency between the evolution of the W 5+ species and the course of CO 2 reduction on the same timescale indicates that CO 2 reduction under dark conditions involves the W 5+ species. To further elucidate the origin and fate of W 5+ , we studied the valence state of W at different reaction stages by XPS. As shown in Fig. 4c , the deconvoluted W 4f spectrum of Pt/h-WO 3 can be fitted with two W oxidation states, namely W 6+ (4f 7/2 , 35.82 eV) and W 5+ (4f 7/2 , 34.81 eV), without other valences [ 15 , 28 , 29 ]. The ratio of W 5+ in the Pt/h-WO 3 increased from 2.14% to 8.36% after the light reaction, suggesting that a part of the W 6+ was reduced to W 5+ during illumination. Such a reduction of W 6+ to W 5+ during illumination is essentially a process of storing the photogenerated electrons. During the dark reaction, the proportion of W 5+ slowly decreased to 7.79% after 24 h and returned to the initial state (i.e. 2.48%) after 10 days. This shows that, in the dark reaction, the stored electrons were spontaneously released after the illumination was turned off, which in turn triggered the CO 2 reduction reaction. With this picture of electron storage in mind, two issues remain: the source of the H in the CH 4 product, and the fate of the positively charged species formed alongside the electron storage in W 5+ .
As we originally proposed, both should be associated with the formation of H atoms from photocatalytic water splitting and their insertion into the catalyst. To further verify that H atoms were inserted into h-WO 3 during the light reaction, we characterized the Pt/h-WO 3 before and after the light reaction by FTIR spectroscopy. Note that, owing to the presence of a small amount of water and hydrogen atoms in the pristine catalyst ( Supplementary Fig. S25 and Supplementary Table S2 ), we can only judge whether additional hydrogen atoms are inserted into the catalyst from changes in peak intensity. After the light reaction, the peaks attributed to O–H species at ∼3500–3700 cm −1 were obviously enhanced, indicating that the H content of the sample had increased (Fig. 4d ). Further experiments were carried out by replacing the water (H 2 O) with deuterium oxide (D 2 O). The spectra showed that the FTIR peak attributed to O–D species appeared at 2500–2700 cm −1 after light irradiation, while the peaks for O–H species at ∼3500–3700 cm −1 were fairly similar to those of pristine Pt/h-WO 3 , indicating that the H inserted into the catalyst came from water splitting. In addition, we quantified the stored hydrogen by means of ion exchange ( Supplementary Table S2 ). The above characterization indicates that H was inserted and stored in the Pt/h-WO 3 during the light reaction. Next, we tested whether the stored H atoms and electrons can trigger CO 2 reduction under dark conditions. To this end, in situ FTIR spectroscopy was performed to reveal the activation of CO 2 by the stored H atoms and electrons. For fresh Pt/h-WO 3 exposed to CO 2 /H 2 O vapors in the dark, peaks at 1700, 1688 and 1603 cm −1 , attributed to the formation of bidentate carbonate b-CO 3 2− , appeared [ 30 , 31 ] ( Supplementary Fig. S26 ) .
In addition, the peak for adsorbed H 2 O was also detected at 1641 cm −1 [ 15 ]. No intermediates corresponding to CO 2 methanation could be detected. In contrast, after Pt/h-WO 3 had been treated with H 2 O vapor under light irradiation, peaks attributed to intermediates could be detected upon exposure to CO 2 /H 2 O vapors in the dark (Fig. 4e ). Most notably, absorption peaks attributed to COOH intermediates were observed at 1730, 1580 and 1547 cm −1 [ 30 , 32–34 ]. Previous studies have reported that the formation of the *COOH structure is a crucial step in CO 2 activation [ 30 , 33 ]. This observation thus indicates that the H atoms and electrons stored in Pt/h-WO 3 can trigger CO 2 activation under dark conditions. In addition, the absorption peak attributed to *CHO was also observed (at 1460 cm −1 ), suggesting that CO 2 followed the pathway toward CH 4 formation [ 35 , 36 ]. Based on the above characterization results, the light and dark reactions can be expressed by the following equations: Light reaction: h-WO 3 + (x/2)H 2 O + hν → H x WO 3 + (x/4)O 2 (1) Dark reaction: H x WO 3 + (x/8)CO 2 → h-WO 3 + (x/8)CH 4 + (x/4)H 2 O (2) In the light reaction, electrons are excited from the valence band of the h-WO 3 carrier to its conduction band and then transferred to the Pt sites to realize water splitting. The O atoms are oxidized by the holes remaining in the valence band to release O 2 , while the H atoms at the Pt sites spill over onto the h-WO 3 carrier. In the meantime, part of the W 6+ at the h-WO 3 surface is reduced to W 5+ for electron storage. In the dark reaction, the stored electrons and H atoms are spontaneously released to achieve CO 2 reduction. Overall, the unique ability of our designed material to store and release energy over light on/off cycles is rooted in the fact that the H atom, as an energy carrier, can be stored in the channels of h-WO 3 via spillover through Pt.
To further illustrate this point, we used reference samples to separately probe the importance of two processes: the spillover and storage of H atoms together with electrons, and the release of the H atoms and electrons. First, we prepared Au-loaded h-WO 3 as a reference sample and found that no hydrogen spillover takes place on this catalyst ( Supplementary Fig. S27 ). As a result, its CO 2 conversion performance is nearly equal to that of bare h-WO 3 . This indicates that effective hydrogen spillover to the h-WO 3 carrier is a prerequisite for CO 2 conversion in the dark. Second, the reference sample of Ni-loaded h-WO 3 can store H atoms and electrons via H spillover ( Supplementary Fig. S28 ). However, under dark conditions the stored electrons and H atoms cannot be released spontaneously, hindering CO 2 reduction. In comparison, the reference sample of Pd-loaded h-WO 3 possesses properties similar to those of Pt/h-WO 3 , in which the electrons and H atoms can be both stored and released, enabling CO 2 conversion under dark conditions ( Supplementary Fig. S29 ). Taken together, these reference samples clearly demonstrate that the efficient storage and spontaneous release of electrons and H atoms together ensure the decoupling of the light and dark reactions for CO 2 conversion. Demonstration of sustainable solar-driven application To verify the feasibility of our concept in practical application, we conducted a round-the-clock, all-weather demonstration of CO 2 conversion under natural light irradiation. We divided the reactor into a light reaction module and a dark reaction module (Fig. 5a and Supplementary Figs S30 and S31 ). The light reaction module was designated for sunlight absorption and energy storage, and the CO 2 reduction reaction was then conducted in the dark reaction module.
The light reaction module and dark reaction module were connected through pipes and the catalyst dispersion in pure water was driven by using a peristaltic pump to realize circulation in the two modules. The experiment was conducted from 8 to 23 September 2022 in Xi’an, China. We recorded the light intensity during this period by using a solar radiometer. The specific weather conditions are listed in Supplementary Table S3 . As shown in Fig. 5b , the solar light intensity decreases significantly on cloudy days and decreases to zero at night and on rainy days. On the first day of the reaction, we put the whole reaction system in the dark. We found that no CH 4 was generated (Fig. 5c and d ), indicating that the energy for the subsequent generation of CH 4 was all from sunlight. On the second day of the experiment, the reaction system was transferred outdoors. CH 4 was slowly generated under sunlight, with a CH 4 yield of 1.2 μmol in the daytime. As it switched to the night, the CH 4 production continued but was observed with a slightly reduced rate, which yielded another 0.86 μmol of CH 4 . This result demonstrates that we have successfully achieved round-the-clock CO 2 reduction. The third day was cloudy so that the supply of sunlight decreased significantly; however, the rate of CH 4 production increased slightly, indicating that the intermittent supply of energy was also collected. The fourth to ninth days were sunny with a certain amount of cloud cover and, as a result, the rate of CH 4 production increased slowly and then reached an equilibrium. It is worth mentioning that, on the ninth to thirteenth days of the test, the rainy weather made the intensity of the Sun drop to near zero, but the CH 4 production did not stop. Moreover, the performance of the catalyst recovered after exposure to solar irradiation. This result manifests that the unique reaction was not significantly affected by short-term weather change, demonstrating the all-weather application.
RESULTS AND DISCUSSION Material synthesis and characterization Pt/h-WO 3 was prepared by hydrothermal synthesis of h-WO 3 followed by in situ deposition of Pt, as illustrated in Fig. 2a . The synthesized sample exhibits a nanorod-like structure with diameters of between 300 and 500 nm ( Supplementary Figs S2 and S3 ). The diffraction peaks of X-ray diffraction (XRD) patterns show that the synthesized sample contains h-WO 3 (PDF#75-2187) (Fig. 2b ). The hexagonal structure is based on an arrangement of WO 6 octahedra sharing corners in (WO 6 ) 6 wheels, which are stacked along the c -axis to yield hexagonal tunnels ( Supplementary Fig. S4 ) [ 19 ]. The existence of hexagonal tunnels was proven by using atomic resolution aberration-corrected high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) (Fig. 2c ), which is consistent with the N 2 sorption isotherms results ( Supplementary Fig. S5 ). According to previous reports [ 19 , 20 ], the tunnel structure of Pt/h-WO 3 is conducive to species embedding and dissociation, which is beneficial for the storage of hydrogen species. In our synthesis, Pt modification was achieved through in situ reducing Pt 4+ with the photogenerated electrons stored in h-WO 3 . The successful addition of Pt to h-WO 3 nanorods can be demonstrated by using high-resolution Pt 4f X-ray photoelectron spectroscopy (XPS) ( Supplementary Fig. S6 ) and the Pt loading content of Pt/h-WO 3 is 0.11 wt% as measured by using inductively coupled plasma-mass spectrometry (ICP–MS). Note that no peaks corresponding to Pt species are found in XRD patterns, suggesting that Pt atoms are not in a crystalline form. To look into the form of Pt atoms, diffuse reflectance Fourier transform infrared (FTIR) spectroscopy was employed to examine the sample using CO as a probe molecule. As shown in Fig. 
2d , Pt/h-WO 3 displays a CO adsorption peak at 2119 cm −1 , which is different from the adsorption of CO on Pt nanoparticles (2080 cm −1 ) ( Supplementary Fig. S7 ), indicating that Pt was mainly loaded on the surface of Pt/h-WO 3 in the form of highly dispersed atoms [ 21 ]. Moreover, the corresponding energy-dispersive spectroscopy mapping for Pt/h-WO 3 confirmed that the Pt atoms are homogeneously distributed on the Pt/h-WO 3 catalyst ( Supplementary Fig. S8 ). Upon the addition of Pt, the absorption of Pt/h-WO 3 in the visible range is slightly reduced ( Supplementary Fig. S9 ), which is mainly caused by the decrease in W 5+ content in the sample [ 15 ]. Further characterization shows that Pt can effectively accept electrons from the conduction band of the h-WO 3 carrier so that the photogenerated charges are efficiently separated and in turn localized on the catalyst surface ( Supplementary Figs S10–S13 ). The above characterizations prove that we have successfully introduced Pt to the surface of h-WO 3 , which can effectively improve the utilization rate of photogenerated charges. Another motivation for integrating h-WO 3 with Pt is the potential to exploit hydrogen spillover for hydrogen storage in h-WO 3 . The migration of H atoms from Pt to tungsten trioxide was reported by Khoobiar as early as 1964 [ 22 ]. To this end, we studied the hydrogen spillover phenomenon over Pt/h-WO 3 by using H 2 temperature programmed reduction (H 2 -TPR). As shown in Fig. 2e , the three reduction peaks all shift to lower temperatures after Pt is deposited onto the h-WO 3 , suggesting hydrogen spillover from the Pt sites to the h-WO 3 carrier [ 18 ]. Moreover, previous reports revealed that water could significantly increase the diffusion rate of the reducing species from Pt to tungsten trioxide [ 23 ]. 
In our case, once light irradiation produces H atoms from water with the photogenerated electrons, it would be feasible to insert H atoms into the h-WO 3 carrier with a tunnel structure in aqueous solution. Catalytic CO 2 reduction performance Having proposed that the synthesized material may store electrons and hydrogen atoms, we next examined whether the stored energy can be used to reduce CO 2 under dark conditions. To this end, we analysed the band structure of Pt/h-WO 3 and found that it satisfies the requirements for CO 2 reduction ( Supplementary Fig. S9 ). The CO 2 reduction performance of the catalysts was then investigated using a home-made reactor ( Supplementary Fig. S14 ). During the measurements, the catalysts were charged in pure water under simulated solar illumination for 10 min, followed by a CO 2 reduction reaction under dark conditions. After light illumination, a certain amount of O 2 and H 2 were detected ( Supplementary Fig. S15 ), indicating that photocatalytic water splitting had occurred. The ratio of detected H 2 production to O 2 production was dramatically less than the stoichiometric ratio of water splitting (2 : 1), indicating that a fraction of the H atoms were stored [ 24 ]. Given the storage of H atoms during the 10 min of light illumination, high-purity CO 2 was then injected into the reactor for a reduction reaction in the dark. As shown in Fig. 3a , after 10 min of light illumination, the catalytic CO 2 reduction reaction continued for 10 days after the light was turned off and the yield of CH 4 reached 51.6 μmol/g. This CH 4 yield is equivalent to 309.6 μmol/g per hour of illumination, a fairly high rate for pure-water systems compared with most existing reports ( Supplementary Table S1 ). No other carbon-containing products were detected except for a small amount of CO ( Supplementary Fig. S16 ). 
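Both bookkeeping steps above can be sketched as a short calculation: the sub-stoichiometric H 2 : O 2 ratio implies stored H (water splitting produces 4 H atoms per O 2 ), and the 51.6 μmol/g yield after a 10 min charge normalizes to 309.6 μmol/g per hour of illumination. The H 2 /O 2 amounts below are hypothetical placeholders, since only the qualitative shortfall is reported:

```python
# Water splitting: 2 H2O -> 4 H + O2, so full release of the H as H2
# would give n(H2) = 2 * n(O2).  Any shortfall is H stored in the lattice.
def stored_h_atoms(n_h2_umol, n_o2_umol):
    """H atoms retained by the catalyst, in umol (4 H produced per O2)."""
    return 4.0 * n_o2_umol - 2.0 * n_h2_umol

# Hypothetical detected amounts (the text reports only that H2/O2 << 2):
print(stored_h_atoms(n_h2_umol=1.0, n_o2_umol=2.0))  # 6.0 umol of H stored

# Rate normalization reported in the text: 51.6 umol/g of CH4 after a
# 10 min charge corresponds to 51.6 / (10/60) umol/g per hour of light.
rate_per_light_hour = 51.6 / (10 / 60)
print(round(rate_per_light_hour, 1))  # 309.6
```

The second print reproduces the 309.6 μmol/g per illumination-hour figure quoted in the text.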
As Pt is an excellent cocatalyst for hydrogen generation, H 2 was detected in the product after the reaction ( Supplementary Fig. S17 ). Even without CO 2 , H 2 can be released under dark conditions ( Supplementary Fig. S18 ). To suppress the H 2 evolution, we further increased the pressure of the reaction system. As a result, the mass transfer of CO 2 was improved while H 2 evolution was effectively suppressed [ 25 ], and the yield of CH 4 was further increased from 51.6 to 72.0 μmol/g ( Supplementary Fig. S19 ). In addition, we found that the products can be controlled by modifying the cocatalyst; for example, when Pt was replaced with Cu, the main product changed to H 2 ( Supplementary Fig. S20 ). In comparison, when CO 2 was replaced by high-purity Ar in the dark reaction process, only a negligible amount of CH 4 was detected (0.21 μmol/g), indicating that CH 4 is predominantly generated from the CO 2 in the dark reaction [ 26 ]. The results of the 13 C isotope labeling experiment confirm that the CH 4 in the product indeed originated from CO 2 reduction rather than carbon impurities (Fig. 3b ). In addition, the reaction product changed to CD 4 when H 2 O was replaced with D 2 O, proving that the hydrogen in the product came from water. From the above results, we can safely conclude that the decoupling of light and dark reactions indeed achieves CO 2 reduction in the dark. In order to verify that the reduction products originated from the stored energy in the catalyst, we carried out a series of control experiments. First of all, CO 2 was directly introduced into the reactor in the absence of light irradiation and no carbon-containing products were generated, indicating that the driving force of CO 2 reduction comes from light. Next, we explored the influence of the light-irradiation time on the amount of CH 4 generated. As shown in Fig. 
3c , the CH 4 yield increases as the illumination time before the dark reaction is prolonged, suggesting that CO 2 conversion depends on the number of photons delivered. However, when the illumination time was extended beyond 10 min, the increase in CH 4 yield was very limited, indicating that saturation is reached after 10 min of illumination. To further confirm that the energy is stored in the catalyst, different dosages of catalyst were used to carry out CO 2 reduction tests. As shown in Fig. 3d , the production of CH 4 in the dark reaction increased gradually with the amount of catalyst used, suggesting that the driving force for CO 2 reduction is stored in the catalyst during the illumination process. When the catalyst dosage was increased by two orders of magnitude, the production rate of CH 4 per unit mass of catalyst was almost maintained ( Supplementary Fig. S21 ), indicating that the catalyst has prospects for further large-scale application. In addition, we expressed the efficiency of energy storage in terms of the apparent quantum efficiency (AQE). The efficiency of photon-to-chemical energy conversion under different monochromatic light wavelengths is summarized in Fig. 3e . The AQE reaches 1.13% at 320 nm, demonstrating that short-wavelength light effectively excites the catalyst. Taken together, these results demonstrate that the reduction of CO 2 under dark conditions is triggered by the light energy stored in the catalyst. The catalyst stability is another critical factor that largely determines whether the catalyst can be used in practice. The cyclic stability test indicates that the recycled catalyst retains ∼98.7% of the original activity after four runs (Fig. 3f ). The stability of the catalyst was also confirmed by the XRD patterns, FTIR spectra and XPS spectra of the catalyst after the reaction in comparison with the fresh one ( Supplementary Fig. S22 ). 
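The AQE values quoted above follow the standard definition for an 8-electron product such as CH 4 : electrons consumed divided by incident photons. A minimal sketch of that calculation is below; the light power, irradiation time and CH 4 amount are illustrative assumptions, not values reported in the text:

```python
# Apparent quantum efficiency (AQE) for an 8-electron product such as CH4:
#   AQE = (8 x CH4 molecules formed) / (incident photons) x 100%
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro constant, 1/mol

def incident_photons(power_w, time_s, wavelength_m):
    """Number of monochromatic photons delivered in time_s."""
    return power_w * time_s * wavelength_m / (H * C)

def aqe_ch4(n_ch4_mol, power_w, time_s, wavelength_nm):
    """AQE in percent, assuming 8 electrons per CH4 molecule."""
    electrons = 8 * n_ch4_mol * N_A
    photons = incident_photons(power_w, time_s, wavelength_nm * 1e-9)
    return 100.0 * electrons / photons

# Illustrative numbers only (0.1 W at 320 nm for 10 min, 0.1 umol CH4):
print(f"{aqe_ch4(1e-7, power_w=0.1, time_s=600, wavelength_nm=320):.2f} %")
```

With these placeholder inputs the sketch gives an AQE of about 0.5%, in the same ballpark as the reported 1.13% at 320 nm.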
The content of Pt after cycle testing was measured by using ICP–MS, revealing that the content of Pt (0.11 wt%) did not change during the reaction, which demonstrates the good stability of the material. In addition, the concentration of oxygen vacancies did not change significantly during the whole reaction, indicating that the structure of the catalyst was relatively stable during the reaction ( Supplementary Fig. S23 ). In all, our designed Pt/h-WO 3 can decouple the light and dark reaction processes to achieve CO 2 reduction under dark conditions with high recyclability, indicating that such a working mechanism meets the preliminary requirements for sustainable all-weather CO 2 conversion. Mechanism of decoupled CO 2 conversion process The finding that light-pretreated Pt/h-WO 3 can sustain CO 2 conversion under dark conditions prompted us to systematically investigate the mechanism by which the light and dark reactions are decoupled. As such, we have extensively examined the mechanisms for energy storage under light irradiation and energy release in the dark. During the whole reaction process, the most distinct phenomenon was the significant color change of the catalyst. The color of the catalyst changed from gray white to light blue during illumination but slowly returned to its original color after the dark reaction ( Supplementary Fig. S24 ). This phenomenon is related to the change of light absorption in the visible-light range. Visible or near-infrared light can induce polaron transitions (the hopping of polarons from W 5+ to nearby W 6+ positions) [ 15 ], resulting in light absorption. In our case, W 5+ was generated in the process of illumination and consumed in the dark reaction, altering the light absorption. To look into the generation and consumption of W 5+ , electron paramagnetic resonance (EPR) spectroscopy was employed to examine the catalyst after the light and dark reactions. 
The EPR signal of W 5+ was characterized by an axial g-tensor with g values of 1.909 and 1.880 [ 19 , 27 ]. Under the simulated solar light, the EPR signal for W 5+ appeared and reached its maximal intensity after 10 min (Fig. 4a ). After turning off the illumination, the intensity of the EPR signal related to W 5+ in the catalyst slowly decreased and finally returned to the original state (i.e. prior to the illumination) after 10 days (Fig. 4b ). The consistent timescales of the evolution of the W 5+ species and the CO 2 reduction process indicate that the CO 2 reduction under dark conditions should have involved W 5+ species. To further elucidate the origin and destination of W 5+ , we studied the valence state of W at different reaction stages by using XPS. As shown in Fig. 4c , the deconvoluted W 4f spectrum of Pt/h-WO 3 can be fitted into two W oxidation states, namely W 6+ (4f 7/2 , 35.82 eV) and W 5+ (4f 7/2 , 34.81 eV), without other valences [ 15 , 28 , 29 ]. The ratio of W 5+ in the Pt/h-WO 3 increased from 2.14% to 8.36% after the light reaction, suggesting that a part of the W 6+ was reduced to W 5+ in the process of illumination. Such a reduction of W 6+ to W 5+ during illumination is essentially a process of storing the photogenerated electrons. In the dark reaction process, the proportion of W 5+ slowly decreased to 7.79% after 24 h and returned to the initial state (i.e. 2.48%) after 10 days. This shows that, in the dark reaction process, the stored electrons were spontaneously released after the illumination was turned off, which in turn triggered the CO 2 reduction reaction. With this picture of electron storage in mind, two questions remain: the source of the H in the CH 4 product and the fate of the positively charged species generated alongside the electrons stored as W 5+ . 
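As a rough, order-of-magnitude illustration only, the W 5+ increase from 2.14% to 8.36% can be translated into an electron budget per gram, if one (crudely) treats the XPS surface fraction as representative of the bulk:

```python
# One electron is stored per W6+ -> W5+ conversion.  Treating the XPS
# surface fraction as if it held for the bulk -- a deliberate
# oversimplification, since XPS only probes the near-surface region:
M_WO3 = 231.84  # g/mol, molar mass of WO3

w5_before = 0.0214  # W5+ fraction before illumination (from XPS)
w5_after = 0.0836   # W5+ fraction after 10 min of illumination

mol_w_per_g = 1.0 / M_WO3  # mol of W per gram of WO3
stored_e_umol = (w5_after - w5_before) * mol_w_per_g * 1e6
print(round(stored_e_umol, 1))  # 268.3 umol e-/g under this crude assumption

# For comparison, 51.6 umol/g of CH4 (8 e- each) consumes:
print(51.6 * 8)  # 412.8 umol e-/g
```

Even under this crude assumption the stored-electron budget alone falls short of the electrons consumed, consistent with the text's point that spilled-over H atoms also carry stored energy.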
As we originally proposed, both should be associated with the formation of H atoms from photocatalytic water splitting and their insertion into the catalyst. To further verify that H atoms had inserted into h-WO 3 during the light reaction, we characterized the Pt/h-WO 3 before and after the light reaction by using FTIR spectroscopy. Note that, due to the presence of a small amount of water and hydrogen atoms in the catalyst ( Supplementary Fig. S25 and Supplementary Table S2 ), we can only judge whether additional hydrogen atoms are inserted into the catalyst by the change in the intensity of the peak. After the light reaction, the peaks attributed to the O–H species of ∼3500–3700 cm −1 were obviously enhanced, indicating that the content of H in the sample had increased (Fig. 4d ). Further experiments were carried out by replacing the water (H 2 O) with deuterium oxide (D 2 O). The spectra showed that the FTIR peak attributed to the O–D species appeared at 2500–2700 cm −1 after the light irradiation while the peaks for the O–H species of ∼3500–3700 cm −1 were fairly similar to those of pristine Pt/h-WO 3 , indicating that the H that was inserted into the catalyst came from water splitting. In addition, we further quantified the stored hydrogen by means of ion exchange ( Supplementary Table S2 ). The above characterization indicates that H was inserted and stored in the Pt/h-WO 3 during the light reaction process. Next, we further proved whether the stored H atoms and electrons can trigger CO 2 reduction under dark conditions. To this end, in situ FTIR spectroscopy was performed to reveal the activation of the CO 2 by the stored H atoms and electrons. For fresh Pt/h-WO 3 , after being exposed to CO 2 /H 2 O vapors in the dark, the peaks at 1700, 1688 and 1603 cm −1 that should be attributed to the formation of bidentate carbonate b-CO 3 2− appeared [ 30 , 31 ] ( Supplementary Fig. S26 ) . 
In addition, the peak for adsorbed H 2 O was also detected at 1641 cm −1 [ 15 ]. No intermediates corresponding to CO 2 methanation could be detected. In contrast, after Pt/h-WO 3 had been treated with H 2 O vapor under light irradiation, some peaks attributed to intermediates could be detected upon exposure to CO 2 /H 2 O vapors in the dark (Fig. 4e ). Most notably, the absorption peaks attributed to COOH intermediates were observed at 1730, 1580 and 1547 cm −1 [ 30 , 32–34 ]. Previous studies have reported that the formation of the *COOH structure is a crucial step in CO 2 activation [ 30 , 33 ]. Thus, this observation indicates that the H atoms and electrons stored in Pt/h-WO 3 can trigger CO 2 activation under dark conditions. In addition, the absorption peak attributed to *CHO was also observed (at 1460 cm −1 ), suggesting that CO 2 followed the path of CH 4 formation [ 35 , 36 ]. Based on the above characterization results, the light and dark reactions can be expressed by the following equations: Light reaction: 2H 2 O → 4H(stored) + O 2 (1) Dark reaction: CO 2 + 8H(stored) → CH 4 + 2H 2 O (2) In the light reaction process, the electrons are excited from the valence band of the h-WO 3 carrier to its conduction band and then transferred to the Pt sites to realize the water splitting. The O atoms are oxidized by the holes remaining in the valence band to release O 2 , while the H atoms at the Pt sites spill over onto the h-WO 3 carrier. In the meantime, a part of the W 6+ in the h-WO 3 surface is reduced to W 5+ for electron storage. In the dark reaction, the stored electrons and H atoms are spontaneously released to achieve CO 2 reduction. Overall, the unique performance of our designed material, with its capability of storing and releasing energy over light on/off cycles, stems from the fact that H atoms, as energy carriers, can be stored in the channels of h-WO 3 via spillover from Pt. 
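The element bookkeeping behind the light and dark half-reactions described above (water splitting stores 4 H atoms per O 2 released; CO 2 methanation consumes 8 H per CH 4 , the standard stoichiometry) can be double-checked mechanically:

```python
from collections import Counter

# Element tallies for the species involved in the cycle.
SPECIES = {
    "H2O": {"H": 2, "O": 1},
    "O2": {"O": 2},
    "H": {"H": 1},      # spilled-over H atom stored in h-WO3
    "CO2": {"C": 1, "O": 2},
    "CH4": {"C": 1, "H": 4},
}

def tally(side):
    """Sum element counts over (coefficient, species) pairs."""
    total = Counter()
    for coeff, sp in side:
        for el, n in SPECIES[sp].items():
            total[el] += coeff * n
    return total

# Light reaction: 2 H2O -> 4 H(stored) + O2
light_ok = tally([(2, "H2O")]) == tally([(4, "H"), (1, "O2")])
# Dark reaction: CO2 + 8 H(stored) -> CH4 + 2 H2O
dark_ok = tally([(1, "CO2"), (8, "H")]) == tally([(1, "CH4"), (2, "H2O")])
print(light_ok, dark_ok)  # True True
```

Summing the two half-reactions also recovers the familiar net process CO 2 + 2H 2 O → CH 4 + 2O 2 , with the stored H atoms cancelling out.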
To further illustrate this point, we used reference samples to separately probe the importance of two processes: the spillover and storage of H atoms together with electrons, and their subsequent release. First, we prepared Au-loaded h-WO 3 as a reference sample and found that no hydrogen spillover takes place on this catalyst ( Supplementary Fig. S27 ). As a result, its CO 2 conversion performance is nearly equal to that of bare h-WO 3 . This indicates that effective hydrogen spillover to the h-WO 3 carrier is a prerequisite for CO 2 conversion in the dark. Second, the reference sample of Ni-loaded h-WO 3 is able to store H atoms and electrons along with H spillover ( Supplementary Fig. S28 ). However, under dark conditions, the stored electrons and H atoms cannot be released spontaneously, hindering the CO 2 reduction. In comparison, the reference sample of Pd-loaded h-WO 3 possesses similar properties to Pt/h-WO 3 , in which electrons and H atoms can be both stored and released, enabling CO 2 conversion under dark conditions ( Supplementary Fig. S29 ). Taken together, the above reference samples clearly demonstrate that the efficient storage and spontaneous release of electrons and H atoms together ensure the decoupling of light and dark reactions for CO 2 conversion. Demonstration of sustainable solar-driven application In order to verify the feasibility of our concept in practical application, we conducted a round-the-clock and all-weather demonstration of CO 2 conversion under natural light irradiation. We divided the reactor into a light reaction module and a dark reaction module (Fig. 5a and Supplementary Figs S30 and S31 ). The light reaction module was designated for sunlight absorption and energy storage, and the CO 2 reduction reaction was then conducted in the dark reaction module. 
The light reaction module and dark reaction module were connected through pipes, and the catalyst dispersed in pure water was circulated between the two modules by a peristaltic pump. The experiment was conducted from 8 to 23 September 2022 in Xi’an, China. We recorded the light intensity during this period by using a solar radiometer. The specific weather conditions are listed in Supplementary Table S3 . As shown in Fig. 5b , the solar light intensity decreases significantly on cloudy days and drops to zero at night and on rainy days. On the first day of the reaction, we kept the whole reaction system in the dark. We found that no CH 4 was generated (Fig. 5c and d ), indicating that the energy for the subsequent generation of CH 4 came entirely from sunlight. On the second day of the experiment, the reaction system was transferred outdoors. CH 4 was slowly generated under sunlight, with a CH 4 yield of 1.2 μmol in the daytime. As day turned to night, CH 4 production continued at a slightly reduced rate, yielding another 0.86 μmol of CH 4 . This result demonstrates that we have successfully achieved round-the-clock CO 2 reduction. The third day was cloudy, so the supply of sunlight decreased significantly; however, the rate of CH 4 production increased slightly, indicating that even the intermittent supply of solar energy was harvested. The fourth to ninth days were sunny with a certain amount of cloud cover and, as a result, the rate of CH 4 production increased slowly and then reached an equilibrium. It is worth mentioning that, on the ninth to thirteenth days of the test, the rainy weather reduced the solar intensity to near zero, but the CH 4 production did not stop. Moreover, the performance of the catalyst recovered after exposure to solar irradiation. This result shows that the reaction was not significantly affected by short-term weather changes, demonstrating all-weather applicability.
CONCLUSION In summary, we prepared Pt-loaded h-WO 3 as a model catalyst that can store photogenerated electrons and hydrogen atoms under light irradiation for dark-reaction CO 2 conversion. The CH 4 yield of this catalyst reached 51.6 μmol/g under dark conditions after 10 min of simulated solar illumination, and the activity was maintained at ∼98.7% after four cycles of use. While there is still plenty of room to improve the conversion rate in the future, this work clearly demonstrates for the first time that the concept of decoupling light and dark reaction processes can work for sustainable solar-driven CO 2 conversion. Our systematic investigations have proven that the reduction of CO 2 under dark conditions is indeed triggered by the electrons and hydrogen atoms that are generated by photon energy during light irradiation and stored in the catalyst. The unique characteristics of the h-WO 3 carrier, which offers variable valence states and tunnel structures, along with the capability of Pt to split water and spill hydrogen atoms over onto the h-WO 3 surface, are key to achieving the decoupling of light and dark reactions for CO 2 conversion. Toward practical applications, we conducted a demonstration using natural light in which CH 4 production persisted at night and on rainy days, indicating that our proposed concept can achieve round-the-clock and all-weather CO 2 conversion. By decoupling the light reaction and dark reaction, this work contributes to the solar-driven conversion of CO 2 into valuable products in a sustainable way.
ABSTRACT Solar-driven CO 2 conversion into hydrocarbon fuels is a sustainable approach to simultaneously alleviating the energy crisis and achieving net-zero CO 2 emissions. However, the dependence of the conversion process on solar illumination hinders its practical application due to the intermittent availability of sunlight at night and on cloudy or rainy days. Here, we report a model material of Pt-loaded hexagonal tungsten trioxide (Pt/h-WO 3 ) for decoupling light and dark reaction processes, demonstrating sustainable CO 2 conversion under dark conditions for the first time. In such a material system, hydrogen atoms can be produced by photocatalytic water splitting under solar illumination, stored together with electrons in the h-WO 3 through the transition of W 6+ to W 5+ and spontaneously released to trigger catalytic CO 2 reduction under dark conditions. Furthermore, we demonstrate using natural light that CH 4 production can persist at night and on rainy days, proving that all-weather CO 2 conversion can be accomplished in a sustainable way. This solar-driven CO 2 reduction is free from the uncertainty of the sunlight supply, allowing all-weather CO 2 utilization.
Supplementary Material
FUNDING This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, China (XDA23010300 and XDA23010000), the National Natural Science Foundation of China (21725102, 22232003, 51878644 and 41573138) and the Youth Cross Team Scientific Research Project of the Chinese Academy of Sciences (JCTD-2022-17). AUTHOR CONTRIBUTIONS Y.H. and Y.X. supervised the project. X.S., Y.H., J.C. and Y.X. conceived and designed the experiments. X.S. and L.W. performed the research. X.S., Y.H., Z.W., R.L., G.Z., J.C. and Y.X. co-wrote the manuscript. All the authors discussed the results and commented on the manuscript. Conflict of interest statement . None declared.
CC BY
no
2024-01-16 23:47:16
Natl Sci Rev. 2023 Oct 28; 11(2):nwad275
oa_package/fe/db/PMC10789249.tar.gz
PMC10789250
37480192
INTRODUCTION Esophageal cancer is the eighth leading cause of cancer death worldwide, with a notoriously insidious onset, aggressive tumor biology and a reported 5-year survival of ~15%. 1 , 2 The depth of invasion of esophageal mucosa correlates with possible lymph node involvement, thereby stratifying treatment options. Curative endoscopic resection is offered when tumor invasion is limited to the most superficial third of the submucosa (SM1), whereas, deeper invasion beyond this (SM2) requires chemoradiotherapy or surgical resection or a combination of therapies. 3 , 4 Early diagnosis of esophageal cancer is often limited by a lack of symptoms, but also by subtle macroscopic appearances at endoscopy, often requiring skilled endoscopists to identify and biopsy areas of suspicion. 5 Artificial intelligence (AI) refers to machine intelligence. Deep learning and machine learning are important components of AI. Deep learning comprises layers of features that are learned from data using a general-purpose learning procedure. 6 , 7 Machine learning refers to a system that can be taught to discriminate characteristics of data samples and then use this information to interpret new information. 6 , 7 Convolutional neural networks (CNNs) are supervised machine learning models made up of multiple network layers that function by extracting key features from an input and provide final classification through connected layers as an output. 8 Machine learning with support vector machines is based on researchers manually identifying features of interest as input data, in order to train a system to recognize discriminative features, and then produce appropriate outputs. 8 These AI algorithms have been used in multiple studies to assist with the computer-aided diagnosis (CAD) of esophageal cancer. 
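To make the CNN description above concrete, the core operation of a convolutional layer, sliding a small learned filter over an image to extract a feature map, can be sketched in plain Python (a toy illustration only, not any of the reviewed diagnostic systems):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide kernel over image, sum products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            # ReLU nonlinearity, as applied between CNN layers
            row.append(max(0, acc))
        out.append(row)
    return out

# A vertical-edge filter applied to a toy 4x4 "image" whose right half is
# bright: the feature map highlights the left-to-right transition.
image = [[0, 0, 9, 9]] * 4
edge_kernel = [[-1, 1], [-1, 1]]
print(conv2d(image, edge_kernel))  # [[0, 18, 0], [0, 18, 0], [0, 18, 0]]
```

In a full CNN, many such filters are learned from labeled endoscopic images, and fully connected layers map the stacked feature maps to a final classification.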
Given the wide range of AI modalities, the variability in expertise in endoscopy and the diagnostic subtleties of esophageal malignancies, there is an increasing clinical need for AI to support the endoscopic diagnosis of esophageal cancer. In some cases, AI has been shown to be superior to inexperienced endoscopists in diagnostic capabilities, with the additional benefit of reducing inter-observer variability and minimizing human error. 6 , 8 This article aims to provide an up-to-date summary of the available published evidence, in the form of a meta-analysis and systematic review of the current literature using AI in endoscopic diagnosis of esophageal cancer.
METHODS Literature search The search methodology was defined according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 9 A systematic literature search was carried out using Pubmed, MEDLINE and Ovid EMBASE databases (date range: 1992 to 6 January 2023) using the following search strategies with standard Boolean operators: ‘artificial intelligence,’ ‘endoscopy,’ ‘machine learning,’ ‘esophageal cancer’ and ‘oesophageal cancer.’ Furthermore, the reference lists of included articles and review articles were searched for additional studies. Only English language studies that used either still endoscopic images or videos were included in the study. Studies using AI to examine histological slides were excluded, as were studies reporting on the use of AI-assisted endoscopic sponge cytology. Statistical analysis All statistical analyses were performed using STATA/SE, version 16.0 (StataCorp LLC, College Station, TX). The overall pooled estimates of sensitivity and specificity with their corresponding 95% confidence intervals (95% CI) were calculated using the random-effects model via the metandi command in STATA/SE. Sensitivity was defined as the proportion of patients with esophageal cancer that were correctly identified by AI, while specificity was defined as the proportion of patients without the disease that were correctly identified. Forest plots were used to visualize the variation in the diagnostic parameter effect-size estimates, with 95% CIs and weights, across the included studies.
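As a simplified illustration of the pooling step (the study itself fits a bivariate random-effects model via STATA's metandi; the univariate inverse-variance logit pooling below is a common teaching stand-in, and the per-study 2 × 2 counts are hypothetical):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sensitivity(studies):
    """Inverse-variance pooling of per-study sensitivities on the logit
    scale.  studies: list of (true_positives, false_negatives) pairs."""
    num = den = 0.0
    for tp, fn in studies:
        # 0.5 continuity correction guards against zero cells
        p = (tp + 0.5) / (tp + fn + 1.0)
        var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)  # var of logit(p)
        w = 1.0 / var
        num += w * logit(p)
        den += w
    return inv_logit(num / den)

# Hypothetical 2x2 counts from three studies:
print(round(pooled_sensitivity([(90, 10), (85, 15), (95, 5)]), 3))
```

Pooled specificity works identically with (true-negative, false-positive) counts; the bivariate model additionally accounts for the correlation between the two and for between-study heterogeneity.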
RESULTS Figure 1 illustrates the search methodology used in this study. In total, 48 articles were included in the qualitative and quantitative analysis of AI use in endoscopy. The meta-analysis is separated into AI algorithm application in the diagnosis of esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma (EAC). Esophageal squamous cell carcinoma Diagnosis In total, 24 studies explored the use of AI in the endoscopic diagnosis of ESCC. Fourteen retrospective studies had sufficient data to be included in this meta-analysis. A total of 1590 patients were used in the validation cohorts of the studies. The pooled sensitivity and specificity were 91.2% (84.3–95.2%) and 80% (64.3–89.9%) ( Figs 2 and 3 ). The meta-analysis included six studies using AI algorithms on still images, 3 , 10–14 three studies using video images 15–17 and five studies using a combination of still images and videos. 18–22 The benefit of using videos in the assessment of AI technology is the realistic application of the system for diagnostic purposes. The majority of the literature used CNN AI systems in the assessment of early ESCC. Twelve out of fourteen studies used either white-light imaging (WLI) or narrow-band imaging (NBI) images or a combination of both, 3 , 11 , 12 , 14–22 with four studies specifically comparing magnifying endoscopy (ME) and non-ME images. 12 , 14 , 19 , 20 Wang et al . 15 used a single shot multibox detector (SSD) model on WLI and NBI images of esophageal neoplasms; the SSD showed higher specificity on WLI images but higher sensitivity on NBI images, with similar accuracies. Tang et al . 21 developed a system using WLI exclusively, the benefit lying in the convenience and easy availability of this type of imaging, as opposed to NBI. Their results showed accuracy of 91.3%, sensitivity of 97.9% and specificity of 88.6%; these values outperformed the endoscopists. 
In order to assess the capabilities of the AI systems, 12 studies compared AI diagnostic performance with a variety of skilled endoscopists. 3 , 11 , 13 , 14 , 16–22 Eight of these studies showed that AI technology was superior to endoscopists in the diagnosis of ESCC. 3 , 11 , 16–19 , 21 , 22 Five further studies analyzed whether adding AI assistance to endoscopy improved endoscopists' accuracy, sensitivity and specificity. 11 , 16 , 17 , 20 , 21 Yang et al . showed that AI assistance improved the diagnostic accuracy of novice endoscopists to a level comparable with that of experts (85.7%), while Cai et al . showed that even senior endoscopists could improve their accuracy from 88.8 to 93.5% when AI technology was used as an adjunct to endoscopy. 11 , 20 Depth of invasion Three of the fourteen studies were retrospective reviews focusing on using AI algorithms to identify depth of tumor invasion in ESCC. 3 , 12 , 19 Traditional options for detecting invasion depth include WLI, magnifying endoscopy with NBI (ME-NBI) and endoscopic ultrasound (EUS). These modalities have accuracies of 71.4 and 65.3% for WLI and ME-NBI, respectively, while EUS has sensitivity of 85% and specificity of 87% for T1a tumors. 3 Nakagawa et al . and Tokai et al . both used deep learning CNN systems to assess invasion depths of <200 μm and >200 μm (SM1 and SM2). 3 , 12 Nakagawa et al . 12 validated their system on non-magnified images (WLI/NBI), magnified images and then iodine-stained images in 155 patients with histologically proven ESCC. Their system was then compared with endoscopists' capabilities using the same dataset. 12 The AI system diagnosed all the images in 29 seconds, compared with the 115 minutes required by the endoscopists. The diagnostic performance of the AI system was comparable with that of experienced endoscopists. 12 Tokai et al . 
3 developed a system that took an even shorter time (10 seconds) to diagnose 291 NBI and WLI images. The features used to indicate deeper invasion in this study included thickness, marginal elevation, red color and apparent lesion depression. 3 AI detection on NBI images was more sensitive than on WLI, with overall accuracy of 80.9% in this study. 3 Shimamoto et al . 19 also used videos of WLI and blue laser images (similar to NBI) in their study. They analyzed non-ME tumor protrusions, depression and hardness of lesions, followed by ME evaluation of superficial vascular architecture and then iodine staining to delineate cancer spread. 19 This study showed higher accuracy and sensitivity in ME diagnosis compared with endoscopists, but lower specificity. 19 Overall, the AI system performed better with ME videos than with non-ME images. 19 All of these studies showed promise for accurate detection of tumor invasion without requiring specific endoscopic expertise. Intrapapillary capillary loops Intrapapillary capillary loops (IPCLs) are an endoscopic microvascular pattern found in atypical esophageal squamous epithelium. 7 , 23 The Japan Esophageal Society (2012) classified IPCLs into Types A and B, with Type B vessels further separated into three classes based on vessel tortuosity and diameter. 23 IPCLs are commonly identified using chromoendoscopy or advanced endoscopic imaging with NBI. 24 Precise real-time classification is subjective, inconsistent and reliant on experienced endoscopists. With the increasing volume of work in endoscopy, AI models have been developed to improve the diagnostic accuracy of IPCL classification and possibly reduce the reliance on expert endoscopists. 7 , 23 Uema et al . 23 developed a CNN system with accuracy of 84.2% in diagnosing Type B microvascular patterns, compared with the average endoscopist's accuracy of 77.8%. Other studies with promising results in this field include Zhao et al . 
25 who used NBI-ME images and AI to discriminate between Type A and Types B1 and B2 patterns, achieving a classification rate of 87% and mean accuracy of 89.2%. Everson et al . used a CNN model that identified vessel patterns with sensitivity and specificity of 89% and 98%, respectively. 23 , 24 García-Peraza-Herrera et al . 26 also created a detection system for IPCL identification, but their proposed system, ResNet-18-CAM-DS, had lower accuracy (91.7%) than 12 expert endoscopists (94.7%). In 2022, Yuan et al . 27 conducted a multicenter study using ME-NBI images to identify Type A and B1–3 vessels. Their AI system showed combined accuracy of 91.4% for diagnosing IPCL subtypes, compared with senior and junior endoscopists' accuracies of 87.1 and 78.2%, respectively. 27 AI-assisted endoscopy was shown to reduce diagnostic times for both junior and senior endoscopists, improve inter-observer agreement and improve overall accuracy. 27 Therefore, the value of AI assistance in identifying IPCL patterns lies in the ability to diagnose early ESCC, predict level of invasion, objectively identify pathology and potentially avoid the allergic reactions associated with iodine staining in traditional diagnostic methods. 24 Esophageal adenocarcinoma Diagnosis Nine out of fifteen studies, including 478 patients, had sufficient statistical information to be included in the meta-analysis of AI to endoscopically identify EAC. The pooled sensitivity and specificity were 93.1% (86.8–96.4) and 86.9% (81.7–90.7) ( Figs 4 and 5 ). Of the nine studies, five based their systems on WLI and two used both WLI and NBI. 28–32 De Groof et al . (2019) was a prospective study; all the other studies were retrospective in nature. The majority of the studies focused on identifying early EAC on a background of Barrett's esophagus (BE). Six studies analyzed the use of AI technology in identifying EAC compared with endoscopists of a diverse range of skill levels. 
28–33 The AI system by van der Sommen et al . was found to be inferior to endoscopists, while Ebigbo et al . (2020) achieved outcomes comparable to endoscopists. In 2019 and 2020, de Groof et al . found that their AI system was superior to non-expert endoscopists, and further studies by Ebigbo et al . and Qi et al . showed overall superior results compared with endoscopists. 28–33 BE surveillance and the diagnosis of early EAC The endoscopic surveillance of BE is a time- and resource-consuming process due to the need for repeated endoscopies and multi-level biopsies. AI is a promising adjunct to circumvent these challenges. In 2016, van der Sommen et al . 28 used a machine learning algorithm trained on color and texture to identify neoplastic lesions in BE. This model had per-image sensitivity and specificity of 83%. 28 Horie et al . used a CNN model in eight EAC patients with accuracy of 90%, per-case sensitivity of 88% for WLI and NBI and an image processing speed of 0.02 seconds. 34 In 2022, Knabe et al . 35 used AI to assess the tumor (T) staging of adenocarcinoma in BE in 1020 images. They were able to identify mucosal cancer with accuracy of 68% and larger T3/T4 lesions with accuracy of 73%. 35 Ebigbo et al . used a deep learning system to differentiate between T1a and T1b tumors in still white-light endoscopy images. The AI system showed superior accuracy (71 vs 70%) and sensitivity (77 vs 63%), but lower specificity (64 vs 78%), compared with endoscopists. 36 Ebigbo et al . 32 also used a computer-aided diagnosis (CAD) ResNet model utilizing two databases to improve endoscopic identification of BE. The CAD model utilized still images and, in one database, achieved sensitivity/specificity of 97%/88% for WLI and 94%/80% for NBI. In the second database, the sensitivity and specificity for WLI were 92% and 100%. 32 The results from dataset 1 outperformed 11 out of 13 endoscopists. 32 Ebigbo et al . then went on to use their AI system in real-time endoscopy. 
The AI system showed results comparable to experienced endoscopists, with accuracy of 89.9%, sensitivity of 100% and specificity of 83.7%. 31 Mendel et al . used one of the same databases as Ebigbo et al ., consisting of 39 patients, and achieved sensitivity and specificity of 94% and 88% with their AI technology for WLI. 37 Ghatwary et al . 38 used CNNs to detect abnormal areas of the esophagus in 100 high-definition white-light endoscopy (HD-WLE) images. Four methods for feature extraction were used: regional-based CNN (R-CNN), Fast R-CNN, Faster R-CNN and SSD. The SSD was the most successful method, achieving sensitivity of 96% and specificity of 92%. 38 The SSD also predicted cancerous images in 0.1–0.2 seconds, making it the fastest method. 38 In 2019, de Groof et al . 29 used experts to delineate esophageal lesions as input for an algorithm. An overlap of at least four delineations was considered an area of high suspicion, whereas one delineation was considered low suspicion. 29 These data were then used to assess the performance of the algorithm. 29 Per-image analysis showed accuracy of 91.7%, sensitivity of 95% and specificity of 85%. 29 The AI model had a localization score of 95% for low-suspicion areas and 92.5% for high-suspicion areas. 29 The model was effectively used in real-time endoscopic detection of early Barrett's neoplasia and was useful as a guide to identify the best sites to target biopsies. 29 De Groof et al . 30 then went on to develop a hybrid ResNet-U-Net model classifying images as neoplasms or non-dysplastic BE based on five independent datasets. This CAD system outperformed all of the non-expert endoscopists in accuracy of detecting lesions and identified the optimal site for biopsy in 97% and 92% of cases in a dataset of 40 images delineated by three experts and 40 images delineated by six experts. 30 De Groof et al . 
30 also used an AI system in real-time endoscopy, obtaining multiple image-based CAD predictions at 2 cm intervals. A per-image analysis showed accuracy, sensitivity and specificity of 84%, 76% and 86%, respectively. The system was able to correctly identify 9 out of 10 neoplastic lesions when combining 3 consecutive still images. 30 Hashimoto et al . 39 also developed a CNN to identify dysplasia in BE. Two expert endoscopists annotated neoplastic images, and these annotations were used as the ground truth for training the algorithm. 39 Per-image AI identification of dysplasia versus non-dysplasia had accuracy, sensitivity and specificity of 95.4%, 96.4% and 94.2%, respectively, with 24 of 26 patients correctly diagnosed with dysplasia. 39 The value of this system was its successful use in real-time endoscopy. 39 More recently, Abdelrahim et al . 40 utilized CNN technology in a multicenter study using real-time endoscopic videos to detect BE and achieved accuracy of 92%. This system was superior to non-expert endoscopists, especially in identifying flat lesions; however, the negative predictive value (NPV) for low-grade dysplasia, high-grade dysplasia and adenocarcinoma was 95.1%. 40 This is lower than the 98% per-patient NPV recommended by the Preservation and Incorporation of Valuable Endoscopic Innovations standards of the American Society for Gastrointestinal Endoscopy (ASGE). However, this value might increase if the AI model were applied to a population with a lower prevalence of neoplasia, and the model may therefore still be beneficial in screening for BE. 40 Further applications of AI in advanced endoscopic imaging techniques High-resolution micro-endoscopy High-resolution micro-endoscopy (HRME) is a low-cost technique developed to image the epithelium at a cellular level, competing with chromoendoscopy as a screening tool for ESCC. 41 Shin et al . 10 used this technique to identify neoplastic and non-neoplastic squamous esophageal mucosa. 
Prior to HRME, a nuclear stain was applied to the mucosa. A two-class linear discriminant algorithm was then developed to identify pathological lesions. The AI system had sensitivity and specificity of 84% and 95% in the validation set. 10 Quang et al . also used HRME in 2016 to identify ESCC, with results comparable to those of Shin et al .: sensitivity of 95% and specificity of 91%. 10 , 42 The benefit of this technique is the potential for low-cost, accurate diagnosis of early ESCC without the need for multiple biopsies or specialist skills to interpret HRME images. 10 HRME has also been used in patients with BE, providing images with a resolution approaching that of conventional histopathology. Shin et al . 43 reported a per-biopsy analysis with sensitivity of 88% and specificity of 85%. Optical coherence tomography Optical coherence tomography (OCT) is a technique that provides high-resolution images of the esophageal mucosa via a fiber-optic catheter probe inserted through a standard endoscope. 33 Previously, the accuracy of endoscopists using this technique was insufficient to guide clinical decisions; a CAD system was therefore developed to improve the usefulness of this method in diagnosing dysplasia in BE. 33 The results of the study showed that the CAD system had accuracy of 83%, sensitivity of 82% and specificity of 74%. 33 With further refinement, this AI system could become a valuable tool in the future.
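The dependence of negative predictive value on disease prevalence, noted earlier for Barrett's surveillance, follows directly from Bayes' rule. The short sketch below uses hypothetical sensitivity and specificity values (90%/85%, not figures from any included study) to show that NPV rises as prevalence falls.

```python
def npv(sensitivity, specificity, prevalence):
    # NPV = true negatives / all negative test results, via Bayes' rule.
    tn = specificity * (1 - prevalence)   # true-negative fraction of population
    fn = (1 - sensitivity) * prevalence   # false-negative fraction of population
    return tn / (tn + fn)

# Hypothetical test with 90% sensitivity and 85% specificity:
for prev in (0.20, 0.10, 0.02):
    print(f"prevalence {prev:.0%}: NPV {npv(0.90, 0.85, prev):.1%}")
```

This is why an AI model falling short of an NPV target in an enriched study population may still meet it in a lower-prevalence screening population.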
DISCUSSION The early diagnosis of esophageal cancer has a notable effect on the available curative treatment options and thus on overall prognosis. Endoscopy remains the gold standard for diagnosing esophageal cancer, providing both a visual and a histological diagnosis. Endoscopy services are often under considerable strain due to the volume of work and the expected level of expertise. Early esophageal cancer often has subtle mucosal features that require expert endoscopic skills and interpretation. In addition, BE surveillance can itself be time-consuming due to the need for multiple sequential biopsies. Because of these challenges, AI has increasingly been incorporated into the endoscopic diagnosis of esophageal cancer. This systematic review and meta-analysis summarizes the evidence from 45 studies analyzing the use of AI as an adjunct to diagnosing esophageal malignancy. For ESCC, 24 studies were included in this review, of which 14 contributed to the meta-analysis, with pooled sensitivity and specificity of 91.2% and 80%, respectively. For EAC, 15 studies were included in this review, of which 9 contributed to the meta-analysis, with pooled sensitivity of 93.1% and specificity of 86.9%. Sixteen of the 24 ESCC studies and 10 of the 15 EAC studies had independent or external validation datasets to assess their AI algorithms. 11 , 12 , 14 , 16–23 , 27 , 28 , 30 , 31 , 35 , 36 , 38–40 , 42–46 Validation dataset sizes ranged from 52 to 2123 cases for ESCC and from 9 to 199 cases for EAC. 11 , 35 , 38 , 44 Overall, there is good evidence to support the use of AI in the endoscopic diagnosis of esophageal cancers. The AI algorithms showed promising results in both WLI and NBI, as well as in other imaging modalities such as HRME and OCT. Comparison with endoscopists Twenty-nine of the included studies compared AI efficacy with endoscopists. 
The review overwhelmingly showed comparable or superior outcomes for AI algorithms compared with endoscopists, with the greatest benefit of AI seen as an adjunct to non-expert endoscopists. Shiroma et al . 17 showed that the sensitivities of 13 out of 18 endoscopists improved by a median of 10% with real-time AI assistance during endoscopy. Speed of interpretation A noteworthy finding is that AI outperformed endoscopists in the speed of image interpretation: Tang et al . 21 showed that their AI system required only 15 milliseconds per image to diagnose an esophageal lesion while also outperforming endoscopists in accuracy, sensitivity and specificity. For depth of invasion, Nakagawa et al . and Tokai et al . demonstrated that their AI models were not only superior to endoscopists but also markedly faster at image interpretation (29 seconds and 10 seconds, respectively, compared with 115 minutes). 3 , 12 Fast interpretation will allow more direct clinical integration, providing real-time feedback to the endoscopist and highlighting suspicious areas for biopsy. Limitations Errors in the results obtained by AI technology are often related to shadows or anatomical structures such as the esophagogastric junction, the left main bronchus or the vertebrae. Benign lesions such as resection scars and atrophy may also cause false-positive results. False-negative interpretation may result from technical errors where the lesion is not fully visualized, or from background esophageal inflammation. 34 With improved image capture and larger training datasets, more robust AI models can be developed to potentially overcome these errors. The majority of the included studies used still images, or a combination of still-image and video analysis. Still images do not emulate real-life endoscopy, so videos assess the true capabilities of an AI system more accurately. 
In addition, most of the images used in these studies were obtained by experts and the test samples were supplemented with subtle lesions, potentially introducing significant selection bias. In routine practice, image quality is often poorer, so the reported outcomes may not translate accurately to community endoscopy. Another limitation of this systematic review and meta-analysis is that only English-language articles were included. Given that much of the literature is from Asian centers, casting a wider net in terms of languages may have captured a broader range of studies. Furthermore, given the geographical variation in types of esophageal cancer, with the Far East having more cases of ESCC, international implementation of the findings of this review may not be entirely applicable. In addition, many of the articles, especially for adenocarcinoma, include data contributed by the same group in different years, and their findings may therefore be less generalizable to other centers. It must be emphasized that almost every study included in the meta-analysis had a unique AI algorithm, a variable study design, independent datasets for training and validation and different endoscope models used to obtain images (Olympus, Fujifilm and Pentax); this heterogeneity limits the scientific value of pooling these studies in a meta-analysis. Overall, there was evidence of moderate heterogeneity, with I² values of 67.3 and 72.5 for Figures 3 and 5 , respectively.
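The I² heterogeneity statistic cited above can be recovered from Cochran's Q with a one-line formula, I² = max(0, (Q − df)/Q) × 100, where df is the number of studies minus one (Higgins & Thompson). A short illustrative sketch follows; the Q value of 39.8 is hypothetical, chosen only because it reproduces an I² of 67.3% for a 14-study analysis.

```python
def i_squared(q, num_studies):
    # Percentage of total variation across studies attributable to
    # heterogeneity rather than chance; truncated at 0 when Q <= df.
    if q <= 0:
        return 0.0
    df = num_studies - 1
    return max(0.0, (q - df) / q) * 100.0

# Hypothetical Q for a 14-study analysis (df = 13):
print(f"I^2 = {i_squared(39.8, 14):.1f}%")
# When Q does not exceed its degrees of freedom, I^2 is truncated to 0:
print(f"I^2 = {i_squared(10.0, 14):.1f}%")
```

Conventionally, I² values around 50–75% are described as moderate to substantial heterogeneity, consistent with the review's characterization of its pooled estimates.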
CONCLUSION AI shows great promise in improving diagnostic capabilities for esophageal malignancy, in particular the early diagnosis of cancer. Accurate detection of early-stage malignancy offers patients the potential for curative endoscopic treatment, with survival outcomes similar to surgery and a substantially improved health-related quality of life through an organ-preservation strategy. This article highlights many developments in AI technology with the potential to revolutionize the diagnosis and management of esophageal malignancy; however, higher-quality evidence is required before such technology can be implemented into standard patient care.
Joint first authors Abstract Early detection of esophageal cancer is limited by the difficulty of accurately diagnosing subtle macroscopic lesions endoscopically. Endoscopic interpretation is subject to expertise, diagnostic skill and thus human error. Artificial intelligence (AI) in endoscopy is increasingly bridging this gap. This systematic review and meta-analysis consolidates the evidence on the use of AI in the endoscopic diagnosis of esophageal cancer. The systematic review was carried out using Pubmed, MEDLINE and Ovid EMBASE databases, and articles on the role of AI in the endoscopic diagnosis of esophageal cancer were included. A meta-analysis was also performed. Fourteen studies (1590 patients) assessed the use of AI in the endoscopic diagnosis of esophageal squamous cell carcinoma: the pooled sensitivity and specificity were 91.2% (84.3–95.2%) and 80% (64.3–89.9%). Nine studies (478 patients) assessed AI capabilities in diagnosing esophageal adenocarcinoma, with pooled sensitivity and specificity of 93.1% (86.8–96.4) and 86.9% (81.7–90.7). The remaining studies formed the qualitative summary. AI technology, as an adjunct to endoscopy, can assist in the accurate, early detection of esophageal malignancy. It has shown results superior to endoscopists alone in identifying early cancer and assessing depth of tumor invasion, with the added benefit of not requiring a specialized skill set. Despite promising results, application in real-time endoscopy is limited, and further multicenter trials are required to accurately assess its use in routine practice.
Financial support: None.
CC BY
no
2024-01-16 23:47:16
Dis Esophagus. 2023 Jul 21; 36(12):doad048
oa_package/85/3a/PMC10789250.tar.gz
PMC10789256
38226066
Extract Over recent decades, the prevalence and incidence of nontuberculous mycobacterial pulmonary disease (NTM-PD) have increased worldwide, with Mycobacterium avium complex being the most common causative agents [1–5]. The 2020 American Thoracic Society/European Respiratory Society/European Society of Clinical Microbiology and Infectious Diseases/Infectious Diseases Society of America (ATS/ERS/ESCMID/IDSA) guidelines recommended a treatment regimen with at least three drugs, including a macrolide, in patients with a nodular–bronchiectatic, macrolide-susceptible M. avium complex pulmonary disease (MAC-PD) [6]. Tweetable abstract How to identify MAC-PD patients with limited treatment options: an expert consensus https://bit.ly/3QwLQ8T
To the Editor: Over recent decades, the prevalence and incidence of nontuberculous mycobacterial pulmonary disease (NTM-PD) have increased worldwide, with Mycobacterium avium complex being the most common causative agents [ 1 – 5 ]. The 2020 American Thoracic Society/European Respiratory Society/European Society of Clinical Microbiology and Infectious Diseases/Infectious Diseases Society of America (ATS/ERS/ESCMID/IDSA) guidelines recommended a treatment regimen with at least three drugs, including a macrolide, in patients with a nodular–bronchiectatic, macrolide-susceptible M. avium complex pulmonary disease (MAC-PD) [ 6 ]. For patients with cavitary or advanced/severe bronchiectatic disease, guidelines suggest the addition of parenteral amikacin into the initial regimen [ 6 ]. Once treatment is initiated, up to 40% of NTM-PD patients might experience an unsuccessful outcome [ 7 , 8 ]. For the first time, the 2020 ATS/ERS/ESCMID/IDSA guidelines identified patients with a “refractory” MAC-PD as those with a positive sputum culture after 6 months of guideline-based therapy (GBT) [ 6 ]. In refractory MAC-PD, a recommendation was made in the guidelines to add amikacin liposome inhalation suspension (ALIS) to the treatment regimen [ 6 ]. This recommendation was based on the CONVERT study, demonstrating an improved culture conversion rate in refractory MAC-PD [ 6 , 9 ]. Based on those results, the US Food and Drug Administration (FDA) approved ARIKAYCE, a proprietary ALIS formulation by Insmed, Inc. (Bridgewater, NJ, USA), for “treatment-refractory MAC lung disease” [ 10 ]. The European Medicines Agency (EMA) went further, licensing ARIKAYCE to treat NTM lung infections “caused by MAC in adults with limited treatment options who do not have cystic fibrosis” [ 11 ]. However, the term “limited treatment options” (LTOs) might sound ambiguous, and no clinical study, expert consensus or guidelines exist to clarify the meaning of limited treatment options. 
In their preliminary discussion, the experts noted this ambiguity and decided to try to shed light on the term. With regard to the reimbursement of ARIKAYCE, the product is generally covered by health insurers in the USA and is fully reimbursed by the National Health Insurance of Japan. In Europe, the product is fully reimbursed in the UK, France, Ireland, Belgium, the Netherlands and Finland. In Germany, Denmark and Greece, it is generally funded for individual patients. To reach a consensus on common clinical scenarios of LTOs for MAC-PD patients, and to identify which MAC-PD patients with LTOs could benefit from ALIS, a panel of experts in respiratory and infectious diseases, including tuberculosis, from continental Europe, the UK and Israel convened in Milan, Italy, on 22–23 February 2023. The focus was on the ambiguous EMA definition, leading to the decision to include only European and Israeli experts; a perhaps limiting decision. However, the consensus outcomes could also be of some help to NTM specialists outside Europe. A thorough search of the published high-level literature on the management of NTM-PD and MAC-PD patients, and of its authors, was the basis for identifying and selecting the roster of 18 candidate European and Israeli experts. All 18 invited experts, including A. Fløe, C. Prados, A. Sánchez-Montalvá and J. van Ingen, contributed to online discussions before the face-to-face Milan meeting; 14 of the identified experts also participated in the face-to-face event: S. Aliberti, F. Blasi, P-R. Burgel, A. Calcagno, D. Grogono, M.R. Loebinger, A. Papavasileiou, E. Poliverino, G. Rohde, H.J.F. Salzer, M. Shteinberg, E. Van Braeckel, N. Veziris and D. Wagner. Similarly, a thorough literature search helped to identify all the available evidence covering MAC-PD patients with LTOs. The selected manuscripts were shared among the reviewers and carefully reviewed to contextualise the topic. 
An online collection of real-life MAC-PD clinical situations personally encountered by the panel experts followed, and from these a draft set of cases of possible MAC-PD patients with LTOs was developed. Before the Consensus Conference, the panel experts voted blindly online to decide whether they considered the cases to be LTO situations, with the following voting options: “Yes, an LTO example”, “No, not an LTO example” and “Not enough information to decide”. Draft examples without full expert endorsement were either modified or discarded. During the face-to-face meeting in Milan, the experts discussed and refined the illustrative cases and their descriptors (“statements”) to reach a consensus about their relevance as realistic clinical possibilities. As a final step, the panel members voted on each statement using a modified Delphi method. The dual goals of the voting process, with scores ranging from 0 to 10, were to confirm whether each statement might qualify as an actual MAC-PD LTO and whether ALIS might have a role in the real-life management of such situations. All experts considered refractory MAC-PD (according to the 2020 ATS/ERS/ESCMID/IDSA guidelines definition) to be an LTO situation (100% agreement), including both smear-negative and smear-positive disease, and would include ALIS in their treatment strategy. Some experts would also consider using intravenous amikacin in some refractory conditions with evidence of continuing severe disease. In smear-positive, nodular–bronchiectatic, refractory MAC-PD, two experts would start i.v. amikacin before shifting to ALIS. In cavitary refractory MAC-PD, one expert would start i.v. amikacin before turning to ALIS. The FDA license and the clinical guidelines based on the evidence provided by the CONVERT study cover refractory MAC-PD. However, given the wording of the EMA licensing, the expert panel identified seven potential LTO situations for MAC-PD patients ( table 1 ). 
The panel also debated whether the now-available ALIS formulation could speculatively be used with benefit in each of these situations, while unanimously acknowledging that current evidence is for refractory MAC-PD only. At any rate, all the described speculative scenarios cannot and should not be equated with the EMA's definition of LTOs. Confirmatory validation through targeted clinical trials would still be a must. All 14 voting members of the panel agreed that newly diagnosed, noncavitary MAC-PD caused by a macrolide-resistant strain represents an LTO, regardless of whether the strain is amikacin-susceptible or amikacin-intermediate [ 13 ]. All experts agreed that some patients in this group might derive potential benefits from ALIS, although with the understanding that there is no direct evidence for the benefit of ALIS in this specific setting. The experts also recognised newly diagnosed MAC-PD caused by a macrolide-resistant and i.v. amikacin-resistant strain as an LTO situation. The 2018 Clinical and Laboratory Standards Institute M24-A3 and M62 guidelines and the ATS/ERS NTM guideline laboratory section support the use of ALIS with i.v. amikacin-resistant strains if the amikacin minimum inhibitory concentration (MIC) is 64 μg·mL−1, a value associated with i.v. amikacin resistance but ALIS susceptibility, owing to the high local concentrations and potentially improved intracellular penetration of the liposomal formulation [ 6 , 14 , 15 ]. All experts agreed that the proprietary ALIS formulation should not be an option if amikacin MICs are ≥128 μg·mL−1. Seven experts (50%) acknowledged a role for ALIS with macrolide-resistant, amikacin-resistant (MIC >64 and <128 μg·mL−1) strains. Regarding MAC-PD patients intolerant of a GBT regimen for any reason, the outcomes of the discussions were variable. If feasible, switching within the macrolide class should consistently preserve a macrolide-based regimen. 
Premature discontinuation of the macrolide because of an inability to take the drug was considered an LTO situation by all the experts, who also unanimously (100%) identified a potential role for ALIS in this case. Some experts also argued for initiating i.v. amikacin according to the patient's preferences or needs. The inability to take ethambutol was also recognised as an LTO situation, with most experts not identifying any role for the ALIS formulation (11 out of 14 voting panel members, 79%). An oral drug, such as clofazimine, seemed advisable in cases of intolerance to the “companion” drug ethambutol [ 6 ]. Intolerance of rifamycins was regarded as an LTO situation by only three experts out of 14 (21%), with emerging data and ongoing clinical trials ( www.clinicaltrials.gov identifier numbers NCT03672630 and NCT04677569) investigating their concrete benefit in MAC-PD patients [ 16 ]. In these cases, the advice was to try switching within the rifamycin class or to consider clofazimine as a third drug, with no role for ALIS. Finally, when suggested by guidelines ( e.g. cavitary disease), the inability to use i.v. amikacin was also considered an LTO situation by all the experts, who unanimously (100%) agreed that ALIS administration could be an option in this case. Relapse in a GBT-treated patient was the final situation discussed, unanimously (100%) recognised as an LTO condition in which all experts saw a role for the proprietary ALIS formulation. Provided there was adequate compliance with GBT, drug susceptibility testing and the frequency of microbiological work-up are essential when deciding on treatment. It is crucial to try to differentiate relapse from re-infection by genome sequencing, with relapse considered similar to a refractory situation. All experts agreed that re-infections are not an LTO situation. This expert panel discussion and consensus might help physicians interpret EMA documents referring to LTOs in MAC-PD patients. 
However, it is essential to understand that the value of this document lies in discussing which conditions may characterise MAC-PD patients with LTOs, and in which of these the panel considers there could be some potential benefit from ALIS, with the knowledge that, at present, evidence and guidelines on the use of ALIS relate to refractory MAC-PD only.
CC BY
no
2024-01-16 23:47:16
ERJ Open Res. 2024 Jan 15; 10(1):00610-2023
oa_package/fe/e1/PMC10789256.tar.gz
PMC10789262
38226060
Introduction Wheezing caused by viral infections is common among preschool children, with up to 50% experiencing at least one wheezing episode in the first 6 years of their life [ 1 , 2 ]. Studies have shown that children suffering from wheeze, especially persistent phenotypes, have a higher risk of developing asthma later in childhood [ 3 ]. The main treatment goals for preschool wheezing are respiratory symptom control, reduction of exacerbations and increasing quality of life for the child [ 4 ]. To achieve these aims, training caregivers to correctly detect airway obstruction and administer pharmacological and supportive treatments is essential. However, when it comes to young children (under the age of 6 years), the correct assessment of wheezing breath sounds is particularly difficult, since distinguishing wheezing sounds from physiological breath sounds may be challenging [ 5 , 6 ]. A misjudgement of respiratory symptoms may consequently result in under- or overtreatment with reliever medication [ 7 , 8 ]. On the other hand, it has been shown that guided self-management options for respiratory diseases such as wheezing can reduce hospitalisations and visits to the emergency department and improve lung function [ 9 , 10 ]. Therefore, supporting parents in the at-home self-management of their child's respiratory condition is key to obtaining and maintaining respiratory symptom control and to improving patient outcomes. Over the past years, adoption of digital health technologies has continuously increased, ranging from patient-specific electronic health records and sensor technology to (adherence) monitors and telemedicine options for easier access to healthcare specialists [ 11 , 12 ]. Digital solutions are often easily available and easy to integrate into everyday life, owing to increasing access to portable internet and smartphones around the globe [ 13 ]. 
Mobile health also allows efficient and cost-effective data collection via electronic diaries and is usually well received by both patients and clinicians [ 11 ]. Therefore, there is growing interest in the potential of eHealth solutions to offer tailored self-management options specifically for young children suffering from wheezing disorders as well as their parents. Several devices, such as portable smart inhalers [ 14 , 15 ], asthma symptom diaries [ 16 , 17 ], wearable trackers [ 18 ] and health-education-based computer games [ 19 ], are available for the at-home management of wheezing disorders. Despite progress in the development and testing of medication sensors and adherence reminders, only a few devices have been built and evaluated to detect pathological airway sounds such as wheezing [ 20 , 21 ]. While most digital support tools are currently being evaluated in feasibility studies and small clinical trials, large randomised controlled trials testing their clinical impact on outcomes such as symptom control and quality of life are scarce. In terms of sound recognition, it is important not only to test the accuracy of devices, but also to evaluate their usability and clinical efficacy. These aspects are particularly important for determining potential benefits and limitations, ensuring the safe and effective use of digital support for conditions such as preschool wheezing and paediatric asthma. The aim of this study was to test the hypothesis that a digital support tool for wheeze recognition improves symptom control in a study population of preschool children. In addition, the device's impact on disease-specific and parental quality of life, as well as on parents' subjective self-efficacy in disease management, was evaluated in this randomised controlled trial across culturally diverse paediatric populations. Finally, the usability of and satisfaction with the device were assessed.
Materials and methods Study population The multicentre randomised controlled open-label trial consisted of two groups, following usual care with (intervention) or without (control) the use of a digital wheeze detector (WheezeScanTM; OMRON Healthcare Co. Ltd, Kyoto, Japan). Patients were recruited between October 2021 and September 2022 in six specialised paediatric pulmonology outpatient departments located in Berlin (ambulatory clinics of E. Dellbrügger, S. Roßberg, T. Weichert and C. Grenzbach), London (Blizard Institute at Queen Mary University of London) and Istanbul (ambulatory clinic of Karadag and Marmara University Istanbul). The inclusion criteria were: 1) age 4 months to 7 years; 2) at least one episode of doctor-diagnosed wheezing and/or recurrent cough requiring treatment according to Global Initiative for Asthma (GINA) guidelines steps 1 or 2 in the last 12 months [ 22 ]; and 3) availability of a smartphone. The exclusion criteria were: 1) an anatomical malformation causing chronic nasal and/or bronchial obstruction; 2) presence of another severe chronic disease; and 3) wheezing disorders requiring treatment step 3 or 4 according to GINA guidelines. The study was registered in the German Clinical Trials Registry (DRKS00026740), and ethics approval was obtained at all study sites. Study design After randomisation, parents were interviewed regarding their child's personal and family anamnesis, including information on allergic diseases and demographic variables. Validated questionnaires regarding disease severity and disease-specific quality of life were administered. Families of the intervention group were trained to use the digital wheeze detector WheezeScanTM and received their device to take home. A comparison between the device's measurement and the study physician's auscultation was recorded. The current treatment scheme was reported for all participants by the study physician. 
After the recruitment visit (T0), all participating families were asked to fill in an electronic daily questionnaire via the mobile application WheezeMonitor® (TPS Production S.r.l., Rome, Italy) over a total of 120 observation days. Families of the intervention group were additionally encouraged to use the WheezeScanTM device whenever they felt that their child could be experiencing respiratory distress. An initial follow-up visit (T1) was performed after 90 days of the monitoring period. Parents responded again to questionnaires on disease severity, self-efficacy and quality of life, and the attending physician assessed whether an adaptation of treatment was necessary according to GINA guidelines. Participants of the intervention group were also asked to fill in a usability and satisfaction questionnaire regarding the WheezeScanTM device. After T1, all participants continued the observation period for another 30 days until the final study visit (T2), where any changes in asthma control via the TRACK (Test for Respiratory and Asthma Control in Kids) questionnaire and changes in treatment according to GINA guidelines were recorded. The aim of the final visit was to evaluate potential clinical improvements after a treatment adaptation. For an overview of the enrolment and randomisation, please see figure 1 . Questionnaires The primary outcome of symptom control was assessed via the TRACK [ 23 ] questionnaire at T1. Secondary outcomes were assessed at T1 via the Parental Asthma Management Self-Efficacy Scale (PAMSES) [ 24 ], the Pediatric Asthma Caregiver's Quality of Life Questionnaire (PACQLQ) [ 25 ] and the TAPQOL on parents' perception of health-related quality of life in preschool children [ 26 ], and user satisfaction with the device in the intervention group was assessed via a usability and satisfaction questionnaire. An additional secondary outcome was assessed via the TRACK questionnaire at T2. 
The secondary outcome regarding the frequency of reliever medication use was recorded in the e-diary of the monitoring app during the entire study period. Digital wheeze detector Families allocated to the intervention group received a digital wheeze detector (WheezeScanTM; OMRON Healthcare Co. Ltd, Kyoto, Japan) to be used whenever needed in a home care setting. The performance, safety and usability of the device had been evaluated in previous studies [ 27 , 28 ]. The WheezeScanTM assesses the respiratory sounds of the child via a sound detector once placed on the chest just below the right collarbone. After approximately 30 s of measurement, results are indicated via an integrated display ( supplementary figure S1 ). Statistics For information on the sample size calculation, please see the supplementary material . For the descriptive analysis, summary measures such as mean±sd, median, first and third quartiles (q1–q3), number (n) and percentage (%), depending on the scaling of the variables, are reported for the two groups and the total study population. The intervention effect on the primary and secondary outcomes was assessed in the full analysis set using the intention-to-treat principle. All randomised participants were included in the full analysis set. The primary analysis consisted of a comparison of the TRACK score at T1 between the intervention and control groups using ANCOVA, with the TRACK score at T1 as the dependent variable and treatment group, centre and TRACK score at T0 as covariates. The mean difference between the intervention and control groups with 95% CI, as well as the standardised effect size, were estimated. A two-sided significance level of α=0.05 was used for the primary analysis. For information on the secondary and subgroup analyses, please see the supplementary material . All statistical analyses were performed in R version 4.2.2 (R Foundation for Statistical Computing). More information on materials and methods can be found in the supplementary material .
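The baseline-adjusted comparison described above can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the study analysis (which was run in R and additionally adjusted for centre); the ordinary-least-squares solver, the simulated TRACK scores and the assumed group effect of +3 points are all hypothetical choices for the example.

```python
# ANCOVA sketch: TRACK at T1 regressed on treatment group and baseline TRACK.
# All data below are synthetic illustrations, not study data.
import random

random.seed(1)

def ols(X, y):
    """Solve the normal equations (X'X) beta = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(k)]
         for a in range(k)]
    v = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):            # back substitution
        beta[r] = (v[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic cohort: baseline TRACK ~ N(65, 20); follow-up improves, with a
# hypothetical built-in group effect of +3 points.
n = 160
group = [i % 2 for i in range(n)]             # 0 = control, 1 = intervention
t0 = [random.gauss(65, 20) for _ in range(n)]
t1 = [0.6 * t0[i] + 3.0 * group[i] + 40 + random.gauss(0, 10) for i in range(n)]

X = [[1.0, group[i], t0[i]] for i in range(n)]
intercept, group_effect, baseline_slope = ols(X, t1)
print(round(group_effect, 1))                 # adjusted between-group difference
```

Adjusting for the baseline score in this way is what lets the analysis compare T1 scores despite the groups starting from slightly different baselines.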
Results In total, 167 children were enrolled in the study, with 85 (50.9%) recruited in Berlin, 59 (35.3%) in Istanbul and 23 (13.8%) in London. 87 participants were allocated to the intervention group and 80 to the control group. Of the 167 families, 150 (89.8%) completed the first follow-up assessment and 153 (91.6%) the final study visit. While seven participants were lost to follow-up, another seven actively withdrew their consent owing to technical difficulties with the device and/or monitoring application. Participants had a mean age of 3.2±1.6 years, 116 of 167 (69.5%) were male and 133 of 167 (79.6%) had one or more siblings. Relevant differences between the two treatment groups were observed regarding the presence of allergic disease: while 31% (27 of 87) of the children in the intervention group suffered from allergy, this was only the case for 17.5% (14 of 80) of the controls ( table 1 ). Impact of the digital wheeze detector on asthma control (TRACK): primary outcome analysis The intervention group started with a slightly lower baseline TRACK score (64.5±20.9 points) than the control group (66.3±22.8 points). At T1, the intervention group had an average of 79.1±17.7 points, while the control group had a mean score of 76.2±19.8 points ( table 2 ). Although the absolute increase in the intervention group was higher than in the control group ( figure 2 ), the mean difference at T1 between the two groups was not statistically significant (3.6, 95% CI −2.3–9.4, p=0.228, primary end-point). At T2 (120 days after baseline), both groups had a similar mean TRACK score (intervention group: 78.4±18.7 points; control group: 78.3±19.4 points) ( supplementary figure S3A ). Regarding a potential impact of treatment adaptation at T1, no changes in treatment were made. 
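As a back-of-envelope check, the unadjusted T1 group difference can be recomputed from the summary statistics reported above. The group sizes are assumed equal to the numbers enrolled (87 and 80), which is an approximation since some families did not complete follow-up; and because the published estimate is an ANCOVA adjusted for baseline score and centre, this crude normal-approximation interval only roughly tracks it.

```python
# Unadjusted difference in T1 TRACK means with a normal-approximation 95% CI,
# from the reported mean±SD (ns assumed = numbers enrolled).
import math

m_int, sd_int, n_int = 79.1, 17.7, 87      # intervention at T1
m_ctl, sd_ctl, n_ctl = 76.2, 19.8, 80      # control at T1

diff = m_int - m_ctl
se = math.sqrt(sd_int**2 / n_int + sd_ctl**2 / n_ctl)
ci = (diff - 1.96 * se, diff + 1.96 * se)

print(round(diff, 1), (round(ci[0], 1), round(ci[1], 1)))
# the interval spans zero, consistent with the non-significant primary result
```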
When analysing categorical asthma control (“not well controlled” and “well controlled”) between T0 and T1 in each of the treatment groups, the proportion of patients with good control of wheezing increased in both groups, with a slightly larger increase in the intervention group (the proportion of participants with controlled wheezing from T0 to T1 increased by 15% in the control group and by 27% in the intervention group) ( figure 3 ). For subgroup analyses please refer to the supplementary material . Disease-specific and parental quality of life and parental self-efficacy in asthma management The disease-specific quality of life of the participating children improved in both groups between the baseline and follow-up visit. There was no intervention effect at T1, as underlined by the mean difference between groups at T1 of −0.9 (95% CI −4.1–2.3, p=0.6) ( figure 4b and f). Similar results were observed for parental quality of life (PACQLQ mean difference 0.2, 95% CI −0.1–0.6; p=0.1) ( figure 4c and g) and parental self-efficacy in managing their child's wheezing condition (PAMSES score mean difference −0.8, 95% CI −3.6–2.0; p=0.6) ( figure 4d and h, table 2 ). Interestingly, all secondary outcomes improved between study visits with little to no difference between the study groups. Use of reliever medication and unscheduled healthcare utilisation For short-acting β-agonist treatment, no pronounced differences were observed between study arms ( supplementary figure S8 ). Further, the numbers of unscheduled visits to the paediatrician's office and to the emergency department between T0 and T1 (120 days) were slightly higher in the control group than in the intervention group, with means of 2.16±3.33 versus 1.57±2.45 and 0.81±1.34 versus 0.60±1.73 visits, respectively ( table 3 ). 
Satisfaction and usability Regarding device usability, 80% (60 of 75) of the intervention group indicated having used the digital wheeze detector without complications and only 14 of 75 (19%) considered the handling difficult. If parents experienced problems, these were most frequently due to challenges in keeping the child calm enough for the device to function well. Ten families (13%) of the intervention group reported having difficulties in technically operating the device ( table 4 ). When asked whether they believed their child to have benefitted from the use of the WheezeScanTM, the majority answered positively (45 of 75, 60%). For more information on usability results by study centre and age group, please see the supplementary material .
Discussion In our multicentre randomised open-label controlled trial on the clinical efficacy and usability of a digital wheeze detector for preschool children we observed: 1) no significant difference regarding wheeze control between study groups; 2) no impact on disease-specific or parental quality of life; 3) almost no differences regarding parental self-efficacy in managing the child's disease; 4) good usability reports by parents, particularly for the older children in our study; 5) positively perceived benefit for the child from device usage, particularly for the very young and older children; and 6) regional differences in device usage and evaluation. Performance of digital interventions for wheezing disorders Although often limited by small patient numbers and exploratory designs, several studies have reported positive results on the sensitivity and specificity of electronic devices detecting pathological airway sounds such as wheezing or cough [ 20 , 21 , 27 , 29 , 30 ]. For instance, the device used in this study demonstrated high sensitivity (100%) and specificity (95.7%) for wheeze detection when compared to the auscultation of specialised physicians [ 27 ]. A study comparing digital electronic stethoscopes with paediatricians’ auscultation has even suggested that the digital stethoscopes tested were more sensitive in detecting wheeze in children than the clinician when matching both to automated spectrogram analysis [ 20 ]. The high sensitivity and improving accuracy of such devices underline their potential as valuable tools for (remote) diagnosis and monitoring in research settings [ 9 ]. However, there is a lack of published randomised controlled trials assessing the clinical efficacy of digital detectors of pathological airway sounds in the hands of patients and/or caretakers. Reviews examining the impact of digital interventions in children with asthma on clinical parameters have also shown inconsistent findings. 
A systematic review of digital asthma interventions found that half of the studies favoured digital interventions [ 9 ], while the other half reported no significant difference in asthma control. Our results suggest that the wheeze detector used has no statistically significant clinical impact among this particular study population. However, trends in the results suggest that the impact of a digital device may be related to a variety of factors such as age, cultural background, disease severity, access to specialised healthcare providers or usage patterns. Therefore, the chosen setting and study population might play an important role in studies evaluating the clinical efficacy of developed technologies. For example, in a pilot study on the use of WheezeScanTM, parents of 20 preschool children were instructed to use the device once every morning and evening in addition to when they felt it was needed. This led to more frequent use than in the randomised controlled trial. Interestingly, the pilot study showed a positive trend in parental self-efficacy (PAMSES), although the absence of a control group and the small sample size in the pilot study increase the risk of confounding. This observation underlines that in addition to standardised study protocols, it is essential to consider the characteristics of the target population when interpreting results. One size/device does not fit all Potential factors influencing device efficacy have also been identified in the satisfaction and usability evaluation of the present study. The device usability was rated as uncomplicated by most parents, and most of them perceived their child to have benefitted from its use. This indicates a high degree of willingness to use an at-home digital support system among parents of children suffering from recurrent wheeze and is in line with current research in the field. 
However, several families reported difficulties, such as result variability, and found the device to be too sensitive to background noise. This perception may have reduced the willingness to use the wheeze detector frequently, which in turn may affect the power of outcome analyses. As real-life circumstances may have a significant impact on the use and effectiveness of digital devices, continued efforts to understand usage scenarios are key when evaluating the clinical effectiveness of digital support tools. In addition, results on the perceived benefit varied not only between age groups, but also according to geographical and cultural background. A more critical evaluation and less frequent use of the device by patients from Berlin compared to Istanbul or London may imply that, in addition to potential differences in the healthcare setting, cultural variations could affect aspects such as perceived benefit and usage behaviour. Therefore, a deep understanding of the perception and usage of a specific digital tool among targeted patient groups is crucial in assessing digital health solutions. Strengths and limitations The study's strengths include its randomised controlled trial design, enabling a more selective evaluation of the effects of the digital device on the various patient outcomes. A multicentre approach in different geographical and cultural settings, as well as the relatively large sample size compared to previous studies, increase the generalisability of results. However, the study also has important limitations, as it focused on mildly to moderately affected children and excluded those with a more severe phenotype. Furthermore, the participants of the study were recruited in slightly different settings according to the study site. 
Whereas in Berlin and Istanbul children were recruited mainly from paediatric outpatient clinics, the study centre in London was a specialised clinic where participants were likely to have their first contact with a specialised paediatric pulmonologist. This may have affected the improvement of wheeze control and quality of life for all participants, independently of the study group. Finally, the recruitment period stretched over several seasons. Although the summer season was avoided by a recruitment break in July and August, the prevalence of wheezing exacerbations was relatively low and may have varied according to the time point of enrolment. Conclusions and perspectives for future research The clinical evaluation of digital support tools to be used by patients and/or caretakers is crucial, as their clinical use may differ fundamentally from highly standardised research settings. The present study shows that despite its very good performance in previous validation studies, no significant clinical impact could be observed for the wheeze detector when tested in a multicentre randomised controlled trial. However, the study underlines the importance of further studies and a deeper understanding of parameters characterising the target group for digital support tools. Further research is needed to gain better insight into patient perception, usage behaviour and barriers to successful implementation.
Introduction Wheezing is common in preschool children and its clinical assessment often challenging for caretakers. This study aims to evaluate the impact of a novel digital wheeze detector (WheezeScanTM) on disease control in a home care setting. Methods A multicentre randomised open-label controlled trial was conducted in Berlin, Istanbul and London. Participants aged 4–84 months with a doctor's diagnosis of recurrent wheezing in the past 12 months were included. While the control group followed usual care, the intervention group received the WheezeScanTM for at-home use for 120 days. Parents completed questionnaires regarding their child's respiratory symptoms, disease-related and parental quality of life, and caretaker self-efficacy at baseline (T0), 90 days (T1) and 4 months (T2). Results A total of 167 children, with a mean± sd age of 3.2±1.6 years, were enrolled in the study (intervention group n=87; control group n=80). There was no statistically significant difference in wheeze control assessed by TRACK (mean difference 3.8, 95% CI −2.3–9.9; p=0.2) at T1 between treatment groups (primary outcome). Children's and parental quality of life and parental self-efficacy were comparable between both groups at T1. The evaluation of device usability and perception showed that parents found it useful. Conclusion In the current study population, the wheeze detector did not show significant impact on the home management of preschool wheezing. Hence, further research is needed to better understand how perception and usage behaviour may influence the clinical impact of a digital support tool. Tweetable abstract In this study population, a digital wheeze detector did not show significant impact on the management of preschool wheezing disorders at home. Further research is needed to enhance device usability and understand parental perception of this device. https://bit.ly/45Wgazm
Supplementary material
Acknowledgements We thank all study participants and staff involved in this study.
CC BY
no
2024-01-16 23:47:16
ERJ Open Res. 2024 Jan 15; 10(1):00518-2023
oa_package/24/3f/PMC10789262.tar.gz
PMC10789265
38103961
Introduction Seasonal influenza is a major global public health concern [1] . The World Health Organization (WHO) estimates that influenza causes approximately 3–5 million severe cases and 290,000–650,000 deaths annually [2] , [3] . Vaccination is the most cost-effective preventive measure. However, China has low influenza vaccination rates among priority populations, including children at 11.9 % and older adults at 21.7 % [4] . Lack of vaccine confidence and of public funding may be important contributing factors [5] . Vaccine confidence is encompassed in measures related to perceived vaccine safety, effectiveness, and importance [6] , [7] , [8] , [9] , [10] . Previous evidence suggested that individuals with higher perceived confidence are more likely to receive a vaccine [11] , [12] , [13] , [14] , [15] . Vaccine confidence is driven by a mix of psychological and sociocultural factors [16] , including community engagement [17] , trust in health providers [18] , and individual socioeconomic status [16] . Dissemination of false information on the Internet further undermines public confidence [19] . However, few studies have focused on improving vaccine confidence. One study indicated that education increased COVID-19 vaccine uptake and confidence in Canada [20] . Currently, some ongoing studies are exploring innovative interventions to improve vaccine confidence [21] , [22] . However, none of these studies has examined the association between interventions, vaccine confidence and vaccine uptake. Our research team implemented a pay-it-forward intervention to improve influenza vaccination, in which one individual received a free influenza vaccine as a community gift and was then offered an opportunity to donate to support another person to receive the same service. Pay-it-forward, involving public engagement and community kindness, was shown to have increased influenza vaccine uptake compared to a self-paid vaccination approach [23] . 
However, our previous analysis primarily focused on the effects of the intervention on vaccine uptake and vaccine confidence as separate outcomes and did not attempt to examine the potential associations between these variables. Additionally, while pay-it-forward involved financial support, whether the associations between these variables vary by level of annual income remains unclear. This secondary analysis aimed to examine 1) the potential mediating roles of vaccine confidence in the association between the pay-it-forward intervention and influenza vaccine uptake via a mediation analysis; and 2) potential varying associations between the pay-it-forward intervention and vaccine uptake by individual annual income level via a sub-group analysis. We hypothesized that the pay-it-forward intervention might be associated with vaccine uptake behaviors through potential mediators of vaccine confidence, and that associations between the intervention and vaccine uptake/vaccine confidence may vary by level of individual annual income (≤$1860 or >$1860).
Methods Study design We conducted a secondary analysis using data from a parent quasi-experimental trial that assessed the effectiveness of a pay-it-forward intervention against a standard-of-care self-paid vaccination arm [23] . Data were collected between September 21, 2020 and March 3, 2021 in Guangdong Province. The parent study comprised two arms - a standard-of-care arm and a pay-it-forward arm. The parent study adopted a quasi-experimental design for pragmatic reasons: owing to the overwhelming workload during the COVID-19 pandemic, community healthcare workers had limited willingness and capacity to help implement a randomized trial. Instead, recruited participants were chronologically allocated into the two study arms. Participants in the standard-of-care arm had to pay the standard market price of $8.5–23.2 for their vaccines, whereas participants in the pay-it-forward arm received a free influenza vaccination and a postcard message from a local group and were then asked if they would like to voluntarily donate any amount of money or write postcards to support vaccination for subsequent individuals. All participants received an introductory pamphlet about influenza and vaccination. A total of 300 individuals - 150 children (aged between 6 months and 8 years, via caregivers) and 150 older people (≥60 years old) - were recruited, with 75 children and 75 older people in each arm. Caregivers of children and older participants completed a questionnaire survey and then decided whether their children, or the older adults themselves, would receive a vaccination after the intervention in each arm. 
Variable selection Sociodemographic characteristics The final questionnaire survey administered in our study collected information on sociodemographic characteristics of participants (i.e., caregivers or older people) including study site (Yangshan, Zecheng, or Tianhe), sex of participant (male/female), age, educational level (primary school, middle school, and undergraduate or college), occupation (unemployed, peasant or employed), annual income (≤$1860 or >$1860), marital status (live alone/engaged or married), and participants’ attitudes towards vaccine confidence (importance, effectiveness and safety). Moreover, we considered the question “Is price of the vaccine a barrier for your child and/or older individuals in your family to get the influenza vaccine? (Yes/No)” as a proxy measure of sensitivity to vaccine costs. Participants who answered “Yes” were considered cost-sensitive, while those who answered “No” were considered cost-insensitive. Independent variable The study participants were assigned to either the standard-of-care arm or the pay-it-forward arm based on their enrollment order. Intervention arm was treated as the independent variable in this secondary analysis. Mediators Vaccine confidence in safety, importance, and effectiveness were hypothesized as potential mediators in this analysis. We measured vaccine confidence by adapting existing Vaccine Confidence Index™ scales for influenza vaccination in China [6] , [24] . 
Vaccine confidence was assessed by the degree to which respondents agreed with the following statements on a five-point Likert scale: “In general, I think the influenza vaccine is important,” “In general, I think the influenza vaccine is safe,” and “In general, I think the influenza vaccine is effective.” Based on prior vaccine confidence categorization in the literature [6] , [25] , the responses to the three statements were recoded into binary variables - agree (including “strongly agree” and “agree”) and disagree (including “strongly disagree”, “disagree”, and “unsure”). “Unsure” was categorized as disagree because the expression of uncertainty is generally interpreted as a form of disagreement in Chinese culture [26] . Dependent variable Vaccine uptake (Yes/No) among children and older adults was treated as the dependent variable. Data for the dependent variable were obtained from clinical vaccination records. Statistical analysis First, descriptive analysis was used to describe sample characteristics, including sociodemographic characteristics (study site, age group, sex, age, educational level, occupation, annual income, marital status, cost-sensitivity) and vaccine confidence (safety, importance and effectiveness). Second, we employed a multivariable logistic regression model to examine the association between pay-it-forward and vaccine uptake, controlling for sociodemographic variables (study site, age group, sex, age, educational level, occupation, annual income, marital status, cost-sensitivity). We also used multivariable logistic regression models to test the associations between pay-it-forward and vaccine confidence, including safety, importance and effectiveness. Third, we conducted sub-group analysis based on level of individual annual income to determine whether the association between pay-it-forward and vaccine uptake was still present within sub-groups with different income levels. 
We categorized the participants into two sub-groups based on the cut-off value for defining low-income individuals in Guangdong Province in 2021 (≤$1860, low-income group; >$1860, middle-to-high-income group) [27]. We then performed chi-square tests within the two sub-groups to compare vaccine uptake between the pay-it-forward and standard-of-care arms. Multivariable logistic regressions were also conducted within the two sub-groups, as it is theoretically possible that pay-it-forward might have a stronger association with vaccine uptake among individuals with less financial capacity [28], [29]. Last, a mediation analysis was conducted to explore the indirect effect of vaccine confidence (vaccine importance, effectiveness, and safety) on the association between pay-it-forward and vaccine uptake. In the parallel mediation model, we included the three vaccine confidence domains together as mediators in one model (Fig. 1). Pay-it-forward was the independent variable X; vaccine importance, effectiveness, and safety were the mediators M1, M2, and M3; and vaccine uptake was the dependent variable Y. Paths a1 to a3 assess the relationship between X and each M, and paths b1 to b3 assess the relationship between each M and Y. The direct effect of X on Y while partialling out the effect of M is denoted as the Direct path, and the indirect effect of X on Y through M is the Indirect path. In our analysis, the direct and indirect effects are reported after standardization [30]. Each indirect effect is the product of the corresponding standardized coefficients (a1 × b1, a2 × b2, and a3 × b3). Sociodemographic variables, including study site, age group, sex, age, educational level, occupation, annual income, marital status, and cost-sensitivity, were controlled for throughout the mediation analysis.
Financial support, such as providing free vaccines, has a direct impact on vaccine uptake [31], so capability to pay (proxied by annual income) was treated as a confounder rather than a mediator. The "BruceR" package in RStudio was used to perform the mediation analysis and calculate 95% confidence intervals using 10,000 bootstrap resamples (R version 4.2.1).
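The paper's mediation analysis was run with the BruceR package in R. As an illustration of the underlying product-of-coefficients bootstrap, here is a self-contained Python sketch on synthetic data; it uses linear-probability regressions rather than the logistic, covariate-adjusted models of the actual analysis, and every variable and number in it is hypothetical.

```python
import random

def ols(y, cols):
    """Least-squares fit of y on an intercept plus the given columns,
    via the normal equations and Gaussian elimination."""
    n, k = len(y), len(cols) + 1
    X = [[1.0] + [c[i] for c in cols] for i in range(n)]
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):                       # forward elimination
        for q in range(p + 1, k):
            f = A[q][p] / A[p][p]
            for r in range(p, k):
                A[q][r] -= f * A[p][r]
            b[q] -= f * b[p]
    beta = [0.0] * k
    for p in reversed(range(k)):             # back substitution
        beta[p] = (b[p] - sum(A[p][q] * beta[q] for q in range(p + 1, k))) / A[p][p]
    return beta

def indirect_effect(x, m, y):
    a = ols(m, [x])[1]        # path a: arm -> mediator
    b = ols(y, [x, m])[2]     # path b: mediator -> uptake, controlling for arm
    return a * b

def bootstrap_ci(x, m, y, n_boot=1000, seed=7):
    """Percentile bootstrap CI for the indirect effect a*b."""
    random.seed(seed)
    n, est = len(x), []
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]
        est.append(indirect_effect([x[i] for i in idx],
                                   [m[i] for i in idx],
                                   [y[i] for i in idx]))
    est.sort()
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# Synthetic example: the arm raises mediator probability, which raises uptake.
random.seed(1)
x = [i % 2 for i in range(200)]                                   # arm
m = [1 if random.random() < 0.3 + 0.4 * xi else 0 for xi in x]    # confidence
y = [1 if random.random() < 0.2 + 0.5 * mi else 0 for mi in m]    # uptake
print("indirect effect:", round(indirect_effect(x, m, y), 3))
```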
Results

Descriptive outcomes

A total of 300 participants were enrolled in the study across two study arms (pay-it-forward, standard-of-care). The pay-it-forward arm had 150 participants, including 75 child caregivers and 75 older individuals, with an average age of 53 years. The standard-of-care arm had 150 participants, also 75 child caregivers and 75 older individuals, with an average age of 52 years. Of all participants, 73.3% were female and the vast majority were married or engaged (84%). Over half of the participants (50.7%) were unemployed, 76.3% had received at least a middle school education, and 36.0% earned under $1860 per year (low-income group). 66/300 (22%) of the participants were cost-sensitive (i.e., considered price a barrier). The sample's sociodemographic characteristics, including age, sex, occupation, income, marital status, and cost-sensitivity, did not differ significantly between the two arms, except for educational level (p = 0.04). We observed significant differences in participants' vaccine confidence (influenza vaccine importance (p = 0.002), safety (p < 0.001), and effectiveness (p < 0.001)) between the pay-it-forward and standard-of-care arms (Table 1).

Multivariable analysis results

The pay-it-forward intervention was significantly associated with greater levels of perceived influenza vaccine importance (adjusted odds ratio (aOR) = 3.60, 95% CI: 1.77–7.32), effectiveness (aOR = 3.37, 95% CI: 1.75–6.52), and safety (aOR = 2.20, 95% CI: 1.17–4.15). Greater perceived influenza vaccine importance was associated with increased vaccine uptake (aOR = 8.51, 95% CI: 3.04–23.86) (Appendix Table 1).

Sub-group analysis

Table 2 shows that one-third of our participants belonged to the low-income sub-group according to local standards.
The pay-it-forward intervention was associated with increased odds of vaccine uptake (p < 0.001) compared with the self-paid strategy regardless of participants' income level. In the sub-group analysis, people with greater perceived vaccine importance were more likely to receive the vaccine among both middle-to-high-income and low-income individuals (p = 0.002, aOR = 8.62, 95% CI: 2.23–33.39; p = 0.04, aOR = 6.08, 95% CI: 1.11–33.33) (Table 3). Additional sub-group analyses by level of cost-sensitivity were performed (Appendix Tables 2-3).

Mediation analysis results

Table 4 shows the coefficients of the parallel mediation model; coefficients of the single mediation models can be found in Appendix Table 4 and Appendix Figure panel 1. Both coefficients a1 and b1 were significant at the 95% confidence level. The indirect effect of the pay-it-forward intervention on vaccination was significantly mediated through confidence in vaccine importance (indirect effect 1 = 0.07, 95% CI: 0.02–0.11): participants in the pay-it-forward arm were more likely to show higher confidence in vaccine importance, subsequently leading to higher uptake compared with those in the standard-of-care arm. The coefficients a2 and a3 were significant at the 95% confidence level, but b2 and b3 were not. The indirect effect of the pay-it-forward intervention on vaccination was therefore non-significant through confidence in vaccine effectiveness and safety (indirect effect 2 = 0.03, 95% CI: −0.004–0.06; indirect effect 3 = 0.005, 95% CI: 0.02–0.03). Additionally, the direct effect of the pay-it-forward intervention on vaccination was statistically significant (direct effect = 0.26, 95% CI: 0.15–0.37).
Discussion

The parent intervention study suggested that the pay-it-forward strategy may increase influenza vaccine uptake compared with a self-paid standard approach [23]. However, beyond the financial support that pay-it-forward offers, it was unclear what other factors may be associated with increased influenza vaccine uptake. This study extends the pay-it-forward literature by testing our hypothesis that the intervention may work through changing service users' vaccine confidence. The analysis identified change in perceived vaccine importance as a significant mediator of the association between the pay-it-forward intervention and improved vaccine uptake. This study also found that, regardless of individual income level, pay-it-forward was associated with a marked increase in vaccine uptake compared with the standard-of-care approach. We found that greater perceived influenza vaccine importance was associated with increased uptake. This is consistent with previous findings that participants' confidence in vaccine importance can positively impact vaccine uptake [25], [32], [33], [34]. A high level of public confidence in vaccine importance is key to achieving and maintaining high coverage [33], [35]. Poor perceived vaccine importance is associated with greater vaccine skepticism and a decreased vaccination rate [25], [32], [34], whereas perceived vaccine importance may mitigate losses in vaccine uptake [25]. This is partly because perceived vaccine importance is a well-identified individual determinant of vaccine acceptance [34]. These findings indicate that health education and advocacy programs targeting improved public confidence in vaccine importance may be a first step in creating public demand and subsequently increasing uptake among priority groups. Our data further suggest that pay-it-forward may help improve user-perceived vaccine importance, which subsequently leads to increased uptake.
The parallel mediation analysis showed that, amongst the three vaccine confidence domains, vaccine importance was the only significant mediator of the association between the pay-it-forward intervention and vaccine uptake; confidence in safety and effectiveness was less relevant to the mediation pathway in our study sample. Our interpretation is that, in the Chinese market, where awareness of and demand for influenza vaccines remain limited, the pay-it-forward approach, as an intervention package containing educational, peer-based psycho-behavioral, and community engagement components, may help enhance public awareness and trust, perceived vaccine importance, and subsequent acceptance of the vaccine [36], [37], [38]. This is supported by previous pay-it-forward studies suggesting that positive experiences (e.g., kindness and reciprocity generated in the community through donations and handwritten messages) contributed to community solidarity and public trust and encouraged people to use medical services [39], [40], [41], [42], [43]. More in-depth data are, however, needed to better understand the mechanisms. Finally, our sub-group regression analysis showed that, regardless of individual income level, the pay-it-forward intervention may be associated with increased odds of vaccine uptake compared with the self-paid strategy, and vaccine importance remained a significant factor associated with higher vaccine uptake. These results suggest that confidence in vaccine importance may be an important target domain for future interventions among varying income groups and, supported by our mediation model results, that pay-it-forward may be a promising model to improve influenza vaccine uptake via improved perceived vaccine importance among different income groups. Future research is still needed to affirm potential causal effects.
Conclusion

Our findings suggest that vaccine confidence is positively associated with the pay-it-forward intervention. Perceived confidence in vaccine importance appears to be a potential mediator of the association between pay-it-forward and vaccine uptake. Regardless of individual income level, the pro-social pay-it-forward intervention may be a promising strategy to increase influenza vaccine uptake among priority populations via improved perceived vaccine importance.

Author contributions

DW conceived the idea and analysis plan. WJ analyzed the data and generated the figures and tables, with help from XY. WJ, CL, and DW wrote the first draft of the paper, and all coauthors provided constructive comments and edited the manuscript.
These authors contributed equally to this work.

Abstract

Introduction: A Chinese clinical trial demonstrated that a prosocial pay-it-forward intervention offering subsidized vaccination and postcard messages effectively increased influenza vaccine uptake and vaccine confidence. This secondary analysis explored the potential mediating role of vaccine confidence in the association between a pay-it-forward intervention and influenza vaccine uptake, and how this might vary by individual annual income level.

Methods: Data from 300 participants (150 standard-of-care and 150 pay-it-forward participants) were included in the analysis. We conducted descriptive analysis of demographic and vaccine confidence variables. Multivariable regression and mediation analyses of the intervention, vaccine confidence, and vaccine uptake were conducted. A sub-group analysis was conducted to further examine whether associations between these variables vary by income level (≤$1860 or >$1860).

Results: The pay-it-forward intervention was significantly associated with greater levels of perceived influenza vaccine importance (adjusted odds ratio (aOR) = 3.60, 95% CI: 1.77–7.32), effectiveness (aOR = 3.37, 95% CI: 1.75–6.52), and safety (aOR = 2.20, 95% CI: 1.17–4.15). Greater perceived influenza vaccine importance was associated with increased vaccine uptake (aOR = 8.51, 95% CI: 3.04–23.86). The indirect effect of the pay-it-forward intervention on vaccination through improved perceived influenza vaccine importance was significant (indirect effect 1 = 0.07, 95% CI: 0.02–0.11). The study further revealed that, irrespective of individual income level, the pay-it-forward intervention was associated with increased vaccine uptake compared with the standard-of-care approach.

Conclusions: The pay-it-forward intervention may be a promising strategy to improve influenza vaccine uptake.
Perceived confidence in vaccine importance appears to be a potential mediator of the association between pay-it-forward and vaccine uptake.
Limitations

Our analysis has several limitations. First, the parent study used a quasi-experimental design without randomization [44]. Although the pay-it-forward participants were recruited immediately after the standard-of-care group, reducing the influence of temporal changes on the observed differences, inferences from these data should be made with caution [45]. More robust research, such as a randomized controlled trial, is needed to confirm this association; our analysis informs a hypothesis about the potential mechanisms of a pay-it-forward intervention in vaccine services research. Second, education level appeared to differ between the pay-it-forward and standard-of-care arms (Table 1), but this variable was adjusted for in our regression and mediation models. Finally, participants from the suburban Zengcheng site had relatively higher levels of vaccine confidence than those from Yangshan and Tianhe (Appendix Table 1). We speculate that this might be because they had a higher level of trust in the health facility, but our current data lack the information needed to test this hypothesis, which requires further research.

Implications

Our findings have important public health and research implications. Increasing perceived vaccine importance may help to increase vaccine uptake in the Chinese context. The pay-it-forward strategy increased vaccine uptake significantly, and this effect may have been mediated by confidence in vaccine importance. This study is pathbreaking in examining influenza-specific vaccine confidence among older adults and child caregivers in China and in leveraging an innovation to improve vaccine attitudes.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Supplementary data

The following are the Supplementary data to this article:

Data availability

The data will be made available on the website WWW.SESHGlobal.ORG.

Acknowledgements

This study was funded by the Bill & Melinda Gates Foundation (OPP1217240, INV-034554) and the National Institute for Health Research (NIHR200929). Additionally, HL and LL's work was, fully or partially, supported by AIR@InnoHK, administered by the Innovation and Technology Commission. The funders played no role in the study design, data collection, analysis and interpretation of data, or the writing of this manuscript.
Vaccine. 2024 Jan 12; 42(2):362-368
PMC10789266
38061956
Introduction

Pertussis vaccination during pregnancy for the prevention of early infant mortality is recommended by the World Health Organization in case of resurgence of pertussis or in countries with high or increasing infant morbidity/mortality from pertussis [1]. Pertussis immunization in pregnancy leads to active transport of maternal immunoglobulin G (IgG) antibodies across the placenta beginning in the second trimester, protecting the infant during the first months of life [2]. Pertussis vaccination during pregnancy is supported by observational and randomized controlled studies reporting significantly elevated blood antibody levels in both mothers and newborns at birth compared with those who received placebo or no vaccination, with no indication of increased risk of adverse pregnancy complications [3], and by evidence that severe disease in infants may be prevented [4], [5]. Many high-income countries and countries in Latin America have adopted policies for maternal pertussis immunization (MPI) in the second or third trimester [1], [6], [7], [8], whereas in low- and middle-income countries (LMICs) MPI is not implemented because of unclear disease burden [9], vaccine pricing, and supply constraints. Although an immunologic correlate of protection has not been established for pertussis vaccines, the demonstrated efficacy, in the context of both primary and booster immunization, of vaccines containing only inactivated pertussis toxin (PT), a key virulence factor of Bordetella pertussis, indicates that immune responses to this antigen are essential [10]. Indeed, all acellular pertussis (aP) vaccines (APV) contain a PT component, and nearly all of them include filamentous haemagglutinin (FHA) as well [11]. Two-component APV (PT and FHA) have been widely used in infants and have been combined with diphtheria and tetanus toxoids (DT or Td).
APV differ not only in the number and concentration of antigen components but also with regard to the bacterial clone used in production and the methods of purification and detoxification (chemical or genetic) [1]. Given the potentially detrimental effects of chemical detoxification on epitope preservation [12], vaccines containing genetically inactivated PT (PTgen), produced using recombinant DNA technology, have been compared with chemically inactivated acellular pertussis vaccines [13]. Earlier studies demonstrated that the genetically inactivated pertussis vaccines had higher immunogenicity than the chemically inactivated pertussis vaccines while having similar reactogenicity [11], [14]. aP5gen is a two-component recombinant acellular pertussis vaccine containing 5 μg of recombinant pertussis toxin (rPT or PTgen) and 5 μg of FHA, developed and manufactured by BioNet (Thailand) and licensed as a monovalent (Pertagen, aP5gen) or combined vaccine (Td-Pertagen/Boostagen®, TdaP5gen) in Thailand [15] and Singapore. Safety and non-inferior and superior immunogenicity of both vaccines relative to a licensed comparator were shown in a phase 2/3 randomized controlled trial in adolescents [16]. The long-lasting immunity induced by the two vaccines was also confirmed in a three-year pertussis antibody persistence study [17]. Given this higher immunogenicity, vaccines containing a lower dose of PTgen could provide immunity for maternal vaccination comparable to that of chemically inactivated pertussis vaccines, reducing cost and making the vaccine more accessible in developing countries. Two randomized controlled trials, one in women of childbearing age and one in pregnant women, comparing monovalent (pertussis-only ap) or combined vaccines (Tdap/TdaP) with different concentrations of PTgen showed the vaccines were safe and non-inferior to Tdap8chem (Boostrix™; GlaxoSmithKline Biologicals, Belgium) [18], [19].
The present report describes the follow-up of pregnant women at the time of delivery and of their newborns at birth, before any pediatric pertussis vaccination, reporting pregnancy outcomes and the immunity transferred to infants.
Methods

Study design

This phase 2, observer-blind, randomized, active-controlled study was conducted at two sites, Siriraj Hospital and King Chulalongkorn Memorial Hospital, in Thailand (Thai Clinical Trial Registry number: TCTR20180725004). Healthy pregnant women aged 18 to 40 years with an uncomplicated singleton pregnancy were randomized to receive a single 0.5 mL dose of one of five vaccines at 20–33 weeks of gestation. The five vaccines were: ap1gen, containing 1 μg of PTgen and 1 μg of FHA; Tdap1gen, containing tetanus and reduced-dose diphtheria toxoids (Td; 7.5 Lf of tetanus toxoid and 2.0 Lf of diphtheria toxoid) in combination with ap1gen; Tdap2gen, containing Td (7.5 Lf of tetanus toxoid and 2.0 Lf of diphtheria toxoid) in combination with 2 μg of PTgen and 5 μg of FHA; TdaP5gen, containing Td (7.5 Lf of tetanus toxoid and 2.0 Lf of diphtheria toxoid) in combination with 5 μg of PTgen and 5 μg of FHA; and Tdap8chem, containing Td (5 Lf of tetanus toxoid and 2.5 Lf of diphtheria toxoid) combined with 8 μg of PT, 8 μg of FHA, and 2.5 μg of pertactin. Table 1 describes the composition and batch numbers of the vaccines used in this study, which were administered by intramuscular injection, preferably into the non-dominant deltoid. For this phase 2 study, the inclusion criteria defined the second trimester of pregnancy as week 20 up to week 26 and the third trimester as week 27 up to week 33 of gestational age. Exclusion criteria are outlined in the Supplementary Appendix. The present report outlines the immunogenicity and safety data from delivery up to 2 months post-delivery, including pregnancy outcomes. The study was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice, consistent with International Conference on Harmonisation guidelines.
Randomization and masking

Pregnant women were randomized in a 1:1:1:1:1 ratio with a block size of five (80 participants per vaccine group) to receive one dose (0.5 mL intramuscular injection) of either ap1gen, Tdap1gen, Tdap2gen, TdaP5gen (Boostagen®), or the comparator Tdap8chem (Boostrix™), according to a computer-generated (PROC PLAN, SAS® version 9.4) randomization scheme. The trial was carried out in an observer-blind manner for participants in all vaccine groups until 2 months after delivery, except for the ap1gen group. Pregnant women assigned to this group were unblinded at Day 28 post-vaccination and were offered one dose of commercially available tetanus toxoid (TT) vaccine after the blood draw and one dose of Td vaccine soon after delivery. The study pharmacist and vaccine administrator were not masked to the vaccine assignment. All other study personnel, participants, and laboratory staff were blinded to maintain the observer-blind status.

Study endpoints

The immunogenicity endpoints in the pregnant women included the geometric mean concentrations (GMC) of anti-pertussis toxin (anti-PT), anti-FHA, anti-tetanus, and anti-diphtheria IgG and the geometric mean titer (GMT) of PT neutralizing antibody (PTNA) measured at baseline and at the time of delivery; the seroconversion rate, defined as the percentage of pregnant women with a ≥4-fold increase in anti-PT and anti-FHA antibody concentration from baseline to delivery; and the percentage of pregnant women with anti-tetanus and anti-diphtheria antibody concentrations ≥0.1 IU/mL at baseline and at delivery. In infants, the immunogenicity endpoints included the GMC of anti-PT, anti-FHA, anti-tetanus, and anti-diphtheria IgG and the GMT of PTNA measured at the time of birth (cord blood or neonatal blood ≤72 hours after birth) and at 2 months of age. The immunogenicity outcomes are further outlined in the Supplementary Appendix.
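To make the endpoint definitions concrete, here is a brief Python sketch (with made-up antibody values; not the study's SAS code) of how a geometric mean fold rise and a ≥4-fold seroconversion rate can be computed:

```python
import math

def gmfr(baseline, followup):
    """Geometric mean fold rise: exp of the mean log fold change."""
    folds = [f / b for b, f in zip(baseline, followup)]
    return math.exp(sum(math.log(x) for x in folds) / len(folds))

def seroconversion_rate(baseline, followup, fold=4):
    """Share of participants with at least a `fold`-fold rise."""
    hits = sum(1 for b, f in zip(baseline, followup) if f / b >= fold)
    return hits / len(baseline)

# Hypothetical anti-PT IgG concentrations (IU/mL), baseline vs delivery
pre  = [5.0, 10.0, 8.0, 20.0]
post = [40.0, 35.0, 30.0, 25.0]
print(round(gmfr(pre, post), 2), seroconversion_rate(pre, post))  # 3.38 0.25
```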
Safety endpoints in the pregnant women included the frequency of unsolicited adverse events (AEs) and serious AEs (SAEs). Additional safety outcomes and the assessment of the frequency of specific complications of pregnancy and delivery are summarised in the Supplementary Appendix. Safety endpoints in the infants included the frequency of SAEs (including congenital anomalies, neonatal blood screening abnormalities, hearing deficiency detected through neonatal screening, and any other SAEs); the percentage of infants with prematurity (<37 weeks of gestation), small for gestational age (SGA; <10th percentile for gestational age), or low birthweight (<2000 grams); and the percentage with medically significant AEs. The safety data for pregnant women and infants are reported through the time of birth.

Assessments

For pregnant women, blood draws were taken immediately before vaccination, 28 days after vaccination, and on the day of delivery. Approximately 5 mL of cord blood was collected at delivery to assess maternal antibody transfer; if such a sample was not obtained, ∼3 mL of venous blood was collected from the infant within 72 hours of birth. An additional ∼3 mL blood sample was obtained from infants at 2 months of age. Immunogenicity in pregnant women and infants was assessed by enzyme-linked immunosorbent assay (ELISA) for serum IgG specific for PT, FHA, diphtheria toxin (DT), and TT [20]. Anti-PT, FHA, DT, and TT antibody testing was conducted at the BioNet Human Serology Laboratory (Thailand), nationally accredited by BLQS, DMSC Thailand, in compliance with ISO 15189:2012 & 15190:20. Concentrations of serum IgG antibody against PTgen and FHA were measured using a validated indirect ELISA [20]. Concentrations were expressed in IU/mL, calibrated to the WHO International Standard Pertussis Antiserum (Human) 06/140. The lower limits of quantification (LLOQ; assay cut-off) were 5 IU/mL for PT-IgG and 1 IU/mL for FHA-IgG.
Antibody concentrations below the assay cut-off were assigned arbitrary values of half the assay cut-off. The WHO Reference Reagent Pertussis Antiserum (Human) 06/142 (NIBSC, UK) was used as the positive control. Serum TT-specific and DT-specific IgG concentrations were measured using validated, commercially available ELISA kits (Serion, Germany). Sera from a subset of 24 mother-infant pairs (30%) in each vaccine group (120 pairs in total) were randomly selected and assayed for PT neutralizing serum antibody by the Chinese hamster ovary (CHO) cell assay at BioNet [20]. The PT neutralizing titer was reported in IU/mL on the basis of the relative activity of the WHO International Standard Pertussis Antiserum (Human) 06/140. Safety assessment is presented for both pregnant women and infants. The case definitions developed by the Brighton Collaboration Global Alignment of Immunization Safety in Pregnancy working groups for the assessment of AEs in mothers and infants following maternal immunization were used [21], when applicable. A Data and Safety Monitoring Board supervised enrolment and monitored the safety of maternal participants and infants throughout the trial.

Statistical analysis

The study included 400 pregnant women (80 per vaccine group). The sample size was calculated based on a non-inferiority test for anti-PT IgG antibody measured at the time of vaccination and 28 days after vaccination in a pooled population of subjects (400 pregnant women and 250 non-pregnant women of childbearing age) [19], [22]. For analysis of immunologic results, GMCs for anti-PT, anti-FHA, anti-TT, and anti-DT antibody and the GMT of PT neutralizing antibody at delivery were calculated for each vaccine group, along with two-sided 95% CIs, by exponentiating the corresponding log-transformed mean and its 95% CI limits.
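As an illustration of that computation, the following Python sketch (not the study's SAS code) derives a GMC and an approximate 95% CI by exponentiating the mean of the log-transformed concentrations and its CI limits, after substituting half the assay cut-off for below-LLOQ readings; a normal-approximation multiplier of 1.96 is assumed here:

```python
import math
import statistics

LLOQ_PT = 5.0  # IU/mL assay cut-off for anti-PT IgG

def gmc_with_ci(raw, lloq=LLOQ_PT, z=1.96):
    """GMC with a two-sided ~95% CI computed on the log scale.
    Values below the cut-off are replaced by half the cut-off."""
    vals = [v if v >= lloq else lloq / 2 for v in raw]
    logs = [math.log(v) for v in vals]
    mean = statistics.fmean(logs)
    se = statistics.stdev(logs) / math.sqrt(len(logs))
    return tuple(math.exp(mean + d) for d in (0.0, -z * se, z * se))

# Hypothetical anti-PT readings (IU/mL); 2.0 falls below the 5 IU/mL cut-off
gmc, lo, hi = gmc_with_ci([2.0, 12.0, 40.0, 90.0, 25.0])
print(round(gmc, 1), round(lo, 1), round(hi, 1))
```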
The differences in GMC or GMT between each study group and the comparator group were analyzed, and GMC or GMT ratios with two-sided 95% CIs were calculated based on ANOVA with Bonferroni post-hoc analysis. An alpha level of less than 0.05 was applied for assigning statistical significance. At delivery, the GMCs were adjusted for baseline concentration and gestational age at delivery using analysis of covariance (ANCOVA). These results are presented as ratios of adjusted GMC between each study group and the comparator group. Seroconversion rates for anti-PT and anti-FHA antibody and PTNA, and seroprotection rates for anti-DT and anti-TT antibody, were computed along with the corresponding exact two-sided Clopper-Pearson 95% CI for each vaccine group. The differences in rates between each study group and the comparator group were calculated along with two-sided 95% CIs obtained by the Miettinen and Nurminen method. The ratio of GMC/GMT in cord blood or neonatal blood to that in maternal participants at the time of delivery, in terms of anti-PT, anti-FHA, anti-DT, and anti-TT IgG concentrations and PTNA titers (IU/mL), was computed along with its two-sided 95% CI, by exponentiating the mean of the difference in log-transformed antibody levels between infants and maternal participants and its 95% CI limits. The GMCs of anti-PT antibodies and GMTs of PTNA at the time of birth (cord blood or neonatal blood ≤72 hours after birth) for maternal participants vaccinated during the second and third trimesters of pregnancy were computed for each vaccine group along with two-sided 95% CIs. For maternal participants, the number and percentage experiencing medically attended AEs or SAEs reported until infants were 2 months of age were tabulated by vaccine group, severity, and causality. The frequency and percentage of participants experiencing post-vaccination pregnancy and delivery complications were also summarized by vaccine group.
The frequency and percentage of infants with prematurity (<37 weeks of gestation), SGA (<10th percentile for gestational age), or low birthweight (<2000 g) were summarized by vaccine group. For each percentage, an exact two-sided 95% Clopper-Pearson CI was computed. Data management and statistical analyses were performed by the Center of Excellence for Biomedical and Public Health Informatics (Bangkok, Thailand). All statistical analyses were carried out using Statistical Analysis System (SAS®) software version 9.4.
Results

Participant characteristics

Between February 1, 2019 and October 10, 2019, a total of 400 pregnant women were enrolled; 398 of them delivered between April 4, 2019 and February 18, 2020. Two maternal participants were excluded before delivery due to withdrawal of informed consent and geographical relocation. Demographic characteristics of pregnant women at baseline were similar across all vaccine groups (Table 2). At the time of vaccination, 48.3% of pregnant women were in the second trimester of pregnancy and 51.5% in the third. Among infants, demographic and other characteristics at birth were similar across all vaccine groups (Table 2). Over half of infants were male, with birth weights between 2020 and 4455 grams; 94.9% of infants were assessed as appropriate for gestational age. Maternal and infant disposition at delivery and 2 months after delivery are shown in Fig. 1. 398 pregnant women were included in the safety assessment and 386 in the immunogenicity evaluation (386 participants for PT and FHA, 309 participants for DT and TT ELISA testing (for which the ap1gen arm was not included), and a randomly selected subset of 115 participants for PT neutralization assay analyses) (Table 3). Among the 398 infants born to maternal participants, 393 were included in the safety population, and 385 cord blood or neonatal blood samples were included in the immunogenicity evaluation at birth. For follow-up at 2 months of age, 384 eligible infants returned to the study site for safety assessment, but only 371 were included in the immunogenicity evaluations. No mothers or infants were excluded for vaccine-related safety issues. Five infants were excluded from the safety population because they were ineligible per the study protocol (low birthweight or underlying medical conditions).
The main reasons pregnant women and infants were excluded from the immunogenicity evaluation were blood samples not being collected, mothers receiving other vaccines before delivery, and infants receiving pediatric vaccines before the blood draw. Details of excluded participants are shown in the Supplementary Appendix.

Immunogenicity of maternal and infant participants

The immunogenicity findings in mothers and infants are summarized in Table 4. Anti-PT antibody concentrations decreased slightly from Day 28 (44.7 IU/mL – 125.9 IU/mL) to delivery (28.7 IU/mL – 92.8 IU/mL) in all groups. At delivery, the anti-PT antibody GMCs (IU/mL) in maternal participants ranged from 28.7 (95% CI 23.8–34.5) in the Tdap1gen group to 92.8 (95% CI 71.4–120.5) in the TdaP5gen group. The geometric mean fold rises (GMFRs) from baseline were higher in the ap1gen, Tdap1gen, Tdap2gen, and TdaP5gen groups than in the Tdap8chem group. The adjusted anti-PT GMCs were similar in the Tdap1gen and Tdap2gen groups compared with the Tdap8chem group, and significantly higher in the ap1gen and TdaP5gen groups than in the Tdap8chem group. Interestingly, the non-combined vaccine ap1gen induced a higher anti-PT antibody GMC than the combined vaccine (Tdap1gen). Analyses of anti-FHA IgG concentrations in maternal participants are summarized in Table 4. A trend towards a slight decrease in anti-FHA antibody concentrations from Day 28 to delivery was seen in all vaccine groups. The anti-FHA IgG GMC was higher in the Tdap8chem group than in the other vaccine groups. At delivery, PTNA titers (IU/mL) in mothers ranged from 28.7 (95% CI 14.0–59.1) in the ap1gen group to 91.2 (95% CI 57.9–143.8) in the TdaP5gen group (Table 4). The GMFRs from baseline were higher in the ap1gen, Tdap2gen, and TdaP5gen groups than in the Tdap1gen and Tdap8chem groups.
The adjusted PT neutralizing GMT ratio of each study vaccine to the comparator vaccine (Tdap8 chem ) showed that the adjusted PT neutralizing GMT was similar in the ap1 gen , Tdap1 gen , and Tdap2 gen groups compared to the Tdap8 chem group, and significantly higher in the TdaP5 gen group than in the Tdap8 chem group. At delivery, the difference in anti-PT seroresponse rates between study groups and the comparator group was highest for TdaP5 gen (34.3% [95% CI 21.4–46.6]) and lowest for Tdap1 gen (5.9% [95% CI −9.4–21.1]) ( Table 4 ). For anti-FHA, the difference in seroresponse rates was highest for TdaP5 gen (0.8 [95% CI −10.6–12.1]) and lowest for Tdap1 gen (−30.1 [95% CI −43.2 to −16.1]). For PTNA, the difference in rates was highest for TdaP5 gen (33.0 [95% CI 8.5–54.8]) and lowest for ap1 gen (2.5 [95% CI −25.1–29.7]). Analyses of anti-DT and anti-TT IgG concentrations in maternal participants are summarized in Table 5 . Increases in anti-DT and anti-TT IgG concentrations within the 28 days after vaccination were observed (unpublished data), with a trend towards a slight decrease in anti-DT and anti-TT IgG concentrations from Day 28 to delivery. At delivery, the anti-TT seroprotection rates were 100% for all groups, while anti-DT seroprotection rates were above 90% for all study vaccines including the comparator. In infants at birth, anti-PT GMCs ranged from 37.2 (95% CI 30.6–45.2) in the Tdap1 gen group to 118.8 (95% CI 93.9–150.4) in the TdaP5 gen group. These levels were consistently higher in cord blood or neonatal samples than in maternal blood, demonstrating active transport of antibodies from maternal participants to their infants. Interestingly, the anti-PT GMCs in all groups were ≥30 IU/mL, a cut-off value considered potentially predictive of protection of infants until 2 to 3 months of age based on a half-life of 36 days [23] . 
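The rationale behind the ≥30 IU/mL birth cut-off can be illustrated with a simple first-order decay projection using the cited 36-day half-life. This is a sketch for intuition only; a protective anti-PT concentration is not well defined, and real antibody kinetics need not be strictly exponential:

```python
HALF_LIFE_DAYS = 36  # anti-PT IgG half-life cited in the text [23]

def predicted_titer(birth_titer_iu_ml, days_after_birth, half_life=HALF_LIFE_DAYS):
    """Project an infant's anti-PT titer forward, assuming first-order decay."""
    return birth_titer_iu_ml * 0.5 ** (days_after_birth / half_life)

# An infant born at exactly the 30 IU/mL cut-off:
print(round(predicted_titer(30.0, 60), 1))   # ~9.4 IU/mL at 2 months
print(round(predicted_titer(30.0, 90), 1))   # ~5.3 IU/mL at 3 months
```

Under this assumption, a birth titer of 30 IU/mL decays to roughly 9–10 IU/mL by 2 months of age, consistent with the ≥10 IU/mL cut-off applied to infants at the 2-month visit.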
In infants at 2 months of age, anti-PT GMCs (IU/mL) ranged from 10.5 (95% CI 8.6–12.9) in the Tdap1 gen group to 32.8 (95% CI 25.7–42.0) in the TdaP5 gen group. This highlights a significantly higher anti-PT IgG concentration in TdaP5 gen than in Tdap8 chem (12.5 [95% CI 10.1–15.5], p<0.05). For anti-FHA, the GMC values ranged from 16.6 (95% CI 13.8–19.8) in the Tdap1 gen group to 54.8 (95% CI 43.2–69.3) in Tdap8 chem . For PTNA, there was no significant difference at 2 months between the recombinant vaccines and the comparator, with the following GMTs: 11.8 (95% CI 6.4–21.5) in the ap1 gen group, 12.5 (95% CI 8.0–19.6) in the Tdap8 chem group and 28.7 (95% CI 18.1–45.6) in the TdaP5 gen group ( Table 4 ). In infants at 2 months of age, anti-TT seroprotection rates were 100% across all groups, and anti-DT rates were highest for the Tdap8 chem group (78.4% [95% CI 67.3–87.1]) and lowest for Tdap1 gen (67.1% [95% CI 55.1–77.7]) ( Table 5 ). For pertussis, all infant groups had anti-PT GMCs ≥30 IU/mL at birth and ≥10 IU/mL at 2 months of age. Even though there is no well-defined anti-PT concentration that correlates with protection, the birth cut-off value of ≥30 IU/mL was considered potentially predictive of protection of infants until 2 to 3 months of age based on a half-life of 36 days [23] . The proportion of infants at birth with anti-PT antibody concentrations at or above the ≥30 IU/mL cut-off ranged from 61.8% (Tdap1 gen group) to 90.7% (Boostagen®) ( Fig. 2 ). At 2 months of age, the proportion of infants at or above the ≥10 IU/mL cut-off ranged from 57.3% (Tdap1 gen group) to 89.0% (Boostagen®) ( Fig. 2 ). GMCs of anti-PT antibodies and GMTs of PT neutralizing antibody at the time of birth (cord blood or neonatal blood ≤72 hours after birth) were also compared between maternal participants vaccinated during the second versus third trimester of pregnancy. 
After second-trimester vaccination, anti-PT antibodies at birth ranged from 33.03 IU/mL (95% CI 24.05–45.38) for Tdap1 gen to 113.36 IU/mL (95% CI 82.82–155.17) for TdaP5 gen , and after third-trimester vaccination from 41.17 IU/mL (95% CI 32.15–52.73) for Tdap1 gen to 123.66 IU/mL (95% CI 86.56–176.67) for the TdaP5 gen vaccine group. Similar values were obtained for PT neutralizing antibody. No difference in these outcomes between the second and third trimester of pregnancy was observed. Safety of maternal participants and infants An overview of safety results in maternal participants until 2 months after delivery is presented in Table 6 . There were no vaccine-related SAEs, no AEs leading to study withdrawal, and no deaths. Between 21.5% and 38.0% of maternal participants in each study arm experienced at least one complication during pregnancy. Furthermore, between 21.3% and 35.4% of participants in each vaccine group had at least one complication during labor or delivery. Please refer to Table 7 for data regarding specific diagnoses. The percentage of maternal participants who had a caesarean section ranged from 41.3% (TdaP5 gen group) to 53.8% (Tdap1 gen group), with a history of previous caesarean section as the predominant reason for caesarean section in the study. In infants, the outcome at birth is shown in Table 6 , with 9.6% (38/398) presenting with one or more of prematurity, small for gestational age (SGA), or low birthweight, ranging from 1.3% (1/79) for Tdap8 chem to 16.3% (13/80) for Tdap1 gen . Overall, prematurity was the most frequently reported outcome at 7.5% (30/398). Three cases of congenital anomalies (0.8%) were reported in this study. One or more non-vaccine-related SAEs were reported in 13.7% (54/393) of eligible infants. No SAEs led to death or study withdrawal.
Discussion We evaluated pregnancy outcomes and antibody transfer to neonates after maternal immunization with recombinant pertussis vaccines containing different concentrations of genetically inactivated pertussis toxin. We found that anti-PT antibody levels at delivery were similar or higher across multiple dose levels compared to a reference vaccine known to protect against neonatal pertussis. Anti-PT, anti-FHA, anti-DT, and anti-TT GMCs slightly decreased from 28 days after vaccine administration to the time of delivery with a single dose of recombinant acellular pertussis vaccine (either ap1 gen , Tdap1 gen , Tdap2 gen , or TdaP5 gen ) in the second or third trimester of pregnancy. The GMCs remained higher through the time of delivery than the levels reported before vaccine administration. Immunogenicity data in infants reflected the pattern of results observed in the maternal participants; antibody titers were slightly higher than in maternal participants, indicating active antibody transfer from mother to infant, and persisted at effective levels up to 2 months of age. No safety issues of concern were identified in the study. Evaluation of pregnancy and neonatal outcomes in mothers and infants showed no vaccine-related adverse effects on pregnancy or newborn health outcomes. Adverse pregnancy and neonatal outcomes in infants were similar across all vaccine groups. In our study, the incidence rate of prematurity or preterm birth (7.5%) and the rate of caesarean section (48.6%) were similar to those in the general population at the two sites. There was no difference in the proportion of caesarean sections between vaccine groups. The high rate of caesarean sections in Thailand may be attributable to fear of labor pain [24] and belief in “auspicious dates” [25] . 
The consistently higher anti-PT antibody GMCs in cord blood or neonatal samples than in maternal blood at delivery demonstrate the active transfer of antibodies from mothers to their infants across all recombinant pertussis vaccine formulations [26] , [27] , [28] . Of note, pertussis vaccination is recommended in the third trimester of gestation by the American College of Obstetricians and Gynecologists [6] , while it is recommended in the second or third trimester in other countries [8] , [29] . Previous studies have investigated the optimal timing of maternal Tdap vaccination, indicating that administration in the second or third trimester results in relatively higher neonatal antibody concentrations [23] , [30] , [31] . The evidence to date has been inconclusive regarding which trimester is preferable. However, in one prospective, observational, nonrandomized study comparing the transfer of anti-PT antibodies to the newborn following vaccination in the second and third trimester of pregnancy, second-trimester vaccination conferred higher antibody concentrations to the newborn, particularly for premature babies [31] . In our study, we found no difference in the GMC of anti-PT antibodies or the GMT of PTNA at the time of birth whether the vaccine was given during the second or third trimester of pregnancy. These findings provide useful information regarding programmatic suitability for implementing maternal pertussis immunization, as the studied vaccines can be given at any time during the second or third trimester of gestation and at least fifteen days before delivery, as per World Health Organisation recommendations. When comparing Tdap1 gen and Tdap2 gen to TdaP5 gen , we observed a dose-dependent immune response against the PT antigen in mothers and infants. Among all vaccine groups including Tdap8 chem , the immune response to FHA was also dose dependent, with titers increasing according to FHA content (1 μg, 5 μg or 8 μg). 
When compared to Tdap8 chem , Tdap1 gen induced the same anti-PT GMC, with an adjusted GMC ratio of 1 (95% CI, 0.7–1.4), demonstrating that PT gen is a strong immunogen even at a very low concentration (1 μg PT gen ) [12] , [11] , [14] , [19] . In addition, vaccination with monovalent ap1 gen induced a higher anti-PT GMC than Tdap1 gen in mothers and in infants up to 2 months of age. This difference was significant only when using adjusted GMCs in mothers at delivery. It may be explained by potential interference of diphtheria and tetanus toxoids with anti-pertussis responses. Interestingly, when compared to Tdap8 chem , the ap1 gen vaccine induced significantly higher PT levels at the time of delivery, confirming the non-inferiority and superiority of ap1 gen demonstrated in non-pregnant and pregnant women one month after vaccination [19] . Given these findings, monovalent acellular pertussis vaccine could potentially be used for subsequent pregnancies that do not need the diphtheria and tetanus components, avoiding the unnecessary reactogenicity and cost of Td-combined formulations. We also found that in 2-month-old infants, levels of PT IgG elicited by the recombinant pertussis formulations were higher than those elicited by the non-recombinant Tdap comparator, Tdap8 chem , which has been shown to be effective in reducing neonatal pertussis in observational studies [5] . The higher titers against PT elicited by the recombinant vaccines would be expected to increase protection against severe pertussis in young infants. 
A Consensus Conference organized by the World Association for Infectious Disease and Immunological Disorders (WAidid), convened to evaluate the most important reasons for the pertussis resurgence and the role of different acellular pertussis vaccines in it, concluded that present knowledge indicates that PT, particularly if genetically detoxified, represents the main antigen ensuring protection from disease, although not from infection, and that the contribution of other pertussis antigens (FHA, PRN and FIM) to vaccine efficacy and long-lasting protection is still under discussion and needs further study [32] . PT gen -containing vaccines (aP5 gen , TdaP5 gen and Tdap2 gen ) are licensed for booster use in adolescents and adults, including pregnant women and the elderly. The availability of affordable low-dose ap1 gen and Tdap1 gen vaccines could make maternal pertussis vaccination more accessible in low- and middle-income countries.
These authors contributed equally. Highlights • Recombinant pertussis vaccine is safe for both mother and newborn. • There is effective transplacental antibody transfer to infants at birth. • No difference in antibody response is shown between vaccination in the 2nd or 3rd trimester of pregnancy. Introduction Recombinant acellular pertussis (ap) vaccines containing genetically inactivated pertussis toxin (PT gen ) and filamentous hemagglutinin (FHA), with or without tetanus (TT) and diphtheria (DT) toxoids (Td), were found safe and immunogenic in non-pregnant and pregnant women. We report here maternal antibody transfer and safety data in mothers and neonates. Methods This is the follow-up of a phase 2 trial conducted in 2019 among 400 pregnant women who were randomized to receive one dose of recombinant pertussis-only vaccine containing 1 μg PT gen and 1 μg FHA (ap1 gen ), or Td combined with ap1 gen (Tdap1 gen ), or with 2 μg PT gen and 5 μg FHA (Tdap2 gen ), or with 5 μg PT gen and 5 μg FHA (TdaP5 gen , Boostagen®, BioNet, Thailand), or a chemically-inactivated acellular pertussis comparator (Tdap8 chem , BoostrixTM, GSK, Belgium), either in the second or third trimester of gestation. IgG against PT, FHA, TT and DT were assessed by ELISA, PT-neutralizing antibodies (PTNA) by Chinese Hamster Ovary cell assay, and safety outcomes were assessed at delivery in mothers and at birth in infants. Results The anti-PT and anti-FHA geometric mean concentration (GMC) ratio between infants at birth and mothers at delivery was above 1 in all groups. Anti-PT GMCs in infants at birth were ≥30 IU/mL in all groups, with the highest infant titers found in the TdaP5 gen group (118.8 [95% CI 93.9–150.4]). At 2 months, the anti-PT GMC ratio to Tdap8 chem (98.75% CI) was significantly higher for TdaP5 gen (2.6 [1.7–4.0]) and comparable for the other recombinant vaccines. No difference in PTNA titers at birth was observed between groups or between times of vaccination. Adverse events were comparable in all vaccine groups. 
Conclusions BioNet's licensed (TdaP5 gen and Tdap2 gen ) and candidate (Tdap1 gen and ap1 gen ) vaccines, when given to pregnant women in the second or third trimester of gestation, are safe and induce passive pertussis immunity in infants.
Funding This work was funded by a grant from the Bill & Melinda Gates Foundation, Seattle, USA [grant number OPP1120084]. The findings and conclusions contained within are those of authors and do not reflect position or policies of the Bill & Melinda Gates Foundation. CRediT authorship contribution statement Kulkanya Chokephaibulkit: Supervision, writing – review & editing. Thanyawee Puthanakit: Supervision, writing – review & editing. Surasith Chaithongwongwatthana: Investigation, writing – review & editing. Niranjan Bhat: Conceptualization, methodology, writing – review & editing. Yuxiao Tang: Conceptualization, methodology, writing – review & editing. Suvaporn Anugulruengkitt: Investigation, writing – review & editing. Chenchit Chayachinda: Investigation, writing – review & editing. Sanitra Anuwutnavin: Investigation, writing – review & editing. Keswadee Lapphra: Investigation, writing – review & editing. Supattra Rungmaitree: . Monta Tawan: Investigation. Indah Andi-Lolo: Writing – review & editing. Renee Holt: Writing – review & editing. Librada Fortuna: Methodology, investigation, validation, writing – review & editing. Chawanee Kerdsomboon: Investigation, validation, writing – review & editing, visualization. Vilasinee Yuwaree: Investigation, writing – review & editing. Souad Mansouri: Conceptualization, methodology, supervision, writing – review & editing. Pham Hong Thai: Conceptualization, methodology, resources, writing – review & editing. Bruce L. Innis: Conceptualization, methodology, writing – review & editing. Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: LF, CK, VY, SM and PHT are employed by BioNet. All other authors declare no competing interests.
Supplementary material The following are the Supplementary data to this article: Data availability Data will be made available on request. Acknowledgments We are grateful to all participants and clinical trial staff in Thailand for their significant contributions to the study. We also thank the members of the data and safety monitoring board (Dr. Bernard Fritzell [Chair], Dr. Sriluck Simasathien and Dr. Damrong Tresukosol) for providing safety oversight for all participants throughout this trial, and to BIOPHICS for data management and statistical analysis. Thank you to the PATH team for their invaluable contribution. Thanks to Professor Keith P. Klugman, Ajoke Sobanjo-ter Meulen and Janet White from the Bill & Melinda Gates Foundation for their guidance throughout this study. Many thanks to BioNet teams especially Ladda Suwitruengrit and to our Scientific Advisory Board for their advice on the manuscript. In addition, we thank Tricia Newell for writing the initial draft of the manuscript.
Vaccine. 2024 Jan 12; 42(2):383-395
Introduction Progressive fibrosing (PF) interstitial lung disease (ILD) describes a cohort of patients who develop disease progression despite optimal pharmacotherapy [ 1 ]. It is characterised by a combination of worsening respiratory symptoms, declining lung function and increasing extent of fibrosis on high-resolution computed tomography (HRCT). PF-ILD is observed in a wide range of fibrosing ILDs, including connective tissue disease (CTD)-associated ILD, hypersensitivity pneumonitis, idiopathic nonspecific interstitial pneumonia (NSIP) and unclassifiable ILD [ 1 ]. The clinicophenotypical and mechanistic overlap between idiopathic pulmonary fibrosis (IPF) and PF-ILD allows the potential for a common treatment pathway [ 1 ]. Estimates of the proportion of patients with fibrosing ILD who develop a progressive phenotype have varied historically, and have been reported to be between 13% and 53% [ 2 ]. Recently, a robust multicentre Canadian prospective registry study demonstrated progression in 39–59% of patients with fibrosing ILD despite conventional therapy, dependent on disease subtype [ 3 ]. The landmark INBUILD study demonstrated the efficacy of the antifibrotic tyrosine kinase inhibitor nintedanib in treating a wide range of fibrosing ILDs [ 4 ]. Nintedanib was shown to reduce the annual rate of lung function decline in patients with fibrosing ILD regardless of underlying subtype [ 4 – 6 ]. Nintedanib was approved by the National Institute for Health and Care Excellence (NICE) in November 2021 for use in PF-ILD. The NICE technology appraisal used the criteria established by the INBUILD study to define the prescribing criteria in England, Wales and Northern Ireland [ 4 ]. Nintedanib is licensed for use in patients with fibrosing ILD which has progressed despite conventional therapy. 
While the INBUILD study excluded the use of immunosuppression other than low-dose prednisolone at baseline, evidence for the concomitant use of nintedanib and immunosuppression originates from the Safety and Efficacy of Nintedanib in Systemic Sclerosis (SENSCIS) study of nintedanib in systemic sclerosis-associated ILD. The study included 279 (48.4%) out of 576 participants who were receiving mycophenolate mofetil (MMF) at baseline [ 6 ]. Nintedanib reduced the annual rate of forced vital capacity (FVC) decline in participants both receiving and not receiving MMF, with no difference in adverse events [ 7 ]. Data from a national early access programme in the United Kingdom (UK) demonstrated real-world efficacy of nintedanib in PF-ILD [ 8 ]. Despite the study cohort demonstrating a greater impairment in FVC and transfer capacity of the lung for carbon monoxide ( D LCO ) at baseline compared to INBUILD, nintedanib was still able to slow the rate of lung function decline. The aim of this UK-wide service evaluation was to document prescribing practices in the real world for the use of nintedanib for PF-ILD. We aimed to document the criteria used for PF-ILD diagnosis, the underlying disease subsets, the severity of disease at drug initiation and the concomitant use of immunosuppressive therapies.
Methods Service evaluation Antifibrotic medications are available through ILD specialist centres in England, and through general hospitals in Scotland, Wales and Northern Ireland. 26 antifibrotic prescribing centres in the UK were invited by email to participate in the service evaluation. Individual participating centres registered this project with their local healthcare trust service evaluation/audit departments, complying with Caldicott principles. No personal identifiable information was submitted for study. The study was not considered research by the UK Health Research Authority (HRA) decision tool and did not require HRA or ethical approval. Data collection Study centres completed a pre-defined survey for patients with a multidisciplinary team (MDT) decision to commence nintedanib for non-IPF PF-ILD between 17 November 2021 and 30 September 2022. A copy of the survey is presented in the supplementary material . Data collected included underlying diagnosis, diagnostic criteria, concomitant therapy, radiological pattern, FVC and D LCO at baseline and reason for drug discontinuation. Summary data were grouped into categories prior to submission by individual participating centres. Individual-level patient data were not collated centrally. Responses were collected by electronic survey (Jisc Online Surveys, UK) and collated by the coordinating centres (North Bristol NHS Trust and Royal Devon University Healthcare NHS Foundation Trust). Progression criteria The UK has adopted progression criteria defined by the INBUILD study [ 4 ]. 
All patients commenced on nintedanib for PF-ILD are required to fulfil one of the following criteria: a relative decline in FVC of ≥10% predicted over the previous 24 months; a relative decline in FVC % predicted of ≥5%, but <10%, with worsening respiratory symptoms; a relative decline in FVC % predicted of ≥5%, but <10%, with increasing fibrotic changes on HRCT compared with the previous 24 months; or worsening respiratory symptoms and increasing fibrotic changes on HRCT over the previous 24 months.
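The four INBUILD-derived criteria above can be expressed as a simple decision rule. This is a hypothetical sketch for illustration only; in practice eligibility is determined by the MDT, not by an algorithm:

```python
def meets_pf_ild_criteria(relative_fvc_decline_pct: float,
                          worsening_symptoms: bool,
                          increasing_fibrosis_on_hrct: bool) -> bool:
    """Return True if any one of the four progression criteria is met.

    `relative_fvc_decline_pct` is the relative decline in FVC % predicted
    over the previous 24 months.
    """
    # Criterion 1: relative FVC decline of >=10% predicted.
    if relative_fvc_decline_pct >= 10:
        return True
    # Criteria 2 and 3: decline >=5% but <10%, with worsening symptoms
    # or increasing fibrotic changes on HRCT.
    if 5 <= relative_fvc_decline_pct < 10 and (worsening_symptoms or increasing_fibrosis_on_hrct):
        return True
    # Criterion 4: worsening symptoms AND increasing fibrosis on HRCT.
    return worsening_symptoms and increasing_fibrosis_on_hrct

print(meets_pf_ild_criteria(12.0, False, False))  # True  (criterion 1)
print(meets_pf_ild_criteria(6.0, True, False))    # True  (criterion 2)
print(meets_pf_ild_criteria(2.0, True, True))     # True  (criterion 4)
print(meets_pf_ild_criteria(4.0, False, True))    # False (no criterion met)
```

Note that criterion 4 requires no lung function decline at all, which is why (as reported below) radiological progression plus symptoms can dominate real-world diagnoses when lung function testing is less available.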
Results 24 (92.3%) out of 26 centres responded to the survey; participating centres are listed in the supplementary material . The number of patients prescribed nintedanib across specialist centres differed during the assessment period (between seven and 129). In total, 1120 patients had an MDT recommendation to commence nintedanib for PF-ILD. Treatment by subtype Figure 1 demonstrates the subtypes of ILD for which nintedanib was prescribed. The most common subtypes were hypersensitivity pneumonitis (298 out of 1120, 26.6%), rheumatoid arthritis-related ILD (180 out of 1120, 16.0%), idiopathic NSIP (125 out of 1120, 11.2%) and unclassifiable ILD (100 out of 1120, 8.9%). 72 out of 1120 patients had an underlying diagnosis labelled as “other”. This included pleuroparenchymal fibroelastosis (PPFE) (19 out of 1120, 1.7%), other CTD-related ILD (11 out of 1120, 1.0%), fibrotic organising pneumonia (seven out of 1120, 0.6%), interstitial pneumonia with autoimmune features (six out of 1120, 0.5%) and asbestosis (five out of 1120, 0.4%). PF-ILD criteria and radiological pattern Figure 2 demonstrates the primary criteria by which PF-ILD was diagnosed in the cohort. 418 (37.3%) out of 1120 were diagnosed based on progressive disease identified on HRCT with progression of symptoms (criterion 4). 281 (25.1%) out of 1120 patients fulfilled more than one diagnostic criterion for nintedanib prescription. The MDT consensus radiological patterns reported were definite usual interstitial pneumonia (UIP) pattern (252 out of 1120, 22.5%), probable UIP (111 out of 1120, 9.9%), indeterminate for UIP (53 out of 1120, 4.7%), fibrotic hypersensitivity pneumonitis (262 out of 1120, 23.4%), fibrotic NSIP (261 out of 1120, 23.3%) and alternative pattern (181 out of 1120, 16.2%). Concomitant therapy Concomitant immunomodulatory therapy was commonly prescribed. 609 (54.4%) out of 1120 patients were receiving oral corticosteroids at the time of commencing nintedanib. 
MMF was the most commonly co-prescribed immunosuppressive therapy after corticosteroids (335 out of 1120, 29.9%). Table 1 demonstrates the range of immunomodulatory and immunosuppressive therapies that were intended to be continued alongside nintedanib. Immunosuppressive or immunomodulatory therapy was stopped prior to commencing nintedanib in 21 out of 1120 patients. The most commonly discontinued medications were MMF (eight out of 21) and oral corticosteroids (five out of 21). Pulmonary function at time of initiation Baseline pulmonary function tests at the time of the MDT decision to commence nintedanib for PF-ILD are demonstrated in figure 3 . The median percentage predicted FVC category was ≥60% pred to <70% pred and the median percentage predicted D LCO category was <40%. 181 out of 1120 participants had no value for percentage predicted D LCO , representing patients with missing data or who were unable to perform D LCO testing. Multidisciplinary team A range of healthcare professionals prescribed nintedanib across and within the 24 responding centres. The healthcare professionals prescribing nintedanib were reported to be respiratory physicians (23 out of 24), nurse specialists (14 out of 24), specialist pharmacists (12 out of 24) and rheumatologists (four out of 24). Drug initiation and discontinuation By 30 September 2022, 928 (82.9%) out of 1120 patients had commenced nintedanib; the remaining patients were awaiting initiation. The proportion of patients who had commenced treatment by 30 September 2022 varied by prescribing centre, ranging from 42% to 100%. 10 out of 24 participating centres had initiated all intended patients on nintedanib by 30 September 2022. At the time of service evaluation submission, 175 (18.9%) out of 928 participants had discontinued nintedanib. 
The most common reasons for nintedanib discontinuation were drug tolerability (83 out of 175, 47.4%), death (63 out of 175, 36.0%) and deranged liver function tests (16 out of 175, 9.1%).
Discussion The service evaluation has demonstrated widespread uptake of nintedanib for PF-ILD in the UK. The NICE technology appraisal guidance predicted that a total of 900 patients in the UK living with PF-ILD would be eligible for nintedanib [ 9 ]. This service evaluation has demonstrated that 1120 patients had an MDT decision to commence nintedanib for PF-ILD between November 2021 and September 2022, and 928 had initiated treatment by 30 September 2022. This suggests that the number of potentially eligible patients living with PF-ILD in the UK has been underestimated, which has important implications for service provision. The study highlights variation between the proportion of patients with an MDT decision to commence nintedanib and those who have initiated treatment. This variation could be explained by a combination of the size of the specialist centre, local referral patterns and the ability of individual centres to manage the increased demand for nintedanib initiation. Discrepancies in UK service provision have been highlighted by the 2021 Getting It Right First Time report [ 10 ]. This report revealed variation in waiting times for clinical assessment, differences in medical workforce provision and variation in specialist nursing and pharmacy services. The prescription of nintedanib for PF-ILD in the UK was not permitted until 90 days following the NICE recommendation, which reduced the period covered by this evaluation within which patients could commence treatment. Our data identified fewer patients commenced on nintedanib for lung function deterioration compared to the INBUILD study [ 4 ]. A relative decline in FVC of ≥10% pred was the most common criterion met to diagnose PF-ILD in the INBUILD treatment arm (160 out of 332, 48.2%) compared to only 162 (14.5%) out of 1120 in our study. 
Our study demonstrated a higher proportion of patients having progression defined by HRCT (418 out of 1120, 37.3%) than in the INBUILD treatment arm (n=62, 18.7%). FVC trajectories are known to be poorer in those with disease progression identified by HRCT, and our data demonstrate the real-world practice of using HRCT to determine disease progression prior to treatment initiation [ 11 ]. This increased reliance on HRCT could reflect the reduced availability of lung function testing during the coronavirus disease 2019 pandemic [ 12 ]. Regardless of the rationale, the increased use of HRCT to diagnose progression emphasises the requirement for specialist thoracic radiologist review in the context of an MDT. Accurate quantification of disease progression, including the identification of subtle changes in disease extent and morphology, may be aided by the use of artificial intelligence assessment of serial HRCT [ 13 ]. PF-ILD encompasses a broad range of underlying ILD subtypes. This service evaluation highlights differences between our real-world patient cohort and that examined by the INBUILD study. A higher proportion of autoimmune ILDs (including rheumatoid arthritis-associated ILD and CTD-ILD) was seen in this service evaluation compared to the INBUILD nintedanib arm (377 (33.7%) out of 1120 and 82 (24.7%) out of 332, respectively), perhaps reflecting the exclusion of patients receiving concomitant immunosuppression in INBUILD [ 4 ]. Idiopathic NSIP and unclassifiable ILDs were underrepresented in the real-world study compared to INBUILD: 125 (11.2%) out of 1120 versus 64 (19.3%) out of 332 and 100 (8.9%) out of 1120 versus 64 (19.3%) out of 332, respectively. However, there were significant similarities; for example, hypersensitivity pneumonitis was the most common underlying diagnosis in both the INBUILD nintedanib arm (84 out of 332, 25.3%) and our service evaluation (298 out of 1120, 26.6%). 
These data have highlighted the use of nintedanib for PPFE (19 out of 1120, 1.7%). Both PPFE and nintedanib use are associated with weight loss, and the rate of nintedanib adverse events in this population is unknown [ 14 , 15 ]. Evidence for the use of nintedanib in PPFE is limited to conflicting retrospective reports, highlighting the need for prospective, controlled studies [ 16 , 17 ]. While the INBUILD study did include patients with PPFE, they represented a small proportion of the overall cohort and were not analysed as a separate subgroup [ 18 ]. Patients being commenced on nintedanib for PPFE will require close monitoring and follow-up to manage potentially burdensome adverse events. Nintedanib has a broad range of antifibrotic, anti-inflammatory and vascular remodelling effects [ 19 ]. Immunosuppressive therapies have a myriad of mechanisms of action depending on the drug in question, different from those of nintedanib. Combined immunosuppressive and antifibrotic treatment may therefore offer synergistic effects in reducing the progression of ILD. Despite this promise, there are limited trial or real-world data concerning the efficacy of co-prescribing these drugs [ 8 ]. Our data have highlighted the common use of nintedanib combined with immunosuppressive or immunomodulatory therapies ( table 1 ). Oral corticosteroids (609 out of 1120, 54.4%), MMF (335 out of 1120, 29.9%), hydroxychloroquine (76 out of 1120, 6.8%) and methotrexate (73 out of 1120, 6.5%) were all commonly used alongside nintedanib. The INBUILD trial did not include participants taking concomitant immunosuppressive therapy, except for low-dose corticosteroids; however, 16% of participants were initiated on therapy other than nintedanib after 6 months [ 4 ]. 
In the treatment arm of the SENSCIS trial of nintedanib for systemic sclerosis, 48% (139 out of 288) of patients were receiving MMF, with a suggested beneficial effect of MMF on lung function decline and no increase in adverse events when used in combination with nintedanib. However, the seminal PANTHER study of immunosuppression in IPF demonstrated the potential harmful effect of immunosuppression in patients with IPF. Consequently, the use of immunosuppression in the context of a progressive fibrosing phenotype with UIP-pattern fibrosis requires further evidence to ensure the greatest patient benefit and minimise potential harm [ 20 ]. Within the SENSCIS trial, adverse event rates in the nintedanib and placebo arms were similar between the subgroups receiving and not receiving MMF [ 7 ]. The study reported that 15 (10.8%) out of 139 participants in the nintedanib group who were receiving MMF at baseline discontinued nintedanib treatment. By comparison, we demonstrated an overall discontinuation rate for reasons other than death of 112 (12.1%) out of 928 patients. Limited real-world single-centre tolerability data demonstrated no significant difference in nintedanib discontinuation rates, tolerability or side-effect profile between cohorts receiving combined antifibrotic and immunosuppressive therapy and those receiving antifibrotic monotherapy [ 21 ]. The overall discontinuation rate in our cohort (including death) was 175 (18.9%) out of 928. Real-world UK data for the use of nintedanib in IPF demonstrate varying overall discontinuation rates, ranging from 26% (32 out of 119) to 30% (15 out of 49) [ 22 , 23 ]. The discrepancies between these data and those reported for our cohort may reflect several factors, including the shorter follow-up period in our service evaluation, improvements in adverse-effect management and differences in patient demographics and disease severity. 
It is encouraging that our real-world data suggest an acceptable tolerability profile of nintedanib in PF-ILD. Importantly, 63 (6.8%) out of 928 patients had death recorded as the reason for nintedanib discontinuation. The limitations of the data collected mean that we are unable to identify the duration of treatment with nintedanib prior to death. However, the study does identify a high proportion of patients commencing nintedanib with significant lung function impairment: 231 (20.6%) out of 1120 had an FVC <50% pred at the time of MDT decision to commence nintedanib, and 436 (38.9%) out of 1120 had a DLCO <40% pred. This could reflect the recent approval of nintedanib for patients who previously had no antifibrotic treatment options and therefore had more advanced disease at the time of initiation. While the unpredictable nature of ILD progression makes prognostication difficult, life expectancy and severity of lung function impairment should be considered prior to initiation of nintedanib to ensure that the beneficial effect on lung function preservation remains greater than the symptom burden. In patients with high symptom burden and poor prognosis, a supportive management approach may be more appropriate.

Limitations

There are several limitations to the reported service evaluation. Firstly, the service evaluation methodology was adopted to enable rapid data collection; as such, the study did not record individual-level patient data. This limited the interpretation of individual patient prescribing patterns. In addition, the doses of oral corticosteroids and immunosuppression were not recorded. The data did not capture patients who were, for example, prescribed oral corticosteroids and MMF with nintedanib. We are unable to elucidate whether those patients who discontinued nintedanib treatment were those on concomitant therapy or those with greater impairment in lung function. 
In addition, the specific drug tolerability reasons were not recorded as part of the study. The NICE guidance is only applicable to NHS England, Wales and Northern Ireland; nintedanib was available for PF-ILD in NHS Scotland prior to November 2021. Our service evaluation may not have captured cohorts of patients commenced on nintedanib prior to this date. Furthermore, patients were able to receive nintedanib on a named-patient basis prior to November 2021, as reported by Raman et al. [ 8 ]. These patients were purposely excluded from the current study, but represent an important cohort of patients with PF-ILD who may benefit from nintedanib. There will also be a cohort of patients who meet PF-ILD diagnostic criteria, but decline or have contraindications to antifibrotic treatment. This cohort was not captured by the current study. Despite these limitations, this study presents real-world data for the majority of UK prescribing centres and provides important practice-based evidence for the management of PF-ILD.

Conclusions

Nintedanib is widely prescribed in UK practice for the treatment of PF-ILD. Our service evaluation has demonstrated its use in a variety of underlying diagnoses, for a broad range of disease severity and commonly with concomitant immunosuppressive therapy. The service evaluation has highlighted variances in prescribing practices and important distinctions between real-world and clinical trial practice. Furthermore, we have emphasised gaps in the evidence base, including the use of concomitant immunosuppression and antifibrotic therapy, the use of nintedanib in patients with severely impaired lung function and the increased use of HRCT to identify disease progression.
Background

Nintedanib slows progression of lung function decline in patients with progressive fibrosing (PF) interstitial lung disease (ILD) and was recommended for this indication within the United Kingdom (UK) National Health Service in Scotland in June 2021 and in England, Wales and Northern Ireland in November 2021. To date, there has been no national evaluation of the use of nintedanib for PF-ILD in a real-world setting.

Methods

26 UK centres were invited to take part in a national service evaluation between 17 November 2021 and 30 September 2022. Summary data regarding underlying diagnosis, pulmonary function tests, diagnostic criteria, radiological appearance, concurrent immunosuppressive therapy and drug tolerability were collected via electronic survey.

Results

24 UK prescribing centres responded to the service evaluation invitation. Between 17 November 2021 and 30 September 2022, 1120 patients received a multidisciplinary team recommendation to commence nintedanib for PF-ILD. The most common underlying diagnoses were hypersensitivity pneumonitis (298 out of 1120, 26.6%), connective tissue disease associated ILD (197 out of 1120, 17.6%), rheumatoid arthritis associated ILD (180 out of 1120, 16.0%), idiopathic nonspecific interstitial pneumonia (125 out of 1120, 11.1%) and unclassifiable ILD (100 out of 1120, 8.9%). Of these, 54.4% (609 out of 1120) were receiving concomitant corticosteroids, 355 (31.7%) out of 1120 were receiving concomitant mycophenolate mofetil and 340 (30.3%) out of 1120 were receiving another immunosuppressive/modulatory therapy. Radiological progression of ILD combined with worsening respiratory symptoms was the most common reason for the diagnosis of PF-ILD.

Conclusion

We have demonstrated the use of nintedanib for the treatment of PF-ILD across a broad range of underlying conditions. Nintedanib is frequently co-prescribed alongside immunosuppressive and immunomodulatory therapy. 
The use of nintedanib for the treatment of PF-ILD has demonstrated acceptable tolerability in a real-world setting.

Tweetable abstract

Nintedanib is used for the treatment of PF-ILD in the UK for a broad range of underlying diagnoses and with variation in disease severity at the point of initiation. Real-world data suggest an acceptable tolerability profile of nintedanib in PF-ILD. https://bit.ly/3Mdmtri
Supplementary material
ERJ Open Res. 2024 Jan 15; 10(1):00529-2023
Introduction

Abdominal pain is a common chief concern in pediatric medicine, with a broad differential including constipation with overflow stool and infectious gastroenteritis when fever and emesis are also present. In this report, we present a case of abdominal pain originally misdiagnosed both clinically and on imaging as simple constipation with stool impaction. After the patient failed to improve and a further history of preceding consumption of a large quantity of shelled watermelon seeds was obtained, the differential was broadened and further workup was pursued, leading to the correct diagnosis. Rectal seed bezoars, stercoral colitis, and sigmoid volvulus are three rare pathologies that can each cause pediatric abdominal pain, all of which are uniquely seen together in this single pediatric case.
Discussion

The leading diagnosis for this patient is that the large volume of consumed seeds led to the development of a rectal seed bezoar that was misidentified as a fecalith on imaging; this seed bezoar caused irritation, resulting in stercoral colitis with rectal inflammation. He was additionally found to have sigmoid volvulus during one of his rectal disimpactions, which was also likely secondary to his rectal seed bezoar. Stercoral colitis is rarely documented outside of adults, with few case reports in pediatrics, and there are no cases of seed bezoar causing volvulus previously published in the literature. Bezoars are retained, indigestible material that can accumulate within the gastrointestinal tract. Bezoars are classified by composition, with seed bezoars being a subset of the most common type of bezoar, phytobezoars (fruit and vegetable material) [ 1 ]. While gastric bezoars are more common overall, seed bezoars are most often found in the rectum [ 2 ]. Compared to gastric bezoars, rectal seed bezoars remain infrequently documented in the literature, particularly in pediatric patients [ 3 - 5 ]. In the cases described, watermelon seeds, followed by prickly pear and sunflower seeds, are most often reported [ 3 ]. It is speculated that the size of the seeds allows passage through the stomach and small intestine, leading to accumulation in the colon [ 2 ]. For this reason, children with rectal seed bezoars generally have no underlying predisposing conditions [ 3 ], which contrasts with the high prevalence of underlying gastrointestinal dysmotility noted in those with gastric bezoars [ 6 ]. Similar to our patient, the most common presenting symptoms of rectal seed bezoars are constipation followed by abdominal/rectal pain [ 2 ]. The typical treatment includes fecal disimpaction, either manually or surgically, with less success with chemical dissolution (e.g., Coca-Cola) compared to fiber bezoars [ 7 ]. 
When reported, cases of rectal seed bezoars typically focus on diagnosis and treatment, with few documented complications. The patient’s hospital course was notable for complications of both stercoral colitis and sigmoid volvulus secondary to his large rectal seed bezoar. Stercoral colitis is an inflammatory colitis thought to be secondary to pressure necrosis from a fecal mass leading to hypoperfusion. The prevalence is highest in elderly adults, with only a few case reports published in the pediatric literature [ 8 - 10 ]. It is most often reported as a complication of constipation, with no prior case reports of stercoral colitis secondary to a seed bezoar. Given case reports of rectal ulceration in adults with seed bezoars, the condition may be underdiagnosed in children owing to its rarity in pediatrics. Given the non-specific symptoms of abdominal pain, distension, and other gastrointestinal symptoms, a high index of suspicion is needed to make the diagnosis. CT imaging is often required for diagnosis, with the most common imaging findings in adults including large stool burden; bowel wall thickening, inflammation, and mucosal hyperenhancement; and fat stranding [ 11 ]. Prompt recognition of this condition and early disimpaction are important given the risks of peritonitis, bowel perforation, and sepsis if left untreated. Similar to stercoral colitis, sigmoid volvulus is rare in the pediatric literature, with fewer than 100 cases reported over the last 50 years [ 12 , 13 ]. Sigmoid volvulus occurs due to the twisting of a redundant portion of the sigmoid colon on its mesentery, which may lead to obstruction and colonic ischemia [ 14 ]. There is some discussion that underlying constipation (i.e., Hirschsprung disease), an elongated mesentery, neurological disorders, or prior abdominal surgery may predispose a patient to sigmoid volvulus. However, there are no reported cases associated with rectal seed bezoars [ 13 , 15 , 16 ]. 
One pediatric case report describes a jejunal trichobezoar resulting in a small bowel volvulus, and it is conceivable that the mechanism they propose, where the large weight of the bezoar displaced bowel loops and initiated rotation of the mesentery, was also present in this case of sigmoid volvulus [ 17 ]. Unlike in adult patients, there are no current consensus guidelines for the management of pediatric patients with sigmoid volvulus due to the low incidence [ 13 , 14 ]. Therefore, a high index of suspicion is necessary to promptly recognize and reduce the volvulus either endoscopically or surgically to prevent further complications and minimize morbidity and mortality [ 12 ].
Conclusions

Abdominal pain is a common chief concern in pediatrics, and it is important that clinicians consider a broad differential. Rare diagnoses, such as rectal seed bezoars, are best uncovered by obtaining a comprehensive history of the present illness and maintaining a high index of suspicion. This unique case discusses previously unreported pediatric complications of rectal seed bezoars, including stercoral colitis and sigmoid volvulus, and addresses the management of this rare presentation.
This case describes a seven-year-old healthy boy who presented with seven days of abdominal pain, small-volume liquid stools, tenesmus, fevers, and dehydration after consuming an unknown amount of shelled watermelon seeds. He was ultimately found to have a large rectal seed bezoar that caused irritation, resulting in stercoral colitis with rectal inflammation. He was additionally found to have sigmoid volvulus during one of his disimpactions, which was also likely secondary to his rectal seed bezoar. This case uniquely highlights the importance of maintaining an index of suspicion for rectal seed bezoars, discusses previously unreported pediatric complications of rectal seed bezoars, including stercoral colitis and sigmoid volvulus, and addresses the management of this rare presentation.
Case presentation

A seven-year-old healthy boy presented to the emergency department with seven days of tenesmus, abdominal pain, and dehydration. Stools were described as brown, small-volume, liquid, and “foul-smelling.” He endorsed waking to stool overnight, the sensation of incomplete emptying when stooling, and stooling every one to two hours. Additionally, he reported tenesmus, fecal incontinence, rectal pain, and generalized abdominal pain. The family reported a small amount of bright red blood in his stool on the day of presentation and that his anus “looked wider than previously,” but otherwise denied hematochezia and melena. He did not have a history of constipation. He ate an unknown amount of shelled watermelon seeds in the weeks leading up to symptom onset, with the family noting intermittent full shells in his stools. The family otherwise endorsed a normal and unrestricted diet history. Three weeks prior to presentation, he was treated with a course of amoxicillin for streptococcal pharyngitis while vacationing in Mexico. There had been no emesis, weight loss, or fevers. On arrival, he was febrile to 38.2°C with a pulse rate of 117 beats/minute, blood pressure of 111/69 mmHg, and respiratory rate of 28 breaths/minute. The physical exam was notable for generalized abdominal tenderness and fullness, without abdominal masses or peritoneal signs, and a large anal fissure (Figure 1 ). His labs showed a leukocytosis of 14,300 WBCs per μL (14.3 × 10⁹/L), CRP 18.2 mg/dL (182 mg/L), and ESR 33 mm/h, with an otherwise normal complete blood count and comprehensive metabolic panel. A multiplex PCR panel for gastrointestinal pathogens and stool microscopy for ova and parasites were negative. Fecal calprotectin testing was elevated (2,314 mg/kg). An abdominal X-ray showed a non-obstructive bowel gas pattern with a large amount of formed stool in the rectal vault (Figure 2 ). Pediatric surgery was consulted and medical management was recommended for his anal fissure. 
He was admitted to the general Pediatric Hospital Medicine service for bowel cleanout, intravenous fluid hydration, and pain management. Over the next 24 hours, he developed progressively worsening abdominal pain. A CT abdomen/pelvis was obtained on day two and showed moderate fecal loading and a large rectal fecalith with mild circumferential rectal wall thickening and enhancement, with perirectal and presacral fat stranding (Figure 3 ). Given his fever, worsening pain and exam, and imaging findings, the patient was started on ceftriaxone and metronidazole, and surgery was re-engaged. He underwent operative fecal disimpaction on day three, where a large seed bezoar (Figure 4 ) was found and removed. Contrast enema post-op showed concern for residual seeds; thus, he was started on nasogastric GoLytely cleanout on day three. Despite passing seeds with GoLytely, he continued to have significant abdominal and rectal pain that was not relieved with intravenous opiates and topical lidocaine and nifedipine applied to his anal fissure. He underwent a second sedated disimpaction on day four, at which point a sigmoid volvulus was noted and de-torsed endoscopically. During the case, he was noted to have moderate inflammation and ulceration in the rectum (Figure 5 ). Antibiotics were discontinued on day four due to improving clinical status and a lower concern for infectious complications. Finally, he underwent a third fecal disimpaction and was discharged on hospital day seven. Given the atypical presentation and significant rectal inflammation on endoscopy, underlying pathology such as inflammatory bowel disease was considered as a potential predisposing factor. Although prior literature notes healthy patients without risk factors for seed bezoars (besides occasional constipation), underlying colonic inflammation could predispose to seeds getting stuck and the development of a seed bezoar. 
Consideration of repeat outpatient endoscopy, colonoscopy, and magnetic resonance enterography three months after the acute episode was discussed to ensure his inflammation had resolved. Given that the patient had no further gastrointestinal symptoms, experienced resolution of his anal fissure within weeks, and continued to gain weight, further work-up was deferred.
Thank you to Mike Watson, M.D. for his review of this manuscript.
Cureus.; 15(12):e50625
Introduction

Hypertension, the most common medical problem during pregnancy, complicates as many as 10-15% of pregnancies worldwide. Hypertensive disease of pregnancy (HDP) is a leading cause of maternal and fetal morbidity and mortality, accounting for an estimated 14% of all maternal deaths globally. HDP remains among the major causes of maternal mortality worldwide, even though maternal mortality is far lower in high-income countries (HICs) than in low- and middle-income countries (LMICs). Between 2009 and 2015, the rate of HDP-related maternal fatalities ranged from 0.08 to 0.42 per 100,000 live births, accounting for 2.8% of maternal deaths in the United Kingdom and Ireland (2011-2013). Projections show that HDP causes 7.4% of maternal fatalities in the United States and accounts for one-fifth of prenatal hospitalizations and two-thirds of referrals to day assessment facilities. In France, HDP accounts for one-fourth of all obstetric ICU hospitalizations. In LMICs, however, HDP is associated with 10-15% of direct maternal mortality [ 1 ]. It is estimated that 116.4 of every 100,000 women of childbearing age live with HDP. The highest regional mean HDP prevalence was in Southeast Asia (136.8 per 100,000 women of reproductive age) and the Middle East (121.4 per 100,000 women of childbearing age); in the Eastern Mediterranean and Southeast Asia, the mean prevalence among women of reproductive age exceeds 0.1%. Africa had the highest overall HDP incidence of any continent, with a mean frequency of 334.9 per 100,000 reproductive-age women, while the Western Pacific region had the lowest, with 16.4 cases per 100,000 women of childbearing age. There is thus a significant gap in the HDP illness burden between HICs and LMICs, and inequity in the average prevalence of HDP is seen worldwide [ 1 ]. 
In India, there are regional variations in the prevalence of hypertensive diseases during pregnancy. The overall pooled prevalence of HDP is estimated at one in 11 women, or 11% (95% CI, 5%-17%). Despite various government programs, there is still a high prevalence of hypertension, which calls for stakeholders and healthcare professionals to focus on providing therapeutic and preventive care. The best solution is to concentrate more on the early detection of pregnancy-related hypertension and to guarantee its universal application, so that proper care can be delivered at the proper time and location to reduce maternal and fetal morbidity and mortality [ 2 ]. The National High Blood Pressure Education Program Working Group on High Blood Pressure in Pregnancy recommends classifying hypertensive illnesses in pregnancy into four categories: (1) gestational hypertension (including transient hypertension of pregnancy); being more specific, this term has replaced the more generic “pregnancy-induced hypertension” (PIH); (2) preeclampsia-eclampsia; (3) preeclampsia (PE) superimposed on chronic hypertension; and (4) chronic hypertension [ 3 , 4 ]. PE is the most frequent hypertensive disorder during pregnancy, and it may have devastating effects on the expectant mother and the unborn child; this subject warrants much investigation. After 20 weeks of pregnancy, hypertension and proteinuria signal the development of PE. PE affects 2-8% of pregnancies and is a significant source of neonatal and maternal morbidity and death. Pregnancy leads to temporary physiological adaptations that can have extensive effects [ 5 ]. Despite extensive interest in the disease and its impact on maternal and fetal health, no effective treatment other than delivery of the placenta has been developed. 
Recent evidence suggests that PE can be further subdivided into early and late PE, the former being associated with a higher incidence of fetal growth restriction and both short- and long-term maternal mortality and morbidity. According to an update published in 2014 by the International Society for the Study of Hypertension in Pregnancy (ISSHP), PE is defined as the de novo emergence of hypertension after the 20th week of pregnancy in addition to signs of maternal organ failure, which include the following: new-onset proteinuria of greater than 300 mg per day or other signs of renal insufficiency; hematological issues such as thrombocytopenia; liver dysfunction; neurological issues such as visual disturbance; and/or signs of uteroplacental issues such as fetal growth restriction [ 6 ]. It is thought that in women with PE, a complex interaction between placental factors, maternal constitutional factors, and pregnancy-specific vascular and immunological adaptation occurs in the first trimester of pregnancy. The clinical manifestations of PE, such as high blood pressure and proteinuria, are only terminal features of this cascade of events. Therefore, early recognition of women at risk and timely intervention ahead of clinical onset might enable tailored pregnancy care and better pregnancy outcomes. A multisystem pregnancy disease, PE is characterized by varying degrees of placental malperfusion and the release of soluble substances into the bloodstream. These elements harm the vascular endothelium of the mother, which results in hypertension and organ damage. Fetal growth restriction and stillbirth can result from placental illness [ 7 , 8 ]. By contrast, late-onset PE is linked to milder maternal illness and a lower risk of fetal involvement, with delivery occurring at or after 34 weeks, and generally results in favorable perinatal outcomes. When PE is detected early, clinical care and appropriate monitoring can begin sooner, before complications develop. 
Prophylactic treatments for PE started in mid-pregnancy have not proved effective in clinical trials. The purpose of introducing specific measures for better maternal and infant health is to reduce complications and improve maternal-fetal outcomes [ 8 , 9 ]. A reliable diagnostic test for this condition is crucial for reducing death rates. To date, no single test has shown enough predictive value for PE to be used in clinical practice; such tests are most helpful when used in conjunction with other variables. Due to the heterogeneous character of PE, it may be easier to build appropriate prediction algorithms if many independent biomarkers are used. Multiparametric techniques, which take a large number of variables into account at once, are the most successful in the prediction of PE. Pregnancies at high risk for early-onset PE may be identified by maternal risk indicators such as the uterine artery pulsatility index (UtA-PI), mean arterial pressure (MAP), and maternal serum pregnancy-associated plasma protein-A (PAPP-A) [ 10 , 11 ]. According to preliminary data from the National Eclampsia Registry (NER) of the Federation of Obstetric and Gynaecological Societies of India (FOGSI) and the International Federation of Gynecology and Obstetrics (FIGO), both HDP and eclampsia are on the rise, especially among cases handled in less affluent settings by medical personnel who lack appropriate education and experience. PE was reported to have a 10.3% incidence rate (NER 2013). More than half of all instances of eclampsia occur during pregnancy, and another 13% occur shortly after delivery. Eclampsia is responsible for a 4-6% fatality rate in pregnant women. There is an unmet need in LMICs for the recognition and management of HDP and its complications because of myths and misconceptions about pregnancy, difficulties with transportation, low socioeconomic status, the need for a multidisciplinary approach, a lack of accurate prediction methods, and a scarcity of high dependency units (HDUs) [ 12 ]. 
Hypertension in pregnancy is best managed by a multidisciplinary team that includes obstetricians, maternal-fetal medicine specialists, neonatologists, nephrologists, hypertension specialists, cardiologists, anesthesiologists, pharmacists, nurses, and midwives, all of whom work together to ensure the best possible outcomes for both mother and child before, during, and after pregnancy. When it comes to improving health outcomes, preventive approaches such as group prenatal care, evaluations of economic vulnerability and chronic stress, medication modifications, dietary advice, lifestyle counseling, and educational materials have been demonstrated to be effective [ 13 ]. By using early warning scores, hypertension bundles, and toolkits, nurses may identify maternal compromise sooner upon hospital admission, which has been linked to a decrease in maternal mortality due to hypertensive conditions [ 14 ]. Since PE is now recognized as a distinct risk factor for cardiovascular disease (CVD) by the American Heart Association, it has been included in the algorithms used to determine a woman’s future cardiovascular risk score [ 15 - 17 ]. PE, which raises blood pressure independently or in addition to chronic vascular illness, complicates around 5-7% of all pregnancies and is one of the primary causes of maternal and fetal morbidity. While premature birth is linked with acute neonatal morbidity, PE is an early predictor for the development of cardiovascular and other metabolic problems in the future [ 18 , 19 ]. Hypertensive diseases during pregnancy continue to be one of the least-researched and least-funded areas, as measured by disability-adjusted life years (DALYs). In turn, this leads to debates over how to best categorize, diagnose, and treat hypertension problems in pregnant women. Gestational hypertension and PE are the most prevalent forms of pregnancy-related hypertension. 
Glossary of hypertension in pregnancy-related terms

Gestational Hypertension

Systolic blood pressure (SBP) ≥140 mmHg and/or diastolic blood pressure (DBP) ≥90 mmHg on at least two occasions at least four hours apart after 20 weeks of gestation in a previously normotensive patient, without proteinuria or evidence of end-organ damage [ 20 ].

Preeclampsia

SBP ≥140 mmHg and/or DBP ≥90 mmHg on at least two occasions at least four hours apart after 20 weeks of gestation in a previously normotensive patient, with proteinuria and/or evidence of end-organ damage. Proteinuria is defined as ≥0.3 g in a 24-hour urine specimen, a protein/creatinine ratio ≥0.3 in a random urine specimen, or dipstick ≥2+. Evidence of end-organ damage includes platelet count <100,000/microL; serum creatinine >1.1 mg/dL or doubling of the creatinine concentration; a rise in liver transaminases to at least twice the upper limit of normal; pulmonary edema; persistent headache; and visual disturbances [ 20 ].

Preeclampsia With Severe Features

PE with severe features is considered if a patient with PE exhibits any of the following: SBP ≥160 mmHg and/or DBP ≥110 mmHg on two occasions at least four hours apart while the patient is on bed rest; new-onset cerebral or visual disturbance, such as photopsia, scotomata, cortical blindness, retinal vasospasm, and/or severe headache; serum transaminases more than twice the upper limit of the normal range and/or severe persistent right upper quadrant or epigastric pain; serum creatinine >1.1 mg/dL or doubling of the creatinine concentration; or platelet count <100,000/microL.

Eclampsia

A generalized tonic-clonic seizure in a pregnant woman with PE, once alternative causes of the seizure have been eliminated, establishes the diagnosis of eclampsia.

Chronic (Pre-existing) Hypertension

Hypertension diagnosed or present before pregnancy, with SBP ≥140 mmHg and/or DBP ≥90 mmHg on at least two occasions before 20 weeks of gestation, taken four hours apart. 
Chronic Hypertension With Superimposed Preeclampsia

Any of the following findings in a patient with chronic hypertension: a sudden increase in blood pressure that was previously well-controlled, or an escalation of antihypertensive therapy to control blood pressure; new onset of proteinuria, or a sudden increase in proteinuria in a patient with known proteinuria before or early in pregnancy; significant new end-organ dysfunction consistent with PE after 20 weeks of gestation or postpartum [ 20 - 22 ].

Pathophysiology

Pathophysiological explanations for PE may be found in interactions between the mother, the developing baby, and the placenta. Hypertension and other manifestations of the disease, including hematologic, cardiac, pulmonary, renal, and hepatic dysfunction, may arise from abnormalities in the development of the placental vasculature early in pregnancy. These abnormalities lead to relative placental underperfusion/hypoxia/ischemia, which may trigger the release of antiangiogenic factors into the maternal circulation [ 23 ].

Abnormal development of the placenta

The placenta plays an important part in the pathophysiology of PE. The fetus is not required for the development of PE, but the placental tissue is [ 24 - 26 ]. PE usually resolves within a few days to a few weeks after the placenta is delivered, although postpartum hypertension and PE have been reported to develop as late as eight weeks after birth. Factors that may contribute to this phenomenon include post-delivery complement activation, delayed clearance of antiangiogenic agents, and/or the mobilization of extracellular fluid into the intravascular compartment [ 27 ]. Researchers have studied human placentas at different stages of pregnancy, gaining insight into normal uteroplacental circulation that is potentially important to PE. Hypertensive diseases during pregnancy, as well as fetal growth limitation, have been linked to problems in spiral artery remodeling and trophoblast invasion [ 28 ]. 
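As an illustrative aid only (not a clinical tool), the threshold criteria in the glossary above can be sketched as a simple rule-based classifier. The function name, argument names, and the single-reading simplification are assumptions made for illustration; actual diagnosis requires at least two measurements taken four or more hours apart and full clinical assessment.

```python
def classify_hdp(sbp, dbp, gestational_weeks,
                 proteinuria=False, end_organ_damage=False,
                 severe_features=False, chronic_hypertension=False):
    """Return a label per the simplified glossary thresholds (illustrative only).

    sbp/dbp are a single blood pressure reading in mmHg; in practice the
    criteria require two readings at least four hours apart.
    """
    hypertensive = sbp >= 140 or dbp >= 90
    severe_bp = sbp >= 160 or dbp >= 110

    if chronic_hypertension:
        # Known hypertension predating pregnancy; superimposed PE requires
        # new proteinuria or end-organ dysfunction after 20 weeks.
        if gestational_weeks >= 20 and (proteinuria or end_organ_damage):
            return "chronic hypertension with superimposed preeclampsia"
        return "chronic hypertension"
    if not hypertensive:
        return "normotensive"
    if gestational_weeks < 20:
        # Hypertension before 20 weeks of gestation is chronic by definition.
        return "chronic (pre-existing) hypertension"
    if proteinuria or end_organ_damage:
        if severe_bp or severe_features:
            return "preeclampsia with severe features"
        return "preeclampsia"
    # New-onset hypertension >=20 weeks without proteinuria/end-organ damage.
    return "gestational hypertension"
```

For example, a reading of 150/95 mmHg at 30 weeks without proteinuria maps to gestational hypertension, while the same reading with proteinuria maps to preeclampsia.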
Abnormal remodeling of spiral arteries

Blood supply to the developing baby and placenta normally originates from the maternal spiral arteries, the terminal branches of the uterine artery, via the invasion of placental cytotrophoblast cells through the endothelium and the highly muscular tunica media. The arteries supplying the placenta thereby change from microscopic muscular arterioles into high-capacitance, low-resistance channels [ 29 ]. Remodeling of the spiral arteries begins in the late first trimester and is complete by 18-20 weeks of gestation; however, the exact gestational age at which the invasion of the arteries concludes is uncertain. In PE, cytotrophoblast cells infiltrate only the decidual portion of the spiral arteries and fail to reach the myometrial segment. Because the musculoelastic wall is not replaced by fibrinoid material, the spiral arteries fail to develop into wide, convoluted arterial channels, leading to placental hypoperfusion and moderately hypoxic trophoblast tissue. Fetal mortality beyond 20 weeks of gestation, abruptio placentae, PE with or without intrauterine growth restriction, intrauterine growth restriction without maternal hypertension, preterm labor, and pre-labor rupture of membranes are all potential complications of a deep placentation abnormality [ 30 ].

Defective trophoblast differentiation

Improper trophoblast invasion of the spiral arteries has been linked to a defect in trophoblast differentiation. During trophoblast development, at the time of endothelial differentiation, the expression of several distinct classes of molecules changes [ 31 , 32 ]. These include the HLA-G molecule, part of the major histocompatibility complex class Ib, along with cytokines, adhesion molecules, extracellular matrix molecules, metalloproteinases, and other molecules. 
Invading trophoblasts undergo pseudo-vasculogenesis when their adhesion molecule expression changes during normal differentiation from epithelial cell-specific molecules like integrin alpha 6/beta 1, integrin alpha v/beta 5, and E-cadherin to endothelial cell-specific molecules like integrin alpha 1/beta 1, integrin alpha v/beta 3, and VE-cadherin. Placental hypoperfusion and ischemia The following evidence points to a connection between placental hypoperfusion, aberrant placental development, and PE. Abnormal placentation and PE are greatly increased in the presence of vascular insufficiency disorders like hypertension, diabetes, systemic lupus erythematosus, renal disease, and thrombophilias [ 33 ]. Relative ischemia may occur when placental mass is increased without a corresponding increase in placental blood flow, such as in hydrops fetalis, hydatidiform mole, diabetes mellitus, or multiple pregnancies [ 33 ]. There may be a correlation between increased incidence and high altitude (>3100 meters) in certain circumstances. Hypoperfusion caused by abnormal placental development may be life-threatening. Because the defective uterine vasculature cannot support the expected rise in blood flow to the baby and placenta, hypoperfusion worsens as gestation progresses. Late placental changes related to ischemia include atherosclerosis, fibrinoid necrosis, thrombosis, sclerotic constriction of arterioles, and placental infarction [ 34 ]. Decidual pathology Failed decidualization has been studied as a cause of downregulated cytotrophoblast invasion. Microarray studies of chorionic villus samples have also revealed a signature of impaired decidualization. Interestingly, decidual cells from women with PE also overexpress sFLT1, suggesting that inadequate suppression of anti-angiogenic factors during the implantation period may lead to shallow implantation [ 35 ]. 
Immunological factors Research into immunologic variables as potential contributors to aberrant placental development [ 36 ] was motivated by the observation that previous exposure to paternal and fetal antigens seems to protect against PE. Women who are nulliparous, who switch partners between pregnancies, who have long interpregnancy intervals, who use barrier contraception frequently, or who conceive via intracytoplasmic sperm injection are at increased risk for PE, as they are less likely to have been exposed to paternal antigens. According to a meta-analysis, women who conceive with the help of an egg donor are more than twice as likely to have PE as those who conceive with any other kind of assisted reproduction. The meta-analysis also indicated that women who undergo artificial conception are four times more likely to develop PE than those who conceive naturally, lending credence to the idea that immunologic intolerance between the mother and fetus contributes to the development of PE. Immunological alterations comparable to those observed in organ rejection models are present in preeclamptic women. The human leukocyte antigen (HLA) class I antigens HLA-C, HLA-E, and HLA-G are expressed at an unusually high frequency by the extravillous trophoblast (EVT) cells. In the maternal decidua, the EVT cells are found in close proximity to natural killer (NK) cells expressing a range of receptors (CD94, killer cell immunoglobulin-like receptor (KIR), and immunoglobulin-like transcript (ILT)) known to detect class I molecules. The communication between NK cells and EVT cells has been shown to control placental implantation. Patients with PE tend to have lower levels of regulatory T cells (Tregs) in both the systemic circulation and the placental bed, suggesting that this specialized CD4 T cell subset plays a significant role in safeguarding the fetus by moderating the inflammatory immune response. 
It is hypothesized that a conflict between the parents' genes leads to abnormal placental implantation because of elevated NK cell activity, reduced Tregs, and other mediators of the immune response. Biopsies taken from the pre-eclamptic placental bed have shown an increase in the number of dendritic cells infiltrating the decidual tissue. These cells appear to have a crucial role in setting off antigen-specific T-cell responses to transplanted antigens. Large increases in dendritic cells at the decidual level have been linked to abnormal implantation and a compromised maternal immune response to fetal antigens [ 37 , 38 ]. Genetic factors PE appears to have a hereditary component. A primigravida's chance of suffering PE increases two to fivefold if she has a first-degree relative, such as her mother or sister, who also had the condition during pregnancy. The maternal effects of imprinted genes might also be considered. In a study of two sisters with PE, the PE phenotype was not displayed when the baby or placenta carried the imprinted paternal homolog rather than the maternal STOX1 missense mutation on chromosome 10q22 [ 39 ]. There is a seven-fold increased risk of PE in the current pregnancy if there is a prior diagnosis of PE [ 40 ]. Women whose partner's previous partner had PE during pregnancy are more likely to develop PE than women without this paternal history, although it has also been reported that women who conceive with such a man have a likelihood of PE similar to that seen when the previous partner's pregnancy was normotensive [ 41 ]. The genes for both sFlt-1 and Flt-1 are located on chromosome 13. More of these gene products will be made by embryos with an additional copy of chromosome 13 (e.g., trisomy 13). 
The risk of PE in pregnant women carrying a baby with trisomy 13 is well-known to be much higher than in women carrying babies with any other trisomy or in control pregnant patients. This also contributes to the elevated risk of PE in these women, since their circulating sFlt-1 to placental growth factor (PlGF) ratio is much higher than average. Genome-wide association studies (GWAS) with large sample sizes have aided in the identification of genetic risk variations with widespread impact [ 42 ]. The locus at 12q is associated with HELLP (hemolysis, elevated liver enzymes, and low platelets) syndrome, but not with PE without HELLP syndrome, suggesting that the genetic mechanisms at play in HELLP syndrome are separate from those in PE [ 43 ]. One possible mechanism that might lead to HELLP syndrome is a change in the long non-coding RNA located at 12q23. Extravillous trophoblast migration may be influenced by the genes regulated by this long non-coding RNA. The PAI-1 4G/5G polymorphism, the angiotensinogen gene variant (T235), and endothelial nitric oxide synthase (eNOS) [ 44 ] are all genetic regions that have been linked to the development of PE. 
Environmental and maternal susceptibility factors Environmental and maternal susceptibility factors are as follows: (1) low calcium intake; (2) high body mass index; (3) pregnancies conceived using in vitro fertilization (IVF); (4) inflammation: the maternal inflammatory response that is present even in healthy pregnancies at term is exacerbated in PE; Chlamydia pneumoniae , Helicobacter pylori , Cytomegalovirus , human immunodeficiency virus (both treated and untreated), malaria, herpes simplex virus type 2, bacterial vaginosis, and antibodies to Mycoplasma hominis have been associated with PE [ 45 ]; (5) increased sensitivity to angiotensin 2; (6) complement activation; (7) pre-existing maternal vascular/metabolic/kidney/autoimmune disease: PE is very common in women who already have one or more of the risk factors for vascular disease, including hypertension, diabetes, chronic kidney disease, and autoimmune disorders. Preeclamptic women may be at an increased risk for CVD later in life owing to endothelial damage. Women with a history of PE have an increased risk of developing chronic kidney disease and hypothyroidism [ 46 ]. Discussion of the elements used in this study Mean Arterial Pressure (MAP) MAP is the measurement of the average pressure in the arteries throughout the course of a complete cardiac cycle (systole and diastole). Multiple variables influence both cardiac output and systemic vascular resistance, which in turn affect MAP. Cardiac output is calculated by multiplying the heart rate by the stroke volume of each heartbeat. Stroke volume is determined by a variety of factors, including ventricular inotropy and preload. Preload is affected by both blood volume and venous compliance. Increasing blood volume increases preload, which in turn boosts stroke volume and cardiac output; as afterload increases, stroke volume decreases. Heart rate is affected by the myocardium's chronotropy, dromotropy, and lusitropy. 
The following formula is often used to get a MAP estimate: MAP = DBP + 1/3 (SBP - DBP), where SBP minus DBP equals the pulse pressure (PP) [ 47 ]. The rapid MAP calculation this formula allows makes it preferable for use in most clinical contexts. MAP ensures the continued viability of all of the body's tissues by supplying them with oxygen-rich blood. Regulatory mechanisms keep the MAP at or above approximately 60 mmHg, ensuring that blood flows freely to all organs and muscles. MAP and pregnancy: The chance of developing PE was strongly correlated with the mother's mean MAP during the first trimester, even after controlling for other potential risk variables. Second-trimester MAP does not reliably predict who will and who will not get the illness. Pregnancy-Associated Plasma Protein-A (PAPP-A) Despite its presence in human prenatal plasma, the function of the antigen PAPP-A remained unclear after its discovery in 1972. Embryonic trophoblast cells secrete a large, highly glycosylated protein termed PAPP-A. Syncytiotrophoblasts produce PAPP-A, a zinc-containing metalloproteinase that binds insulin-like growth factor (IGF), according to the locations of precipitate lines in immunodiffusion experiments [ 48 ]. Three more (non-proteolytic) pregnancy-associated proteins were discovered and given the acronyms PAPP-B, -C, and -D. PAPP-A was initially thought to be a homotetramer of disulfide-bound subunits; when the pro-form of eosinophil major basic protein (proMBP) was discovered in 1993, it became clear that circulating PAPP-A is a heterotetramer composed of two PAPP-A subunits and two proMBP subunits. The fraction of a pregnant woman's PAPP-A that is not complexed with proMBP but instead circulates as disulfide-bound homodimers is very low (1%). The IGF system may play critical functions in placental growth and development. Accordingly, a higher risk of PE is seen in women with low blood levels of PAPP-A. 
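The estimation formula above translates directly into code. This is a minimal sketch (the function names are illustrative, not part of the study):

```python
def mean_arterial_pressure(sbp: float, dbp: float) -> float:
    """Estimate MAP as diastolic pressure plus one-third of the pulse pressure."""
    return dbp + (sbp - dbp) / 3


def pulse_pressure(sbp: float, dbp: float) -> float:
    """Pulse pressure (PP) is systolic minus diastolic pressure."""
    return sbp - dbp


# Example: a reading of 120/80 mmHg gives a MAP of about 93.3 mmHg
# and a pulse pressure of 40 mmHg.
map_mmhg = mean_arterial_pressure(120, 80)
pp_mmhg = pulse_pressure(120, 80)
```

The one-third weighting reflects the fact that, at resting heart rates, the heart spends roughly two-thirds of each cardiac cycle in diastole.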
The maternal serum PAPP-A level is increased in symptomatic PE. Within the first several weeks of pregnancy, PAPP-A levels double in about three days and continue to rise progressively afterward. PAPP-A controls the level of free IGF-2 by cleaving insulin-like growth factor binding proteins (IGFBP), particularly IGFBP-4. A large heterotetramer comprised of two subunits of PAPP-A and two subunits of the proform of eosinophil major basic protein (proMBP) is produced by the placental cytotrophoblast layer and circulates in the blood. To screen for aneuploidy, PAPP-A levels are measured only once, during the first trimester. In pregnancy, PAPP-A is complexed with its natural inhibitor, proMBP; in contrast, free PAPP-A has metalloproteolytic activity. The only recognized substrates of PAPP-A are IGFBP-4 and IGFBP-5. The release of bound IGF is triggered by cleavage of these proteins, and this IGF has been demonstrated to activate macrophages, stimulate chemotaxis, increase low-density lipoprotein absorption by macrophages, and cause the production of inflammatory cytokines. On the other hand, current research suggests that IGF may protect against ischemic heart disease by maintaining endothelial function, increasing plaque stability, and acting as an antioxidant and anti-inflammatory agent. In any case, it is not yet known whether PAPP-A encourages plaque instability or has stabilizing and healing effects on plaque. The IGFBPs regulate insulin-like growth factors, which affect placental development and trophoblast penetration into the maternal decidua. In the maternal circulatory system, PAPP-A binds to proMBP, which blocks its proteolytic function. Because the multifactorial pathogenesis of different PE phenotypes has not been fully elucidated, prevention and prediction are still not possible, and symptomatic clinical management should be mainly directed to preventing maternal morbidity (e.g., eclampsia) and mortality [ 48 ]. 
PE and other negative pregnancy outcomes are associated with decreased PAPP-A levels in the first trimester of pregnancy, as revealed by Karumanchi et al. [ 49 ]. PAPP-A is also a long-established biomarker in screening for trisomy 21. PE is more likely to occur in women whose PAPP-A levels are low, as shown by research by Spencer et al. With a PAPP-A cutoff at the 5th centile of normal (multiple of the median (MoM): 0.415; 95% CI: 2.3-4.8), 15% of individuals were diagnosed with PE [ 50 ]. Uterine Artery Doppler During a normal pregnancy, the spiral arteries of the mother undergo a series of modifications at the hands of invasive cytotrophoblasts. To achieve optimum placental perfusion, the fetomaternal circulation must undergo a remodeling process that enhances flow. However, inadequate remodeling of the spiral arteries during placentation has been associated with PE, intrauterine growth restriction, and other related issues [ 51 ]. It is possible to non-invasively evaluate the existence of significant uteroplacental resistance using uterine artery Doppler ultrasonography. Predicting PE is aided by a uterine artery Doppler screening program. A high pulsatility index (PI) in the uterine artery implies inadequate placentation, which raises the risk of PE, fetal growth restriction, abruption, and stillbirth. An abnormally high PI in the uterine artery is defined as one above the 90th percentile. In a normal pregnancy, the uterine artery PI is higher in women of African descent and decreases with increasing fetal crown-rump length and maternal weight. When deciding whether or not a certain measurement is normal, these maternal characteristics should be taken into account. Measurement of uterine artery PI: The internal cervical os and cervical canal may be easily identified in sagittal sections of the uterus. 
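The MoM convention behind the Spencer et al. cutoff divides the measured PAPP-A level by the median for the same gestational age. A hedged sketch follows; the weekly medians below are placeholders for illustration only, not values from the study:

```python
def papp_a_mom(measured_miu_ml: float, week: int, weekly_medians: dict) -> float:
    """Convert a raw PAPP-A value (mIU/ml) to a multiple of the median (MoM)
    using the median for the matching gestational week."""
    return measured_miu_ml / weekly_medians[week]


# Hypothetical gestational-week medians (mIU/ml) -- illustrative only.
medians = {11: 1.2, 12: 2.0, 13: 3.1}

mom = papp_a_mom(0.70, 12, medians)  # 0.35 MoM
is_low = mom < 0.415                 # below the 5th-centile cutoff cited above
```

In practice, published MoM values are further adjusted for maternal weight, ethnicity, and other covariates before a cutoff is applied.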
When the transducer is moved slowly from side to side, the blood flow patterns may be mapped in color to pinpoint the exact location of each uterine artery. These arteries, which provide blood to the cervix and uterus, run vertically along either side at the level of the internal os. With the sampling gate set to 2 mm and the insonation angle maintained below 30 degrees, pulsed-wave Doppler may be used to interrogate the whole vessel. The PI and peak systolic velocity (PSV) of three identical consecutive waveforms should be measured, and the mean PI of the left and right arteries should be calculated. A PSV in the uterine artery of 60 cm/s is the bare minimum that should be met; when readings are lower, it usually means the wrong vessel was sampled. Operators performing risk assessment should hold the Fetal Medicine Foundation's Certificate of Competence in Doppler Ultrasound and the 11-13-week scan. It has been demonstrated in a number of studies that the use of maternal characteristics, feasible and cost-effective biomarkers, and uterine artery Doppler may aid in the early prediction of hypertensive diseases in pregnancy.
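The PI itself is the standard Gosling index, (PSV - EDV) divided by the time-averaged mean velocity, averaged over the two uterine arteries. A sketch with illustrative velocity values (not study data):

```python
def pulsatility_index(psv: float, edv: float, tamv: float) -> float:
    """Gosling pulsatility index from peak systolic velocity (PSV),
    end-diastolic velocity (EDV), and time-averaged mean velocity (TAMV)."""
    return (psv - edv) / tamv


def mean_uta_pi(left: tuple, right: tuple) -> float:
    """Mean PI of the left and right uterine arteries, each given
    as a (psv, edv, tamv) tuple in cm/s."""
    return (pulsatility_index(*left) + pulsatility_index(*right)) / 2


# Illustrative waveform measurements (cm/s); note that both PSV values
# exceed the 60 cm/s minimum mentioned above.
mean_pi = mean_uta_pi((95.0, 12.0, 40.0), (88.0, 10.0, 38.0))
```

In clinical machines, the TAMV is computed automatically from the traced waveform envelope rather than entered by hand.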
Materials and methods Study design and population Study Design This was a prospective, longitudinal observational study. Study Duration The study was conducted from December 2020 onwards for a period of 18 months. The study was reviewed and approved by the Institute's Ethics Committee (Mahatma Gandhi Institute of Medical Sciences, Sewagram, India; approval number: 4663). Study Subjects All pregnant women who came to the outpatient department of the institute for routine antenatal check-ups and who gave consent for the study were the study subjects. These women agreed to deliver at our institute. A special follow-up card was also given. Sample Size The number of women admitted to the hospital for labor care during 2019 was 5261. A total of 513 were diagnosed with hypertensive diseases of pregnancy. A vast majority of these patients were admitted through the outpatient department. At a prevalence rate of 10%, we calculated a sample size of 350 to achieve a sensitivity of 85% with an absolute error of 12.5% at a 95% CI. Inclusion Criteria Pregnant women included in the study were those who reported to the outpatient department at a gestational age of 11-13 weeks during the enrolment period, who gave consent, and who agreed to attend follow-up until delivery. Exclusion Criteria Exclusion criteria were the following: pregnant women who did not give consent; pregnant women presenting before 11 weeks or after 13 weeks of gestation; multi-fetal pregnancies; women with chronic hypertension; women with existing renal disease. Methodology All pregnant women who came for antenatal check-ups were enrolled in the study. Written informed consent was obtained from these women in a language they comprehended. A detailed history was taken, an examination was performed, an ultrasound examination with Doppler study of the uterine artery was done, and serum PAPP-A was tested between 11 and 13 weeks. 
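A conventional sample-size formula for estimating sensitivity, n = z² × Sn(1 − Sn) / (d² × prevalence), is consistent with the parameters stated above: with Sn = 0.85, d = 0.125, prevalence = 0.10, and z = 1.96 it yields roughly 314, in line with the rounded enrolment target of 350. A sketch (the formula is the standard one, not quoted from the study):

```python
import math


def sample_size_for_sensitivity(sens: float, abs_error: float,
                                prevalence: float, z: float = 1.96) -> int:
    """Minimum sample size to estimate sensitivity within a given
    absolute error at 95% confidence, inflated by disease prevalence."""
    return math.ceil(z ** 2 * sens * (1 - sens) / (abs_error ** 2 * prevalence))


n = sample_size_for_sensitivity(0.85, 0.125, 0.10)  # 314
```

Dividing by prevalence accounts for the fact that sensitivity can only be estimated from the diseased fraction of the enrolled cohort.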
The details of each component are described below. Maternal Characteristics Patients were asked to complete a questionnaire detailing the mother's demographics, including her age, marital status, occupation, educational qualification, number of children, number of pregnancies, and any history of complications. Maternal body mass index (BMI) was determined using the mother's measured height and weight. Blood pressure: Mercury sphygmomanometers were used to record the participants' blood pressure manually, and they were checked for accuracy before and during the research. Doctors with proper training used the equipment to make the recordings. Adult cuffs ranging in size from 22 to 42 centimeters were employed, with the women seated and their arms supported at heart level. We used MAP = diastolic pressure + 1/3 (systolic - diastolic) to calculate the mean arterial pressure (MAP). The difference between the two readings was used to determine the pulse pressure. Blood pressure was taken at four visits, i.e., at 11-13 weeks, 14-24 weeks, 25-36 weeks, and more than 37 weeks, and MAP was computed for each. All patient data are stored in an Excel spreadsheet (Microsoft Corporation, Redmond, WA), where the formula has been entered for ease of use. While Miller et al. (2007) [ 52 ] considered a MAP value significant if it was more than 88 mmHg, we considered a value of 90 mmHg or above to be meaningful. Uterine Artery Pulsatility Index (UtA-PI) Doppler ultrasounds were performed transabdominally throughout the first trimester. The cervical canal and internal cervical os were found in a sagittal section of the uterus taken between weeks 11 and 13. Next, color flow mapping was performed to locate each uterine artery (UtA) along the cervix and uterine side at the level of the internal os by gently rocking the transducer from side to side. Following the detection of each UtA, the whole vessel was scanned using pulsed-wave Doppler with a sampling gate of 2 mm. 
To avoid examining the arcuate artery, we ensured the insonation angle was less than 30 degrees and the peak systolic velocity was more than 60 centimeters per second. Measurements of PI were taken from three sequentially recorded waveforms of the same shape, and the mean PI of the left and right arteries was determined. Radiologists with appropriate credentials from the local medical board performed all Doppler tests. Values greater than or equal to 1.69 were considered significant for this study's purposes [ 53 ]. Pregnancy-Associated Plasma Protein-A (PAPP-A) A sample of venous blood (about 3-4 ml) was drawn into a vacutainer and placed in a test tube devoid of anticoagulant. After the clot formed, the serum was centrifuged and frozen at -80°C for further examination. Chemiluminescence immunoassay equipment was employed. PAPP-A was considered low when its value was below 0.77 mIU/ml; when measured between 11 and 13 weeks post conception, a normal PAPP-A level is between 0.77 and 12.6 mIU/ml [ 54 ]. Diagnosis of gestational hypertension and preeclampsia For the most part, we relied on the most recent recommendations of the Working Group on High Blood Pressure in Pregnancy of the National High Blood Pressure Education Program (NHBPEP) [ 55 ]. Gestational hypertension was defined as an SBP of at least 140 mmHg or a DBP of at least 90 mmHg in a previously normotensive woman, on two readings taken four hours apart, without the presence of urine proteins. According to these standards, PE is diagnosed when blood pressure is at least 140/90 mmHg with proteinuria or end-organ damage. Proteinuria was diagnosed when two separate urine samples taken four hours apart showed a protein content of at least 30 milligrams per deciliter (mg/dL) or a reading of 1+ on a urine dipstick. Proteinuria was frequently measured using urine dipsticks. 
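The NHBPEP-based definitions above can be summarized as a small classifier. This is a sketch of the decision logic only; the function name is illustrative, and an either-threshold rule is assumed for the 140/90 mmHg criterion:

```python
def classify_reading(sbp: float, dbp: float,
                     proteinuria: bool = False,
                     end_organ_damage: bool = False) -> str:
    """Classify a confirmed blood pressure finding (two readings four
    hours apart, after 20 weeks, in a previously normotensive woman)."""
    if sbp < 140 and dbp < 90:
        return "normotensive"
    if proteinuria or end_organ_damage:
        return "preeclampsia"
    return "gestational hypertension"


classify_reading(150, 95)                    # 'gestational hypertension'
classify_reading(150, 95, proteinuria=True)  # 'preeclampsia'
```

The preconditions in the docstring matter: a single elevated reading, or one taken before 20 weeks, does not satisfy either definition.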
A thorough data sheet was compiled after the MAP, ultrasound results, and women's characteristics, including their demographic information, were entered into a computer database. Statistical analysis was performed using descriptive and inferential statistics, including Pearson's chi-square test and receiver operating characteristic (ROC) curve analysis; the software used was SPSS version 27.0 (IBM Corp., Armonk, NY). P < 0.05 was considered the level of significance.
Results Maternal characteristics Age There were 12 (3.4%) females in the teenage group. Of these, two (16.6%) were hypertensive, whereas 10 (83.3%) were non-hypertensive. The maximum number of women (325, 92.9%) fell into the reproductive age group of 20-34 years, and the incidence of hypertensive disease of pregnancy was seen in 74 (22.8%) of these cases. In the elderly age group of >35 years, four out of 13 women had hypertensive disease of pregnancy (30.8%); hence, advanced maternal age is a significant predictor of hypertension (p < 0.05) (Table 1 and Figure 1 ). Education Women with primary education dominated the group with 181 (51.7%) cases; this group included 42 (23.2%) hypertensive and 139 (76.8%) normotensive women. Next were the illiterate women with 109 (31.1%) cases, of whom 24 (22%) developed hypertension and 85 (78%) remained normotensive. A total of 56 (16%) women had attended up to middle school, and among them, 14 (25%) were hypertensive and 42 (75%) had normal blood pressure. Few women had completed higher education (3, 0.9%), and the single graduate woman did not develop hypertension. Across illiterate, primary, middle, high school, and graduate women, the proportion who developed hypertension ranged between 22% and 25%, which was statistically non-significant as the p-value was >0.05 (Table 1 and Figure 2 ). Occupation Housewives, unskilled workers, and semi-skilled workers constituted 201 (57.4%), 148 (42.3%), and one (0.3%) of the cases, respectively. A total of 47 (23.4%) housewives and 33 (22.3%) unskilled workers developed hypertension; no cases of hypertension were observed in the semi-skilled worker group. Meanwhile, 154 (76.6%) housewives and 115 (77.7%) unskilled workers remained normotensive. 
Again, occupation does not seem to be associated with hypertension, as the percentage of women who developed hypertension was between 22% and 23% in all groups, with a p-value of 0.838 (Table 1 and Figure 3 ). Socioeconomic Status The females in this study fell into two of the five categories of the modified Kuppuswamy scale. The lower middle class accounted for the highest number, i.e., 195 (55.7%), among whom 45 (23.1%) were hypertensive and the remaining 150 (76.9%) were normal. A total of 35 (22.6%) out of 155 (44.3%) in the lower socioeconomic group had hypertension, while 120 (77.4%) females remained unaffected. No association was found between the development of hypertension and socioeconomic class, as the p-value was insignificant at 0.913 (Table 1 and Figure 4 ). Gravidity The majority of women (205, 58.6%) were primigravidas, and only 40 (11.4%) were third gravidas. Among primigravidas, 53 (25.9%) became hypertensive, compared with 20 (19%) second gravidas and seven (17.5%) third gravidas. Therefore, gravidity remains an important maternal characteristic for predicting hypertensive diseases of pregnancy, as the p-value remains significant (Table 1 and Figure 5 ). Marriage Duration The duration of four to six years harbored the highest number of women (194, 55.4%), while the one to three years, seven to 10 years, and 11-12 years groups constituted 133 (38%), 21 (6%), and two (0.6%) of the women, respectively. A total of 149 (76.8%) women in the four to six years group were normotensive, while 101 (75.9%) and 19 (90.5%) did not develop hypertension in the one to three years and seven to 10 years groups, respectively. There were 32 (24.1%) females who developed hypertension in the one to three years group and 45 (23.2%) in the four to six years group. As the p-value is 0.380, there is no association between hypertension and duration of marriage (Table 1 and Figure 6 ). 
Body Mass Index A total of 57 (23.4%) women developed hypertension in the normal-range BMI group, which consisted of 244 (69.7%) females, and 187 (76.6%) remained normotensive. Out of 98 (28%) underweight females, 23 (23.5%) developed hypertension and 75 (76.5%) were normotensive. All eight overweight women, surprisingly, were normotensive in this study. Hence, BMI in this study was not a predictor for developing hypertensive disease of pregnancy, as the p-value is 0.297, i.e., insignificant (Table 1 and Figure 7 ). Mean Arterial Pressure The MAP was normal in a total of 239 (68.2%) females, of whom 45 (18.7%) were hypertensive and 194 (81.3%) were normotensive. MAP was raised in 111 (31.8%) women, of whom 35 (31.5%) had hypertensive disease of pregnancy. MAP is thus one of the important maternal characteristics that can predict hypertension, as the p-value is less than 0.05 (0.008); hence, there is sufficient evidence of an association between MAP and hypertension (Table 2 and Figure 8 ). Biochemical marker - pregnancy-associated plasma protein-A (PAPP-A) PAPP-A was low in a total of 89 (25.6%) women: 66 (74.1%) of these were hypertensive and 23 (25.9%) were normotensive. PAPP-A was normal in a total of 261 (74.6%) women: 14 (5.4%) hypertensive and 247 (94.6%) non-hypertensive. Since the p-value for Pearson's chi-square is less than 0.05 (0.000), there is an association between PAPP-A and hypertension. Because 66 (74.1%) of the 89 women with low PAPP-A were hypertensive, the study clearly indicates that low serum PAPP-A values are predictors of hypertensive diseases of pregnancy (Table 3 and Figure 9 ). Biophysical marker - uterine artery pulsatility index Out of the total positives of 77 (22%) cases, the UtA-PI was raised in 61 (17.4%) hypertensive cases and 16 (4.6%) non-hypertensive cases. 
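The PAPP-A association can be verified by hand with a 2×2 Pearson chi-square. A sketch using the counts reported above (low PAPP-A: 66 hypertensive, 23 normotensive; normal PAPP-A: 14 hypertensive, 247 normotensive):

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))


chi2 = chi_square_2x2(66, 23, 14, 247)
# chi2 is about 178, far above the 3.84 critical value for one degree
# of freedom at p = 0.05, matching the reported p < 0.05.
```

The closed-form expression above is algebraically equal to the usual sum of (observed − expected)²/expected over the four cells.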
There were 19 (5.4%) hypertensive and 254 (72.5%) normotensive cases among the 273 (78%) with a normal UtA-PI. The 61 (17.4%) hypertensive patients in the positive group represent a significant proportion. Since the p-value for Pearson's chi-square is less than 0.05 (0.000), there is an association between UtA-PI and hypertension (Table 4 and Figure 10 ). Analysis of the efficiency of the combined screening method ROC Curve of Mean Arterial Pressure, PAPP-A, and UtA-PI The greater the area under the ROC curve of a particular variable, the better the classifier. Here, our aim is to find the best classifier for predicting hypertension. As can be seen from the graph, the UtA-PI curve has the greatest area, followed by MAP and then PAPP-A. Hence, using this graph, we can say that PI is the best classifier for hypertension. A logistic regression model was then run, taking hypertension as the dependent variable and MAP, UtA-PI, and PAPP-A as independent variables. Judging by the p-values, all three variables are significant and can be used to predict hypertension (Figure 11 and Table 5 ): Hypertension = (-0.10308) * MAP + (-3.70385) * PAPP-A + (6.13524) * PI. Rate of Predictability by Using Combined Methods (Maternal Characteristic, PAPP-A, and UtA-PI) When all three parameters, i.e., maternal characteristic - MAP, biophysical profile - UtA-PI, and biochemical marker - PAPP-A, were positive, they successfully predicted 51 (14.6%) cases accurately, with 12 (3.4%) false positives, 29 (8.2%) false negatives, and 258 (73.7%) true negatives. The sensitivity comes out to be 63.7%, with a specificity of 95.5%, a positive likelihood ratio of 14.34, a negative likelihood ratio of 0.38, a disease prevalence of 22.86%, a positive predictive value of 80.95%, and a negative predictive value of 89.90%, with an accuracy of 88.29% (Table 6 ). 
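Every screening statistic quoted above follows arithmetically from the four confusion-matrix counts. A sketch reproducing them (the helper name is illustrative):

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic-accuracy measures from 2x2 screening counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,                          # 51/80   = 63.75%
        "specificity": spec,                          # 258/270 ~ 95.6%
        "ppv": tp / (tp + fp),                        # 51/63   ~ 80.95%
        "npv": tn / (tn + fn),                        # 258/287 ~ 89.90%
        "lr_positive": sens / (1 - spec),             # ~ 14.34
        "lr_negative": (1 - sens) / spec,             # ~ 0.38
        "accuracy": (tp + tn) / (tp + fp + fn + tn),  # 309/350 ~ 88.29%
    }


# Counts from the text: TP = 51, FP = 12, FN = 29, TN = 258
m = screening_metrics(51, 12, 29, 258)
```

Unlike the predictive values, the two likelihood ratios are prevalence-independent, which is why they are often preferred when comparing screening strategies across populations.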
Blood pressure analysis Comparison of Systolic Blood Pressure in the Normotensive Population During Four Visits Here, there is a comparison of the mean SBP of 270 normotensive women who visited during 11-13 weeks, 14-24 weeks, 25-36 weeks, and more than 37 weeks, for whom the mean pressures were 113.3 mmHg, 110.74 mmHg, 120.58 mmHg, and 120.74 mmHg, respectively. The graph shows a slight dip at 14-24 weeks followed by an almost flat plateau after 25 weeks (Figure 12 ). Comparison of Diastolic Blood Pressure in the Normotensive Population During Four Visits In this study, we compared the mean DBP of 270 normotensive women who visited during 11-13 weeks, 14-24 weeks, 25-36 weeks, and more than 37 weeks, for whom the mean pressures were 72.35 mmHg, 70.11 mmHg, 75.04 mmHg, and 75.51 mmHg, respectively. The graph dips to 70.11 mmHg at 14-24 weeks, followed by a rise thereafter (Figure 13 ). Comparison of Systolic Blood Pressure in the Hypertensive Population During Four Visits Again, there is a comparison of the mean SBP of 80 hypertensive women who visited during 11-13 weeks, 14-24 weeks, 25-36 weeks, and more than 37 weeks, for whom the mean pressures were 115.92 mmHg, 126.96 mmHg, 143.15 mmHg, and 141.55 mmHg, respectively. The graph rises swiftly, followed by a slight fall after the 25-36-week visit (Figure 14 ). Comparison of Diastolic Blood Pressure in the Hypertensive Population During Four Visits In this study, we compared the mean DBP of 80 hypertensive women who visited during 11-13 weeks, 14-24 weeks, 25-36 weeks, and more than 37 weeks, for whom the mean pressures were 78.57 mmHg, 85.35 mmHg, 94.42 mmHg, and 90.63 mmHg, respectively. The graph rises swiftly, followed by a gradual decline after the 25-36-week visit (Figure 15 ). The main finding of these results is that women who do not show the mid-trimester fall in blood pressure are the ones who must be monitored for developing hypertension. 
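The main finding lends itself to a simple per-patient check: compute MAP at the first two visits and flag the absence of the mid-trimester dip. A sketch using the group means reported above (the visit labels and function names are illustrative):

```python
def map_estimate(sbp: float, dbp: float) -> float:
    """MAP = DBP + one-third of the pulse pressure."""
    return dbp + (sbp - dbp) / 3


def has_midtrimester_dip(map_by_visit: dict) -> bool:
    """True if MAP at the 14-24-week visit fell below the 11-13-week
    baseline -- the pattern seen in the normotensive group."""
    return map_by_visit["14-24"] < map_by_visit["11-13"]


# Group mean SBP/DBP (mmHg) from the text
normotensive = {"11-13": map_estimate(113.30, 72.35),
                "14-24": map_estimate(110.74, 70.11)}
hypertensive = {"11-13": map_estimate(115.92, 78.57),
                "14-24": map_estimate(126.96, 85.35)}

has_midtrimester_dip(normotensive)  # True: MAP dipped mid-trimester
has_midtrimester_dip(hypertensive)  # False: no protective dip
```

Applying a MAP estimate to group mean SBP and DBP is an approximation; in practice the check would be run on each woman's individual readings.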
Body edema in relation to hypertensive disease of pregnancy In the present study, it was observed that 10.6% of cases had pedal edema; a very interesting finding, however, was that out of 37 cases who had body edema, especially anterior wall edema, 20 (54%) developed hypertension, whereas of the 313 women without edema, 19.1% had hypertension. This is significant, as the p-value is 0.001 (Table 7 and Figure 16 ). Urine protein Urine protein was present in 71 (20.3%) hypertensive and three (0.8%) normotensive females, for a total of 74 (21.1%). Urine proteins were absent in 276 (78.9%) women: nine (2.5%) hypertensive and 267 (76.4%) normotensive. The p-value for Pearson's chi-square is less than 0.05 (0.006); hence, the presence of urine proteins is strongly associated with hypertension (Table 8 and Figure 17 ). Mode of delivery There were a total of four (1.1%) preterm deliveries, all of which were preterm cesarean sections: three (0.8%) from the hypertensive group and one (0.3%) from the normotensive group. No vaginal deliveries were conducted in the preterm group. In contrast, a total of 322 (92%) term vaginal deliveries took place, of which 66 (18.9%) were in hypertensive females and 256 (73.1%) in normotensive females. Only 24 (6.8%) of the term deliveries were by cesarean section: 13 (3.7%) normotensive and 11 (3.1%) from the diseased group. The overall conclusion was that the cesarean section rate was higher in the hypertensive group (17.5%) compared to 5.2% in the normotensive group, which is statistically significant. Admission to the neonatal intensive care unit All four preterm babies (1.1%) were admitted to the NICU, of which three (0.8%) had hypertensive mothers while one had a normotensive mother. In addition, two (0.5%) hypertensive and three (0.8%) normotensive mothers had term babies who required NICU admission. 
The remaining 341 (97.4%) babies were handed to their mothers: 75 (21.4%) hypertensive and 266 (76%) normotensive. It was clear from the study that NICU admission in the hypertensive group was 6.25%, compared with 1.48% in the non-hypertensive group. This is statistically significant.
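The 2×2 associations reported above (for example, edema × hypertension with p = 0.001) rest on Pearson chi-square tests. A pure-Python sketch of the statistic for the edema table (20 of 37 with edema hypertensive; 60 of 313 without edema hypertensive, i.e., 19.1%); the helper name is illustrative, and 10.83 is the standard χ² critical value for p = 0.001 at one degree of freedom:

```python
def pearson_chi_square(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Edema vs. hypertension counts from Table 7:
# rows = edema present / absent, cols = hypertensive / normotensive
table = [[20, 17], [60, 253]]
chi2 = pearson_chi_square(table)
# A statistic of roughly 22.8 exceeds 10.83 (the df = 1 critical value
# at p = 0.001), consistent with the reported p-value of 0.001.
print(round(chi2, 2), chi2 > 10.83)
```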
Discussion
HDP is defined as an SBP of 140 mmHg or more and/or a DBP of 90 mmHg or more, recorded after 20 weeks of gestation on two readings taken four hours apart in a previously normotensive woman [20,21]. It is one of the causes of maternal mortality in India and also affects neonatal outcomes [56]. Even with the advent of technology, manual blood pressure measurement remains the gold standard for diagnosing HDP. Researchers have attempted to develop comprehensive models that predict this complication as early as 20 weeks, in order to minimize maternal complications, reduce maternal deaths, and thereby improve neonatal outcomes. Only a few such models have included the Indian population; this study was undertaken to bridge that gap. It was therefore performed in a rural tertiary care hospital to develop a comprehensive model using a maternal characteristic (MAP), a maternal biophysical profile (UtA-PI), and a biochemical marker (PAPP-A). The observations and their discussion are presented below.
Maternal characteristics
Age
In this study, there were 12 (3.4%) women in the teenage group, of whom 16.6% were hypertensive. The largest group, 325 women of reproductive age, had 22.7% cases of hypertension. In the group older than 35 years, four of 13 women (30.8%) had hypertension. This value is statistically significant, indicating that the older age group has more cases of hypertension. Berhe et al. (2018) [57] conducted a study and concluded that pregnant women aged 35 years or more are likely to develop hypertension in pregnancy, with an odds ratio of 1.64. In their study, no statistically significant difference was demonstrable between hypertension and the young female group (OR = 2.92; 95% CI = 0.88, 9.70). Also, Montan et al.
(2007) [58] observed that increased maternal age is an independent factor in the development of pregnancy-induced hypertension, and that many obstetric complications are associated with elderly parturient mothers; this is consistent with our finding of hypertension developing in the older group. Gortzak-Uzan et al. (2001) [59] and Berenson et al. (1997) [60] reported findings on the development of hypertension in the teenage group similar to the results of this study; both found comparable incidences and complications between young and middle-aged women. Hence, our findings agree with the literature on the prevalence of hypertension in the older population. All the women in our teenage group were married. We still received only 12 teenage women, possibly because unmarried pregnancy is considered taboo in our society and such women are reluctant to report to a tertiary care hospital, preferring to abort or deliver secretly.
Education
The primary education group dominated, with 181 (51.7%) of the population, and comprised 42 (52.5%) of the hypertensives. This was followed by the illiterate group with 109 (31.1%) cases, of whom 24 (22%) were hypertensive. A total of 56 (16%) women had attended up to middle school, and among them 14 (25%) were hypertensive. Few women had completed higher education (3, 0.9%). On evaluating the education component, the proportion of women who developed hypertension was between 22% and 25% in each group, which was statistically non-significant, as the p-value was > 0.05. Harris et al. (2020) [61] examined health disparities in relation to the hypertensive diseases of pregnancy and found that members of the African American and Hispanic communities lag in seeking medical attention due to racism, low levels of education, and social and cultural differences.
He also opined that, owing to low levels of education, home-based blood pressure monitoring for low-risk hypertensive women was not feasible, and he devised a three-step intervention to reach these women by creating awareness, understanding their beliefs, and imparting education on hypertension. A retrospective study by Silva et al. (2008) [62] observed a positive association between lower education level and the development of hypertension when compared against controls of the same age and gravidity. They concluded that women with mid-low education (odds ratio 1.52; 95% CI 1.02-2.27) and low education (odds ratio 1.30; 95% CI 0.80-2.12) had a substantially greater risk of gestational hypertension than women with high-level education. That study was based in the Netherlands, where the lower education groups also showed maternal substance abuse (smoking, illicit drug use, and alcohol consumption) and maternal characteristics such as raised body mass index. Our study is based on a rural Indian population in which most women have a primary-level education and illicit substance abuse is not seen; hence, our study could not find a relationship between low education level and the development of hypertension. It is nonetheless noteworthy that, as in the United States, educating our rural women about the complications of hypertension, even something as simple as reporting events like headaches, could prevent disastrous effects.
Occupation
Housewives, unskilled workers, and semi-skilled workers constituted 201 (57.4%), 148 (42.3%), and one (0.3%), respectively; 47 (23.4%) housewives had hypertension, along with 33 (22.3%) of the unskilled workers.
Occupation does not appear to be associated with hypertension, as the percentage of women who developed hypertension was between 22% and 23% in all groups, with a p-value of 0.838. Spadarella et al. (2021) [63] performed a systematic meta-analysis of 27 eligible studies and found that employed women have a 5.1% higher cumulative risk of developing hypertension than non-employed women, with professional and retail jobs at high risk. The effects of physical workload and shift work remain controversial, though physical workload may have a protective effect against hypertension. They opined that clerical, skilled, and service-sector jobs, which are comparatively less stressful, do not contribute to the development of the disease. Spracklen et al. (2016) [64] found a substantially lower risk of PE when mothers spent more than 8.25 hours active per day, including physical activity at work, at home, and during leisure time. Our findings are similar to those of Nugteren et al. (2012) [65], who established that work-related risk factors, such as prolonged standing or walking, heavy lifting, night shifts, excessive working hours, or exposure to chemicals, are not related to the development of HDP. Reviewing these studies, we could not attribute the development of hypertension to any particular occupational group: although the majority of our rural women are housewives, they do not lead a sedentary lifestyle. Ours being an agrarian society, these women also lend a hand on the farms, again contributing to the non-significant occupational findings.
Socioeconomic Status
Here, the women fell into two of the five categories of the modified Kuppuswamy scale. The lower middle class accounted for the highest number, 195 (55.7%), of whom 45 (23.1%) were hypertensive. A total of 35 (22.6%) of the 155 (44.3%) women in the lower socioeconomic group had hypertension.
No association was found between the development of hypertension and socioeconomic class, as the p-value was insignificant at 0.913. Ospina et al. (2020) [66] observed that in Canada, gestational hypertension showed one of the lowest health inequalities across socioeconomic groups, small-for-gestational-age births and gestational diabetes showed intermediate inequality, and substance abuse and smoking showed the highest. Our finding is further supported by Lawlor et al. (2005) [67], who established that neither childhood nor adult socioeconomic conditions are associated with the risk of developing HDP. When fully adjusted, the odds ratio (95% confidence interval) comparing those born into labor-intensive social classes with those born into non-labor-intensive social classes was 1.10 (0.72 to 1.73) for PE and 1.02 for gestational hypertension. Parallel results comparing women in labor-intensive with non-labor-intensive social classes during each antenatal period were 1.09 for PE and 0.99 for gestational hypertension. Thus, compared with other studies, our women of lower socioeconomic status depend on daily wages and are laborers. Although in first-world countries low socioeconomic status is a cause of hypertension, drug- and smoking-related habits are much more prevalent in those groups, changing the dynamics of maternal health.
Gravidity
The majority of the women (205, 58.6%) were primigravida, and only 40 (11.4%) were third gravida. Fifty-three (25.9%) primigravidas became hypertensive, compared with 20 (19%) second gravidas and seven (17.5%) third gravidas. Therefore, gravidity remains an important maternal characteristic for predicting HDP, as the p-value is significant. Berhe et al.
(2018) [57] opined that there was no link between pregnancy-induced hypertension and the number of pregnancies, with an odds ratio of 1.37. Sarwar et al. (2016) [68] stated that a higher incidence of hypertension is seen in primigravidae older than 20 years. Several studies have revealed that primigravidas are more prone to developing HDP; however, contradictory research on this sporadic disease is also available.
Marriage Duration
In this study, the four-to-six-year marriage duration group had the largest population, 194 (55.4%) women, while the one-to-three-year, seven-to-10-year, and 11-12-year groups constituted 133 (38%), 21 (6%), and two (0.6%) of the women, respectively. A total of 32 (24.1%) women developed hypertension in the one-to-three-year group and 45 (23.2%) in the four-to-six-year group; only one developed hypertension in the 11-12-year group. Overall, there is no association between the duration of marriage and the development of hypertension. Saravade et al. (2021) [69] found an increased incidence of HDP in elderly primigravidas who had been married for a long duration. A study by Robillard et al. (1994) [70] concluded that the duration of sexual cohabitation before conception was inversely related to the incidence of HDP; they theorized that the longer a woman is exposed to paternal genetic material, the easier it is for her to develop immune tolerance.
Body Mass Index
A total of 57 (23.4%) women in the normal-BMI group, which consisted of 244 (69.7%) women, developed hypertension. Of the 98 (28%) underweight women, 23 developed hypertension. Surprisingly, all eight overweight women were normotensive. Hence, in our study, BMI was not a predictor of HDP, as the p-value was 0.297, i.e., insignificant. Bicocca et al. (2020) [71] reported that obese women (BMI > 30 kg/m2) have a higher risk of developing both early- and late-onset hypertensive diseases.
Alves et al. (2020) [72] opined that maternal obesity is a risk factor for developing gestational hypertension and gestational diabetes and is associated with a higher incidence of cesarean births. Gaillard et al. (2011) [73] found that maternal obesity and morbid obesity are strongly linked with high blood pressure in each trimester and also amplify the risk of HDP. In our study, we could not establish this link, as the majority of our population fell into the lower BMI range, which may be attributed to the thin-built, malnourished, short-statured character of the rural female population. We had no cases with BMI above 30 kg/m2; hence, BMI did not prove to be a causative factor.
Maternal characteristics - mean arterial pressure
In our study, MAP was normal in 239 (68.2%) women, of whom 45 (18.7%) were hypertensive and 194 (81.3%) normotensive, while MAP was raised in 111 (31.8%) women, of whom 35 (31.5%) had HDP. MAP is thus an important maternal characteristic that can predict hypertension: the p-value is less than 0.05 (0.008), so there is sufficient evidence of an association between MAP and hypertension. Reddy et al. (2020) [74] performed a retrospective cohort study in Australia and found that MAP measurement was substantially related to the development of maternal hypertension and an effective screening parameter during pregnancy. Gasse et al. (2018) [75] conducted a study to estimate the predictive value of first-trimester MAP for the hypertensive disorders of pregnancy and concluded firmly that first-trimester MAP strongly corresponds with the development of gestational hypertension and PE. Poon et al. (2010) [76] established that MAP outperforms systolic and diastolic blood pressure and is the best screening performer for predicting PE. Poon et al.
(2008) [77] concluded that raised MAP is linked with a risk of developing hypertension in pregnancy. They developed a screening model that included MAP measurement along with maternal characteristics such as age, BMI, and ethnicity, and also opined that even women who do not develop hypertension in pregnancy are at risk of developing chronic hypertension in the future if their MAP is raised. Hence, in keeping with other studies, MAP is an important tool for both the measurement and the prediction of HDP.
Biochemical profile - pregnancy-associated plasma protein-A
PAPP-A was low in 89 (25.6%) women in total: 66 (74.1%) of these were hypertensive and 23 (25.9%) normotensive. Since the p-value for the Pearson chi-square is less than 0.05 (0.000), there is an association between PAPP-A and hypertension. Dascau et al. (2020) [78] studied the prediction of HDP using PAPP-A at 11-14 weeks and found it an effective parameter both alone and in combination with UtA-PI. Meloni et al. (2009) [79] conducted a study to establish a predictive value linking low levels of PAPP-A with hypertension. Their research supported the view that low serum PAPP-A (<0.8 MoM) may be a latent marker for the early screening of expectant women at higher risk of developing HDP. A total of 111 (8.9%) of 973 pregnant women over three years were found to have hypertension, and ROC curve statistics revealed that a PAPP-A value <0.8 MoM significantly predicted hypertension in pregnancy, with p < 0.001 and an area under the ROC curve of 83%. Yaron et al. (2008) [80] investigated decreased values of PAPP-A and discovered that a PAPP-A level of less than 25 MoM is linked to a range of maternal and fetal outcomes, including aneuploidy, non-proteinuric HDP, fetal growth restriction, and spontaneous abortion.
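A PAPP-A result is interpreted as a multiple of the median (MoM): the measured concentration divided by the gestational-age-specific median, so that 1.0 MoM is average for that gestational age. A small sketch of this thresholding; the function names and example values are illustrative assumptions, with the <0.8 MoM cut-off from Meloni et al. used as the screen-positive rule:

```python
def papp_a_mom(measured, gestational_age_median):
    """Express a PAPP-A measurement as a multiple of the median (MoM)
    for the same gestational age."""
    return measured / gestational_age_median

def low_papp_a(mom, cutoff=0.8):
    """Screen-positive rule (Meloni et al.): a MoM below the cut-off
    flags a higher risk of HDP."""
    return mom < cutoff

# Illustrative numbers only: a measurement at half the gestational-age
# median gives 0.5 MoM, which screens positive at the 0.8 MoM cut-off.
mom = papp_a_mom(measured=1.1, gestational_age_median=2.2)
print(mom, low_papp_a(mom))  # 0.5 True
```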
After reviewing other authors and our own findings, PAPP-A measurement is a good predictor of maternal hypertension.
Biophysical profile - uterine artery pulsatility index
UtA-PI was raised in 77 (22%) cases in this study: 61 (17.4%) hypertensive and 16 (4.6%) normotensive. Of the 273 (78%) cases with normal UtA-PI, 19 (5.4%) were hypertensive and 254 (72.5%) normotensive. With 61 (17.4%) hypertensive patients in the raised-UtA-PI group, and a Pearson chi-square p-value of less than 0.05 (0.000), there is an association between UtA-PI and hypertension. Shinde et al. (2021) [81] successfully predicted hypertension in pregnancy by employing UtA-PI as early as 11 to 13 weeks. Khong et al. (2015) [51] suggest that first-trimester uterine artery Doppler has much higher predictive accuracy for early-onset PE and fetal growth restriction than for late-onset PE. According to Plasencia et al. (2007) [82], maternal characteristics, such as history, ethnicity, and body mass index, together with measurement of UtA-PI at 11-13 weeks of gestation, are helpful in the early diagnosis of PE. UtA-PI, alone or in combination, has proved an immensely important biophysical parameter in the early detection of HDP; some authors use it alone for prediction.
Predictive accuracy of the combined screening method using maternal characteristic (MAP), biophysical profile (UtA-PI), and biochemical profile (PAPP-A)
In our ROC analysis using a logistic regression model, with hypertension as the dependent variable and MAP, UtA-PI, and PAPP-A as independent variables, the p-values showed that all three variables are significant and can be used to predict hypertension: Hypertension = (-0.10308) × MAP + (-3.70385) × PAPP-A + (6.13524) × PI.
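The reported performance of this combined model (sensitivity 63.7%, specificity 95.5%, PPV 80.95%, NPV 89.90%, accuracy 88.29% at a prevalence of 22.86%) is mutually consistent with a confusion matrix of TP = 51, FP = 12, FN = 29, TN = 258 for the 80 hypertensive and 270 normotensive women. A sketch that recomputes the metrics from those counts; the counts themselves are an inferred reconstruction, not stated explicitly in the study:

```python
# Confusion-matrix counts inferred from the reported metrics
# (80 hypertensive, 270 normotensive women; an assumption, not source data)
tp, fp, fn, tn = 51, 12, 29, 258

sensitivity = tp / (tp + fn)                   # 51/80   -> 63.75%
specificity = tn / (tn + fp)                   # 258/270 -> 95.56%
ppv = tp / (tp + fp)                           # 51/63   -> 80.95%
npv = tn / (tn + fn)                           # 258/287 -> 89.90%
accuracy = (tp + tn) / (tp + fp + fn + tn)     # 309/350 -> 88.29%
prevalence = (tp + fn) / (tp + fp + fn + tn)   # 80/350  -> 22.86%
positive_lr = sensitivity / (1 - specificity)  # ~14.34
negative_lr = (1 - sensitivity) / specificity  # ~0.38

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy),
                    ("prevalence", prevalence)]:
    print(f"{name}: {100 * value:.2f}%")
print(f"LR+: {positive_lr:.2f}, LR-: {negative_lr:.2f}")
```

Every derived value matches the figures reported in the text to two decimal places, which supports the internal consistency of the reported statistics.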
The statistical analysis yields a sensitivity of 63.7%, specificity of 95.5%, positive likelihood ratio of 14.34, negative likelihood ratio of 0.38, disease prevalence of 22.86%, positive predictive value of 80.95%, and negative predictive value of 89.90%, with an accuracy of 88.29%. Ji-jun et al. (2022) [83] confirmed that adding PlGF to the MAP, UtA-PI, and PAPP-A combination improves the detection of hypertensive disorders of pregnancy. Hu et al. (2021) [84] created a screening technique in China using a combination of MAP, UtA-PI, and PAPP-A to detect preterm PE, with detection rates of 65.0%, 72.7%, and 76.1%, respectively. Sapantzoglou et al. (2021) [85] found that the addition of cell-free fetal DNA did not improve the combined screening model using MAP, UtA-PI, and PAPP-A. Zumaeta et al. (2020) [86] found that a similar detection rate can be achieved, but at a higher screen-positive rate, when PAPP-A is used instead of PlGF in the combined screening methods. In research conducted in Iran by Masihi et al. (2016) [87], PAPP-A decreased while MAP and UtA-PI increased; PAPP-A fared worse than UtA-PI, and a cut-off point of 2.1 had a specificity of 83.7% and a sensitivity of 100% in predicting hypertensive diseases. Predicting hypertensive disorders in the first trimester, particularly early PE, was facilitated by a combination of UtA-PI, MAP, and PAPP-A. Selvaraj et al. (2016) [88] commented that the predictive efficacy for detecting PE and fetal growth restriction is fairly good when PAPP-A is added to MAP and UtA-PI. Kumar et al. (2015) [89] revealed that when the maternal characteristic BMI was added to MAP, UtA-PI, and PAPP-A, the sensitivity and specificity of the test were 73% and 70%, respectively, making it a good predictor of early-onset PE. Scazzocchio et al.
(2013) [90] built a model of MAP, UtA-PI, and PAPP-A to identify early PE in a routine-care, low-risk setting. A total of 5,170 women were included; 136 (2.6%) experienced PE, of whom 26 (0.5%) had early PE and 110 (2.1%) had late PE. At 5% and 10% false-positive rates, the detection rates for early PE were 69.2% and 80.8% (area under the curve: 0.95; 95% CI: 0.94-0.98), while for late PE they were 29.4% and 39.6% (area under the curve: 0.71; 95% CI: 0.66-0.76). Poon et al. (2010) [91] found that including PAPP-A alongside MAP and UtA-PI at 11-13 weeks improves the effectiveness of screening for early PE; the detection rate of early PE for the combination of the lowest uterine artery pulsatility index, MAP, and PAPP-A was 83.8% at a 5% false-positive rate. After reviewing the literature, it is evident that a combined screening method using a maternal characteristic (MAP), biophysical profile (UtA-PI), and biochemical profile (PAPP-A) is an effective model for predicting hypertension as early as 11-13 weeks of gestation. Its availability also makes it appropriate for use in rural populations.
Blood pressure changes during pregnancy in normal and hypertensive patients
Reviewing the graphs (numbers 6-9) of mean systolic and diastolic blood pressure for both normotensive and hypertensive women, we found that both systolic and diastolic blood pressure change trimester-wise in accordance with the physiology described in the literature. Hermida et al. (2000) [92] monitored around 1,494 patients, and our findings were similar to their research: in healthy pregnant women, blood pressure gradually drops until the halfway point of pregnancy and then gradually rises until delivery, whereas in women with gestational hypertension or PE, blood pressure remains steady during the first half of pregnancy before rising steadily until birth.
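MAP, one of the three screening parameters, is derived from the same systolic and diastolic readings discussed above using the standard formula MAP = DBP + (SBP − DBP)/3. A minimal sketch; the 90 mmHg significance threshold is the one given in the study's methodology, while the function name and example readings are illustrative:

```python
def mean_arterial_pressure(sbp, dbp):
    """Standard estimate: diastolic pressure plus one third of the pulse
    pressure, since the heart spends more of each cycle in diastole."""
    return dbp + (sbp - dbp) / 3

# Example first-trimester readings (illustrative values, in mmHg):
print(mean_arterial_pressure(110, 70))  # ~83.3 -> below the 90 mmHg cut-off
print(mean_arterial_pressure(126, 84))  # 98.0  -> raised MAP
```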
Body edema in relation to hypertension
In the present study, 10.6% of cases had pedal edema; however, of the 37 cases who had body edema, especially anterior abdominal wall edema, 20 (54%) developed hypertension. This is significant, as the p-value is 0.001. The findings are consistent with the physiology and signs of raised blood pressure: total body water rises by 6 to 8 liters, of which 4 to 6 liters are extracellular and at least 2 to 3 liters are interstitial, and there is total sodium retention of roughly 950 mmol between the mother's extracellular compartments and the products of conception. Thus, a slight decrease in interstitial fluid colloid osmotic pressure, an increase in capillary hydrostatic pressure, and changes in the hydration of connective tissue ground substance are all associated with variations in local Starling forces, according to Davison (1997) [93].
Urine proteins and maternal hypertension
Urine proteins were present in 71 (20.3%) of the hypertensives, out of a total of 74 (21.1%) women with proteinuria. Among the 276 (78.9%) women without urine proteins, only nine (2.5%) were hypertensive. The p-value for the Pearson chi-square is less than 0.05 (0.006); hence, the presence of urine proteins is strongly associated with hypertension. Yıldız et al. (2022) [94] found that the level of protein in urine can be correlated with adverse maternal and neonatal outcomes in hypertensive women. Morikawa et al. (2020) [95] noted serum total protein levels and the degree of proteinuria at PE diagnosis and at delivery, and positive urine proteins predicted a worse maternal outcome. Thus, the detection of urine proteins remains an important factor in both the prediction and the outcome of hypertension in expectant mothers.
Mode of delivery and hypertension
In this study, the cesarean section rate was higher in the hypertensive group (17.5%) than in the normotensive group (5.2%), which is statistically significant. Dassah et al. (2019) [96] concluded that there is a higher rate of cesarean section for expectant mothers with HDP in a tertiary care center in Ghana. Stella et al. (2008) [97] opined that there is a significant increase in the number of cesarean sections for hypertensive mothers (odds ratio 1.62; confidence interval 1.47-1.78). Our findings are on par with those of Gofton et al. (2001) [98], who reported an increase in obstetric operative interventions, including cesarean sections, in women with hypertension. Hence, births by cesarean section are increased in women with hypertension.
Time of delivery and maternal hypertension
In our study, only four (1.1%) of the 350 deliveries were preterm, all of them cesarean sections; three (0.8%) were in the hypertensive group. Adu-Bonsaffoh et al. (2019) [99] commented that there was an increase in preterm births attributable to maternal hypertension, advanced maternal age, and premature rupture of membranes. Sibai (2006) [100] explored PE as a cause of preterm delivery in patients with hypertension. These findings are consistent with our study, as all three hypertensive mothers who had to be delivered preterm underwent emergency cesarean section for hypertension that remained uncontrolled despite adequate anti-hypertensive therapy; one of them also had persistently decreased fetal movements with a poor non-stress test.
NICU admission and hypertension
Our study revealed that NICU admission in the hypertensive group was 6.25%, compared with 1.48% in the non-hypertensive group, which is statistically significant. Abdelazim et al.
(2020) [101] cited that hypertensive mothers, especially those diagnosed with PE, have a higher chance of perinatal morbidity, including low Apgar scores, low birth weight, preterm delivery, and consequently higher NICU admissions. Similarly, Stella et al. (2008) [97] commented on the increase in NICU admissions due to maternal hypertensive diseases. Investigating fetal growth restriction, ventilator assistance, and respiratory distress syndrome in pregnant mothers with hypertensive disease, Hauth et al. (2000) [102] likewise showed an increase in NICU admissions. It is evident from the literature and from our findings that combined screening at 11-13 weeks of pregnancy using the maternal characteristic MAP, the biophysical profile UtA-PI, and the biochemical profile PAPP-A has far superior predictive accuracy and is, to an extent, cost-effective for the rural population, as simple blood pressure measurement is feasible at every visit and uterine artery Doppler is performed alongside the first-trimester scan. Hence, with the evidence stated, it can be accepted that a new model for screening for gestational hypertension has been developed. The PAPP-A test is slightly expensive, but it can be subsidized if used in regular screening; compared with the cost of treating the morbidities associated with HDP, this single test, combined with the others, is very cost-effective. The test not only helps in HDP screening but also gives an idea about genetic malformations, especially in elderly gravidas. This study therefore strongly recommends combined screening for the early detection of HDP.
Conclusions
This study developed and tested a comprehensive model for predicting hypertensive diseases of pregnancy, a significant cause of maternal mortality in India with implications for neonatal outcomes, aiming to bridge gaps in existing models, particularly for the rural Indian population. It examined maternal characteristics, including age, education, occupation, socio-economic status, gravidity, marriage duration, and BMI, as well as biophysical and biochemical profiles. The analysis of maternal characteristics reveals that older age is associated with a higher incidence of hypertension, whereas education level, occupation, and socio-economic status show no significant correlation. Gravidity is a significant predictor, while longer marriage duration is not linked to hypertension. BMI, despite being a known risk factor in other populations, does not prove significant, potentially owing to the thin, malnourished build of the rural population. The study then evaluates MAP, PAPP-A, and UtA-PI: all three are identified as predictors of hypertensive diseases of pregnancy, consistent with the existing literature, and a combined screening method using these factors is proposed. The model's statistical analysis shows high sensitivity, specificity, and predictive values, highlighting its potential for early detection. The study further explores blood pressure changes during pregnancy, body edema, and urine proteins and their associations with hypertension. Cesarean section rates, preterm deliveries, and NICU admissions are also analyzed in hypertensive mothers, demonstrating higher rates than in normotensive counterparts. The discussion concludes by emphasizing the importance of the proposed combined screening method for the early detection of HDP.
Despite the slightly higher cost of PAPP-A testing, the study argues that its inclusion in regular screening is cost-effective when compared to the potential costs of treating associated morbidities. The model not only aids in hypertensive disease screening but also provides insights into genetic malformations, particularly in elderly gravidas, making a strong case for its adoption in prenatal care.
Introduction: Hypertension is the most frequent medical issue during pregnancy, complicating up to 10% to 15% of pregnancies worldwide. An estimated 14% of all maternal fatalities worldwide are thought to be caused by hypertensive disease of pregnancy (HDP), one of the main causes of maternal and fetal morbidity and mortality. Although maternal mortality is substantially lower in high-income countries than in low- and middle-income countries, hypertension remains one of the leading causes of maternal death globally; maternal mortality associated with hypertension fluctuated between 0.08 and 0.42 per 100,000 births between 2009 and 2015. In India, the estimated overall pooled prevalence of HDP was determined to be one out of 11 women, or 11% (95% CI, 5%-17%). Despite various government programs, the prevalence of hypertension remains high, which calls for stakeholders and healthcare professionals to focus on providing both therapeutic and preventive care. The best solution is to concentrate on the early detection of pregnancy-related hypertension and to guarantee its universal application so that proper care can prevent maternal and fetal morbidity. Aim: To estimate the predictive value of the combination of a maternal characteristic, i.e., mean arterial pressure (MAP), biophysical evaluation (uterine artery Doppler), and a biochemical marker (pregnancy-associated plasma protein A (PAPP-A)), in the first trimester of pregnancy for hypertensive diseases of pregnancy. Methodology: This was a prospective, longitudinal observational study conducted over 18 months in a tertiary care rural hospital. The number of women admitted to the hospital for labor care during 2019 was 5,261, of whom 513 were diagnosed with hypertensive illnesses during pregnancy. At a prevalence rate of 10%, we calculated a sample size of 350 to achieve a sensitivity of 85% with an absolute error of 12.5% at a 95% CI.
Maternal history, including age, education, socio-economic status, gravidity, and BMI, was taken along with three parameters: MAP, considered significant above 90 mmHg; uterine artery Doppler, considered significant above 1.69; and serum PAPP-A, considered significant below 0.69 ml/IU. Observation and results: We found the following to be associated with the prediction of hypertension: among the maternal characteristics, advanced age (>35 years), the presence of body edema, and urine proteins are significant, along with MAP, the uterine artery pulsatility index (UtA-PI), and PAPP-A. The predictive accuracy of the combination of MAP, UtA-PI, and PAPP-A is also significant. We also found a significant increase in cesarean sections and NICU admissions in hypertensive patients. Conclusion: A combination of screening parameters, comprising MAP, UtA-PI, and PAPP-A, to predict hypertensive disease of pregnancy early has been developed and tested.
Nabeela Razzak and Sameer Khan contributed equally, suggesting corrections and proofreading the data and results.
Cureus 15(12):e50624
Introduction Bones play numerous vital roles in the human body, providing a structural framework for muscles and other tissues, facilitating movement and protecting internal organs from injury [ 1 ]. Large bone defects resulting from bone tumors, injuries and other skeletal diseases often cannot heal through the body's natural repair mechanisms. Thus, bone grafts, either natural or synthetic, are required to replace the diseased or missing bone [ 1 , 2 ]. Although traditional grafting techniques are effective, they may lead to surgical complications and suboptimal outcomes. This has propelled the rise of tissue engineering solutions, especially bone tissue engineering, as promising alternatives [ 3 , 4 ]. In the evolution of bone tissue engineering, scaffolds constructed from biomaterials and combined with stem cells have been osteoinduced in vitro for 2–3 weeks before implantation into bone defect sites in animals, achieving promising bone defect repair outcomes [ 5 ]. Electrospinning is a prevalent technique for fabricating bone tissue engineering scaffolds. During electrospinning, precursor solutions transform from droplets into fine fibers under a high-voltage electric field. The collected ultrafine fibers form a fibrous membrane that mimics the extracellular matrix, providing a conducive environment for cell proliferation and osteoinduction, as our previous work has validated [ 6 , 7 ]. Despite their significant advantages, electrospun scaffolds have limitations, one of which is the uneven distribution of seeded cells inside and outside the scaffold, leading to poor cell infiltration. To overcome this limitation, a novel method, cell electrospinning (CES), has been proposed. CES, a technique based on electrospinning, produces fibers that can embed live cells. The primary distinction between CES and traditional electrospinning lies in the use of live cells in the spinning process [ 8 ], which makes scaffolds ready for immediate implantation in our experiments. 
In addition to fabrication techniques, biomaterials are indispensable in bone tissue engineering, and the choice of biomaterial is crucial. Poly(3-hydroxybutyrate-co-4-hydroxybutyrate) (P34HB), a polyhydroxyalkanoate derivative, stands out for its mechanical robustness, biocompatibility and biodegradability [ 9 ]. Its superior surface properties further enhance its appeal as a scaffold material [ 10 ]. Consequently, P34HB has garnered significant interest as a carrier for the sustained release of bioactive molecules and as a degradable implant material. Our preliminary investigations have confirmed its excellent biocompatibility, biodegradability and promotive effect on the osteogenic differentiation of bone marrow mesenchymal stem cells [ 6 , 11 ]. In this study, we continue to employ P34HB for the development of CES fiber scaffolds for osteogenesis. Furthermore, ascorbic acid 2-phosphate (ASP), β-glycerophosphate sodium (GP) and dexamethasone (DEX) are essential supplements in osteogenic induction culture media. In vitro , ascorbic acid has been proven to promote osteoblast differentiation [ 12 ]. However, because of its strong reducing nature, ascorbic acid is relatively unstable in external environments. ASP, a long-acting derivative of ascorbic acid, is gradually replacing it in specific applications owing to its superior stability and similar function [ 13 , 14 ]. Glycerophosphate acts as a phosphate group donor in bone matrix mineralization studies [ 15 ]. DEX, a widely used synthetic glucocorticoid, promotes osteoblast differentiation [ 16 ]. However, reports on these three components inducing stem cell osteogenesis in vivo are scarce. Wang et al. fabricated a lysine diisocyanate osteogenic scaffold with sustained release of ascorbic acid, β-glycerophosphate and DEX, studying its in vitro osteogenic induction capability on stem cells [ 17 ]. 
Yet, the combination of these three inducing components with CES technology for immediate implantation, and exploration of their in vivo osteogenic induction capability, remains unexplored. In studies of drug-loaded electrospun fibers, a high concentration of drug is often distributed on the fiber surface, producing an initial burst release. Nanofibers prepared with core-shell nozzles can confine the drug to the core layer, which is expected to limit drug release at the initial stage [ 18 ]. Coaxial electrospinning is a common method for preparing core-shell fibers; with this technique, drugs can be encapsulated inside the fibers, and the release time can be effectively prolonged as the encapsulation rate increases [ 19 ]. In this research, we introduce a novel P34HB-based fiber slow-release system, crafted using dual-nozzle CES. Combined with a core-shell nozzle, this system embeds human umbilical cord mesenchymal stem cells (HUCMSCs) into P34HB fibers loaded with ASP, GP and DEX for immediate in vivo application, and it consistently promotes osteogenic differentiation of HUCMSCs.
Materials and methods Materials The primary materials and equipment used are as follows: electrospinning device (Ucalery, Beijing, China), electron microscope (Hitachi, Japan), universal mechanical testing machine (MTS Systems Corporation, China), incubator (Thermo, China), static contact angle measurement system (JC2000C, Shanghai), P34HB (300 kDa, aseptic grade, MedPHA Biotech Co., Ltd, China) and PVP (360 kDa, Sigma-Aldrich, USA). All chemicals and reagents used in this study were purchased from Sigma-Aldrich (St Louis, USA), unless otherwise specified. Preparation of ultrafine fiber slow-release system ASP (250 mg), GP (80 mg) and DEX (1 mg) were dissolved in 3 ml deionized water to obtain solution A, which was filtered three times through a 0.22-μm filter to remove bacteria. Solution B was obtained by dissolving P34HB in a mixture of organic solvents (chloroform and dimethylformamide, 4:1, v/v) (Sigma, St Louis, MO, USA) to prepare an 8% P34HB solution. A 10% PVP solution was prepared in PBS and filter-sterilized, and HUCMSCs (ZhongGuan Biotechnology Co., Ltd, China) were suspended in it at 1 × 10 6 /ml to obtain solution C. The electrospinning equipment and the surrounding environment were fully sterilized with 75% alcohol and ultraviolet light before the start of the experiment. The instruments used in the experiment were autoclaved, and aseptic technique was strictly observed throughout [ 20 ]. Solutions A and B were placed in the inner and outer layers, respectively, of coaxial nozzle 1, connected to a 5-ml glass syringe, with a receiving distance of 10 cm, a voltage of 8 kV, and pushing speeds of 0.3 mm/min (inner layer) and 0.03 mm/min (outer layer). The receiver was a rotating petri dish containing culture medium, with a diameter of 15 cm and a rotation speed of 10 rpm. 
Cell/fiber scaffolds of the P(AGD)-CES group, containing ASP, GP and DEX, with a thickness of 0.1 cm and a length and width of 1 cm × 1 cm, were collected and cut. P(AGD) group scaffolds without CES were collected under the same conditions, as were pure P34HB (P group) fiber scaffolds, and set aside. The P-AGD group consisted of P34HB scaffolds induced in vitro by the addition of osteogenic induction medium. Optical microscopy, scanning electron microscopy and ultrafine fiber diameter distribution P(AGD) group fibers collected on glass slides were observed under an optical microscope to visualize the internal DEX, GP and ASP structures. After drying, gold sputtering and other treatments, SEM images of the P(AGD)-CES, P(AGD) and P groups were obtained using an SEM (XL30, FEI, USA). The diameter of the ultrafine fibers was calculated using ImageJ. Energy dispersive X-ray analysis The chemical elements of the P(AGD) and P group fiber scaffolds were analyzed using energy dispersive X-ray analysis (EDX) (Thermo Scientific Apreo S, USA). EDX analyses to characterize the elemental composition were performed over the total area of each fiber scaffold specimen using EDAX Team software. X-ray diffraction analysis X-ray diffraction (XRD) spectra of the P(AGD)-CES (cell-free), P(AGD) and P34HB group scaffolds were recorded on an X-ray diffractometer in the 2θ range of 5°–40°. CuKα radiation (wavelength, 1.54 Å; filament current, 30 mA; voltage, 40 kV) was used to produce X-rays. The crystallinity of the samples was assessed from the XRD patterns by separating the amorphous and crystalline portions and using the following expression: crystallinity (%) = [Acr/(Acr + Aam)] × 100, where Acr is the area under the crystalline peaks and Aam is the area under the amorphous portion. Porosity The porosity of the P(AGD)-CES (cell-free), P(AGD) and P34HB group scaffolds was measured using a high-performance, fully automated mercury intrusion porosimeter, with a pressure range of 0.5–33000 psi and a pore size range of 5 nm–50 μm (AutoPore V 9600, Micromeritics, USA). 
Fourier transform infrared spectroscopy The chemical structure of the P(AGD)-CES (cell-free), P(AGD) and P34HB group scaffolds was determined by Fourier transform infrared (FT-IR) spectroscopy using the KBr method. Infrared spectra were recorded in the range of 4000–500 cm −1 , with a resolution of 4 cm −1 . Each spectrum was an average of eight scans. Mechanical testing Fiber membranes from the P(AGD)-CES (cell-free), P(AGD) and P34HB group scaffolds were cut into pieces measuring 60 mm × 15 mm, with an effective tensile length of 40 mm. Tensile tests were conducted using an electronic testing machine (100 N sensor, stretching speed of 5 mm/min). The tensile strength, elastic modulus and elongation at break were averaged over three samples, and stress–strain curves were plotted (Meters Industrial Systems, Inc., China). ASP, DEX slow-release curve The slow release of ASP and DEX was determined according to a reported method [ 21 ]. P(AGD) group bioactive scaffolds were cut and weighed, 100 mg per group, and transferred to small tubes containing 1 ml PBS. The tubes were kept at 37°C, shaking at 60 rpm in a shaking incubator. Every 48 h, 1 ml was withdrawn from each tube and replaced with fresh PBS. The release solutions were analyzed using a Shimadzu-2700 UV-visible spectrophotometer, with λ ASP = 260 nm and λ DEX = 262 nm. The release curves were fitted and the release formulas calculated using OriginPro. Hydrophilicity The hydrophilicity of the P(AGD)-CES (cell-free), P(AGD) and P34HB group scaffolds was evaluated through contact angle (water) and water absorption analyses. A drop of water was placed on the fiber membrane, and the droplet was photographed to measure the contact angle. The samples (length × width × thickness: 5 cm × 5 cm × 0.1 mm) were soaked in PBS at 37°C. 
After 4 h, the water absorption rate of the fiber pad was calculated using the following formula: water absorption (%) = [(Mw − Md)/Md] × 100, where Md is the original weight of the bioactive scaffold and Mw is the weight of the scaffold after soaking in PBS. In vitro degradation experiment P(AGD)-CES (cell-free), P(AGD) and P34HB group scaffolds were cut into 1 cm × 1 cm pieces with a thickness of 0.1 mm, vacuum freeze-dried for 24 h, weighed and recorded. The membranes were then placed in small beakers containing 10 ml of physiological saline and kept shaking in a 37°C water bath. The degradation rate of the fiber membranes was calculated every week for 2 months as: degradation rate (%) = [(M0 − M1)/M0] × 100, where M0 is the weight before degradation and M1 is the weight after degradation. CES cell survival rate, biocompatibility and HUCMSCs proliferation P(AGD)-CES group scaffolds with a thickness of 0.1 mm and a length and width of 1 × 1 cm were co-cultured in a six-well plate. Same-sized scaffolds of the P(AGD), P-AGD and P34HB groups were inoculated with 1 × 10 6 HUCMSCs per well for in vitro culture. Osteogenic induction medium was added to the P-AGD group. CES survival was calculated by assessing cell viability in the P(AGD)-CES group with a live/dead staining kit after 1 day of culture. Cell viability on the scaffolds of the P(AGD)-CES, P(AGD), P-AGD and P groups was assessed using live/dead staining kits after 7 and 14 days and photographed using a two-photon microscope (FVMPE-RS, Olympus, Japan). HUCMSCs proliferation was analyzed using ImageJ. SEM of HUCMSCs adhesion P(AGD)-CES group scaffolds with a thickness of 0.1 mm and a length and width of 1 × 1 cm were co-cultured in a six-well plate. Same-sized scaffolds of the P(AGD), P-AGD and P34HB groups were inoculated with 1 × 10 6 HUCMSCs per well for in vitro culture. After 7 and 14 days, the four groups of cell/scaffold composites were fixed overnight in 2.5% glutaraldehyde, dehydrated through an ethanol gradient, dried and sputter-coated with gold. 
The distribution and morphology of cells on the fiber scaffolds were observed using an SEM at an accelerating voltage of 20 kV. Osteogenesis promotion of scaffolds on mesenchymal stem cells in vitro The P(AGD)-CES group with a thickness of 0.1 mm and a length and width of 1 × 1 cm was co-cultured in a six-well plate. The same-sized scaffolds of the P(AGD), P-AGD and P34HB groups were inoculated with 1 × 10 6 HUCMSCs per well for in vitro culture. Immunohistochemical staining was performed after 7 and 14 days using calcein, type I collagen and osteopontin staining kits to evaluate the in vitro osteogenic induction of HUCMSCs under the four conditions. Fluorescent photos were taken using a two-photon microscope, and analysis was performed using Image-ProPlus. Ectopic osteogenesis of fiber slow-release system In vivo ectopic osteogenesis induction studies were conducted on New Zealand White rabbits (20 rabbits, 2–3 kg). This animal experiment was ethically reviewed and approved by the Ethics Committee of Guizhou Medical University (No. 1901083). All procedures were conducted according to our animal research institution guidelines and in accordance with the UK Animals (Scientific Procedures) Act of 1986. The study complied with the ‘Guide for the Care and Use of Laboratory Animals’ published by the US National Institutes of Health (NIH Publication No. 8023, revised 1978). P(AGD)-CES group scaffolds with a thickness of 0.1 mm and a length and width of 1 × 1 cm and P(AGD), P-AGD and P34HB group scaffolds inoculated with HUCMSCs at 1 × 10 6 /well were prepared and set aside. After anesthetizing the animals and shaving their backs, a 2-cm incision was made subcutaneously, the cell scaffold was implanted and the wound was sutured. 
The P(AGD)-CES group was implanted under the rabbit skin immediately after preparation; the P(AGD), P-AGD and P34HB groups were implanted under the rabbit skin after 2 weeks of co-culture in vitro , with osteogenic induction medium added to the P-AGD group. Two months later, the newly formed tissue was removed, fixed in 4% paraformaldehyde and examined for new bone formation using micro-CT. Histological staining All samples removed from the subcutis were fixed with 4% paraformaldehyde, dehydrated with gradient ethanol and embedded in paraffin. They were then cross-sectioned at 5 μm for histological analysis. Before staining, the sections were deparaffinized and rehydrated. The four groups of subcutaneous samples were stained for in vivo osteogenesis using HE, Alizarin Red and Masson staining kits, and the slides were viewed and photographed under a light microscope and analyzed using Image-ProPlus. Statistical analysis All experimental data are presented as mean ± standard deviation. One-way analysis of variance was used, with differences considered statistically significant at * P < 0.05 or ** P < 0.01 ( n ≥ 3).
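The gravimetric calculations in the Methods (water absorption and degradation rate) reduce to simple ratios. A minimal sketch follows, with hypothetical weights chosen to reproduce two values reported in the Results; the function names are illustrative:

```python
def water_absorption(md_mg, mw_mg):
    """Water absorption (%) = (Mw - Md) / Md x 100, where Md is the dry
    scaffold weight and Mw the weight after soaking in PBS."""
    return (mw_mg - md_mg) / md_mg * 100

def degradation_rate(m0_mg, m1_mg):
    """Degradation rate (%) = (M0 - M1) / M0 x 100, where M0 is the weight
    before degradation and M1 the weight after degradation."""
    return (m0_mg - m1_mg) / m0_mg * 100

# Hypothetical weights (mg) chosen to match two reported figures:
print(round(water_absorption(100.0, 438.1), 1))  # 338.1% water absorption
print(round(degradation_rate(100.0, 93.8), 1))   # 6.2% mass loss
```

Both measures are simple relative mass changes, which is why freeze-drying and careful weighing before and after treatment are the only experimental requirements.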
Results Microscopic morphology of fiber slow-release system P(AGD)-CES, P(AGD) and P group fiber scaffolds were successfully prepared, and their ultrafine fiber structures were observed ( Figure 1A and C–E ). Under the electron microscope, the ultrafine fibers were randomly oriented, and cells were evenly embedded within the fiber network ( Figure 1E ). Under the optical microscope, crystals formed by the three effective components ASP, GP and DEX were evenly distributed inside the ultrafine fibers ( Figure 1A ). Transmission electron microscopy revealed the crystalline structure of ASP, GP and DEX within the fibers (white arrow) ( Figure 1B ). The average diameter of the P34HB fibers was measured by SEM as 1.65 ± 0.56 μm. Energy dispersive X-ray analysis EDX clarified the distribution and content of the three inducers DEX, ASP and GP in the fiber membrane. P34HB contains four elements: C, H, O and P ( Figure 1F ). The molecular formula of DEX is C22H29FO5, that of ASP is C6H6Mg1.5O9P·xH2O, and that of GP is C3H7Na2O6P·5H2O. Scanning revealed that the pure P group contained C: 60.39 ± 2.37%, O: 39.16 ± 2.51% and P: 0.44 ± 0.11%. In the P(AGD) fibers, the presence of F, Mg and Na confirmed the uniform distribution of DEX, ASP and GP within the ultrafine fibers ( Figure 1G ). The mass percentages of the three elements in the P(AGD) group were measured as F: 0.06 ± 0.01%, Na: 0.34 ± 0.05% and Mg: 2.12 ± 0.35% ( Figure 1G ). From these, the mass percentages of the three substances in the prepared ultrafine fiber slow-release system were calculated as DEX: 1.24 ± 0.21%, ASP: 4.73 ± 0.78% and GP: 15.5 ± 2.28%. X-ray diffraction analysis The XRD spectra of the P(AGD)-CES, P(AGD) and P34HB group fiber scaffolds were recorded ( Figure 2A ). The crystallinity of each group was calculated as P(AGD)-CES group: 8.13 ± 2.12%, P(AGD) group: 17.32 ± 3.74% and P34HB: 2.52 ± 1.23% ( Figure 2B ). 
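The EDX-based back-calculation of drug loading described above can be illustrated for DEX, the only component containing fluorine. This is a sketch using standard atomic weights, not the authors' exact procedure:

```python
# Standard atomic weights (g/mol)
C, H, F, O = 12.011, 1.008, 18.998, 15.999

# DEX is C22H29FO5, molar mass ~392.47 g/mol
mw_dex = 22 * C + 29 * H + F + 5 * O
f_mass_fraction = F / mw_dex          # mass fraction of fluorine within DEX

# F appears only in DEX, so the measured F wt% scales directly to DEX wt%
measured_f_percent = 0.06             # wt% F measured in P(AGD) fibers by EDX
dex_percent = measured_f_percent / f_mass_fraction
print(round(dex_percent, 2))          # ≈ 1.24 wt%, matching the reported value
```

The same marker-element logic underlies the ASP (via Mg) and GP (via Na) estimates, with the complication that the hydrate water content of those salts must be known.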
It was concluded that adding DEX, ASP and GP to P34HB significantly increased the crystallinity of the fiber slow-release system. The increase in crystallinity led to an increase in tensile strength, which was reflected in the mechanical property testing. Porosity The average porosity of the scaffolds was determined by mercury intrusion porosimetry ( Figure 2C ); porosity reflects the environment available for cell growth within the scaffolds and the ability of cells to exchange nutrients with the outside. Scaffolds with high porosity are more favorable for providing a microenvironment for cell growth. Porosity was measured as 82.3 ± 1.3% for the P34HB group, 74.9 ± 2.7% for the P(AGD) group and 85.81 ± 4.1% for the P(AGD)-CES group. From these results, we conclude that all three groups of scaffolds have high porosity. The addition of DEX, ASP and GP to the P34HB fibers decreased the porosity, but porosity was enhanced by the addition of CES. This is closely related to the inclusion of PVP, which releases additional space after hydrolysis. The increased porosity of the P(AGD)-CES group provides a better microenvironment for the growth of HUCMSCs. FT-IR spectroscopy The FT-IR spectra of the P(AGD)-CES, P(AGD) and P34HB group fiber membranes are shown in Figure 2D . In the P34HB spectrum, the peaks at 2985 and 2938 cm −1 correspond to C–H stretching vibrations, the 1737 cm −1 absorption peak to the stretching vibration of the carbonyl C=O, the 1458 and 1385 cm −1 absorption peaks to C–H bending vibrations, the 1183 cm −1 absorption peak to the stretching vibration of C–O–C, and the 1057 cm −1 absorption peak to the coupled vibration of the C–O–C–O–C bond ( Figure 2E ). These results show that in the P(AGD)-CES and P(AGD) groups, with added PVP, ASP, GP and DEX, only the intensity of each peak is weakened, and no new peaks appear. 
No new functional groups formed, indicating that no chemical reactions occurred among the various substances. Mechanical testing The mechanical properties of the slow-release fiber scaffolds were evaluated, and the corresponding stress–strain curves were plotted ( Figure 2F ). The elastic modulus of the fiber membrane was 21.42 ± 3.1 MPa for P34HB, 59.78 ± 4.2 MPa for P(AGD) and 69.15 ± 3.1 MPa for P(AGD)-CES ( Figure 2G ). The tensile strength was 1.8 ± 0.4 MPa for P34HB, 6.5 ± 0.8 MPa for P(AGD) and 7.68 ± 1.31 MPa for P(AGD)-CES ( Figure 2H ). The elongation at break was 313.28 ± 31.12% for P34HB, 478.35 ± 40.26% for P(AGD) and 525.6 ± 25.45% for P(AGD)-CES ( Figure 2I ). It was concluded that adding DEX, ASP and GP to the fiber membrane enhanced the elastic modulus and tensile strength to a certain extent. After adding CES, the elastic modulus and tensile strength increased further, which is related to the addition of PVP. Good mechanical properties are essential for bone tissue engineering scaffolds, and the DEX, ASP, GP and PVP added here enhanced the mechanical properties of the fiber scaffolds, which is beneficial for tissue-engineered bone. DEX and ASP slow-release curve The slow release of ASP and DEX from P(AGD) group fibers was tested by UV spectrophotometry, and their cumulative release curves were plotted. The sustained release of DEX lasted for 2 months ( Figure 2J ), with a larger release in the first 10 days and a stable release of about 5–10 ng/ml per 2 days thereafter. The release of ASP lasted for about 1 month ( Figure 2K ). This difference is related to the water solubility of DEX and ASP: ASP is more water soluble, while DEX dissolves slowly. Although the slow release of ASP lasts only about 1 month, in vitro osteogenic induction usually requires only 14–21 days, so this is sufficient to complete the osteogenic induction of HUCMSCs. 
Based on the two release curves, we calculated the cumulative release formulas as Y DEX = 96.87 + 1.71X and Y ASP = 532.01 + 18.38X, where Y is the cumulative release concentration (ng/ml) and X is the number of days of release. Hydrophilicity The hydrophilicity and water absorption capacity of the bioactive fiber membranes were tested. The water contact angles were 120.34° ± 1.83° for P34HB, 111.63° ± 3.02° for P(AGD) and 88.97° ± 3.05° for P(AGD)-CES ( Figure 3A ). With the addition of DEX, ASP and GP, the hydrophilicity of the fiber membrane was enhanced, and the inclusion of PVP improved it significantly further, giving the P(AGD)-CES group the best hydrophilicity. The water storage capacity of the three groups of fiber membranes soaked in PBS for 4 h was also tested. The P(AGD)-CES group had the strongest water storage capacity, reaching 338.1 ± 15.13%, followed by the P(AGD) group at 317.3 ± 8.08%, while the P34HB group was poorer at 240.1 ± 8.47% ( Figure 3B ). Hydrophilic materials favor cell adhesion and proliferation, and they can store more water and nutrients to better nourish cells [ 7 ]. The P(AGD)-CES group has the best hydrophilicity and water storage capacity and is therefore more suitable for bone tissue engineering scaffold preparation than the other groups. In vitro degradation In vitro degradation of the fiber scaffold membranes was verified ( Figure 3C ). The P34HB fiber membrane lost 6.2 ± 0.72% of its mass in the first month and 12.2 ± 0.41% in the second month. The P(AGD) group lost 4.3 ± 0.77% in the first month and 8.7 ± 1.11% in the second month. The P(AGD)-CES group lost 21.3 ± 1.42% in the first month and 26.7 ± 1.28% in the second month. 
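The linear cumulative-release fits reported above can be evaluated directly. A sketch using the fitted coefficients (X in days, Y in ng/ml); the evaluation days are illustrative:

```python
def y_dex(days):
    """Cumulative DEX release (ng/ml) from the fitted line Y = 96.87 + 1.71X."""
    return 96.87 + 1.71 * days

def y_asp(days):
    """Cumulative ASP release (ng/ml) from the fitted line Y = 532.01 + 18.38X."""
    return 532.01 + 18.38 * days

# Cumulative release at the end of each component's reported release window
print(round(y_dex(60), 2))  # DEX after 2 months
print(round(y_asp(30), 2))  # ASP after 1 month
```

The positive intercepts (96.87 and 532.01 ng/ml at X = 0) are consistent with the burst release observed during the first days, which the linear fit absorbs as an offset.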
Since PVP is a highly hydrophilic biomaterial, the biodegradability of the PVP-containing P(AGD)-CES group is better than that of the other groups; it degrades quickly and frees space for cell growth, promoting cell growth and development. CES cell survival rate, biocompatibility and HUCMSCs proliferation The cell survival rate of CES was assessed by live/dead staining imaging after 1 day ( Figure 3E ). From the live/dead cell counts, the survival rate at an electric field of 0.08 kV/mm was calculated to be 88 ± 4.3%. This indicates that the safety of HUCMSCs in CES can be ensured, with little damage from the voltage during electrospinning. Good biocompatibility of the fibrous scaffolds was observed by live/dead staining imaging after 7 and 14 days of co-culture with HUCMSCs in the P(AGD)-CES, P(AGD), P-AGD and P34HB groups ( Figure 3F ). The cells grew well, with a large number of green live cells and a small number of red dead cells observed. This shows that P34HB and its internal ASP, GP and DEX have little toxic effect on cell growth and proliferation, and that the materials we used have good biocompatibility. Cell proliferation rates at 7 and 14 days were calculated by ImageJ ( Figure 3D ): 124.12 ± 24.54% at 7 days and 266.18 ± 42.87% at 14 days in the P(AGD)-CES group, 95.72 ± 17.27% at 7 days and 215.78 ± 31.02% at 14 days in the P(AGD) group, 121.96 ± 28.18% at 7 days and 237.16 ± 36.77% at 14 days in the P-AGD group, and 101.96 ± 12.36% at 7 days and 177.27 ± 34.03% at 14 days in the P group. Compared with day 7, HUCMSCs numbers had nearly doubled by day 14, with no statistical difference in cell proliferation between groups. SEM of HUCMSCs adhesion Cell scaffolds co-cultured for 7 and 14 days were examined by SEM to observe cell behavior ( Figure 3G ). 
SEM showed that the cells of the P(AGD)-CES group were uniformly distributed between the fibers, in contact with each other, and growing in three-dimensional space within the scaffold with good morphology. Cell pseudopods and secreted extracellular matrix could be seen, and calcium salt (CS) deposition and scaffold mineralization were observed. The cells in the P(AGD), P-AGD and P34HB groups showed only two-dimensional growth on the scaffold surface; their pseudopods could not extend into the scaffold, and their cellular activities and behaviors were spatially limited. Scaffold promotes osteogenic differentiation of HUCMSCs in vitro The effect of osteogenic differentiation in vitro can be indirectly reflected by staining for calcein, type I collagen and osteopontin. After 7 and 14 days of cell/scaffold co-culture, the osteogenic differentiation of HUCMSCs in vitro was observed using calcein ( Figure 4A ), osteopontin ( Figure 4B ) and type I collagen staining kits ( Figure 4C ). Photographs were taken by two-photon microscopy, and ImageJ was used to measure the calcein, type I collagen and osteopontin content in each group. Quantification of positive areas in calcein staining was 10.16 ± 2.9% at 7 days and 26.20 ± 2.46% at 14 days in the P(AGD)-CES group, 7.70 ± 3.1% at 7 days and 24.34 ± 2.3% at 14 days in the P(AGD) group, 7.16 ± 2.1% at 7 days and 25.22 ± 1.1% at 14 days in the P-AGD group, and 0.5 ± 0.2% at 7 days and 3.25 ± 0.3% at 14 days in the P34HB group ( Figure 4D ). Quantification of positive areas in osteopontin staining was 2.08 ± 0.23% at 7 days and 3.04 ± 0.36% at 14 days in the P(AGD)-CES group, 1.89 ± 0.29% at 7 days and 2.67 ± 0.33% at 14 days in the P(AGD) group, 1.67 ± 0.41% at 7 days and 2.87 ± 0.42% at 14 days in the P-AGD group, and 0.26 ± 0.13% at 7 days and 0.34 ± 0.12% at 14 days in the P34HB group ( Figure 4E ). 
Quantification of positive areas in type I collagen staining was 12.15 ± 2.9% at 7 days and 30.06 ± 3.46% at 14 days in the P(AGD)-CES group, 9.75 ± 3.5% at 7 days and 28.26 ± 2.25% at 14 days in the P(AGD) group, 7.56 ± 3.1% at 7 days and 27.33 ± 3.12% at 14 days in the P-AGD group, and 0.5 ± 0.23% at 7 days and 2.42 ± 0.42% at 14 days in the P34HB group ( Figure 4F ). The experimental results showed that the P(AGD)-CES and P(AGD) groups could still induce osteogenic differentiation of HUCMSCs without the addition of osteogenic induction medium. Their induction effect was close to that of the P-AGD group, and the difference was not statistically significant. Scaffold promotes HUCMSCs ectopic osteogenesis in vivo In vivo heterotopic osteogenesis was further evaluated to assess scaffold promotion of osteogenic differentiation of HUCMSCs. The scaffolds in the P(AGD)-CES, P-AGD, P(AGD) and P groups were implanted subcutaneously in New Zealand White rabbits for 8 weeks and then removed. The newly formed bone tissue in each group was observed using micro-CT, and its three-dimensional structure was reconstructed ( Figure 5A ). New bone volume was measured in each group using MIMICS ( Figure 5C ). The P(AGD)-CES group had the largest bone volume, at 5.02 ± 1.1% of the total volume. The amounts of new bone in the P-AGD group (2.24 ± 0.5%) and the P(AGD) group (2.73 ± 0.6%) were similar, with no statistically significant difference. These results indicate that, compared with in vitro osteogenic induction, in vivo osteogenic induction with P(AGD)-CES offers greater advantages, producing more new bone than the other groups. HUCMSCs growing in three dimensions within the scaffold were able to undergo better osteogenic differentiation and promote more new bone formation. Histological staining HE, Alizarin Red and Masson staining were used to evaluate the ability of the bioactive scaffolds to form new bone ( Figure 5B ). 
HE staining confirmed the formation of new bone within the newly formed tissue block, where the 'osteoid trabeculae' structure is shown inside the red dashed line (regenerated bone). This can be observed in the P(AGD)-CES, P-AGD and P(AGD) groups. Additionally, osteocytic lacunae and osteocytes (yellow arrows) and osteoblasts (red arrows) can be observed within the stained osteoid trabeculae in these three groups. This demonstrates the excellent osteogenic capability of the fiber slow-release system we prepared. Alizarin Red staining further clarified the CS content in the tissue, with deep red staining indicating the CS area. Masson staining highlighted the blue collagen fibers (CF) and red muscle fibers (MF) distributed within the newly formed tissue block, with blue collagen occupying the majority of the field of view. In addition, abundant fibrous connective tissue, blood vessels and other structures can be observed in the stained images. Based on the staining results, we conclude that the P(AGD)-CES fiber slow-release system we prepared has excellent bone induction effects, superior to the traditional in vitro induction of the P-AGD group and the cell-seeded P(AGD) group, whose osteogenic effects were similar ( Figure 5D–F ). Additionally, a small number of osteoblasts can be observed in the control P group with HE staining, and Alizarin Red and Masson staining show a certain amount of CS and CF, which also indicates that P34HB itself has some bone induction capability.
Discussion Stem cell-based bone tissue engineering has emerged as a promising approach for addressing bone defects [ 22 , 23 ]. While P34HB has shown potential as a scaffold material, its inability to provide an intrinsic osteogenic environment in vivo necessitates external osteogenic factors. Common osteogenic inducers, including ascorbic acid, β-glycerophosphate and DEX, have been widely used in vitro . Studies have shown that, in in vitro systems, the induction of type I collagen, osteocalcin, bone sialoprotein and alkaline phosphatase mRNA levels by ascorbic acid, β-glycerophosphate and DEX is related to the formation of bone nodules [ 24 ]. Among them, ascorbic acid is an effective collagenase inhibitor that promotes collagen synthesis [ 25 ], activates alkaline phosphatase [ 26 ], induces osteoblast differentiation and stimulates cell proliferation [ 27 ]. β-glycerophosphate is a simple phosphate donor and a classic serine-threonine phosphatase inhibitor used in kinase reaction buffers [ 28 ]. DEX is a synthetic glucocorticoid that can significantly induce osteoblast differentiation [ 29 ]. Studies have found that ascorbic acid and β-glycerophosphate promote matrix mineralization by inducing an increase in matrix vesicle neutral metalloproteinases, which is beneficial for mineral precipitation [ 29 ]. However, their combined potential in an in vivo setting, especially when integrated with CES, remains largely uncharted. In this study, we successfully prepared a P34HB ultrafine fiber slow-release system containing three components: ASP, GP and DEX. This system can induce stem cell osteogenic differentiation by slowly releasing ASP, GP and DEX, omitting the step of adding osteogenic induction medium. We observed by optical microscopy and SEM that the three effective components ASP, GP and DEX were effectively wrapped inside the P34HB fiber filaments. 
Transmission electron microscopy confirmed the crystalline structure formed by ASP, GP and DEX inside the fiber filaments, a structure that enables slow release. In vitro slow-release tests showed that ASP can be continuously released for 1 month and DEX for 2 months, whereas in vitro osteogenic induction culture requires only 2–3 weeks. The system's mechanical properties, hydrophilicity and biodegradability also performed well in the experiments. FT-IR spectroscopy confirmed that the components ASP, GP and DEX did not undergo chemical reactions and that the active components were retained. EDX quantified the mass fractions of the active components, which roughly match the ratio of the three required in in vitro osteogenic induction culture medium. Furthermore, we successfully implemented immediate in vivo implantation of the P34HB ultrafine fiber slow-release system through dual-nozzle and CES technology to perform in vivo induction. The addition of CES solved the problems of uneven cell distribution inside and outside the scaffold and poor cell infiltration after traditional cell seeding. It also allowed the tissue engineering scaffold to be implanted into the animal immediately after preparation for in vivo self-induced osteogenesis. This eliminates the in vitro cell induction step and avoids uncontrollable risks, such as cell aging and contamination, that may arise during in vitro culture, which has not been reported in previous studies. It has been shown that the electric field is an important variable affecting cell viability, and that high cell viability (90%) can be achieved with an electric field in the range of 0.05–0.075 kV/mm [ 30 ] or a low electric field of 0.1 kV/mm [ 31 ]. In this study, cell survival was maintained at 88 ± 4.3% at an electric field of 0.08 kV/mm.
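The reported release profiles (ASP over roughly 1 month, DEX over roughly 2 months) are the kind of behavior a simple first-order release model describes. The Python sketch below illustrates such a model with hypothetical rate constants chosen only to match those time scales; it is not fitted to the study's measured release data.

```python
import numpy as np

def first_order_release(t_days, q_inf, k):
    """Cumulative fraction released at time t under a first-order model:
    Q(t) = Q_inf * (1 - exp(-k * t))."""
    return q_inf * (1.0 - np.exp(-k * t_days))

# Hypothetical rate constants chosen so that ~95% of the payload is released
# by day 30 (ASP) and day 60 (DEX); these are NOT fitted to the paper's data.
t = np.arange(0, 61)
asp = first_order_release(t, q_inf=1.0, k=3.0 / 30)
dex = first_order_release(t, q_inf=1.0, k=3.0 / 60)

print(f"ASP released by day 30: {asp[30]:.2f}")
print(f"DEX released by day 60: {dex[60]:.2f}")
```

A fit to the actual cumulative-release measurements would replace the assumed rate constants with estimated ones.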
Electron and light microscopy of the prepared cell-fiber scaffold showed a uniform cell/fiber distribution. In vitro experiments verified that the scaffold has good biocompatibility, and osteogenic staining after in vitro induction confirmed that its osteogenic ability is higher than that of the control group. After immediate in vivo implantation for 8 weeks, a large amount of new bone formation was observed inside the P(AGD)-CES group scaffold by micro-CT scanning and histological staining. Both in vitro and in vivo experiments showed that the P(AGD)-CES fiber slow-release system we prepared has better physicochemical properties and superior osteogenic induction effects, an important basis for achieving bone tissue regeneration. Previous studies have reported material-based slow-release scaffolds, such as PLGA-based scaffolds that release DEX and ascorbic acid-2-phosphate for in vitro and in vivo osteogenic differentiation and osteogenesis [ 17 ], and scaffolds of polylactic acid (PLLA) and chitosan (CS) loaded with DEX-containing mesoporous silica nanoparticles, in which slow-released DEX induces osteogenesis [ 32 ]; all of these have shown great potential in inducing stem cell osteogenesis. Tissue engineering studies involving CES include combining 3D printing and CES to build 3D structures with high mechanical strength for bone regeneration [ 31 ], preparing cardiac patches by CES to improve cardiac function [ 33 ], and portable handheld CES for wound repair in rats [ 34 ]. Compared with the scaffolds reported in the literature, the P(AGD)-CES fiber slow-release system we prepared offers additional advantages, such as excellent biocompatibility and long-lasting slow-release performance, effectively promoting HUCMSC osteogenic differentiation and CS deposition.
In addition, the slow-release system combined with CES enables immediate implantation of scaffolds into animals for in vivo self-induced osteogenesis, a novel feature compared with previous research. Its biomimetic structure, mechanical properties, biocompatibility, slow-release ability and excellent bone regeneration ability make it a bone regeneration material with broad application prospects. While our findings are promising, further research is needed to elucidate the in situ bone regeneration and repair capabilities of our system. As the field of bone tissue engineering continues to evolve, innovations like ours pave the way for more efficient and effective therapeutic solutions.
Conclusion In this study, we successfully developed the P(AGD)-CES ultrafine fiber slow-release system using dual-nozzle and CES, enabling immediate in vivo implantation without prior in vitro induction. The system’s potential in promoting osteogenic differentiation of HUCMSCs, as evidenced by both in vitro and in vivo experiments, underscores its promise as a novel therapeutic tool. Given its biomimetic attributes, mechanical robustness, biocompatibility and sustained release capabilities, the P(AGD)-CES system offers an innovative approach to addressing bone defects in clinical settings. The limitations of this study are the lack of in vivo analysis of osteogenic markers and of an assessment of in situ osteogenesis; these will be addressed in subsequent studies.
The first two authors contributed equally to this study. Abstract This study presents the development and evaluation of a poly(3-hydroxybutyrate-co-4-hydroxybutyrate) (P34HB) ultrafine fiber slow-release system for in vivo osteogenic induction of human umbilical cord mesenchymal stem cells (HUCMSCs). Utilizing dual-nozzle and cell electrospinning techniques, the system encapsulates L-ascorbic acid-2-phosphate magnesium (ASP), β-glycerophosphate sodium and dexamethasone (DEX) within the fibers, ensuring sustained osteogenic differentiation. The scaffold’s morphology, characterization, hydrophilicity, mechanical properties and cellular behavior were examined. Immediate subcutaneous implantation in rabbits was conducted to observe its ectopic osteogenic induction effect. The P34HB ultrafine fiber slow-release system was successfully fabricated. Characterization confirmed the uniform distribution of HUCMSCs and inducing components within the scaffold, with no chemical reactions affecting the active components. In vitro tests showcased a prolonged release of DEX and ASP, while biocompatibility assays highlighted the scaffold’s suitability for cellular growth. Alizarin Red, type I collagen, and osteopontin (OPN) staining verified the scaffold’s potent osteogenic induction effect on HUCMSCs. Notably, immediate implantation into New Zealand White rabbits led to significant new bone formation within 8 weeks. These findings underscore the system’s potential for immediate in vivo implantation without prior in vitro induction, marking a promising advancement in bone tissue engineering. Graphical Abstract
Acknowledgements We thank Dr Liping Shu for helpful discussion in this study. Funding This research was supported by the National Natural Science Foundation of China [81960416 (C.Y.) and 82260428 (L.Y.)]; Department of Science and Technology of Guizhou Province [2020] 6013-2 (C.Y.); Doctor start-up Fund of Affiliated Hospital of Guizhou Medical University [gyfybsky-2022-13 (L.Y.)]; Excellent Reserve talents Fund of Affiliated Hospital of Guizhou Medical University [gyfyxkrc-2023-11 (L.Y.)]. Conflicts of interest statement. None declared.
Regen Biomater. 2023 Dec 25; 11:rbad113
Introduction Diseases caused by pathogenic fungi pose a significant risk to grape production, leading to economic losses [ 1 , 2 ]. Botrytis cinerea is the second most important plant pathogen and is responsible for causing grey mould rot in various economically significant crops [ 3 ]. In vineyards, this pathogen can infect grape leaves and, most frequently, flowers, resulting in latent infections that become aggressive during fruit ripening and berry maturation [ 4 ]. In addition to this primary pathogen, the populations present at the sites of infection, known as the microbiome, can significantly impact disease outcomes [ 5 ]. The prevalence of yeasts and moulds is also of significant importance in grapevine microbiome studies, given their potential roles in biocontrol and in metabolic activities crucial to the winemaking process [ 6 , 7 ]. Improvements in DNA sequencing techniques have expanded B. cinerea detection and plant microbiome research, but each technique has specific limitations. The Sanger sequencing method cannot process microbiome data: sequencing of fungal genetic markers such as ITS1 and ITS2 requires a pure culture, and the method cannot distinguish species in a mixture of amplicons, limiting its utility [ 8 ]. Massive parallel sequencing of short specific genetic markers, such as ITS1 and ITS2, has generally been used for research. While shorter amplicons offer high read quality and high throughput, this approach, commonly employed by Illumina and pyrosequencing techniques, often fails to assign taxonomy reliably at the species level [ 9 ]. In mycobiome research, the single-molecule sequencers Nanopore and PacBio have their respective strengths and disadvantages in long amplicon sequencing [ 10 ]. Nanopore excels in real-time data acquisition and portability, making it ideal for fieldwork and rapid diagnostics. PacBio, on the other hand, stands out for exceptionally long reads and low error rates, making it a powerful tool for in-depth genomic studies of fungi.
Both Nanopore and PacBio require only low DNA input and have high phasing capabilities [ 11 , 12 ]; however, it is important to note that PacBio instruments may not be as widely available as Nanopore due to the high costs associated with them, limiting access for researchers in certain regions [ 13 ]. Nanopore sequencing can be achieved through whole metagenome sequencing, adaptive sampling, or amplicon-based sequencing. The whole-genome sequencing (WGS) protocol for fungal identification is robust; however, it generates an excess of sequenced data from the host organism's genome, which prevents optimal Nanopore flow cell utilization [ 14 , 15 ]. An alternative to WGS is the enrichment approach provided by the ONT adaptive sampling concept, which represents a software-controlled enrichment; however, the software and library preparation steps are still under investigation [ 16 ]. Long amplicon sequencing methods for fungal profiling using ONT involve sequencing the complete ribosomal operon (∼5500 bp), encompassing the 18S, ITS1, 5.8S, ITS2, and 28S regions [ 9 ]. The potential of long-read sequencing for fungal microbiome profiling was demonstrated on animal and human specimens [ 17 , 18 ]. Investigation of the mycobiota in external otitis in dogs proved that the method has the potential to characterize fungal communities in diverse samples, whether healthy or clinically affected [ 17 ]. Fungal communities were also characterized in human specimens (sputum) with ONT, revealing the capacity of long-read sequencing for the accurate identification of fungal diseases [ 18 ]. Consequently, recent years have seen improvements in pathogen detection from diseased plant tissues in microbial community studies. To date, there has been no research examining key grape diseases using this long amplicon ONT sequencing protocol. Our research aims to address key questions, including the detectability of B.
cinerea with long amplicon sequencing, even when it coexists with other fungal species within grapevine leaf samples. Furthermore, this study explores the potential for identifying other fungal species present in infected grapevine leaves, as a foundation for grey rot control and future microbiome studies.
Material and methods Sample collection B. cinerea B05.10 strain, used as control, was kindly provided by the University of Padova. In addition, two pure cultures of B. cinerea used as positive controls were kindly provided by Agricultural University of Georgia. These isolates were grown on Potato dextrose agar (PDA) at 21°C for 7 days, with a 12-h day photoperiod. An aseptic sampling of infected grapevine leaf tissues was performed on-site at small vineyards in different regions of Georgia: Kakheti and Imereti. A total of 10 asymptomatic grapevine leaves were collected at BBCH stage 7 (development of fruits), from plants exhibiting symptoms of B. cinerea infection in the previous year 2021. Before processing, the plant samples' surfaces were sterilized by treating them with 0.1% sodium hypochlorite for 30 s and rinsing them three times with sterile distilled water. Tissue fragments (200–400 mg) were aseptically removed from the samples using sterile scalpels or scissors, transferred to sterile plastic bags, pulverized with a hammer, and suspended in a pre-lysis buffer (OxGEn). DNA extraction and qPCR DNA extraction from pre-lysed plant samples and from pure fungal cultures was performed using OxMag Pathogen DNA Purification Kit (OxGEn) following the manufacturer's instructions. The quality and quantity of the extracted DNAs were assessed using the NanoDrop ND-1000 instrument (NanoDrop Technologies) and Qubit 4 Fluorometer (Invitrogen by Thermo Fisher Scientific). Ten grapevine leaf samples were tested with B. cinerea TaqMan PCR Kit (Norgen Biotek Corp.) according to the manufacturer’s protocol. Two positive samples AW1 and AW2 (with B. cinerea detected at 34 and 36 Ct, respectively) were selected for downstream applications. Sanger sequencing library construction The pure B. cinerea culture strains and the B05.10 control strain were sequenced with the Sanger sequencing method. 
Polymerase chain reaction (PCR) was performed using the fungal-specific forward primer ITS1F (CTTGGTCATTTAGAGGAAGTAA) and reverse primer ITS4 (TCCTCCGCTTATTGATATGC), which amplified the ITS1 and ITS2 regions of the nuclear ribosomal RNA genes [ 15 ]. The amplification reaction included 30 ng of total DNA template, HOT FIREPOL ® DNA Polymerase (Solis Biodyne), 2.5 mM MgCl2, and a 0.1 μM concentration of each primer. PCR was carried out in a GENE PRO TC-E-4l Thermal Cycler (BIOER) with an initial denaturation step at 95°C for 3 min, followed by 35 cycles of 95°C for 30 s, 58°C for 30 s, and 72°C for 1 min, and a final extension at 72°C for 10 min. Amplicons were submitted to Macrogen Europe BV (Amsterdam, the Netherlands) for sequencing on an ABI PRISM 3730XL Analyser (Applied Biosystems) with the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems). Long-read amplicon sequencing library construction Amplification and sequencing were performed for grapevine leaf fungal communities and B. cinerea pure cultures. The fungal-specific forward primer LR12 (GACTTAGAGGCGTTCAG) and the reverse primer SR1R (TACCTGGTTGATTCTGCCAGT) [ 19 ] were used at a concentration of 0.1 μM each, with up to ∼30 ng of total DNA template and OneTaq ® Hot Start DNA Polymerase (New England Biolabs). PCR was performed using a Thermal Cycler (BIOER), with an initial denaturation step at 94°C for 30 s, followed by 25 cycles of 94°C for 30 s, 50°C for 45 s, and 68°C for 5 min, and a final extension at 68°C for 5 min. The resulting amplicons were visualized on a 1% agarose gel by electrophoresis and analysed by GelRed staining. Sequencing of the amplicons was performed by ONT sequencing with a MinION device (Oxford Nanopore Technologies) using a Ligation Sequencing Kit (LSK-109, Oxford Nanopore Technologies) for preparation of the amplicon library. The DNA was processed for end repair and dA-tailing using the NEBNext End Repair/dA-tailing Module (New England Biolabs).
A purification step using 1X Agencourt OxMag XP beads (OxGEn) was performed. Native Barcoding kit (SQK-NBD114.24, Oxford Nanopore Technologies) was used for sample barcoding. For adapter ligation, Blunt/TA ligase master mix (New England Biolabs) was used. The 48-h sequencing protocol was run using MinKNOW software version 23.11.2. Bioinformatics and taxonomic-based analysis The Albacore v.2.2.1 software was used for base-calling and de-multiplexing fast5 files. Porechop version 0.2.4 was employed for barcode and adapter trimming. Minimap2, integrated into the EPI2ME software, served as a taxonomy classifier, utilizing a custom reference database and the assignment taxonomy tool. The mapping parameters used for taxonomic classification in EPI2ME are integrated into the software and aligned against a reference that has been uploaded using the FASTA Reference Upload analysis with minimap2 version 2.12. The default parameters embedded in the EPI2ME software are utilized for this analysis. The reference database, known as FRODO (Fungal rRNA Operon Database for ONT-sequences), was assembled by collecting 9072 fungal genome sequences from various sources, including NCBI, JGI, FungiDB, Ensembl Fungi, and the Broad Institute, to extract complete rRNA operon sequences [ 10 ]. All read classifications underwent a filtering process, and only those with coverage exceeding 70% and an identity of at least 90% were retained. Geneious Prime (version 2022.1.1) software was employed to analyse Sanger sequencing results. Consensus sequences were then subjected to NCBI BLAST search (Megablast—fast, high similarity matches) to determine the species of the samples. Pairwise alignments between the Sanger sequencing results and ONT were performed using BLAST software.
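The coverage/identity filter applied to the read classifications can be sketched in a few lines. The record fields (`coverage`, `identity`) below are hypothetical stand-ins for the alignment statistics produced by the classification pipeline, not the actual EPI2ME output format.

```python
# Minimal sketch of the read-classification filter described above:
# keep only alignments with >70% coverage and >=90% identity.
def filter_classifications(records, min_coverage=70.0, min_identity=90.0):
    return [r for r in records
            if r["coverage"] > min_coverage and r["identity"] >= min_identity]

reads = [
    {"read": "r1", "taxon": "Botrytis cinerea", "coverage": 95.2, "identity": 97.1},
    {"read": "r2", "taxon": "Mucor racemosus",  "coverage": 68.0, "identity": 94.0},
    {"read": "r3", "taxon": "S. cerevisiae",    "coverage": 81.5, "identity": 89.3},
]
kept = filter_classifications(reads)
print([r["read"] for r in kept])  # r2 fails coverage, r3 fails identity
```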
Results Prior to ONT sequencing, the control and pure cultures were confirmed by Sanger sequencing, while the presence of B. cinerea in the leaf samples was confirmed by a diagnostic commercial qPCR kit. Long-read sequencing of fungal amplicons resulted in 63 632 reads in pure culture samples and 65 173 in grapevine leaf samples (total number of reads from five samples: 128 805), with an average quality score of 11.4 and a read length of ∼5300 bp (base pairs). The distribution of B. cinerea in both pure cultures and grapevine leaf tissues is presented in Table 1 . The AP1, AP2, and AP3 samples correspond to pure cultures, while AW1 and AW2 represent the total mycobiome of grapevine leaf samples. The average number of reads per sample was 25 761, providing sufficient sequencing depth to accurately assess the abundance of B. cinerea within the complex fungal microbiota associated with field samples. The results indicate that B. cinerea was correctly assigned in the pure culture samples, with percentages ranging from 97.44% to 98.05%. Comparative analysis of Sanger and Nanopore data from pure cultures revealed a high level of identity between Nanopore and Sanger sequences, with an average of 98.17% identity within 28% query coverage owing to the difference in sequence read length. The high identity percentage is attributed to the sequencing depth of the ONT consensus, despite the quality score of 11.4. In the grapevine leaf tissue samples, B. cinerea exhibited lower abundance (Ct values 34 and 36), accounting for only 5.52% and 4.79% of the sequenced reads, respectively. Fungi other than B. cinerea were more prominent in the grapevine tissue samples, constituting 88.98% and 90.78% of the sequenced reads, while 5.50% and 5.08% could not be assigned to known fungal species, indicating the presence of unidentified fungi in the analysed samples.
The analysis of fungal species and their distribution within the studied samples provides valuable insights into the composition of the fungal mycobiota ( Fig. 1 ). Among the identified fungal groups, the Ascomycota yeast S. cerevisiae was consistently present in the AW1 sample, with an abundance of 53.97%, whereas the Zygomycota species Mucor racemosus was dominant in the AW2 sample (79.23%). We detected a total of 52 different fungal species in the leaf samples, but only 14 species had an abundance higher than 0.5%. Species with an abundance lower than 0.5% were filtered out to enhance the reproducibility of the reported information ( Supplementary Table 1 ). B. cinerea was detected in the grapevine leaf samples at a relatively lower abundance than S. cerevisiae , M. racemosus and other fungal species in both the AW1 and AW2 samples ( Fig. 1 ). The average length of the Botrytis cinerea complete ribosomal operon was determined to be 5385 bp. We contributed our sequenced data to the NCBI nucleotide collection (nr/nt) database (LC743580.1; LC749799.1; LC750323.1; LC754729.1; LC756294.2).
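The relative-abundance figures reported here come from classified read counts normalized per sample, with species below 0.5% filtered out. A minimal sketch of that calculation, using illustrative counts rather than the study's actual data:

```python
from collections import Counter

# Sketch: per-sample relative abundance from classified read counts, with the
# <0.5% filter used in the results. Counts below are illustrative only.
def relative_abundance(read_counts, min_pct=0.5):
    total = sum(read_counts.values())
    pct = {sp: 100.0 * n / total for sp, n in read_counts.items()}
    return {sp: p for sp, p in pct.items() if p >= min_pct}

aw1_counts = Counter({"Saccharomyces cerevisiae": 5397,
                      "Mucor racemosus": 3000,
                      "Botrytis cinerea": 552,
                      "rare species": 30})  # ~0.33% of reads, filtered out
abund = relative_abundance(aw1_counts)
print(sorted(abund, key=abund.get, reverse=True))
```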
Discussion Our paper describes a long-read amplicon Nanopore sequencing approach enabling the sequencing of complete ribosomal operons for the correct identification of B. cinerea from control pure cultures. The experimental design with three known pure cultures allowed us to ensure the accuracy and reliability of the sequencing process and the reproducibility of the results. As expected, most of the reads in these samples belonged to B. cinerea , but 1.04% to 1.89% of reads were misclassified as other fungal species. This misclassification may result from SNPs introduced during PCR amplification, a high degree of sequence similarity between B. cinerea and other Botrytis species, non-comprehensive databases, or inherent Nanopore sequencing errors [ 13 , 20 ]. Furthermore, our Nanopore sequencing method provides the potential for the detection and species-level taxonomic resolution of B. cinerea within grapevine leaf samples. In our research, we obtained sequences of 52 distinct fungal species within leaf samples infected by B. cinerea . Saccharomyces cerevisiae and M. racemosus were the most abundant species in the two different samples. The remaining species ( Saccharomyces boulardii, Ascochyta lentis, Didymella keratinophila, Didymella zeae-maydis, Ascochyta rabiei, Didymella segeticola, Mucor lusitanicus, and Mucor circinelloides ) were detected with an abundance above 0.5%. The differences in the mycobiome composition of the AW1 and AW2 samples could be explained by regional differentiation and agricultural practices. Several studies have suggested that such differences could be explained by the dominance of different species in different parts of a country. For instance, a significant association of Aspergillus and Penicillium spp.
with the Chardonnay grapevine cultivar was observed in Napa; Bacteroides, Actinobacteria, Saccharomycetes, and Erysiphe necator were abundant in the Central Coast; Botryotinia fuckeliana and Proteobacteria were the dominant microorganisms in Sonoma [ 6 ]. Comparative analysis of Sanger and ONT sequencing has shown the methods’ capacity to accurately identify B. cinerea at the species level. Our analysis of sequences derived from the full-length rRNA operon revealed identity matches exceeding 95% when compared to species-representative sequences available in the FRODO database. We detected B. cinerea in complex microbiological samples, even when its abundance in the grapevine leaf samples was relatively low compared to other fungal species (5% according to ONT sequencing data, as also confirmed by qPCR with Ct values of 34 and 36). In our research, qPCR Ct values served as a control for the presence of B. cinerea infection, but we could not assign the Ct values to a fungal load in the infected samples. Further field experiments with mock communities are needed to establish whether the method provides the true abundance of species. To exclude false-positive results, the enrichment of databases with sequences of closely related species within different genera is crucial [ 21 ]. The complete ribosomal operon-based sequencing method serves as an effective tool for researching the phylogenetic composition of fungal taxa that are challenging to classify using ITS sequences alone [ 17 ]. The utilization of longer rRNA gene sequences, with the availability of both small-subunit (SSU) and large-subunit (LSU) reads, can significantly enhance the taxonomic resolution of fungal taxa [ 18 ]. In our study, we employed the PCR primer pair LR12/SR1R, which offered a comprehensive amplicon encompassing all regions of the complete ribosomal operon. We detected the presence of B. cinerea in leaf samples but also obtained host-free DNA sequences.
This is a significant improvement over previous studies in the field of plant microbiome research, including Nanopore adaptive sampling protocols, where the issue of host DNA is still a challenge [ 16 ]. At the molecular level, the internal transcribed spacer (ITS) region continues to maintain its role as the universal fungal barcode marker, with the UNITE fungal database for nuclear ribosomal ITS regions containing over 230 000 sequences [ 22 ]. Notably, there is also a growing demand for longer sequences encompassing not only the complete ITS region but also the small-subunit (SSU) and large-subunit (LSU) regions to provide enhanced taxonomic resolution and information. In our study, we relied on the FRODO database (Fungal rRNA Operon Database for ONT-sequences) as our reference resource, which encompasses 9072 fungal amplicon sequences [ 10 ]. While FRODO offers a more focused dataset that aligns well with ONT sequencing, it underscores the persisting need for enrichment of databases to ensure the accurate identification of fungal species. This requirement remains a crucial step for future improvements in fungal taxonomy and mycobiome research. In conclusion, the development and optimization of the Nanopore long amplicon-based fungal identification approach show significant promise across a wide range of applications in plant disease diagnostics, particularly within the grape industry. In regions with prevalent grapevine diseases, this methodology can enable host DNA-free, real-time diagnosis, significantly reducing confirmation time and offering crucial insights for pathogen surveillance and mycobiome research.
Abstract Botrytis cinerea is a well-known plant pathogen responsible for grey mould disease infecting more than 500 plant species. It is listed as the second most important plant pathogen scientifically and economically. Its impact is particularly severe in grapes since it affects both the yield of grape berries and the quality of wines. While various methods for detecting B. cinerea have been investigated, the application of Oxford Nanopore Technology (ONT) for complete ribosomal operon sequencing, which has proven effective in human and animal fungal research and diagnostics, has not yet been explored in grapevine ( Vitis vinifera ) disease research. In this study, we sequenced complete ribosomal operons (∼5.5 kb amplicons), which encompass the 18S, ITS1, 5.8S, ITS2, and 28S regions, from both pure cultures of B. cinerea and infected grapevine leaf samples. Minimap2, a sequence alignment tool integrated into the EPI2ME software, served as a taxonomy classifier, utilizing the custom reference database FRODO. The results demonstrate that B. cinerea was detectable even when this pathogen was not the dominant fungal species in leaf samples. Additionally, the method facilitates host DNA-free sequencing and might have good potential to distinguish other pathogenic and non-pathogenic fungal species hosted within grapevine’s infected leaves, such as Alternaria alternata, Saccharomyces cerevisiae, Saccharomyces boulardii, Mucor racemosus, and Ascochyta rabiei. The sequences were uploaded to the NCBI database. The long amplicon sequencing method can be extended to other susceptible crops and pathogens as a valuable tool for early grey rot detection and mycobiome research. Future large-scale studies are needed to overcome challenges, such as comprehensive reference databases for complete fungal ribosomal operons for grape mycobiome studies.
Supplementary Material
Acknowledgements We thank Dr. Nana Bitsadze for providing us with pure cultures of fungal strains, confirmed by morphology. Author contributions Vladimer Baramidze (Conceptualization [lead], Data curation [lead], Funding acquisition [lead], Methodology [equal], Project administration [lead], Supervision [equal], Writing—review & editing [equal]), Luca Sella (Conceptualization [equal], Funding acquisition [equal], Methodology [equal], Supervision [equal]), Tamar Japaridze (Conceptualization [equal], Funding acquisition [equal], Project administration [equal]), Nino Abashidze (Methodology [equal], Resources [equal], Validation [equal], Writing—original draft [equal], Writing—review & editing [equal]), Daviti Lamazoshvili (Data curation [equal], Formal analysis [equal], Methodology [equal], Software [equal], Visualization [equal], Writing—original draft [equal], Writing—review & editing [equal]), Nino Dzotsenidze (Conceptualization [equal], Methodology [equal]), and Giorgi Tomashvili (Data curation [equal], Software [equal], Supervision [supporting]). Conflict of interest statement. None declared.
Biol Methods Protoc. 2024 Jan 5; 9(1):bpad042
1 Introduction Circular RNA (circRNA) is a distinctive class of RNAs produced from pre-mRNA. Unlike common RNAs with 5′ and 3′ ends, circRNA has a unique ring structure formed by a back-splicing mechanism ( Bogard et al. 2018 , Hao et al. 2019 ) and is widely present in human, mouse, and other cells and tissues, including the hippocampus ( Dori et al. 2019 , Li and Han 2019 ). This special structure enhances the stability of circRNA, which usually shows a stage-specific expression pattern ( Rybak-Wolf et al. 2015 ). Increasing evidence has shown that circRNA can participate in the regulation of gene expression by combining with the corresponding RNA-binding protein (RBP) ( Chen 2016 , Zang et al. 2020 ). Like other non-coding RNAs ( Huang et al. 2022a , b ), it can also play a crucial part in the screening and therapy of many diseases ( Jiao et al. 2021 , Wang et al. 2021 ), especially cancer ( Zhang et al. 2018 , Su et al. 2022 ). Therefore, understanding the mechanism of action between circRNA and RBP is crucial to revealing circRNA formation and its biological functions ( Chen et al. 2022 , Niu et al. 2022a , b , c ). With the emergence of sequencing-based biological technologies, such as high-throughput sequencing with crosslinking immunoprecipitation (HITS-CLIP), many RBP targets in mature circRNAs have been found in eukaryotes ( Dudekula et al. 2016 , Ruan et al. 2019 ). However, due to the high cost of detecting each pair of interaction sites, many computational methods for identifying circRNA–RBP sites have been developed. Thanks to advancements in deep learning, the identification performance for RBP-binding sites has been continuously improved. For example, CSCRSites ( Wang et al. 2019a , b ) is a deep learning algorithm that identifies cancer-specific RBP-binding sites using only nucleotide sequence information. CircSLNN ( Ju et al.
2019 ) is a novel approach that transforms the RNA-binding site prediction problem into a sequence labeling problem and adopts a word-embedding-based coding scheme to capture the context and semantic information of sequences. CRIP ( Zhang et al. 2019 ) proposes a stacked codon-encoding deep learning algorithm based on convolutional neural networks and recurrent neural networks, which learn abstract features and sequence dependencies, respectively, to complete the RBP-binding site recognition task. However, these methods are single-view algorithms; the useful features obtained from the sequence are quite limited and often constrained by the size of the data, so they cannot achieve good performance. Subsequently, researchers introduced multi-view algorithms. PASSION ( Jia et al. 2020 ) is a multi-view integrated neural network algorithm in which the optimal feature subset is selected and input into the network through incremental feature selection and the XGBoost algorithm. iCircRBP-DHN ( Yang et al. 2021 ) proposes two new encoding schemes as inputs: K-tuple nucleotide frequency patterns and CircRNA2Vec word embeddings. A deep multi-scale residual network, a bidirectional gated recurrent unit (BiGRU), and a self-attention mechanism form the deep network architecture. CRBPDL ( Niu et al. 2022a , b , c ) proposes an AdaBoost-integrated deep network architecture, which includes deep multi-scale residual networks and BiGRU, further improving performance. HCRNET ( Yang et al. 2022 ) incorporates a fine-tuned DNABERT model and a deep temporal convolutional network to capture global context-dependent semantic and syntactic information for circRNA sequences.
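As an illustration of the K-tuple nucleotide frequency idea used by iCircRBP-DHN and later methods, the sketch below encodes a sequence as a 4**k vector of k-mer frequencies for a single k. The published KNFP descriptor combines several k values, so this is a simplified, assumed form rather than the exact published encoding.

```python
from itertools import product

# Sketch of a k-tuple nucleotide frequency encoding: a sequence becomes a
# 4**k vector of k-mer frequencies over the RNA alphabet ACGU.
def ktuple_frequencies(seq, k=2):
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        m = seq[i:i + k]
        if m in counts:
            counts[m] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[m] / total for m in kmers]

vec = ktuple_frequencies("ACGUACGUAC", k=2)
print(len(vec), round(sum(vec), 6))  # 16 dimensions, frequencies sum to 1
```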
The CNN- and RNN-based networks (and their variants) used as deep feature extractors in the above studies suffer from poor parallelism, difficulty capturing long-range temporal dependencies, and limited stability. CircSSNN ( Cao et al. 2023 ) proposes an algorithm that relies entirely on the self-attention mechanism to extract deep features and achieves better performance. Although these algorithms keep raising the performance bar for the recognition task, they are all based on supervised learning; in other words, they require a great number of sample labels for network training. Usually, the ratio of training samples to test samples is as high as 80%:20%. Although good performance is achieved, this greatly limits the exploration of unknown circRNA–RBP interaction mechanisms. Consequently, developing weakly supervised, self-supervised, or even unsupervised algorithms for this task has immense practical significance. Self-supervised learning (SSL) ( Liu et al. 2021 ) is a special kind of unsupervised learning. It learns the required features without real labels through pre-designed pretext (agent) tasks, so that downstream tasks often need only a few labels (or even none) to achieve strong performance. Contrastive learning performs particularly well in computer vision because it can learn invariant representations from augmented data without label information ( Hjelm et al. 2019 , Chen et al. 2020 , He et al. 2020 ), demonstrating significant self-supervised capability. The typical procedure is as follows: first, data augmentation produces several slightly different views (usually two) of the original image. Different views of the same sample are then taken as positive pairs, and all other samples serve as negatives.
By maximizing the similarity between positive pairs and minimizing the similarity between positive and negative samples, an artificial “label” is constructed to guide the learning of network features. However, while contrastive learning is effective for images, it is difficult to apply to time-series data for several reasons. Above all, capturing temporal dependencies in the data is critical yet challenging. Second, image-based augmentation techniques, such as random cropping, do not transfer to time-series datasets. Thus far, there have been few studies on contrastive learning for time-series data, and it has not been applied to the prediction of circRNA-binding sites. To reduce the algorithm's dependence on sample labels as much as possible, and thereby broaden its applicability, this article carries out in-depth research and proposes an algorithm named CircSI-SSL: circRNA-binding site identification based on self-supervised learning. The algorithm uses only three shallow statistical feature descriptors, KNFP, CircRNA2Vec, and the electron–ion interaction pseudo-potential (EIIP), which reduces computational resource requirements. After encoding, RNA_Transformer, our Transformer model adapted for the circRNA recognition task, is used to (i) perform cross-view sequence prediction tasks that train the network, capture temporal dependencies in the multi-view sequence data, and learn an overall representation of the sequence; and (ii) fine-tune the network parameters for the specific task with a very small number of sample labels (10%), thereby completing RBP-binding site prediction. Comprehensive experiments on 12 widely used datasets show that the algorithm obtains a significant improvement over supervised learning algorithms. In summary, the primary contributions of this article are as follows.
A novel SSL method is applied in the domain of circRNA-binding protein recognition, changing the situation in which most of the label information is needed to obtain good performance. Using only a small amount of supervised information leads to a substantial enhancement in performance, which gives the method wide application value. We propose a novel proxy task that captures temporal dependencies in sequences using an improved RNA_Transformer as the backbone model and completes cross-view sequence prediction based on multiple feature descriptors instead of sequence augmentation techniques. Comprehensive experiments on six widely used circRNA datasets and six linear RNA (linRNA) datasets demonstrate that the proposed algorithm has comprehensive advantages over previous supervised learning approaches. Even when utilizing only 10% of the labeled data for training, the proposed algorithm demonstrates stable and outstanding performance, along with robust scalability.
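As background for the contrastive idea sketched in the introduction (maximize similarity between two views of the same sample, minimize it against negatives), here is a minimal pure-Python illustration of an InfoNCE-style loss with cosine similarity. This generic formulation is for orientation only; it is not the cross-view prediction objective proposed in this article, and the function names are ours.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors given as lists of floats.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    # InfoNCE-style objective: pull the positive pair together,
    # push the anchor away from every negative sample.
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

A well-aligned positive pair yields a lower loss than a misaligned one, which is exactly the artificial supervision signal described above.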
2 Materials and methods 2.1 Datasets To assess the validity of our approach, we selected six widely used circRNA datasets: WTAP, FXR1, C17ORF85, QKI, TAF15, and AUF1. These circRNA sequences derive from the CircInteractome database ( https://circinteractome.nia.nih.gov/ ); the extracted data include circRNA–RBP interaction information as well as RBPs that bind to the upstream and downstream flanking sequences of mature circRNAs ( Yee et al. 2019 ). We then use the same data processing steps as previous research ( Zhang et al. 2019 ). The resulting sequence fragments of 101 nucleotides in length are taken as positive samples, and an equal number of negative samples are obtained by randomly selecting other sequences. Similar sequences are removed with the CD-HIT technique at a threshold of 0.8 ( Li and Godzik 2006 ). After removal of sequence redundancy, a total of 15 570 samples were obtained, and all samples used in the experiments were randomly shuffled. In addition, we transplant the CircSI-SSL algorithm to linear RNA datasets and compare its performance in identifying RBP interactions with several existing supervised algorithms. The linear RNA datasets are downloaded from iDeepS ( Pan et al. 2018 ) and DeepBind ( Alipanahi et al. 2015 ) and comprise six datasets after HITS-CLIP processing: hnRNPC-2, U2AF65, hnRNPC-1, QKI, ELVAL1-2, and Y2AF65. 2.2 Feature multi-descriptors To enrich the originally single sequence view, we employ three quantitative feature methods to extract preliminary statistical features of the sequence: (i) KNFP, which captures local semantic features at different positions; (ii) CircRNA2Vec, which captures long-range dependencies; and (iii) EIIP, which characterizes the free electron energy along the circRNA sequence. 2.2.1 KNFP In this section, we introduce the KNFP scheme in detail. Different from the traditional One-hot representation ( Zhang et al.
2019 ), the KNFP scheme can extract various short-range sequence-dependent information ( Orenstein et al. 2016 ) and local semantic features, which greatly compensates for the limitations of One-hot encoding while retaining the original sequence pattern. Taking a specific circRNA sequence of length L as an example, KNFP slides along the sequence selecting k consecutive nucleotides at a time and counts the frequency of each combination in the form of k-tuples (the different combinations of k nucleotides) as the final encoding. In detail, a k-tuple has 4^k different combinations, and the frequency of each k-tuple pattern is calculated from the specific circRNA sequence, with f_i representing the frequency of the i-th k-tuple pattern. Processing a single circRNA sequence thus yields a feature of dimension L − k + 1. We concatenate the encoded features obtained with k = 1, 2, and 3, respectively, padding each with 0 at the end. 2.2.2 CircRNA2Vec CircRNA2Vec ( Yang et al. 2021 ) is a feature descriptor that employs the Doc2Vec algorithm to learn global contextual features of circRNA. Doc2Vec ( Le and Mikolov 2014 ) is an extension of Word2Vec capable of learning fixed-length feature representations from variable-length texts. Unlike Word2Vec, Doc2Vec introduces an additional paragraph vector at the input layer, which captures the contextual information of paragraphs. This links word vectors with paragraph vectors, addressing the limitation that Word2Vec trains word vectors only and overlooks paragraph-level context. We collect as many circRNA splicing sequences as possible from circBase ( Glažar et al. 2014 ) to serve as the corpus. Using a sliding window of size 10, we extract subsequences from each circRNA sequence, yielding multiple sequences, which allows the algorithm to capture semantic information within these subsequences for modeling.
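The sliding-window corpus construction just described can be sketched as follows (window size 10, per the text; the helper name is illustrative):

```python
def sliding_subsequences(seq, window=10):
    # Extract all overlapping subsequences of a fixed window size from a
    # circRNA sequence, forming the corpus units fed to Doc2Vec.
    return [seq[i:i + window] for i in range(len(seq) - window + 1)]
```

Each circRNA sequence of length L thus contributes L − window + 1 overlapping subsequences to the corpus.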
Given a text sequence of length T, where the word at time step t is denoted w^(t), and a context window of size m, the likelihood function of the model is the probability of generating a specific word from its context, P(w^(t) | w^(t−m), …, w^(t−1), w^(t+1), …, w^(t+m)). The goal of the model is to maximize the average logarithmic probability (1/T) Σ_{t=1}^{T} log P(w^(t) | w^(t−m), …, w^(t+m)). 2.2.3 EIIP EIIP, introduced by Nair and Sreenadhan (2006) , is a feature encoding scheme that describes the energy of delocalized electrons in the amino acids and nucleotides of circRNA sequences, replacing the four binary indicator sequences traditionally used to encode a sequence. It has been widely utilized in the Resonance Recognition Model. The EIIP values for the nucleotides “G,” “C,” “T,” and “A” are 0.0806, 0.1340, 0.1335, and 0.1260, respectively. To enrich the feature representation, we also incorporate the PSTNPss encoding scheme, a position-specific feature encoding based on single-stranded DNA; see He et al. (2018) for details. 2.3 CircSI-SSL algorithm architecture In this section, we introduce the CircSI-SSL self-supervised framework, which learns high-quality representations of sequences and then uses only a small number of samples to fine-tune for specific tasks to achieve excellent results. The overall framework is shown in Fig. 1 , and pseudo-code is provided for a more intuitive understanding. The model consists of two components: cross-view prediction and fine-tuning. (i) Multiple feature encoders encode the initial features obtained from the various descriptors extracted from the raw sequence data, and a cross-view sequence prediction task is carried out with RNA_Transformer. (ii) The trained encoded features are then fused, and RNA_Transformer extracts structured features from the fused multi-view features based on a small number of labels, completing the classification task.
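Before the encoders, the shallow descriptors of Section 2.2 reduce each raw sequence to numeric features. A minimal pure-Python sketch of two of them, KNFP (Section 2.2.1) and EIIP (Section 2.2.3), follows; the function names are ours, and the zero-padding strategy is one plausible reading of the text.

```python
def knfp_encode(seq, ks=(1, 2, 3)):
    # KNFP sketch: for each k, count how often every k-tuple occurs in
    # the sequence, then represent each sliding-window position by the
    # frequency of the k-tuple found there. Each per-k vector is padded
    # with 0 at the end to length len(seq), then all are concatenated.
    L = len(seq)
    features = []
    for k in ks:
        tuples = [seq[i:i + k] for i in range(L - k + 1)]
        counts = {}
        for t in tuples:
            counts[t] = counts.get(t, 0) + 1
        freqs = [counts[t] / len(tuples) for t in tuples]
        freqs += [0.0] * (L - len(freqs))  # zero-pad to full length
        features.extend(freqs)
    return features

# EIIP values quoted in the text for each nucleotide.
EIIP = {"G": 0.0806, "C": 0.1340, "T": 0.1335, "A": 0.1260}

def eiip_encode(seq):
    # Map a nucleotide sequence to its EIIP signal (0.0 for unknown bases).
    return [EIIP.get(base, 0.0) for base in seq]
```

For a sequence of length L this yields a KNFP vector of length 3L (one padded segment per k) and an EIIP signal of length L.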
2.3.1 Cross-view prediction Early research on SSL applied agent tasks to image datasets to learn high-quality representations, for example predicting image rotation ( Gidaris et al. 2018 ), image colorization ( Zhang et al. 2016 ), and puzzle solving ( Noroozi and Favaro 2016 ). By using image augmentation to construct positive and negative samples, contrastive learning broadened its range of application: instance discrimination methods such as SimCLR ( Chen et al. 2020 ) and MoCo ( He et al. 2020 ), and time-series analysis methods such as CPC ( Oord et al. 2018 ) and TS-TCC ( Eldele et al. 2021 ). Unfortunately, the performance of these algorithms depends heavily on the augmentation techniques used, and for time-series data it is difficult to find effective, widely applicable counterparts to operations such as random cropping and image graying. This greatly restricts the application of contrastive learning to time-series data. Building on this observation, this article studies a new contrastive task that extracts features from multiple real views for mutual prediction, without the help of augmentation techniques. We take an improved Transformer ( Vaswani et al. 2017 ) together with TS-TCC ( Eldele et al. 2021 ) as the feature extraction networks, as shown in Fig. 2 . The model principally consists of multi-head attention, feed-forward network (FFN), and layer normalization (LN) blocks. The FFN block consists of a fully connected layer, a non-linear ReLU function, and dropout. The model uses a pre-norm residual connection ( Wang et al. 2019a , b ), applying LN before the multi-head self-attention network, which yields more stable gradients: LN(x) = γ ⊙ (x − μ)/√(σ² + ε) + β, where μ and σ² are the mean and variance of x, and γ and β are the parameter vectors for scaling and translation, respectively. The query, key, and value information related to a specific task are represented as Q, K, and V, respectively.
The number of attention heads is denoted h, and the Q, K, and V aggregated over the multiple heads are denoted Q̃, K̃, and Ṽ, respectively, with d signifying the dimension of the input vector. LayerNorm regularization is then carried out, and the features extracted by the multiple heads are aggregated through the FFN block to finally obtain the context feature C representing the whole sequence. The entire process can be summarized as follows: given a batch of circRNA sequences, the preliminary features are extracted by the CircRNA2Vec and EIIP descriptors, respectively, and then encoded by an encoder (a 1D convolutional neural network) into feature sequences of length N. Context variables for each view are then extracted by the improved Transformer, and cross-view mutual prediction is carried out between them, with one loss term per prediction direction. 2.3.2 Fine-tuning After mutual cross-view prediction, we obtain the trained RNA_Transformer, which has learned to express the overall context of a sequence from its features. We then fine-tune the network for the specific task of circRNA–protein binding site prediction. Specifically, we fuse the features encoded by the three feature descriptors and input them into RNA_Transformer. The context information of the fused features is extracted, processed by a projection head, and normalized by softmax to obtain the predicted label ŷ. Finally, using cross-entropy loss and training with only a very small number of real labels, excellent results can be obtained. To the best of our knowledge, this is the first application of an SSL algorithm to the RNA–protein binding site prediction problem. Unlike HCRNet and CircSSNN, the three feature descriptors we selected are relatively shallow and do not use the DNABERT large language model, so the method requires fewer hardware resources and is easy to apply widely.
Compared with previous supervised learning algorithms, this reduces excessive dependence on real labels: after the sequences are represented through agent-task learning without real labels, superior performance can be achieved with only a small number of labels for the final task.
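As an illustration of the core computations inside RNA_Transformer described in Section 2.3.1, here is a minimal pure-Python sketch of layer normalization and single-head scaled dot-product attention. The real model uses multi-head attention with learned projections, FFN blocks, and pre-norm residual connections; this sketch only shows the arithmetic.

```python
import math

def layer_norm(x, eps=1e-5):
    # LayerNorm over one vector, with gamma = 1 and beta = 0 for simplicity.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def attention(queries, keys, values):
    # Scaled dot-product attention for a single head.
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

With identical keys the softmax weights are uniform, so each output position is simply the average of the value vectors, which matches the attention formula above.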
3 Results and discussion 3.1 Experimental setup In our experiments, the networks are trained with the Adam optimizer, with weight_decay set to 3e-4 and batch size to 64. The learning rate is controlled automatically by PyTorch's built-in scheduler, with an initial value of 3e-3. We employ one layer of RNA_Transformer and set dim to 400, heads to 8, and mlp_dim to 200. 3.2 Performance of existing supervised algorithms Figure 3 shows the AUC performance achieved on six circRNA–RBP datasets by eight existing supervised recognition algorithms: CircSSNN, HCRNet, iCircRBP-DHN, PASSION, CRIP, CircRB, CSCRSites, and CircSLNN. The train:test ratio is set to 8:2, following the numbers of training and test samples claimed in the respective papers. As the figure shows, the latest algorithm, CircSSNN, achieves nearly perfect performance, with HCRNet and iCircRBP-DHN close behind. Because the results in Fig. 3 are hard to distinguish, we separately plot the results of these three algorithms on these datasets as box plots ( Fig. 4 ). It should be noted, however, that these algorithms require up to 80% of the samples for training, i.e. 80% of the labels obtained through biological experiments must be invested in the algorithm to guide the network toward useful, easily distinguishable features. Biological experiments are expensive, slow, and inefficient, consuming substantial human, material, and financial resources, which greatly limits the universality of such algorithms. The dependence on labels should therefore be reduced as much as possible to lower the cost.
3.3 Performance of CircSI-SSL To validate the low label dependency and recognition effectiveness of our CircSI-SSL algorithm, we selected the three best-performing supervised algorithms, CircSSNN, HCRNet, and iCircRBP-DHN, and compared them with our algorithm under a train:test split of 1:9. The results are shown in Fig. 5 . Our algorithm achieves remarkable performance on most datasets and metrics, although it is slightly lower than HCRNet on the Recall metric. A possible reason is that, when very little supervision is available during training, supervised algorithms tend to over-optimize individual metrics at the expense of overall performance; HCRNet, for example, favors Recall while failing to achieve good ACC and Precision. In contrast, CircSI-SSL achieves balanced, excellent performance across all metrics. The comprehensive AUC metric likewise shows that our algorithm has the best overall ability and broad application prospects. For ease of comparison, we visualize the average AUC of the algorithms on the six datasets in Fig. 6 : our algorithm achieves the highest performance, 3.3% higher on average and more than 5% higher on some datasets. To further explore the relationship between performance and the amount of supervised information introduced, and to show that the proposed algorithm achieves stable performance with very few training samples, we conducted a stepwise test over train:test ratios from 1:9 to 9:1. The resulting AUC performance is shown in Table 1 .
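For concreteness, the number of training and test samples at each ratio in such a sweep can be computed as follows (an illustrative helper of ours; 15 570 is the total sample count reported in Section 2.1):

```python
def split_sizes(n_samples, ratio_train, ratio_test):
    # Convert a train:test ratio such as 1:9 or 8:2 into sample counts.
    n_train = round(n_samples * ratio_train / (ratio_train + ratio_test))
    return n_train, n_samples - n_train
```

For example, a 1:9 split of 15 570 samples leaves only 1557 labeled training samples, versus 12 456 under the conventional 8:2 split.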
In general, the algorithm already learns easily distinguishable features at a 1:9 sample ratio and achieves excellent classification performance. As the number of training samples increases, performance keeps rising somewhat, but the difference from the initial setting is small. This fully indicates that the SSL-based cross-view prediction task has trained the RNA_Transformer feature extractor to learn contextual features sufficient to represent the entire sequence, so only a very small number of samples is required for fine-tuning on subsequent recognition tasks. 3.4 Ablation analysis In this section, we conduct an ablation analysis to demonstrate that the improved performance of our algorithm is a direct result of the designed SSL task. Table 2 presents the AUC performance of CircSI-SSL when fine-tuning on real labels is performed directly, without the cross-view sequence prediction task. Without the proxy task, performance falls off a cliff, declining by about 10% on average, as shown in Fig. 7 , and in one extreme case the AUC drops to 0.5. This suffices to show that the self-supervised task is necessary: learning the overall representation of the sequence from unlabeled data is what allows subsequent classification tasks to improve significantly with only a few labels. 3.5 Transplant analysis To further demonstrate the advantages of the proposed algorithm, we transplanted CircSI-SSL, originally designed for circRNA, to the binding protein prediction task on linear RNA without any network modification and with identical hyperparameters. In the performance comparison on six widely used linear RNA datasets against several supervised algorithms, shown in Fig. 8 , the ratio of training set to test set is again 1:9.
Remarkably, the proposed algorithm achieves the best overall performance without any task-oriented tuning. In Fig. 8 , although iCircRBP-DHN also obtains a good average AUC, it shows large fluctuations in the individual ACC, Precision, and Recall metrics. HCRNet is relatively stable but performs poorly on Recall. With only a very small number of training samples, neither achieves balanced performance across metrics or good overall results; supervised learning is therefore not a good choice when only a few labeled samples are available. In contrast, our algorithm achieves the best overall performance even in this harsh setting.
4 Conclusion In this article, we propose CircSI-SSL, a novel SSL-based framework for circRNA–RBP site recognition. By designing a cross-view sequence prediction task, the algorithm learns an overall representation of the sequence in an unsupervised manner and significantly enhances subsequent RBP identification with only a small amount of supervised information. Built on the improved Transformer network RNA_Transformer, the framework extracts sequence context features from multiple views to characterize the sequence. With a reasonable, effective proxy task and a stable, efficient network architecture, significant improvements over supervised learning algorithms were achieved with only a small amount of supervised information on six widely used circRNA datasets and six linear RNA datasets. In short, CircSI-SSL offers good identification performance, scalability, and a wide application range: a small amount of label information is enough to significantly improve recognition performance, making it a very competitive tool for circRNA–RBP binding site identification.
Abstract Motivation In recent years, circular RNAs (circRNAs), a particular form of RNA with a closed-loop structure, have attracted widespread attention due to their physiological significance (they can directly bind proteins), leading to the development of numerous protein site identification algorithms. Unfortunately, these studies are supervised and require the vast majority of samples to be labeled during training to produce superior performance, yet acquiring sample labels requires a large number of biological experiments and is difficult. Results To reduce the large number of labels needed in the circRNA-binding site prediction task, we propose a self-supervised binding site identification algorithm named CircSI-SSL, which to our knowledge is unprecedented in this research field. Specifically, CircSI-SSL initially combines multiple feature coding schemes and employs RNA_Transformer for cross-view sequence prediction (the self-supervised task) to learn mutual information from the multi-view data, and then fine-tunes with only a few sample labels. Comprehensive experiments on six widely used circRNA datasets indicate that our CircSI-SSL algorithm achieves excellent performance in comparison with previous algorithms, even in the extreme case where the ratio of training data to test data is 1:9. In addition, a transplantation experiment on six linear RNA datasets, without network modification or hyperparameter adjustment, shows that CircSI-SSL has good scalability. In summary, the self-supervised prediction algorithm proposed in this article is expected to replace previous supervised algorithms and has broader application value. Availability and implementation The source code and data are available at https://github.com/cc646201081/CircSI-SSL .
Conflict of interest None declared. Funding The work was supported by the National Natural Science Foundation of China [62231013, 62250028, 62271329]; the Sichuan Provincial Science Fund for Distinguished Young Scholars [2021JDJQ0025]; the fund of Shenzhen Polytechnic [6022310036K, 6023310037K]; and the Municipal Government of Quzhou [No. 2022D040].
Bioinformatics. 2024 Jan 5; 40(1):btae004
INTRODUCTION Acute type A aortic dissection (ATAAD) is a lethal condition with dismal outcomes without surgical intervention. Complex dissection morphologies and preoperative clinical complications need to be addressed in a timely fashion, as they add a further level of complexity to emergent surgical repairs. ATAAD is aggravated by cardiopulmonary arrest (CPA) necessitating cardiopulmonary resuscitation (CPR) in 3.4–6.6% of all patients [ 1–3 ]. CPR at presentation has been demonstrated to be an independent predictor of mortality in the setting of ATAAD [ 4 ]. Limited literature is available on this topic, consisting mainly of case reports and relatively small case series of at most 44 patients [ 1–3 , 5 ]. This subgroup of patients experiences extremely high mortality rates, up to 61.8%, compared with the overall ATAAD population [ 3 ]. The goal of this study was to investigate factors affecting survival and outcome in ATAAD patients requiring CPR at presentation who were deemed surgical candidates at 2 European aortic centres.
PATIENTS AND METHODS Ethics statement The study was approved by the local ethics committees in both centres (Charité Medical School, Berlin, Germany, No. EA2/096/20; Medical University Innsbruck, Innsbruck, Austria, No. UN5106) and complies with the Declaration of Helsinki. Patient informed consent was waived owing to the retrospective character of the study. Patient population Patients suffering from ATAAD between January 2000 and March 2022 at 2 high-volume centres were screened ( n = 1997). Of these, 112 patients (5.6%)—deemed surgical candidates despite undergoing preoperative CPR before or after the diagnosis of ATAAD was confirmed with computed tomography (CT) scans—were retrospectively included in this study. CPR was carried out in-hospital by trained medical staff in the majority of cases (83.9%), while 16.1% were initially resuscitated by lay rescuers or paramedics. In 2 patients undergoing CPR with suspected coronary ischaemia, venoarterial extracorporeal membrane oxygenation (vaECMO) was implanted under radiographic guidance before ATAAD was confirmed with CT. Clinical data as well as imaging studies were evaluated. Coronary malperfusion was defined as ischaemia-specific pathological findings on electrocardiogram with or without wall motion abnormalities or elevation of cardiac enzymes. Aortic rupture was defined as abrupt haemodynamic collapse concomitant with massive pericardial effusion that was absent on the CT scan or echocardiography, which led to emergency connection to cardiopulmonary bypass (CPB) or rapid exsanguination, and was confirmed intraoperatively. Arrhythmogenic events were defined as the few cases in which no coronary malperfusion or tamponade was present, but the patient still required CPR after developing malignant arrhythmias. Spinal malperfusion was defined as a new onset of paraplegia.
According to 30-day mortality, which was defined as death within the first 30 days after diagnosis of ATAAD, patients were divided into 2 groups (30-day survivors and 30-day non-survivors). Surgical repair Once the diagnosis of ATAAD was confirmed, patients were immediately transferred to the operating theatre. Of the 112 patients, 23 (20.5%) died either during induction of anaesthesia ( n = 18) or before CPB could successfully be established ( n = 5). In the remaining 89 patients (79.5%), surgical repair was performed. The operative strategy has been previously described [ 6 ]. Due to the critical state of these patients, rapid arterial cannulation was predominantly performed via the femoral artery ( n = 55, 61.8%), while the axillary artery was cannulated in patients who were somewhat stabilized ( n = 27, 29.2%). Direct cannulation of the aorta ( n = 6; 6.7%) or cannulation of the innominate trunk ( n = 2; 2.2%) was chosen less frequently. The majority of surgical repairs were performed in hypothermic circulatory arrest (HCA) (86.5%) with a mean circulatory arrest time of 36 ± 17 min. Antegrade cerebral perfusion with cold blood (20–25°C) at a flow rate of 10–15 ml/kg/min body weight was used in 38 patients (41.3%). Isolated retrograde cerebral perfusion ( n = 36; 39.1%) via an angled cannula, which was inserted in the superior vena cava and snared during HCA, was utilized mainly in the early study period (2000–2009). Straight deep HCA without additional cerebral perfusion was performed in very selected cases ( n = 5; 5.4%). A primarily tear-oriented approach to surgical repair was followed in both surgical centres. Depending on the extent of intimal defects or a pre-existing dilatation, root ( n = 31, 34.8%) and/or total arch replacement ( n = 11; 12.4%) was performed. Additional coronary artery bypass grafting (CABG) was performed in 25 patients with either coronary disruption or the presence of severe calcification and myocardial ischaemia.
Postoperative treatment was standardized for every patient at a dedicated, experienced intensive care unit. Statistical analysis Statistical analysis was performed using SPSS (IBM Corp. Released 2021. IBM SPSS Statistics for Windows, Version 28.0. Armonk, NY: IBM Corp) and R Version 4.0.0 [R Development Core Team (2019). R: A Language and Environment for Statistical Computing]. Categorical variables are presented as frequencies with corresponding percentages. Continuous variables are expressed as mean ± standard deviation. Differences between the 2 groups were tested by means of the Chi-square test, Fisher’s exact test (when expected cell frequencies were <5) or Student’s t -test, as appropriate. To identify the multivariable binary logistic regression model with the best fit, stepwise backward variable selection based on Akaike’s information criterion was performed to determine independent associates of 30-day mortality. To test the model assumptions, the variance inflation factor was used; a variance inflation factor of <5 was considered to rule out a significant effect of multi-collinearity on the model. Results of regression analyses are displayed as odds ratios (OR) with corresponding 95% confidence intervals (CI). A P -value of <0.05 was considered statistically significant. Mortality rates were calculated using the Kaplan–Meier method and compared using the log-rank test. Kaplan–Meier curves were drawn using R version 3.6.0 (libraries ‘survival’ and ‘survminer’).
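The decision rule described above—Fisher's exact test when an expected cell frequency falls below 5, otherwise the Chi-square test—can be sketched for a 2×2 table as follows. This is an illustrative, self-contained reimplementation, not the SPSS procedure the authors used:

```python
from math import comb

def expected_counts(table):
    """Expected cell counts of a 2x2 contingency table under independence."""
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row)
    return [[row[i] * col[j] / n for j in range(2)] for i in range(2)]

def chi2_stat(table):
    """Pearson chi-square statistic (no continuity correction)."""
    exp = expected_counts(table)
    return sum((table[i][j] - exp[i][j]) ** 2 / exp[i][j]
               for i in range(2) for j in range(2))

def fisher_p(table):
    """Two-sided Fisher's exact p-value: sum the hypergeometric probabilities
    of all tables with the same margins that are no more likely than observed."""
    a, b = table[0]
    c, d = table[1]
    n = a + b + c + d
    r1, c1 = a + b, a + c
    def p(x):  # P(X = x) under the hypergeometric distribution
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = p(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

def choose_test(table):
    """Fisher's exact test if any expected cell count is <5, else chi-square."""
    exp = expected_counts(table)
    if any(e < 5 for row in exp for e in row):
        return "fisher", fisher_p(table)
    return "chi2", chi2_stat(table)
```

Applied to a large, balanced table this returns the chi-square statistic; a sparse table with a small expected cell triggers the exact test instead.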
RESULTS Preoperative management and demographics The study group consisted of 112 patients with a mean age of 63.4 ± 12 years. CPR was predominantly performed by trained medical staff in-hospital (83.9%). The reason for CPR was grouped into 4 entities, with cardiac tamponade as the leading cause (40.2%), followed by coronary malperfusion (26.8%), aortic rupture (19.6%) and arrhythmogenic events (13.4%). All patients presented in a very critical state, with a calculated German Registry of Acute Aortic Dissection Type A score of 64%. Coronary malperfusion was significantly more frequent in patients with a dismal outcome (30-day non-survivors 64.7% vs 30-day survivors 41.9%, P = 0.018). While rapid diagnosis and transfer to the operating theatre were provided in both centres, the mortality rate prior to successful installation of CPB, predominantly due to aortic rupture, was high (20.5%). Further details are presented in Table 1 . Surgical repair Pain-to-cut time was 520 ± 436 min, with no difference between 30-day survivors and 30-day non-survivors. There was a significantly higher rate of right axillary cannulation in 30-day survivors (40% vs 20% in 30-day non-survivors, P = 0.038). While the extent of surgical repair did not differ between the groups, circulatory arrest time (41 ± 20 min in 30-day non-survivors vs 30 ± 13 min in 30-day survivors, P = 0.003) and CPB time (320 ± 132 min in 30-day non-survivors vs 252 ± 140 min in 30-day survivors, P = 0.020) were significantly longer in patients with a worse outcome. Almost 1 out of 4 patients in the 30-day non-survivor group needed mechanical support due to severe impairment of left ventricular function (intraoperative vaECMO implantation; 30-day non-survivors 24% vs 30-day survivors 2%, P = 0.003). Table 2 provides detailed surgical data. Risk factors for 30-day mortality Overall 30-day mortality in patients undergoing CPR was high, at 61.6% ( n = 69).
The predominant cause of death was cardiac failure (48%), followed by aortic rupture or haemorrhagic shock (28%) and neurologic complications (15%). Multiorgan failure, sepsis or a combination of both accounted for <10%. Multivariable regression analysis identified age (OR 1.04, 95% CI 1.01–1.09, P = 0.034), preoperative coronary malperfusion (OR 3.42, 95% CI 1.34–9.26, P = 0.012) and spinal malperfusion (OR 12.49, 95% CI 1.83–225.02, P = 0.028) as independent predictors of 30-day mortality, while CPR due to tamponade was associated with improved early survival (OR 0.29, 95% CI 0.091–0.81, P = 0.023) (Table 3 ). Postoperative outcome Of the 89 patients undergoing surgical repair, 26 died intraoperatively or within the first 24 h postoperatively. Outcome analysis was performed in patients surviving beyond this period ( n = 63) (Table 4 ). Because the non-survivor group was more likely to undergo discontinuation of therapy, length of intensive care unit stay was significantly shorter in these patients. Overall postoperative morbidity was high, with prolonged intubation and increased rates of continuous renal replacement therapy (46%). Perioperative stroke occurred in 41% of patients, with a permanent neurologic deficit at discharge or at transfer to a neurologic rehabilitation clinic in 94% of these. Functional follow-up of 26 patients surviving 1 year or longer revealed good recovery in 18 patients, a persistent neurologic deficit but the ability to walk with mobility aids in 5 patients, and impaired neurologic recovery with wheelchair dependency in 3 patients. Despite tremendous early mortality, survival after discharge was rather stable, with 29.3 ± 4% at 1 year and 22 ± 4% at 3 and 5 years (Fig. 1 ).
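The odds ratios and 95% confidence intervals reported above are obtained by exponentiating logistic-regression coefficients and their Wald bounds. A minimal sketch, using a hypothetical coefficient and standard error (illustrative values, not taken from Table 3):

```python
from math import exp

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)

# Hypothetical coefficient for a binary predictor (NOT a value from the paper):
or_, lo, hi = odds_ratio_ci(beta=1.23, se=0.51)
```

An interval that excludes 1 on the OR scale corresponds to a coefficient whose Wald interval excludes 0 on the log-odds scale.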
DISCUSSION The goal of this paper was to investigate factors affecting outcomes in patients suffering from ATAAD with preoperative CPR who were accepted for surgery at 2 aortic centres. Our data show high mortality rates (30-day mortality 61.6%), with age and coronary and spinal malperfusion being independent predictors of mortality. Nevertheless, patients undergoing CPR due to cardiac tamponade were more likely to experience favourable outcomes. Based on these findings and the available literature on the duration of CPR, we propose an algorithm that may aid decision-making when treating such a high-risk subgroup of patients (Fig. 2 ). Enhanced awareness of the disease and improvements in diagnostics and treatment have improved survival rates of ATAAD over the last 2 decades [ 7–9 ]. At the same time, this evolution confronts us with a more demanding and complex patient cohort and an evident need for optimized preoperative risk assessment. CPA requiring CPR reflects the most urgent condition, in which a choice between escalation of therapy and proceeding to surgical treatment, or discontinuation of life-saving measures, has to be made in ATAAD patients. The necessity of preoperative CPR has been shown to be an independent risk factor for impaired survival [ 3 , 10 ]. This is reflected in preoperative mortality risk calculation tools such as the German Registry of Acute Aortic Dissection Type A score, where preoperative CPR accounts for the highest OR for 30-day mortality [ 11 ]. There seems to be a subgroup of patients who benefit from surgery and who ideally could be identified by preoperative markers. When it comes to the prevalence of CPR in ATAAD, registry data as well as experienced centres from around the world report rates ranging from 3.4% to 6.6% [ 1 , 3 ]. This is very much in line with our reported CPR rate of 5.6%. Interestingly, data from Japan recently showed diverging results, with an extremely high rate of preoperative CPA in ATAAD patients of 33.1% [ 12 ].
In their study cohort, a well-defined institutional protocol for extracorporeal cardiopulmonary resuscitation (eCPR) was followed, and vaECMO was implanted within a very short interval from collapse in 31 patients not responding to conventional CPR. Twenty-four patients died before surgery, mainly due to aortic rupture, while 7 patients were bridged to surgery but also had dismal outcomes. Ohbe et al. provided outcome data from the Japanese Diagnosis Procedure Combination inpatient database on 398 patients undergoing eCPR in the setting of ATAAD and showed a 98% mortality rate. In addition, eCPR was associated with an incremental economic burden of 161 504 US Dollars per quality-adjusted life year gained [ 13 ]. In our study cohort, eCPR via femoro-femoral vaECMO support was applied in 2 patients with CPA and primarily suspected acute coronary syndrome. After the correct diagnosis of ATAAD was confirmed in these patients, surgical repair proceeded owing to their young age, but both patients died within 48 h after surgery. While eCPR might stabilize the patient in terms of organ perfusion, it can trigger or worsen lethal aortic rupture, which accounted for the high mortality in the study by Nakai and colleagues [ 12 ]. In our 2 patients, we assume that retrograde flow of vaECMO from femoral cannulation worsened myocardial perfusion due to extensive perfusion of the false lumen via the entry tear. Based on the existing evidence and the limited numbers reported in the literature, one might conclude that eCPR should not be considered in the setting of CPA and confirmed ATAAD. Even though a rapid decision has to be made once patients with ATAAD arrest and need CPR, a basic understanding of the underlying reason for CPA is mandatory in order to define the next steps in patient management. Cardiac tamponade was the most common cause of CPR in our cohort and in other series [ 1–3 ]. This underlying cause of severe haemodynamic compromise can be treated rapidly with pericardial drainage.
One has to keep in mind that pericardial drainage should not delay surgery. According to the 2010 guidelines of the ACC/American Heart Association/American Association for Thoracic Surgery, pericardiocentesis can be performed in patients who cannot otherwise survive until surgery, withdrawing just enough fluid to restore perfusion [ 14 ]. This recommendation has also been promoted in the most recent expert consensus statement of the American Association for Thoracic Surgery [ 15 ]. Pericardial tamponade reflects a life-threatening condition that can be resolved rapidly in experienced hands and can lead to an immediate improvement of haemodynamics. Nakai et al. [ 12 ] reported significantly higher survival rates in patients undergoing urgent pericardial drainage. This underlines our finding that pericardial tamponade emerged as the only preoperative predictor of improved survival in our patient cohort. Risk factor analysis for early mortality in surgical cohorts of patients undergoing preoperative CPR has been limited by low patient numbers, ranging from 17 to a maximum of 44 recently reported from the NORCAAD registry [ 5 , 13 ]. This study provides the largest cohort so far, with 89 patients undergoing emergent surgery after CPR in ATAAD. Risk factor analysis revealed age as well as spinal and coronary malperfusion as important associates of early mortality. Age is a well-known risk factor for impaired survival in surgical ATAAD patient groups [ 11 ]. Complicated type A dissection, defined as the presence of malperfusion, neurologic injury or CPR, emerged as an independent risk factor for 30-day survival in octogenarians in one of our past studies [ 16 ]. Coronary malperfusion as another important risk factor has been extensively studied. Preoperative coronary malperfusion is associated with an operative mortality of up to 41.5% [ 17 , 18 ].
Hashimoto and colleagues illustrated in a multivariable model that cardiac ischaemia in combination with cardiogenic shock (Killip class IV) was a strong predictor of in-hospital mortality (OR 2.86, 95% CI 1.50–5.44) [ 18 ]. Preoperative coronary malperfusion as an independent predictor of early mortality was reconfirmed in our cohort. Despite a rapid approach to surgical treatment, coronary ischaemia and hypoperfusion due to CPR were associated with irreversible myocardial damage in most patients, reflected in the high rate of vaECMO support. Only 1 of the 12 patients bridged with intraoperative implantation of vaECMO survived. Identical data on dismal vaECMO outcomes were presented by Uehara and colleagues [ 3 ]. The outcome of patients with spinal malperfusion and CPR in our cohort was disillusioning, with a 100% mortality rate within the first 30 days. While we have to accept that the overall mortality rate is extremely high in patients suffering from ATAAD and undergoing CPR, we intended to identify patients who might still benefit from surgical repair. Our database did not contain reliable documentation of CPR duration; therefore, we were unable to consider this important factor in our risk assessment. In the literature, there is strong evidence that CPR duration exceeding 15 min is associated with an 8-fold increase in in-hospital mortality [ 3 ]. Limitations The major limitations of this study originate from its retrospective nature. As data from 2 aortic centres with standardized patient care were merged, local differences, especially in pre-hospital treatment algorithms, might have affected the outcome. Despite the fact that most patients underwent CPR in professional hands, the duration of CPR was not sufficiently documented and therefore could not be included in our data presentation. In patients undergoing CPR, detailed preoperative neurologic evaluation is limited due to the emergency situation and the need for sedation and intubation.
Our registries do not cover patients undergoing CPR in the setting of ATAAD who were denied surgery; this unreported number of patients would be of great interest. Furthermore, the logistic regression model and the resulting conclusions need to be interpreted in light of the low event rate of certain variables. This is especially the case for spinal malperfusion.
CONCLUSION Patients undergoing CPR in the setting of ATAAD have to be carefully evaluated for the reason for CPA. Pericardial tamponade, which can rapidly be resolved with pericardial drainage, is a predictor of improved survival, while age and the presence of coronary and spinal malperfusion are associated with dismal outcome in this high-risk patient group.
Matteo Montagner and Markus Kofler contributed equally to this work. Abstract OBJECTIVES Cardiopulmonary resuscitation (CPR) aggravates the pre-existing dismal prognosis of patients suffering from acute type A aortic dissection (ATAAD). We aimed to identify factors affecting survival and outcome in ATAAD patients requiring CPR at presentation at 2 European aortic centres. METHODS Data on 112 surgical candidates undergoing preoperative CPR were retrospectively evaluated. Patients were divided into 2 groups according to 30-day mortality. A multivariable model identified predictors of 30-day mortality. RESULTS Preoperative death occurred in 23 patients (20.5%). In the remaining 89 surgical patients (79.5%), circulatory arrest time (41 ± 20 min in 30-day non-survivors vs 30 ± 13 min in 30-day survivors, P = 0.003) and cardiopulmonary bypass time (320 ± 132 min in 30-day non-survivors vs 252 ± 140 min in 30-day survivors, P = 0.020) were significantly longer in patients with a worse outcome. Thirty-day mortality of the total cohort was 61.6% ( n = 69), with cardiac failure (48%) and aortic rupture or haemorrhagic shock (28%) as the predominant causes of death. Age [odds ratio (OR) 1.04, 95% confidence interval (CI) 1.01–1.09, P = 0.034], preoperative coronary malperfusion (OR 3.42, 95% CI 1.34–9.26, P = 0.012) and spinal malperfusion (OR 12.49, 95% CI 1.83–225.02, P = 0.028) emerged as independent predictors of 30-day mortality, while CPR due to tamponade was associated with improved early survival (OR 0.29, 95% CI 0.091–0.81, P = 0.023). CONCLUSIONS Assessment of the underlying cause of CPR is mandatory. Pericardial tamponade, rapidly resolved with pericardial drainage, is a predictor of improved survival, while age and the presence of coronary and spinal malperfusion are associated with dismal outcome in this high-risk patient group. Acute type A aortic dissection (ATAAD) is a lethal condition with dismal outcomes without surgical intervention.
Presented at the EACTS Annual Meeting 2022, Milan, Italy. FUNDING No funding was provided. Conflict of interest: none declared. DATA AVAILABILITY The data obtained for this manuscript cannot be shared publicly due to data protection policies but will be provided to interested parties upon reasonable request to the corresponding author. Author contributions Matteo Montagner: Conceptualization; Data curation; Formal analysis; Investigation; Validation; Writing—original draft. Markus Kofler: Data curation; Formal analysis; Validation; Visualization. Leonard Pitts: Validation. Simone Gasser: Validation. Lukas Stastny: Validation. Stephan D. Kurz: Validation; Writing—review & editing. Michael Grimm: Writing—review & editing. Volkmar Falk: Conceptualization; Resources; Supervision; Validation; Writing—review & editing. Jörg Kempfert: Validation; Writing—review & editing. Julia Dumfarth: Conceptualization; Data curation; Formal analysis; Methodology; Resources; Supervision; Writing—original draft. Reviewer information European Journal of Cardio-Thoracic Surgery thanks Roman Gottardi, David C. Reineke, Giacomo Murana and the other anonymous reviewer for their contribution to the peer review process of this article. ABBREVIATIONS ATAAD: Acute type A aortic dissection; CPB: Cardiopulmonary bypass; CI: Confidence interval; CPA: Cardiopulmonary arrest; CPR: Cardiopulmonary resuscitation; CT: Computed tomography; eCPR: Extracorporeal cardiopulmonary resuscitation; HCA: Hypothermic circulatory arrest; OR: Odds ratio; vaECMO: Venoarterial extracorporeal membrane oxygenation
CC BY
no
2024-01-16 23:47:17
Eur J Cardiothorac Surg. 2024 Jan 4; 65(1):ezad436
oa_package/ad/d7/PMC10789310.tar.gz
PMC10789311
38180866
1 Introduction Haplotype-based summary statistics—such as iHS ( Voight et al. 2006 ), nSL ( Ferrer-Admetlla et al. 2014 ), XP-EHH ( Sabeti et al. 2007 ), and XP-nSL ( Szpiech et al. 2021 )—have become commonplace in evolutionary genomics studies to identify recent and ongoing positive selection in populations (e.g. Colonna et al. 2014 , Zoledziewska et al. 2015 , Nédélec et al. 2016 , Crawford et al. 2017 , Meier et al. 2018 , Lu et al. 2019 , Zhang et al. 2020 , Salmón et al. 2021 ). When an adaptive allele sweeps through a population, it leaves a characteristic pattern of long high-frequency haplotypes and low genetic diversity in the vicinity of the allele. These statistics aim to capture these signals by summarizing the decay of haplotype homozygosity as a function of distance from a putatively selected region, either within a single population (iHS and nSL) or between two populations (XP-EHH and XP-nSL). These haplotype-based statistics are powerful for detecting recent positive selection ( Colonna et al. 2014 , Zoledziewska et al. 2015 , Nédélec et al. 2016 , Crawford et al. 2017 , Meier et al. 2018 , Lu et al. 2019 , Zhang et al. 2020 , Salmón et al. 2021 ), and the two-population versions can even out-perform pairwise Fst scans on a large swath of the parameter space ( Szpiech et al. 2021 ). Furthermore, haplotype-based methods have also been shown to be robust to background selection ( Fagny et al. 2014 , Schrider 2020 ). However, each of these statistics presumes that haplotype phase is known or well-estimated. As the generation of genomic sequencing data for non-model organisms is becoming routine ( Ellegren 2014 ), there are many great opportunities for studying recent adaptation across the tree of life (e.g. Campagna and Toews 2022 ). 
However, often these organisms/populations do not have a well-characterized demographic history or recombination rate map, two pieces of information which are important inputs for statistical phasing methods ( Delaneau et al. 2013 , Browning et al. 2021 ). Recent work has shown that haplotype-based statistics can be adapted for use on unphased data ( Klassmann and Gautier 2022 ) and that converting haplotype data into “multi-locus genotype” data is an effective approach for using haplotype-based selection statistics such as G12, LASSI, and saltiLASSI ( Harris et al. 2018 , Harris and DeGiorgio 2020 , DeGiorgio and Szpiech 2022 ) in unphased data. Recognizing this, we have reformulated the iHS, nSL, XP-EHH, and XP-nSL statistics to use multi-locus genotypes and provided an easy-to-use implementation in selscan 2.0 ( Szpiech and Hernandez 2014 ). We evaluate the performance of these unphased statistics under various generic demographic models and compare against the original statistics applied to simulated datasets when phase is either known or unknown.
2 Materials and methods When the --unphased flag is set in selscan v2.0+, biallelic genotype data is collapsed into multi-locus genotype data by representing each genotype as 0, 1, or 2—the number of derived alleles observed. In this case, selscan v2.0+ will then compute iHS, nSL, XP-EHH, and XP-nSL as described below. We follow the notation conventions of Szpiech and Hernandez (2014) . 2.1 Extended haplotype homozygosity In a sample of $n$ diploid individuals, let $\mathcal{G}_i$ denote the set of all possible genotypes at locus $x_i$. For multi-locus genotypes, $\mathcal{G}_i = \{0, 1, 2\}$, representing the total count of derived alleles. Let $\mathcal{H}(x_i, x_j)$ be the set of all unique haplotypes extending from site $x_i$ to site $x_j$ either upstream or downstream of $x_i$. If $x_j$ is a site immediately adjacent to $x_i$, then $\mathcal{H}(x_i, x_j) = \{00, 01, 02, 10, 11, 12, 20, 21, 22\}$, representing all possible two-site multi-locus genotypes. We can then compute the extended haplotype homozygosity (EHH) of a set of multi-locus genotypes as $$\mathrm{EHH}(x_i, x_j) = \sum_{h \in \mathcal{H}(x_i, x_j)} \frac{\binom{n_h}{2}}{\binom{n}{2}},$$ where $n_h$ is the number of observed haplotypes of type $h$. If we wish to compute the EHH of a subset of observed haplotypes that all contain the same “core” multi-locus genotype, let $\mathcal{H}_c(x_i, x_j)$ be the partition of $\mathcal{H}(x_i, x_j)$ containing genotype $c$ at $x_i$. For example, choosing a homozygous derived genotype ($c = 2$) as the core, $\mathcal{H}_2(x_i, x_j) = \{20, 21, 22\}$. Thus, we can compute the EHH of all individuals carrying genotype $c$ at site $x_i$ extending out to site $x_j$ as $$\mathrm{EHH}_c(x_i, x_j) = \sum_{h \in \mathcal{H}_c(x_i, x_j)} \frac{\binom{n_h}{2}}{\binom{n_c}{2}},$$ where $n_h$ is the number of observed haplotypes of type $h$ and $n_c$ is the number of observed multi-locus genotypes with core genotype $c$. Finally, we can compute the complement EHH of a sample of multi-locus genotypes as $$\mathrm{cEHH}_c(x_i, x_j) = \sum_{h \notin \mathcal{H}_c(x_i, x_j)} \frac{\binom{n_h}{2}}{\binom{n_{\bar{c}}}{2}},$$ where $n_{\bar{c}}$ is the number of observed multi-locus genotypes with a core genotype other than $c$. 2.2 iHS and nSL Unphased iHS and nSL are calculated using the equations above. First, we compute the integrated haplotype homozygosity (iHH) for the homozygous ancestral ($c = 0$) and derived ($c = 2$) core genotypes as $$\mathrm{iHH}_c = \sum_{x_j \in \mathcal{D}} \frac{1}{2}\left[\mathrm{EHH}_c(x_0, x_{j-1}) + \mathrm{EHH}_c(x_0, x_j)\right] g(x_{j-1}, x_j) + \sum_{x_j \in \mathcal{U}} \frac{1}{2}\left[\mathrm{EHH}_c(x_0, x_{j-1}) + \mathrm{EHH}_c(x_0, x_j)\right] g(x_{j-1}, x_j),$$ where $\mathcal{D}$ is the set of sites downstream from the core locus $x_0$ and $\mathcal{U}$ is the set of sites upstream.
$g(x_{j-1}, x_j)$ is a measure of genomic distance between two markers and is the genetic distance in centimorgans or physical distance in basepairs for iHS ( Voight et al. 2006 ) or the number of sites observed for nSL ( Ferrer-Admetlla et al. 2014 ). We similarly compute the complement integrated haplotype homozygosity (ciHH) for both homozygous core genotypes as $$\mathrm{ciHH}_c = \sum_{x_j \in \mathcal{D}} \frac{1}{2}\left[\mathrm{cEHH}_c(x_0, x_{j-1}) + \mathrm{cEHH}_c(x_0, x_j)\right] g(x_{j-1}, x_j) + \sum_{x_j \in \mathcal{U}} \frac{1}{2}\left[\mathrm{cEHH}_c(x_0, x_{j-1}) + \mathrm{cEHH}_c(x_0, x_j)\right] g(x_{j-1}, x_j).$$ The (unstandardized) unphased iHS is then calculated as $$\mathrm{iHS} = \begin{cases} \mathrm{iHS}_2 & \text{if } |\mathrm{iHS}_2| \geq |\mathrm{iHS}_0| \\ \mathrm{iHS}_0 & \text{otherwise,} \end{cases}$$ where $$\mathrm{iHS}_2 = \ln\left(\frac{\mathrm{iHH}_2}{\mathrm{ciHH}_2}\right) \quad \text{and} \quad \mathrm{iHS}_0 = -\ln\left(\frac{\mathrm{iHH}_0}{\mathrm{ciHH}_0}\right).$$ Conceptually, this is nearly identical to the phased version of iHS, where the log ratio of the integrated haplotype homozygosities is computed between all haplotypes carrying the ancestral allele at the core locus and all haplotypes carrying the derived allele at the core locus. In this case, however, we compare the iHH of the haplotypes containing homozygous genotypes of one allele at the core locus to the iHH of the haplotypes containing all other genotypes at the core locus. Doing this for both homozygous derived and homozygous ancestral genotypes separately, we then choose the most extreme value. We assign a positive sign for long low-diversity haplotypes containing the derived homozygous genotype at the core locus, and we assign a negative sign for long low-diversity haplotypes containing the ancestral homozygous genotype at the core locus. Unstandardized iHS scores are then normalized in frequency bins, as previously described ( Voight et al. 2006 , Ferrer-Admetlla et al. 2014 ). Unstandardized unphased nSL is computed similarly with the appropriate distance measure [see Ferrer-Admetlla et al. (2014) , who show that nSL can be reformulated as iHS with a different distance measure]. Large positive scores indicate long high-frequency haplotypes with a homozygous derived core genotype, and large negative scores indicate long high-frequency haplotypes with a homozygous ancestral core genotype. Clusters of extreme scores in both directions indicate evidence for a sweep.
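A toy reimplementation may make the genotype-level bookkeeping concrete. The sketch below is illustrative only, not the selscan implementation: it uses pure Python, unit distances between adjacent sites (as for nSL), a 0.05 EHH decay cutoff matching selscan's default, no edge-case handling for empty genotype classes, and no frequency-bin standardization.

```python
from collections import Counter
from math import comb, log

def ehh(rows):
    """EHH of a set of multi-locus genotype 'haplotypes' (tuples over {0,1,2}):
    the probability that two randomly drawn rows are identical over the span."""
    n = len(rows)
    if n < 2:
        return 0.0
    return sum(comb(c, 2) for c in Counter(rows).values()) / comb(n, 2)

def ihh(genotypes, core, keep):
    """Trapezoid-integrated EHH (distance = number of sites, as for nSL),
    restricted to individuals whose core genotype satisfies `keep`."""
    carriers = [g for g in genotypes if keep(g[core])]
    total = 0.0
    for step in (-1, 1):                      # downstream, then upstream
        prev, j = ehh([(g[core],) for g in carriers]), core
        while 0 <= j + step < len(genotypes[0]):
            j += step
            cur = ehh([g[min(core, j):max(core, j) + 1] for g in carriers])
            total += 0.5 * (prev + cur)       # trapezoid rule, unit spacing
            prev = cur
            if cur < 0.05:                    # EHH decay cutoff
                break
    return total

def unphased_ihs(genotypes, core):
    """Unstandardized unphased iHS: the more extreme of the derived- and
    ancestral-homozygote log-ratios, signed as described in the text."""
    d = log(ihh(genotypes, core, lambda x: x == 2) /
            ihh(genotypes, core, lambda x: x != 2))
    a = -log(ihh(genotypes, core, lambda x: x == 0) /
             ihh(genotypes, core, lambda x: x != 0))
    return d if abs(d) >= abs(a) else a
```

On a toy sample where all derived homozygotes at the core share identical flanks, the statistic comes out positive, as the sign convention in the text requires.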
2.3 XP-EHH and XP-nSL Unphased XP-EHH and XP-nSL are calculated by comparing the iHH between populations $P_1$ and $P_2$, using the entire sample in each population. iHH in a population $P$ is computed as $$\mathrm{iHH}^{(P)} = \sum_{x_j \in \mathcal{D}} \frac{1}{2}\left[\mathrm{EHH}^{(P)}(x_0, x_{j-1}) + \mathrm{EHH}^{(P)}(x_0, x_j)\right] g(x_{j-1}, x_j) + \sum_{x_j \in \mathcal{U}} \frac{1}{2}\left[\mathrm{EHH}^{(P)}(x_0, x_{j-1}) + \mathrm{EHH}^{(P)}(x_0, x_j)\right] g(x_{j-1}, x_j),$$ where the distance measure $g(x_{j-1}, x_j)$ is given in centimorgans or basepairs for XP-EHH ( Sabeti et al. 2007 ) and as the number of sites observed for XP-nSL ( Szpiech et al. 2021 ). The XP statistics between populations $P_1$ and $P_2$ are then computed as $$\mathrm{XP} = \ln\left(\frac{\mathrm{iHH}^{(P_1)}}{\mathrm{iHH}^{(P_2)}}\right)$$ and are normalized genome wide. Large positive scores indicate long high-frequency haplotypes in population $P_1$, and large negative scores indicate long high-frequency haplotypes in population $P_2$. Clusters of extreme scores in one direction indicate evidence for a sweep in that population. 2.4 Simulations We evaluate the performance of the phased and unphased versions of iHS, nSL, XP-EHH, and XP-nSL under a generic two-population divergence model using the coalescent simulation program discoal ( Kern and Schrider 2016 ). We explore five versions of this generic model and name them Demo 1 through Demo 5 ( Supplementary Table S1 ). Let $N_0$ and $N_1$ be the effective population sizes of Population 0 and Population 1 after the split from their ancestral population (of size $N_a$). For Demo 1, we keep a constant population size post-split and let . For Demo 2, we keep a constant population size post-split and let . For Demo 3, we keep a constant population size post-split and let . For Demo 4, we initially set and let grow stepwise exponentially every 50 generations starting at 2000 generations ago until . For Demo 5, we initially set and let grow stepwise exponentially every 50 generations starting at 2000 generations ago until . For each demographic history we vary the population divergence time generations ago. For non-neutral simulations, we simulate a sweep in Population 0 in the middle of the simulated region across a range of selection coefficients .
We vary the frequency at which the adaptive allele starts sweeping as , where indicates a hard sweep and indicates a soft sweep, and we also vary the frequency of the selected allele at time of sampling as well as representing fixation of the sweeping allele generations ago. For all simulations we set the genome length to be basepairs, the ancestral effective population size to be , the per site per generation mutation rate at , and the per site per generation recombination rate at . For neutral simulations, we simulate 1000 replicates for each parameter set, and for non-neutral simulations we simulate 100 replicates for each parameter set. We sample haplotypes, randomly paired together to form diploid individuals, from each population for analysis. These datasets represent the case where phase is known perfectly. We also create a set of “unphased” datasets from these phased datasets by swapping the alleles of each heterozygote to the opposing haplotype with probability 0.5. As iHS and nSL are single population statistics, we only analyze Demo 1, Demo 3, and Demo 4 with these statistics, as Demo 2 and Demo 5 have a constant size history identical to Demo 1 for Population 0, where the sweeps are simulated. For XP-EHH and XP-nSL we analyze all five demographic histories. For all simulations, we compute the relevant statistics (--ihs, --nsl, --xpehh, or --xpnsl) with selscan v2.0 using the --trunc-ok flag. We set --unphased when computing the unphased versions of these statistics, and we do not set it when computing the original phased versions. For iHS and XP-EHH, we also use the --pmap flag to use physical distance instead of a recombination map. 2.5 Power and false positive rate Here we evaluate the power and false positive rate for the unphased version of iHS, nSL, XP-EHH, and XP-nSL. For comparison, we also compute the power for the original phased versions of these statistics in two different ways. 
We compute the phased statistics for a set of simulated datasets where perfect phase is known, and we compute them again for a set of simulated datasets where we destroy phase information (see Section 2.4 ). As the unphased statistics collapse genotypes into derived allele counts, there is no functional difference between these two datasets for these statistics. We compute power in the same way for each statistic regardless of the underlying dataset analyzed, as described below. To compute power for iHS and nSL, we follow the approach of Voight et al. (2006) . For these statistics, each non-neutral replicate is individually normalized jointly with all neutral replicates with matching demographic history in 1% allele frequency bins. Because extreme values of the statistic are likely to be clustered along the genome ( Voight et al. 2006 ), we then compute the proportion of extreme scores ( or ) within 100kbp non-overlapping windows. We then bin these windows into 10 quantile bins based on the number of scores observed in each window and call the top 1% of these windows as putatively under selection. We calculate the proportion of non-neutral replicates that fall in this top 1% as the power. To compute the false positive rate, we compute the proportion of neutral simulations that fall within the top 1%. To compute power for XP-EHH and XP-nSL, we follow the approach of Szpiech et al. (2021) . For these statistics, each non-neutral replicate is individually normalized jointly with all matching neutral replicates. Because extreme values of the statistic are likely to be clustered along the genome ( Szpiech et al. 2021 ), we then compute the proportion of extreme scores (XP-EHH or XP-nSL ) within 100kbp non-overlapping windows. We then bin these windows into 10 quantile bins based on the number of scores observed in each window and call the top 1% of these windows as putatively under selection.
We calculate the proportion of non-neutral replicates that fall in this top 1% as the power. To compute the false positive rate, we compute the proportion of neutral simulations that fall within the top 1%.
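The windowing step shared by both procedures — the fraction of extreme normalized scores per 100 kbp non-overlapping window, from which the top 1% of windows are called putatively selected — can be sketched as below. The quantile-binning of windows by score count is omitted for brevity, and all names and the threshold value are illustrative assumptions.

```python
def window_extreme_fraction(positions, scores, win=100_000, thresh=2.0):
    """For each non-overlapping window of `win` bp, return the fraction
    of normalized scores whose absolute value exceeds `thresh`
    (extreme scores cluster along the genome near sweeps)."""
    totals, extremes = {}, {}
    for pos, s in zip(positions, scores):
        w = pos // win  # window index
        totals[w] = totals.get(w, 0) + 1
        if abs(s) > thresh:
            extremes[w] = extremes.get(w, 0) + 1
    return {w: extremes.get(w, 0) / n for w, n in totals.items()}

fracs = window_extreme_fraction([5_000, 50_000, 150_000], [2.5, 0.3, -3.1])
assert fracs == {0: 0.5, 1: 1.0}  # window 0: 1 of 2 extreme; window 1: 1 of 1
```

In the published procedure these per-window fractions are then ranked within quantile bins and the top 1% of windows are flagged.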
3 Results We find that the unphased versions of iHS and nSL generally have good power at large sample sizes ( Fig. 1A and B , Supplementary Figs S1, S7 , and S8 ) to detect selection prior to fixation of the allele, with nSL generally outperforming iHS. In smaller populations ( Supplementary Fig. S1C and D ), power does suffer relative to larger populations ( Supplementary Fig. S1A, B, E, and F ). We note that these statistics struggle to identify soft sweeps when the population is undergoing exponential growth ( Supplementary Fig. S1E and F ). Each of these statistics also has a low false positive rate hovering around 1% ( Supplementary Tables S2–S5 ). These single-population statistics only perform well for relatively large samples ( Fig. 1A and B , and Supplementary Figs S19, S25, S26, S31, S32, S37, S43, S44, S55, S61 , and S62 ). Similarly, we find that the unphased versions of XP-EHH and XP-nSL have good power, even for relatively low sample sizes ( Fig. 1C, D, G, and H and Supplementary Figs S2, S3, S9–S12, S20, S21, S27–S30, S38, S39, S45–S48, S56, S57, S63–S66 ). When the sweep takes place in the smaller of the two populations ( Supplementary Figs S2C, S2D, S20C, S20D, S38C, S38D, S56C , and S56D ), we see a similar decrease in power, likely related to the lower efficiency of selection in small populations. When one population is undergoing exponential growth ( Supplementary Figs S3, S21, S39 , and S57 ), performance is generally quite good, likely the result of a larger effective selection coefficient in large populations. These two-population statistics generally outperform their single-population counterparts, especially at small diploid sample sizes and for sweeps that have recently reached fixation. Each of these statistics also has a low false positive rate hovering around 1% ( Supplementary Tables S2–S5 ).
Next, we turn to comparing the performance of these unphased statistics to their phased counterparts when the latter are used to analyze either phased or unphased data. In Fig. 1E–H and Supplementary Figs S4–S6, S13–S18, S22–S24, S31–S36, S40–S42, S49–S54, S58–S60 , and S67–S72 , we plot the difference in power between the unphased statistics and the phased counterpart applied to data with phase known (red lines) or phase scrambled (blue lines). Where these lines are at or above 0, the unphased statistic performed as well as or better than the phased counterpart. We find that iHS tends to underperform the traditional phased implementations, but nSL tends to perform as well as the phased versions ( Fig. 1E and F and Supplementary Figs S4, S13, S14, S22, S31, S32, S40, S49, S50, S58, S67 , and S68 ). We do, however, note noticeable drops in unphased nSL power for softer sweeps in exponential growth scenarios ( Supplementary Figs S4F, S13F, S14F, S22F, S31F, S32F, S40F, S49F, S50F, S58F, S67F , and S68F ) and for sweeps near completion in small population sizes ( Supplementary Figs S4E, S13E, S14E, S22E, S31E, S32E, S40E, S49E, S50E, S58E, S67E , and S68E ). When comparing the unphased versions of XP-EHH and XP-nSL, we find that they consistently perform as well as or better than their phased counterparts ( Fig. 1G and H and Supplementary Figs S5, S6, S17, S18, S23, S24, S35, S36, S41, S42, S53, S54, S59, S60, S71 , and S72 ), except in limited circumstances where phase is known and the sweep is fairly young (sweeping allele at 0.7 frequency) or the divergence time is further in the past.
4 Discussion We introduce multi-locus genotype versions of four popular haplotype-based selection statistics—iHS ( Voight et al. 2006 ), nSL ( Ferrer-Admetlla et al. 2014 ), XP-EHH ( Sabeti et al. 2007 ), and XP-nSL ( Szpiech et al. 2021 )—that can be used when the phase of genotypes is unknown. Although phase would seem to be a critically important component of any haplotype-based method for detecting selection, here we show that, by collapsing haplotypes into derived allele counts (thus erasing phase information), we can achieve power similar to that obtained using phase information. We observed that single-population statistics such as iHS and nSL require relatively large diploid sample sizes ( for iHS, for nSL), but the two-population statistics XP-EHH and XP-nSL perform well even for diploid sample sizes down to per population. This follows other work that has shown similar patterns with other haplotype-based statistics for detecting selection ( Harris et al. 2018 , Harris and DeGiorgio 2020 , DeGiorgio and Szpiech 2022 , Klassmann and Gautier 2022 ). Importantly, this approach now opens up the application of several popular haplotype-based selection statistics (based on extended haplotype homozygosity) to more species where phase information is challenging to obtain or infer. For ease of use of these new unphased versions of iHS, nSL, XP-EHH, and XP-nSL, we implement these updates in the latest v2.0 update of the program selscan ( Szpiech and Hernandez 2014 ), with source code and pre-compiled binaries available at https://www.github.com/szpiech/selscan .
Abstract Summary Several popular haplotype-based statistics for identifying recent or ongoing positive selection in genomes require knowledge of haplotype phase. Here, we provide an update to selscan which implements a re-definition of these statistics for use in unphased data. Availability and implementation Source code and binaries are freely available at https://github.com/szpiech/selscan , implemented in C/C++, and supported on Linux, Windows, and MacOS.
Supplementary Material
Acknowledgements Computations for this research were performed using the Pennsylvania State University’s Institute for Computational Data Sciences’ Roar supercomputer. Supplementary data Supplementary data are available at Bioinformatics online. Conflict of interest None declared. Funding This work was supported by the National Institute of General Medical Sciences of the National Institutes of Health [award number R35GM146926]; and start-up funds from the Pennsylvania State University’s Department of Biology. Data availability The data underlying this article are available in the article and in its online supplementary material .
CC BY
Bioinformatics. 2024 Jan 5; 40(1):btae006
PMC10789312
38200571
1 Introduction The human microbiome is the collection of microorganisms including bacteria, viruses, archaea, and fungi living in the human body. The development of high-throughput sequencing technology has enabled efficient and detailed characterizations of microbial communities, leading to an explosive growth in studies investigating the human microbiome. There are two major sequencing approaches to quantify the composition of species. One is gene-targeted sequencing, where specific marker genes such as the 16S ribosomal RNA (rRNA) genes are amplified and sequenced ( Tringe and Rubin 2005 , Caporaso et al. 2010 , Lasken 2012 , Rapin et al. 2017 ). Sequencing reads are usually clustered into operational taxonomic units (OTUs) at a sequence similarity threshold, such as 97% ( Nguyen et al. 2016 ). A phylogenetic tree that captures evolutionary relationships among species can be constructed based on sequence divergences of OTUs ( Price et al. 2010 ). Thus, OTUs that are close to each other on a phylogenetic tree are usually also functionally related. The other method is shotgun metagenomic sequencing, which sequences all microbial genomic DNA ( Truong et al. 2015 , Scholz et al. 2016 ). Although shotgun metagenomics can profile microbial communities more accurately, the targeted approach is more popular due to its low cost. Large studies such as the Human Microbiome Project ( Consortium et al. 2012 ) and the American Gut Project (AGP, McDonald et al. 2018 ) used the targeted sequencing approach to generate microbiome data. Over the past decades, studies have established associations between microbiome and health outcomes ( Morgan et al. 2012 , Wu et al. 2016b ). Different statistical methods have been developed for microbiome data, and many of them used distance-based methods ( Zhao et al. 2015 , Wu et al. 2016a , Koh et al. 2017 , Ma et al. 2020 , Wang et al. 2022 ). The performance of distance-based methods is known to be greatly affected by the distance metrics used ( Chen et al.
2012 ). For microbiome data, several distance metrics have been developed and widely used. UniFrac distances ( Lozupone and Knight 2005 , Lozupone et al. 2007 ) weight branch lengths in a phylogenetic tree either by differences in the presence/absence of the descending OTUs between two samples, thus capturing signals of rare taxa (unweighted UniFrac distance) ( Lozupone and Knight 2005 ), or by differences in abundance levels of the descending OTUs, thus capturing signals of abundant taxa (weighted UniFrac distance) ( Lozupone et al. 2007 ). Generalized UniFrac distances ( Chen et al. 2012 ) focus on OTUs in between. Another commonly used distance metric for microbiome data is the Bray–Curtis distance ( Bray and Curtis 1957 ), which only considers abundance information of OTUs. Several existing methods that test for associations between microbiome and health outcomes consider multiple distance metrics ( Zhao et al. 2015 , Wu et al. 2016a , Koh et al. 2017 ) and choose an optimal one. That is, only one form of association between taxa and health outcomes is considered in the final model. Studies have also investigated how microbiome predicts health outcomes using either general-purpose prediction methods such as Random Forest ( Breiman 2001 ), sparse regression models like Lasso ( Tibshirani 1996 , Knights et al. 2011 ), or methods specifically developed for microbiome data for prediction ( Tanaseichuk et al. 2014 , Chen et al. 2015 , Xiao et al. 2018 , Wassan et al. 2018 ). Most recently, prediction models with deep learning methods were also developed using microbiome data ( Grazioli et al. 2022 , Wang et al. 2021 , Sharma et al. 2020 , Reiman et al. 2020 ), with many of them using a convolutional neural network (CNN) that can capture spatial relationships. In these models, convolutional layers were used to mimic taxonomic ranks to capture the phylogenetic relationship among microbial species.
However, many studies have suggested that in real microbiome studies, multiple forms of microbiome–outcome association exist ( Giliberti et al. 2022 ). For example, health outcomes including obesity ( Turnbaugh et al. 2009 ), irritable bowel disease ( Morgan et al. 2012 ), and diabetes ( Karlsson et al. 2013 ) are associated with the presence/absence information of some taxa and are also associated with the abundance level of other taxa. In addition, the associated taxa may be close to each other on a phylogenetic tree (referred to as phylogenetically related) or scattered across a phylogenetic tree (referred to as phylogenetically unrelated). For prediction purposes, no methods exist that consider multiple forms of microbiome–outcome associations. In this paper, we propose MK-BMC, a Multi-Kernel framework with Boosted distance Metrics for Classifications using microbiome data, with each kernel being transformed from a boosted distance metric for microbiome data capturing one form of association between taxa and a health outcome. MK-BMC learns kernel weights for multiple kernels, with the weights reflecting contributions of individual kernels, i.e. individual types of microbiome–outcome associations. Here we propose to first boost existing distance metrics for microbiome data using taxon-level association signal strength to up-weight taxa that are potentially associated with a health outcome of interest, and down-weight those that are potentially noise, to further improve prediction. The proposed MK-BMC method then uses kernels derived from these boosted distance metrics. Through extensive simulation studies, we demonstrated the superior prediction performance of (i) the proposed boosted distance metrics over the original ones and (ii) the proposed MK-BMC method over several competing methods.
We applied MK-BMC and competing methods to predict thyroid, obesity, and inflammatory bowel disease (IBD) status using gut microbiome data from the American Gut Project and observed much-improved prediction performance of MK-BMC over that of competing methods. The estimated kernel weights give insights into contributions of different forms of microbiome–outcome associations.
2 Methods Let be the case–control status (1 for case, 0 for control), be the relative abundance levels of q OTUs, and be the L covariates (e.g. age, gender) for sample i , . We denote as the rooted phylogenetic tree with R branches with branch lengths . 2.1 The proposed boosted distances for microbiome data 2.1.1 Recap of distance metrics for microbiome data Several popular distance metrics for microbiome data have been proposed ( Kuczynski et al. 2010 , Fukuyama et al. 2012 , Tang et al. 2016 ). They can be categorized into tree-based distances, such as the unweighted and weighted UniFrac distances ( Lozupone and Knight 2005 , Lozupone et al. 2007 ) calculated based on phylogenetic tree information, and non-tree-based distances, including the Bray–Curtis ( Bray and Curtis 1957 ) and Hamming distances ( Zhang et al. 2018 ), which do not incorporate phylogenetic tree information. Alternatively, these distances can be divided into abundance-based (using species’ abundance levels) and presence-absence-based (using species’ presence–absence status, Tang et al. 2016 , Zhang et al. 2018 ). Specifically, the weighted UniFrac distance between samples i and j is defined as for R branches. The unweighted UniFrac distance is defined as , where is an indicator function. Both the Bray–Curtis distance and the Hamming distance are calculated from the abundance levels of q OTUs without referring to phylogenetic tree information. Note that the Hamming distance is equivalent to the presence–absence version of the Bray–Curtis distance, as the denominator of the Bray–Curtis distance is actually a constant. 2.1.2 The proposed boosted distance metrics for microbiome data The aforementioned four distance metrics comprehensively quantify the difference in microbiome compositions between two samples. However, for predicting health outcomes, not all taxa in a sample’s microbiome are predictive.
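The two non-tree-based metrics above can be written down directly; with relative abundances summing to 1, the Bray–Curtis denominator is the constant 2, which is why its presence/absence numerator reduces to a Hamming-style count. A minimal sketch (function names illustrative):

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance profiles."""
    return (sum(abs(a - b) for a, b in zip(x, y))
            / sum(a + b for a, b in zip(x, y)))

def hamming(x, y):
    """Presence/absence counterpart: number of taxa present in
    exactly one of the two samples."""
    return sum((a > 0) != (b > 0) for a, b in zip(x, y))

x, y = [0.5, 0.5, 0.0], [0.25, 0.25, 0.5]
assert bray_curtis(x, y) == 0.5  # denominator is 2 for relative abundances
assert hamming(x, y) == 1        # only the third taxon differs in presence
```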
We propose to up-weight taxa that are potentially associated with an outcome of interest and down-weight those that are potentially noise, using taxon-level association signal strengths. For a binary health outcome, we could apply a two-sample t-test to compare abundance levels between the two groups for each taxon and boost the weighted UniFrac and Bray–Curtis distances by the P-values of the t-tests. To boost unweighted UniFrac and Hamming distances, we could apply Pearson’s χ² test or Fisher’s exact test to test the association between the outcome and a taxon’s presence/absence status. We define taxon-level weights as normalized , where p is the P-value of the association test. We propose the boosted versions of the four distance metrics for microbiome data as follows: 2.1.3 From distance to kernel The relationship between taxa and the outcome is usually unknown. Thus, we use the Gaussian kernel ( Wang et al. 2020 ), which is a universal kernel ( Micchelli et al. 2006 ) and can approximate a large class of functions. Here is the distance between samples i and j , and is a parameter that is set as the mean of all pairwise distances among training samples. Note that captures the similarity between samples i and j . If we want to use L covariates together with microbiome to predict health outcomes, we can calculate, e.g., the Euclidean distance between samples i and j in terms of a covariate, and similarly use the Gaussian kernel or other kernel forms to capture appropriate relationships between covariates and the outcome through L kernel matrices . To simplify the notation, we denote kernels for the four boosted distance metrics for microbiome data as and kernels for L covariates as .
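The distance-to-kernel transform can be sketched as below. The exact formula is not reproduced in the text above, so a common Gaussian form, K_ij = exp(-d_ij**2 / sigma) with sigma set to the mean of all pairwise training distances as stated, is assumed here; this is a hedged sketch, not the published definition.

```python
import math

def gaussian_kernel(D):
    """Transform a pairwise distance matrix (list of lists) into a
    similarity kernel K_ij = exp(-D_ij**2 / sigma), with sigma the
    mean off-diagonal pairwise distance (assumed form)."""
    n = len(D)
    off = [D[i][j] for i in range(n) for j in range(n) if i != j]
    sigma = sum(off) / len(off)
    return [[math.exp(-D[i][j] ** 2 / sigma) for j in range(n)]
            for i in range(n)]

K = gaussian_kernel([[0.0, 1.0, 2.0],
                     [1.0, 0.0, 1.0],
                     [2.0, 1.0, 0.0]])
assert all(K[i][i] == 1.0 for i in range(3))  # identical samples: similarity 1
assert K[0][1] > K[0][2]                      # closer pairs are more similar
```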
2.2 The proposed multi-kernel model: MK-BMC To predict a binary outcome utilizing multiple forms of microbiome–outcome associations, we propose the following model that uses the weighted sum of multiple kernels transformed from the proposed boosted distance metrics and distances for covariates: where is the weight of kernel l and is a tuning parameter. With N training samples, CC is a matrix of case–control status: The intuition behind the first term in the objective function (1) is that similarities should be relatively small between groups and large within groups. The second term is an entropy loss that encourages equal contributions of multiple kernels. As increases, kernel weights tend to be close to each other. In practice, we set the maximum as the value that achieves max entropy when . We tune by considering possible values and select the optimal one through 5-fold cross-validation based on the AUC in training samples. 2.2.1 Optimization procedure Optimizing the objective function (1) is a simple linear programming problem. If the tuning parameter is zero, all weight is placed on a single kernel. If we define the (generalized) Lagrangian function with parameters and as By setting , it is easy to see that 2.3 Building a prediction tool With estimated kernel weights , we calculate similarities between samples i and j as . For sample i in a training set with N samples, we assign a similarity t-score as the two-sample t-statistic comparing its similarities with the remaining cases and controls. With and their group label , we fit a simple logistic model , which serves as the classifier to predict testing samples’ case–control status. To predict the case–control status of a testing sample j , we compute its similarity with training cases and with training controls separately as and . We then assign testing sample j a t-score as the t-statistic comparing these two sets.
With , we can easily calculate the probability of testing sample j being a case using the fitted logistic regression classifier.
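The similarity t-score at the heart of this classifier can be sketched as follows; a Welch two-sample t-statistic is assumed (the precise variant is not specified above), and all names are illustrative.

```python
import math
from statistics import mean, variance

def t_score(sim_to_cases, sim_to_controls):
    """Welch two-sample t-statistic comparing a sample's kernel
    similarities to training cases versus training controls; large
    positive values mean the sample 'looks like' a case."""
    a, b = sim_to_cases, sim_to_controls
    return (mean(a) - mean(b)) / math.sqrt(
        variance(a) / len(a) + variance(b) / len(b))

s = t_score([0.9, 0.8, 0.85], [0.2, 0.3, 0.25])
assert s > 0  # more similar to cases than to controls
```

In training, these t-scores feed the simple logistic model described above; at test time, the same score is plugged into the fitted model to obtain the probability of being a case.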
3 Results 3.1 Simulation studies We performed simulation studies to evaluate the prediction performance of the proposed MK-BMC method and that of several competing methods including Random Forest (RF), PAAM-RF, an extended version of RF incorporating the phylogenetic tree structure ( Wassan et al. 2018 ), and MDeep, a recently developed deep learning method ( Wang et al. 2021 ). MDeep orders OTUs based on a hierarchical clustering analysis using pairwise patristic distances within the phylogenetic tree. The ordered OTUs are subsequently utilized as inputs for a convolutional neural network, enabling predictions that leverage both the phylogenetic tree and OTU abundance levels. We also compared our method with models with single distance metrics or their boosted versions. The single kernel models are denoted as , , , and representing the Bray–Curtis kernel, weighted UniFrac kernel, unweighted UniFrac kernel, and Hamming kernel, respectively. The corresponding boosted versions are denoted as , , , and , respectively. For RF and PAAM-RF, we set the number of decision trees as 1,000 and the number of variables to possibly split at each node as the (rounded down) square root of the number of variables. All other parameters follow defaults in the R package “ranger.” For MDeep, we used default parameter values from the authors’ GitHub repository ( https://github.com/lichen-lab/MDeep ). We generated 1,000 datasets, each with a training set and a testing set of equal size n . Within each training and testing set, there are an equal number of cases and controls. We considered different sample sizes . Because including covariates, while influencing the overall prediction performance of all methods, does not fundamentally alter the relative prediction performance of each method, we only include simulation studies without covariates in the main text and report simulation studies with covariates in the supplementary materials . 3.1.1 Simulation settings Following Chen et al.
(2012) , we simulated microbiome data mimicking a real upper respiratory tract microbiome dataset ( Charlson et al. 2010 ) consisting of 856 OTUs after discarding singletons. Specifically, for sample i , the total count of 856 OTUs was generated from a negative binomial distribution with mean 1,000 and size 25. Given , to model the over-dispersion of OTU counts, 856 OTU counts were generated from a Dirichlet-multinomial distribution with proportions and an over-dispersion parameter , all of which were estimated from the original upper respiratory tract microbiome data and extracted from the R package “MiSPU.” We then transformed OTU counts into OTU abundance levels by dividing by each sample’s total OTU count. To simulate case–control status, we considered three scenarios. Under simulation scenario I, a set of OTUs that are close to each other on the phylogenetic tree were selected as signal OTUs that are associated with the case–control status and thus are referred to as phylogenetically related. Under simulation scenario II, signal OTUs are a set of OTUs that are far away on the phylogenetic tree and thus are referred to as phylogenetically unrelated. Under simulation scenario III, signal OTUs are a mixture of phylogenetically related and unrelated OTUs. Within each scenario, we considered settings in which either OTU abundance levels or OTU presence/absence status are associated with the case–control status. 3.1.1.1 Simulation scenario I: signal OTUs are phylogenetically related To simulate the case–control status of sample i , we considered two models: Model A uses relative abundances of signal OTUs and Model B uses presence/absence information of signal OTUs: where G is the set of signal OTUs, “scale ” standardizes variables with a mean of 0 and a standard deviation of 1, and is an indicator function. We set all signal OTUs to have the same effect size for simplicity and considered or 3.
Under simulation scenario I, to select a set of signal OTUs G that are close to each other on the phylogenetic tree, we first partitioned the 856 OTUs into 20 clusters by partitioning around medoids based on the cophenetic distance matrix using branch lengths on the phylogenetic tree. Numbers of OTUs and total abundance levels of these 20 clusters vary. For Model A, when abundance levels of signal OTUs are related to a binary outcome, we selected the 2nd and 6th most abundant clusters, with 57 and 53 OTUs and total abundance levels 10.39% and 4.91%, respectively, as signal OTU sets G . For Model B, when a binary outcome is associated with presence/absence information of signal OTUs, we selected the 8th and 17th most abundant clusters, with 29 and 25 OTUs and total abundance levels 4.59% and 1.43%, respectively, whose average relative abundance per OTU is similar to that of the two clusters used in Model A. 3.1.1.2 Simulation scenario II: signal OTUs are phylogenetically unrelated Under simulation scenario II, was similarly simulated using Models A and B, but signal OTUs are a set of OTUs that are far away from each other on the phylogenetic tree. To do so, we ordered the 856 OTUs by their abundance levels and selected a set of signal OTUs G as nine OTUs from nine different clusters with descending abundance levels. For Model A, we selected two sets of nine signal OTUs with total abundance levels 11.14% and 4.77%, respectively. For Model B, we selected another two sets of nine signal OTUs with total abundance levels 10.91% and 2.26%, respectively. 3.1.1.3 Simulation scenario III: signal OTUs are a mixture of scenarios I and II Under simulation scenario III with a mixture of phylogenetically related and unrelated signal OTUs, we considered several combinations of signal OTU sets: where is a set of phylogenetically related OTUs and is a set of phylogenetically unrelated OTUs. We set or 3.
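The formulas for Models A and B are not reproduced above, so as a loudly hedged illustration the sketch below assumes a logistic link on the sum of scaled abundances (Model A) or presence/absence indicators (Model B) over signal OTUs with a common effect size beta; the function names, link, and encoding are assumptions, not the published model.

```python
import math
import random

def scale(xs):
    """Standardize a list to mean 0 and (sample) standard deviation 1."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    return [(x - m) / sd for x in xs]

def simulate_status(signal_otus, beta, presence=False, seed=0):
    """Draw case-control status from an assumed logistic model on the
    signal-OTU columns: Model A uses scaled abundances, Model B uses
    presence/absence indicators."""
    rng = random.Random(seed)
    cols = [[float(x > 0) for x in col] if presence else scale(col)
            for col in signal_otus]
    n = len(signal_otus[0])
    status = []
    for i in range(n):
        eta = beta * sum(col[i] for col in cols)
        status.append(int(rng.random() < 1.0 / (1.0 + math.exp(-eta))))
    return status

# A strong abundance signal (Model A) cleanly separates cases from controls:
y = simulate_status([[5.0, 5.0, 5.0, -5.0, -5.0, -5.0]], beta=50.0)
assert y == [1, 1, 1, 0, 0, 0]
```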
3.1.2 Simulation results We evaluated the prediction performance of each method using the area under the ROC curve (AUC), sensitivity, and specificity in testing sets and presented results for in the main text. Results for , and , and are shown in the supplementary materials . We first investigated whether the proposed boosted distance metrics improve prediction performance over the original ones by comparing single kernel models with kernels transformed from either boosted or original distance metrics. Figure 1 displays box plots of AUCs of four pairs of single kernel models from 1,000 simulations for four simulation settings. Single kernel models with kernels that reflect the true microbiome–outcome relationships are in boxes. Complete simulation results for all simulation settings are shown in Figure S1 in the supplementary materials. We observed improved prediction performance of boosted single kernel models over their un-boosted versions consistently across almost all simulation settings considered. Models with kernels that reflect the true microbiome–outcome relationships usually benefit the most. This suggests that the proposed boosted distance metrics, which up-weight taxa that are potentially associated with the outcome of interest and down-weight taxa that are potentially noise, improve overall prediction. We then investigated the prediction performance of the proposed MK-BMC method. Table 1 displays mean AUCs and 0.025 and 0.975 quantiles across 1,000 testing sets for the proposed MK-BMC method and competing methods, with the best model in bold. Here the “oracle” AUC is calculated from with true parameter values. Overall, MK-BMC almost always has the best performance or performance comparable to the best competing model, while different competing methods perform the best under different simulation settings.
More specifically, in simulation scenarios I and II, single kernel models with kernels reflecting the true microbiome–outcome relationships always perform the best, while the proposed MK-BMC achieves performance comparable to that of the best single kernel model. In simulation scenario III, when signal OTUs are a mixture of phylogenetically related and unrelated, presence/absence- and abundance-associated OTUs, i.e. under the scenarios that MK-BMC was designed for, MK-BMC outperforms all competing methods. Moreover, kernel weights give us insights into the types of contributing OTUs. Figure 2 displays box plots of kernel weights of MK-BMC. We notice that weights for kernels that reflect the true microbiome–outcome relationships are the largest for all simulation settings in simulation scenarios I and II, while in simulation scenario III, the four kernel weights are more similar, with kernels representing true microbiome–outcome relationships being slightly larger. For example, under the second setting in simulation scenario III with mixtures of phylogenetically related and unrelated presence/absence (Model B) signal OTUs, kernel has the largest weight followed by kernel , while the weights of kernels and are small. This is promising for real microbiome studies, where true associations between microbiome and outcomes are complicated and unknown. For competing methods, PAAM-RF, which uses tree information, almost always outperforms RF when signal OTUs are phylogenetically related. When signal OTUs are phylogenetically unrelated, RF performs better than PAAM-RF. The deep learning method MDeep performs worse than PAAM-RF in most simulation settings and has little predictive ability when signal OTUs act through presence/absence. We included sensitivity and specificity results in Supplementary Tables S2 , S3 , S5 , S6 , S8 , and S9 , where the cutoff for classifying cases/controls for all methods is 0.5.
We can see that, across all simulation settings, no single method consistently outperforms the others in terms of both sensitivity and specificity. There is a trade-off between them: methods with higher sensitivity than others tend to have lower specificity, and vice versa. Only under simulation settings where the presence/absence information of abundant phylogenetically related signal OTUs is related to a health outcome does the proposed MK-BMC perform the best across all methods in all three metrics: AUC, sensitivity, and specificity. We studied the impact of signal density when signal OTUs are phylogenetically related or unrelated ( Supplementary Table S11 ). To do so, we fixed the total abundance level of all signal OTUs but increased the number of signal OTUs to increase the “signal density.” Thus, the abundance level per signal OTU decreases as signal density increases. As expected, when signal OTUs are phylogenetically unrelated, AUCs of MK-BMC and all competing methods decrease with an increasing number of signal OTUs while fixing the total abundance level. However, when signal OTUs are phylogenetically related, AUCs of MK-BMC and several competing methods that use the phylogenetic tree information improve with an increasing number of signal OTUs when fixing the total abundance level. This is because, with more signal OTUs that are close to each other on the phylogenetic tree, MK-BMC, PAAM-RF, MDeep, and the single kernel methods and can use more of the phylogenetic tree information and thus have improved prediction performance. See Supplementary Section A5 for more details. For additional simulation studies with covariates, in general, we observed similar prediction patterns with/without covariates, and the prediction performance of all methods improves as the effect size of covariates increases, as expected. Moreover, kernel weights of covariates in MK-BMC also increase with increasing effect size of covariates.
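All of the comparisons above score methods by AUC in testing sets; given predicted scores for cases and controls, AUC can be computed directly via its rank (Mann–Whitney) formulation. A minimal sketch with illustrative names:

```python
def auc(case_scores, control_scores):
    """AUC = probability that a randomly chosen case is scored above a
    randomly chosen control (Mann-Whitney form; ties count one half)."""
    pairs = len(case_scores) * len(control_scores)
    wins = sum((c > d) + 0.5 * (c == d)
               for c in case_scores for d in control_scores)
    return wins / pairs

assert auc([0.9, 0.8], [0.1, 0.2]) == 1.0  # perfect separation
assert auc([0.5, 0.5], [0.5, 0.5]) == 0.5  # no information
```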
3.2 Applications to the American Gut Project We applied MK-BMC and competing methods to the microbiome data from the American Gut Project (AGP, McDonald et al. 2018 , http://americangut.org ; EBI: ERP012803) to predict multiple binary health outcomes. To evaluate the prediction performance, we randomly split samples into equally sized training and testing data 1,000 times. We trained MK-BMC and competing methods using training data and evaluated their prediction performance using AUCs in testing data. AGP was launched in 2012 to better understand the role of microbes in health. AGP participants provided detailed self-reported metadata. Microbiome samples were collected from different body habitats including fecal, oral, skin, and other body sites. We downloaded the latest version of the processed OTU count table (similarity level 97%), which includes 19,524 samples and 36,405 OTUs, from ftp://ftp.microbio.me/AmericanGut/ag-2017-12-04/03-otus.zip/100nt/gg-13_8-97-percent/otu_table.biom . We also downloaded health-related information from https://qiita.ucsd.edu/study/description/10317 . We considered 4,749 samples out of the 19,524 samples whose “country” was “USA” and “country residence” was “United States”. We further removed samples with total OTU counts less than 1,250 ( McDonald et al. 2018 ), thus yielding 4,620 samples. We focused on gut samples and considered three binary outcomes for predictions: thyroid status, obesity status, and inflammatory bowel disease (IBD) status. We incorporated the covariate age to enhance prediction performance for each binary outcome. Samples without age information were excluded. To make prediction results with and without age comparable for each outcome, we used the same set of samples to conduct predictions with microbiome only, age only, and microbiome + age. For RF and PAAM-RF, we included age during every node split in addition to the randomly selected subset of OTUs to make sure age is always in the model.
All parameters for RF and PAAM-RF were set the same as in the simulation studies. Note that the deep learning method MDeep was not implemented to handle covariates, and thus it was not included as a competing method here. Detailed information on the three outcomes, data processing steps, and prediction results using microbiome only with larger sample sizes without considering missing age are included in the supplementary materials . Table 2 summarizes AUC means and 0.025 and 0.975 quantiles in testing sets across 1,000 50/50 random splits for the three outcomes, with the best models in bold. MK-BMC consistently performs the best or as well as the best model across all methods for the three outcomes. As shown in Supplementary Figure S8 with box plots of AUCs of eight single kernel models, we noticed that single kernel models using boosted distance metrics in general have better and more stable performance across different random splits than single kernel models using the original distance metrics. When predicting thyroid status based solely on microbiome information, the proposed MK-BMC has the best AUC across all methods, which is as good as that of and . Moreover, also has similar AUCs, which indicates that a mixture of abundant phylogenetically unrelated taxa and rare phylogenetically related taxa is predictive of thyroid status. This observation is also confirmed by the fact that PAAM-RF slightly outperforms RF. Age is a stronger predictor than microbiome for thyroid status, with a mean AUC around 0.63. When considering both microbiome and age as predictors, MK-BMC performs the best with a mean AUC of 0.647. However, for RF and PAAM-RF, predictions using both microbiome and age perform worse than those using age only, although better than those using microbiome only. This is because, for RF and PAAM-RF, with a large number of OTUs whose effects are small, the effect of age can easily be buried.
On the other hand, MK-BMC treats age and the microbiome as distinct kernels and thus can effectively capture the age signal even in the presence of a large number of OTUs. We further investigated the estimated kernel weights in MK-BMC ( Fig. 3 ) and noticed that one kernel has the largest weight while several others have similar weights, which is consistent with the findings from the single kernel models, suggesting that a mixture of abundant, phylogenetically unrelated taxa and rare, phylogenetically related taxa, together with age, is predictive of thyroid status. When predicting obesity status based solely on microbiome information, the proposed MK-BMC has the best AUC across all methods, matching the best-performing single kernel models. This indicates that a mixture of rare microbiome profiles that are phylogenetically related and unrelated is predictive of obesity. Age itself is also predictive of obesity, with a mean AUC of about 0.65. When age is incorporated together with the microbiome, MK-BMC has a mean AUC of 0.700, close to that of the best method, PAAM-RF, with a mean AUC of 0.713. The performance of MK-BMC, RF, and PAAM-RF all improved after adding age, because the effects of both the microbiome and age are strong. In terms of estimated kernel weights, one kernel has the largest weight and another barely any weight, while the rest have similar weights, again suggesting that a mixture of rare microbiome profiles that are phylogenetically related and unrelated, together with age, is predictive of obesity status. For IBD status, age is not predictive, with a mean AUC around 0.51; thus, we only fit two models, i.e. age only and microbiome only. With microbiome only, MK-BMC outperforms all other methods with a mean AUC of 0.688. PAAM-RF performs slightly better than RF. Further investigation of the estimated kernel weights in MK-BMC shows that all four kernels have relatively similar weights, with two of them slightly larger.
This indicates that some taxa, either rare or abundant, that are phylogenetically related are predictive of IBD.
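The multi-kernel construction, transforming candidate distance matrices into similarity kernels and combining them with per-kernel weights, can be sketched as below. The Gaussian-type transform and the PSD projection are common choices assumed here for illustration; the exact transform and the estimation of the kernel weights in MK-BMC are defined in its Methods:

```python
import numpy as np

def distance_to_kernel(D, scale=None):
    """Transform a distance matrix into a similarity kernel via exp(-d^2/s).
    The exact transform used by MK-BMC may differ; this is a common choice."""
    if scale is None:
        scale = np.median(D[D > 0]) ** 2  # median heuristic for the bandwidth
    K = np.exp(-D ** 2 / scale)
    # project onto the nearest PSD matrix by clipping negative eigenvalues
    w, V = np.linalg.eigh((K + K.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def combine_kernels(kernels, weights):
    """Weighted sum of candidate kernels; weights are normalized to sum to one."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * K for w, K in zip(weights, kernels))
```

The learned weight attached to each kernel is what gives the interpretation discussed above, e.g. a large weight on the kernel built from a boosted unweighted UniFrac distance points toward rare, phylogenetically related signal taxa.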
4. Discussion In this paper, we developed MK-BMC, a multi-kernel model with boosted distance metrics for classification with microbiome data. Starting from several widely used distance metrics for microbiome data, including the weighted and unweighted UniFrac distances and the Bray–Curtis distance, the proposed boosted distance metrics up-weight taxa that are potentially associated with an outcome of interest and down-weight taxa that are potentially noise. MK-BMC then uses multiple kernels transformed from the proposed boosted distance metrics to consider multiple forms of microbiome–outcome associations and thus can use multiple prediction signals to improve overall prediction performance. The kernel weights learned by MK-BMC give insight into the contributions of different types of taxa to the overall prediction. In simulation studies covering a wide range of scenarios, we demonstrated the advantages of the proposed boosted distance metrics, which use taxon-level signal strengths for overall predictions, over the original ones. Similar ideas that up-weight potential signal features and down-weight potential noise features in distance-based methods have been used for other types of omics data ( Ruan et al. 2019 , Wang et al. 2019 ) for disease subtyping or for disease signal identification. We also showed the much-improved prediction performance of MK-BMC over competing methods in almost all simulation scenarios considered. We observed that (i) when the signal OTUs are a mixture of different types of OTUs (e.g. both phylogenetically related and unrelated), the scenarios MK-BMC was designed for, MK-BMC always performs best; and (ii) when the signal OTUs are of a single type, MK-BMC almost always performs as well as the single kernel model whose kernel reflects the true microbiome–outcome association. We applied MK-BMC and competing methods to predict binary thyroid, obesity, and IBD status using gut microbiome data from the AGP while incorporating age as a covariate.
MK-BMC consistently performs best, or as well as the best model, across all methods for the three outcomes. Moreover, for outcomes where age and the microbiome are both predictive, MK-BMC consistently improves when incorporating age, while the prediction performance of RF and PAAM-RF with both age and microbiome may sometimes be worse than that with age only, depending on how strongly age and the OTUs are predictive. Furthermore, the kernel weights from MK-BMC provide information on the contributions of different types of microbiome profiles in predicting these outcomes. To boost individual taxa when calculating distance metrics for microbiome data, both taxon-level P-values and effect sizes are potential choices. We compared the prediction performance of these two types of boosting weights at different sample sizes and observed that prediction results with P-values as boosting weights are more stable than those with effect sizes as boosting weights as sample sizes decrease. This is because P-value calculations account for variation in effect size estimates and are thus less affected by sample size. Nevertheless, it is noteworthy that, due to the boosting process, kernel weights may not remain stable when the sample size is small; despite this, MK-BMC demonstrates robust performance across various sample sizes. While we used a Gaussian kernel for covariates, MK-BMC has the flexibility to incorporate linear or other kernel forms to capture appropriate covariate–outcome relationships. And while we only considered binary outcomes here, for continuous health outcomes one could use kernel regressions based on the proposed boosted distance metrics for microbiome data.
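As a minimal sketch of the boosting idea discussed above, one can up-weight each taxon in a Bray–Curtis-style distance by a weight derived from its marginal association P-value. The per-taxon test (a Wilcoxon rank-sum test) and the -log10 transform below are illustrative assumptions, not necessarily the paper's exact choices:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def boosting_weights(X, y, eps=1e-300):
    """Per-taxon weights from marginal association P-values (illustrative:
    a rank-sum test per taxon; the paper's exact test may differ)."""
    pvals = np.array([
        mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
        if X[:, j].std() > 0 else 1.0      # constant taxa carry no signal
        for j in range(X.shape[1])])
    return -np.log10(pvals + eps)          # small P-value -> large weight

def boosted_bray_curtis(X, w):
    """Bray-Curtis distance with taxon j up-weighted by w[j]."""
    n = X.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for k in range(i + 1, n):
            num = np.sum(w * np.abs(X[i] - X[k]))
            den = np.sum(w * (X[i] + X[k]))
            D[i, k] = D[k, i] = num / den if den > 0 else 0.0
    return D
```

In practice the weights would be computed on training samples only, so that the boosted distances used for testing do not leak outcome information.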
Abstract Motivation Research on the human microbiome has suggested associations with human health, opening opportunities to predict health outcomes from microbiome data. Studies have also suggested that diverse forms of taxa, such as rare taxa that are evolutionarily related and abundant taxa that are evolutionarily unrelated, could be associated with or predictive of a health outcome. Although prediction models have been developed for microbiome data, no existing prediction models use multiple forms of microbiome–outcome associations. Results We developed MK-BMC, a Multi-Kernel framework with Boosted distance Metrics for Classification using microbiome data. We propose to first boost widely used distance metrics for microbiome data using taxon-level association signal strengths, up-weighting taxa that are potentially associated with an outcome of interest. We then propose a multi-kernel prediction model in which each kernel captures one form of association between taxa and the outcome, with each kernel measuring similarities of microbiome compositions between pairs of samples, transformed from a proposed boosted distance metric. We demonstrated the superior prediction performance of (i) boosted distance metrics for microbiome data over the original ones and (ii) MK-BMC over competing methods through extensive simulations. We applied MK-BMC to predict thyroid, obesity, and inflammatory bowel disease status using gut microbiome data from the American Gut Project and observed much-improved prediction performance over that of competing methods. The learned kernel weights provide insight into the contributions of individual microbiome signal forms. Availability and implementation Source code together with a sample input dataset is available at https://github.com/HXu06/MK-BMC
Supplementary Material
Conflicts of interests None declared. Funding This work has been supported by the departmental fund from the Department of Biostatistics, Columbia University. Data availability Source code together with a sample input dataset is available at https://github.com/HXu06/MK-BMC .
CC BY
Bioinformatics. 2024 Jan 10; 40(1):btad757
PMC10789314
38195719
1 Introduction Protein engineering is the design of proteins for improved or unique fitness, where fitness can describe any property of the protein including reactivity, enantioselectivity, or thermostability ( Brannigan and Wilkinson 2002 , Tsuboyama et al. 2022 ). Approaches to protein engineering explore various aspects of the sequence-structure-function paradigm. One of the most popular and successful strategies is the use of directed evolution, where libraries of variants are constructed by mutating the wildtype sequence ( Brannigan and Wilkinson 2002 , Packer and Liu 2015 ). Variants with high fitness are selected and used in further iterations to create new highly fit enzymes. Directed evolution takes advantage of the vast exploration of the sequence landscape with sequencing and assaying at scale. However, directed evolution is labor intensive and does not leverage 3D structural information that may guide sequence design and prediction of protein function. Rational design, on the other hand, takes advantage of the protein structure and interactions between protein and its native substrate to engineer the protein ( Lutz 2010 , Song et al. 2023 ). However, it requires an accurate model of not only the protein structure but also interactions and mechanistic insight into substrates of interest. A high-throughput computational protocol is required that can offer rapid and informed discovery of new variants to explore while bridging the gap between sequence, structure, and function. AlphaFold2 (AF2) ( Jumper et al. 2021a ), the top-performing structure prediction method in CASP14 ( Jumper et al. 2021b ), has made it possible to rapidly generate high-quality structures of novel sequences. Enormous structure libraries such as the AF2 database of protein structure predictions ( Varadi et al. 2022 ) and ESM Metagenomic Atlas ( Lin et al. 2023 ) demonstrate the possibility to utilize and explore novel prediction methods at scale ( Bouatta and AlQuraishi 2023 ). 
Previously, state-of-the-art structural prediction methods were limited to the domain of hours to days ( AlQuraishi 2019 ), but examples such as ColabFold ( Mirdita et al. 2022 ), DMPFold2 ( Kandathil et al. 2022 ), ESMFold ( Lin et al. 2023 ), and RoseTTAfold ( Baek et al. 2021 ) have made state-of-the-art structure prediction accessible in minutes to hours. ColabFold uses a modified multiple sequence alignment (MSA) pipeline utilizing MMseqs2 ( Mirdita et al. 2019 ), demonstrating that AF2 can be repurposed for speed and accessibility. We describe an approach that also utilizes a modified MSA generation pipeline and enables us to quickly explore a target enzyme family. Multiple approaches exist for associating structures generated in such a manner with their function ( Lee et al. 2007 ). High-throughput methods to relate protein and ligand structure to affinity or reactivity generally use empirical models and quantitative structure-activity relationships ( Perkins et al. 2003 ). However, predictions that provide geometric and functional insight rely on protein-ligand docking and the prediction of binding affinity ( Meng et al. 2011 ). Molecular docking has made it possible to predict with reasonable accuracy the binding poses and affinities of substrates in protein structures, creating the opportunity to predict protein function a priori from the binding pose and docking score of the ligand. To dock an array of multiple ligands to hundreds of protein structures, a high-throughput docking approach is needed with an accurate scoring method. In the work presented here, we describe a high-throughput protocol for building structure-function models in silico that leverages AF2 ( Jumper et al. 2021a ) and GPU-accelerated fast Fourier transform-based docking (FFTDock) ( Ding et al. 2020 ) as implemented in CHARMM ( Brooks et al. 2009 ) utilizing the physical forcefields from the CHARMM36 ( Brooks et al. 2009 , Huang et al. 2017 ) and CGenFF ( Vanommeslaeghe et al. 
2010 ) forcefield efforts. To assess the validity of the high-throughput structure and docking pipeline, we focus on a model of catalysis by fungal flavin-dependent monooxygenases (FDMOs). TropB is an FDMO that carries out oxidative dearomatization, a useful reaction in organic synthesis that exhibits high site and stereoselectivity across a variety of resorcinol substrates ( Baker Dockrey et al. 2017 ). AfoD, AzaH, and SorbC are related FDMOs that also possess unique reactivities and site-selectivities with relatively minor changes in the steric and electronic environments of their substrates ( Baker Dockrey et al. 2017 , Pyser et al. 2019 ). Structural models and molecular modeling have been utilized to probe function, mechanism, and rational engineering ( Baker Dockrey et al. 2019 , Pyser et al. 2019 , Rodríguez Benítez et al. 2019 , Tweedy et al. 2019 ) in this family of FDMOs as well. For example, a mechanistic study of TropB revealed that the face of the ligand presented toward the activated FAD cofactor leads to hydroxyl group addition on that face, suggesting the use of molecular docking to elucidate stereochemistry and reactivity. Additionally, ancestral sequence reconstruction (ASR) of mammalian FDMOs has been used to find stable ancestors and learn important structural features ( Nicoll et al. 2019 ). More recent work has demonstrated the efficacy of using ancestral sequence resurrects to determine key residues controlling stereoselectivity in the fungal FDMOs TropB, AfoD, and AzaH ( Chiang et al. 2023 ). In the work we describe below, we demonstrate that this protocol can infer a priori protein enantioselectivity and reactivity from interactions between protein and ligand structural models and can predict the effect of known key stereochemical switches ( Chiang et al. 2023 ). Our approach builds upon previous exploration of structure-function models to guide design. A previous pipeline ( Aadland et al.
2019 ) for modeling ASR enzymes illustrated the use of MODELLER ( Eswar et al. 2006 ) to apply high-throughput homology structure modeling to a family of double-stranded RNA binding enzymes. MODELLER requires high CPU usage to scale to larger phylogenies and the protocol relied on pre-trained models to assign binding affinity to the structures. Wong et al . 2022 demonstrated the use of AF2 database structures ( Varadi et al. 2022 ) and AutoDock Vina ( Eberhardt et al. 2021 ) to screen anti-bacterial compounds against E. coli essential proteins and demonstrated the use of machine learning (ML) rescoring functions to slightly improve binding affinity predictions. AutoDock Vina is more computationally expensive than FFTDock and alone was unable to predict top binders. Wijma et al. 2015 demonstrated in silico enzyme design of enantioselective enzymes by using multiple independent molecular dynamics simulations in a high-throughput fashion (HTMI-MD) ( Wijma et al. 2014 , Arabnejad et al. 2020 ), but this approach relies on a single starting crystal structure, docking of predefined R and S orientations and brute force molecular dynamics simulations, and hence is of limited scalability. AlphaFill ( Hekkelman et al . 2022 ) is an algorithm that matches structures with cofactors and ligands in the PDB library to models in the AF2 structure database and uses the YASARA ( Krieger et al. 2002 ) forcefield to minimize transplanted small molecules into AF2 models. We describe a similar approach of transplanting the FAD cofactor into our predicted structures using the CHARMM36 ( Huang et al. 2017 ) and CGenFF ( Vanommeslaeghe et al. 2010 ) forcefields. Nevertheless, these efforts serve as inspiration for the work we describe below. Ultimately the driving goal in structurally characterizing enzymes is to elucidate the key determinants of function. 
However, given the large number of predictions generated for multiple ligands and a large sequence-structure space, it can be difficult to infer the residues to target in the design of a better biocatalyst. ML approaches have been applied in the context of directed evolution to map enormous sequence-fitness landscapes, speeding up directed evolution with a more informed selection of mutations ( Wu et al. 2019 , Yang et al. 2019 ). In particular, the ML framework of gradient-boosted trees has been previously used to fit enantioselectivity to enzyme properties ( Cadet et al. 2018 ). We describe a generalizable approach to fit a sequence-function model using an ensemble of decision tree methods, by representing the sequence data in the tabular form of an MSA. We then identify residues that control reactivity and stereochemistry using SHapley Additive exPlanations (SHAP) ( Lundberg et al. 2017 ). SHAP is an approach to linearly approximate the features determining a model’s prediction and has been widely adopted in the field of explainable artificial intelligence (XAI) ( Linardatos et al. 2020 ). SHAP has been previously used to understand the role of features such as composition, property, and nucleotide type in mRNA modification site prediction ( Bi et al. 2020 , Rodríguez-Pérez and Bajorath 2020 ), and to highlight key functional groups in small molecule potency predictors ( Rodríguez-Pérez and Bajorath 2020 ). We demonstrate its application to protein sequence-function analysis, with SHAP values assigning each amino acid to a stereochemistry and reactivity contribution. SHAP analysis of key residues from in silico sequence-function pairs will serve as a rapid and reasonably accurate step to guide protein engineering efforts.
2 Methods Sequence library The wild-type sequence library consisted of 277 extant flavin-dependent monooxygenase sequences, 276 maximum likelihood ancestral resurrect sequences, and 276 alt-all ancestral resurrect sequences as previously described ( Chiang et al. 2023 ). Of these, 67 were previously expressed and experimentally assayed for stereochemistry and conversion denoted as the ancestral FDMO library. These sequences formed the basis for training and testing our pipeline and are included in Supplementary Data 1 . Model generation with Alphafold2 A consensus sequence from the MSA of the 277 extant sequences used to perform ASR ( Chiang et al. 2023 ) ( Supplementary Data 2 ) was generated using HHconsensus from HHsuite3 ( Steinegger et al. 2019 ), with match states in columns with less than 50% gaps. AF2 v2.0’s data pipeline, model weights, and inference script were used. AF2’s data pipeline was used to generate MSAs from the consensus sequence, and the MSAs were combined into a FASTA formatted set of 84,572 sequences, representing the consensus sequence hits. This database was used to generate AF2 models of the ancestral sequences by replacing the standard data pipeline for the feature dictionary with a JackHMMER search on the consensus sequence hits. The standard AF2 MSA pipeline’s BFD database consists of over 2.5 billion sequences, with HHblits on the BFD database being CPU limited and highly I/O intensive. Using the consensus sequence hits reduces MSA generation from hours/days to under a minute. The MSA of the top 10000 hits was used with HHsearch on the PDB70 database ( Steinegger et al. 2019 ) to find templates with AF2’s template featurizer. The model generation step used monomer model 1 with 1 ensemble and default Amber relaxation constants. 
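The consensus-sequence step can be illustrated in plain Python. The paper used HHconsensus from HH-suite3, so this majority-rule stand-in (match states restricted to columns with less than 50% gaps) is a simplified sketch rather than the actual tool:

```python
from collections import Counter

def consensus_sequence(msa, max_gap_frac=0.5):
    """Majority-rule consensus over MSA columns with less than 50% gaps.
    The paper used HHconsensus from HH-suite3; this is a simplified stand-in."""
    n = len(msa)
    out = []
    for col in zip(*msa):                      # iterate over alignment columns
        gaps = sum(c == '-' for c in col)
        if gaps / n >= max_gap_frac:
            continue                            # skip gap-dominated columns
        counts = Counter(c for c in col if c != '-')
        out.append(counts.most_common(1)[0][0]) # most frequent residue
    return ''.join(out)
```

Searching once with this consensus sequence, then restricting all subsequent MSA generation to the resulting hit set, is what reduces per-model MSA construction from hours to under a minute.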
Minimization and addition of FAD cofactor The AF2 models were superposed with TM-align ( Zhang and Skolnick 2005 ) to the previously generated QM/MM refined chain A of RCSB PDB 6NES docked with 3-methyl-orcinaldehyde (3MO) ( Rodríguez Benítez et al. 2019 ). The superposed structures were represented with CHARMM in vacuum using the pyCHARMM ( Buckner et al. 2023 ) package. The superposed structures were minimized using 1000 steps of steepest descent (SD). Next, the FAD cofactor present in the QM/MM structure was added to the models. The FAD cofactor was minimized in the structure using successive rounds of SD and Adopted Basis Newton Raphson (ABNR) minimization, with progressively more of the protein atoms restrained in each round ( Supplementary Methods 1.1 ). Pose generation with CHARMM fast Fourier transform dock and refinement Docking grids representing protein and FAD atoms were generated in pyCHARMM ( Buckner et al. 2023 ) with FFTG ( Ding et al. 2020 ), the CHARMM module for FFTDock, with a grid center at the average coordinates of 3MO in QM/MM refined TropB ( Rodríguez Benítez et al. 2019 ) ( Supplementary Methods 1.2 ). The top 500 poses were used as starting poses for grid-based minimization, explicit protein atom minimization, and simulated annealing. Grids using the same parameters as previously used for FFTDock ( Ding et al. 2020 ) were generated with varying epsilon to minimize the FFTDock poses ( Supplementary Methods 1.3 ). For explicit protein minimization, AF2 structures with the FAD cofactor were used to minimize the 500 FFTDock poses in vacuum with varying epsilon ( Supplementary Methods 1.4 ). The top 10 poses from FFTDock were used as starting conformers to generate 500 total rotamers via random rotation and random translation for simulated annealing ( Supplementary Methods 1.5 ). Simulated annealing was based on the CHARMM simulated annealing protocol using grids with varying softcore parameters and utilized CHARMM OpenMM_dock ( Ding et al.
2020 ) (OMMD) to carry out parallel simulated annealing of 500 rotamers. Stereochemistry prediction Each docking approach generated 500 final poses that were clustered using cluster.pl from the MMTSB toolset ( Feig et al. 2004 ), with parameters kclust, nolsqfit, 1 Å radius, and heavy atoms. For each cluster, the lowest energy pose was chosen as the cluster representative and the energy of this representative was used to rank the clusters. The stereochemistry of a cluster was calculated using the representative pose. From this pose, three atoms on the ligand resorcinol ring were selected to calculate a normal vector of the plane describing the ring. The normal vector was used to classify the representative pose as R or S based on its orientation relative to the vector from the ligand average coordinates to the FAD ring. The angle of a pose was constructed as the angle between the plane normal vector and the vector to the FAD ring. An angle between 0° and 90° indicated an S pose and an angle between 90° and 180° indicated an R pose. Each cluster was assigned a size based on the number of members of that cluster, a cluster energy determined by the representative pose, and a predicted stereochemistry and angle based on the representative pose ( Supplementary Fig. S6 ). The protein-ligand complex was assigned an overall predicted R fraction utilizing the Boltzmann distribution ( Gibbs 2010 ) over clusters: $R_{\mathrm{frac}} = \frac{1}{Z}\sum_{i \in R} n_i \, e^{-E_i/(k_B T)}$, where $i$ represents the cluster index, $E_i$ is the energy of the representative cluster pose, $k_B$ is the Boltzmann constant, $T$ is the temperature, $n_i$ is the size of the cluster, the summation in the numerator is restricted to those clusters for which the geometry of the representative pose is identified to produce R stereochemistry, and the normalization $Z = \sum_i n_i \, e^{-E_i/(k_B T)}$ sums over all clusters. Stereochemistry labels were assigned using an $R_{\mathrm{frac}}$ greater than 0.8 as R stereochemistry, an $R_{\mathrm{frac}}$ less than 0.2 as S stereochemistry, and racemic (R/S) for $0.2 < R_{\mathrm{frac}} < 0.8$.
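The geometric classification and Boltzmann-weighted R fraction described above can be sketched as follows. Helper names are illustrative, and note that the sign of the ring normal depends on the chosen atom ordering, which the protocol fixes by convention:

```python
import numpy as np

KB_KCAL = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def pose_is_R(ring_atoms, ligand_center, fad_center):
    """Classify a pose from the angle between the ring-plane normal and the
    vector from the ligand centroid to the FAD ring (angle > 90 deg -> R).
    The normal's sign follows the atom ordering convention chosen upstream."""
    a, b, c = (np.asarray(p, float) for p in ring_atoms)
    normal = np.cross(b - a, c - a)
    to_fad = np.asarray(fad_center, float) - np.asarray(ligand_center, float)
    cosang = normal @ to_fad / (np.linalg.norm(normal) * np.linalg.norm(to_fad))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return bool(angle > 90.0)

def r_fraction(energies, sizes, is_R, T=300.0):
    """Boltzmann-weighted fraction of clusters whose representative is R."""
    w = np.asarray(sizes) * np.exp(-np.asarray(energies) / (KB_KCAL * T))
    return float(w[np.asarray(is_R)].sum() / w.sum())
```

With the cutoffs from the text, a protein-ligand complex would then be labeled R for `r_fraction > 0.8`, S for `< 0.2`, and R/S otherwise.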
Accuracy of a docking approach was measured by comparing predicted stereochemistry labels against the ancestral FDMO library observations for the subset of reactive ligands. The consensus stereochemistry of a protein was the mode of the stereochemistry across all ligands, with R/S in case of multiple modes. Reactivity prediction A logistic regression model using statsmodels ( Seabold and Perktold 2010 ) was used to fit the reactivity of an enzyme in the ancestral FDMO library as a binary classification problem. For training, an enzyme was labeled reactive if it showed any non-zero conversion with any ligand in the library. The logistic regression model considered five computed descriptors to predict reactivity: FAD distance, FAD angle, anion distance, docking energy efficiency, and Pafnucy ( Stepniewska-Dziubinska et al. 2018 ) predicted pK d efficiency. Docking energy efficiency was found by taking the energy of the top-ranked cluster pose and dividing it by the number of ligand heavy atoms. The Pafnucy predicted pK d efficiency was found by dividing the predicted pK d from Pafnucy for the complex with the protein, FAD, and the top-ranked cluster representative pose by the number of ligand heavy atoms. FAD distance was defined as the distance from the ligand average coordinates of the top pose to the atom on the FAD ring that bonds to the hydroperoxyl group in the activated FAD. The FAD angle is the angle of the top cluster derived from stereochemistry prediction. The anion distance is the distance between the top-ranked ligand pose (the ligands are all anions) and the CZ atom of R206 (TropB numbering). The logistic regression model was trained on the ancestral FDMO library (67 enzymes previously assayed with ligands 2–5 ) and used to predict the conversion for all 830 enzymes.
Functional screen The finalized functional screen involved predicting the structure of an enzyme with AF2 and the consensus sequence hits, incorporation of FAD, pose generation with FFTDock, minimization of the FFTDock poses in vacuum in the environment of a fixed explicit all-atom protein with an epsilon of 0.75, prediction of stereochemistry from pose geometry, and prediction of reactivity with the trained logistic regression model. This protocol was used to predict stereochemistry and reactivity for the entire sequence library. Sequence alignment and preprocessing All extant and ancestral sequences were aligned using Clustal Omega ( Sievers et al. 2011 ) with default parameters to construct an MSA as input to the sequence-function models. The MSA was trimmed to columns of interest defined by a set of binding site residues and second-shell residues. For each of the 67 assayed enzymes in the ancestral FDMO library, a list of binding site residues was defined as the union of any residue with a heavy atom within 4.5 Å of any ligand ( 2 – 5 ) heavy atom in the top pose across the studied ligands. A list of second-shell residues was defined as any residue with a heavy atom within 4.5 Å of any binding site residue. The union of all binding site and second-shell residues across the 67 enzymes was taken to define the residues of interest. Feature preprocessing for modeling was done by dropping MSA columns that were not in the residues of interest or that contained more than 10% gaps. Sequence-Function model and SHAP analysis The mljar automated machine learning (AutoML) framework ( Plonska and Plonski 2021 ) was used to train multiple classification models to predict consensus stereochemistry or consensus reactivity from the functional screen using the processed MSA. The input was the MSA with amino acid labels, and transformation to numerical features was performed by mljar.
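The binding-site residue selection above (any residue with a heavy atom within 4.5 Å of any ligand heavy atom in the top pose) can be sketched with NumPy; the function name and data layout are illustrative:

```python
import numpy as np

def residues_near_ligand(res_coords, ligand_coords, cutoff=4.5):
    """Indices of residues with any heavy atom within `cutoff` angstroms of
    any ligand heavy atom. res_coords: list of (n_atoms_i, 3) arrays."""
    lig = np.asarray(ligand_coords, float)
    hits = []
    for idx, atoms in enumerate(res_coords):
        atoms = np.asarray(atoms, float)
        # all pairwise residue-atom to ligand-atom distances
        d = np.linalg.norm(atoms[:, None, :] - lig[None, :, :], axis=-1)
        if (d < cutoff).any():
            hits.append(idx)
    return hits
```

Running the same selection with the binding-site atoms in place of the ligand atoms gives the second-shell residues, and the union across all 67 enzymes defines the residues of interest.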
Stereochemistry labels were defined as -1 (S), 0 (R/S), and 1 (R) from averaging the cutoffs applied to predicted R frac for ligands 2–5 . Reactivity labels were defined as 0 (unreactive) or 1 (reactive) from the consensus logistic regression model using averaged reactivity descriptors for ligands 2–5 . For a comparison of all default models, we fitted an mljar AutoML using Explain mode, explain level 2, and algorithms: Baseline, Decision Tree ( Pedregosa et al. 2011 , Breiman 2017 ), Linear ( Pedregosa et al. 2011 ), XGBoost ( Chen and Guestrin 2016 ), Random Forest ( Breiman 2001 , Pedregosa et al. 2011 ), LightGBM ( Ke et al. 2017 , Dorogush et al . 2018 ), CatBoost ( Dorogush et al . 2018 ), Neural Network ( Abadi et al. 2016 ), and Nearest Neighbors ( Pedregosa et al. 2011 ). Hyperparameter tuning was done using mljar AutoML in perform mode, explain level 2, no golden features, and CatBoost, XGBoost, and Random Forest algorithms. To rank the key residues, the mean absolute SHAP ( Lundberg et al. 2017 ) importance was normalized between 0 and 1 for each fold of each trained model. Then the normalized mean absolute SHAP importances were averaged across all generated models and their respective folds, to get a mean absolute SHAP importance of every residue across all trained models. To infer the effect of a single residue on stereochemistry or reactivity, a SHAP explainer ( Lundberg et al. 2017 , 2020 ) was fitted to each fold of each model. The fold level SHAP values from the SHAP explainer were normalized between -1 and 1. For each residue, the normalized fold SHAP values were separated by amino acid type and plotted across all folds of all models to create a consensus dependence plot.
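The fold-level aggregation of SHAP importances described above can be sketched as below. The per-fold SHAP arrays would come from, e.g., `shap.TreeExplainer` applied to each trained model; this NumPy-only helper is an illustrative reimplementation of the normalization and averaging step:

```python
import numpy as np

def consensus_shap_importance(fold_shap_values):
    """Average normalized mean-|SHAP| importance across folds and models.
    fold_shap_values: list of (n_samples, n_residues) SHAP arrays,
    one per fold of each trained model."""
    per_fold = []
    for sv in fold_shap_values:
        imp = np.abs(np.asarray(sv, float)).mean(axis=0)  # mean |SHAP| per residue
        rng = imp.max() - imp.min()
        # normalize each fold to [0, 1] before averaging across folds/models
        per_fold.append((imp - imp.min()) / rng if rng > 0 else np.zeros_like(imp))
    return np.mean(per_fold, axis=0)
```

Ranking residues by the returned consensus importance (e.g. `np.argsort(-importance)`) yields the key-residue ordering used to suggest targets for engineering.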
3 Results and discussion Our objective in the current work is to describe and demonstrate a high-throughput framework for exploring and optimizing biocatalytic enzyme function through the combination of modern structure prediction, ligand docking and refinement, and ML-based sequence-function modeling. This framework and workflow for sequence-structure-function prediction is illustrated in Fig. 1 , where we show the components of our predictive scheme comprising two basic elements: structure-based function prediction (reactivity and stereoselectivity) followed by sequence-based ML to identify key protein residues responsible for determining reactivity and stereoselectivity. These two components constitute a general method to guide rational design of novel biocatalysts, to rationalize observed sequence-function screens, and to direct and inform rational design approaches based on directed evolution. To effectively utilize our pipeline for a target enzyme system, we require a functionally rich related sequence-function landscape to explore, a test set of enzymes with experimental annotations to benchmark docking-based predictions, and the ability to reasonably predict an enzyme function from the docked ligand structure. While protein families and previous directed evolution campaigns provide a wealth of knowledge from which further mutations can be suggested, the most important requirement is finding systems for which docking-derived metrics can recapitulate enzymatic function. We suggest focusing on systems with previous success in molecular modeling and rational design, of which there are a wide array of examples ( Song et al. 2023 ). In the following, we will illustrate this methodology through application to the rational exploration of an ancestral sequence reconstruction (ASR) of a family of fungal FDMOs in which we: i) create structural models of extant sequences, predicted ancestor sequences, and the first-alternative sequences ( Eick et al. 
2017 ) from the ASR, including the incorporation of cofactor FAD into all structures using a modified AF2 prediction methodology ( Jumper et al. 2021a ) and molecular modeling, ii) dock a panel of four ligands, which are part of the experimental screen of these proteins for catalytic activity and stereochemistry outcome ( Chiang et al. 2023 ), using FFT-based ligand-receptor docking methods ( Ding et al. 2020 ), and iii) create classifiers based on these structural models and docking approach to predict both reactivity and stereochemistry. We then use this framework in conjunction with sequence-based ML methods to identify key binding/active site and second-sphere residues responsible for the control of stereochemical switching of products. The elements i-iii are tested and evaluated by comparison of predictions for known structures (although unknown to the AF2 prediction framework), assessment of docking poses compared with known ligand positioning within the family of extant proteins, and through comparison of predictions for a set of 67 enzymes previously screened experimentally for reactivity and stereochemistry of products. The final component of our workflow is evaluated by comparison of identified key residues with experimental mutational studies. In what follows we discuss the elements described above and present our findings in both assessing the methods and in their application to annotating the reactivity and stereochemistry of the predicted ASR.
3.1 Fast high-fidelity models with AlphaFold2 using consensus sequence hits Although GPU-based inference through AF2’s Evoformer and Structure modules is on the order of minutes, the most expensive part of AF2’s pipeline is the construction of MSAs as input to the Evoformer module. Because our family of sequences is highly related, we hypothesize that the sequence search space can be greatly constrained while maintaining high-fidelity predictions. We pass the consensus sequence of the extant phylogenetic tree into AF2’s MSA construction pipeline and limit future MSA generation to use these consensus sequence hits (see Methods). The AF2 ( Jumper et al. 2021a ) models built using MSAs from the consensus sequence hits showed good agreement with TropB and AfoD crystal structures ( Fig. 2 ). After alignment with TM-align ( Zhang and Skolnick 2005 ), the TropB AF2 model and crystal structure had a Cα root mean square deviation (RMSD) of 0.91 Å ( Fig. 2a ) and AfoD had a Cα RMSD of 0.99 Å ( Fig. 2b ). For consistency we use the residue numbering and amino acid lettering of TropB to refer to residues across enzymes (see Supplementary Data 1 for a list of extant and ASR enzymes). Key binding site residues R206 and Y239 are positioned identically to the crystal structure in TropB ( Fig. 2c ) and AfoD ( Fig. 2d ). R206 and Y239 do not significantly change position compared to the QM/MM refined model ( Rodríguez Benítez et al. 2019 ), suggesting that docking to the apoprotein can recapitulate the correct pose and stereochemistry. While we did not limit the template space searched, we observed that even with dummy templates high agreement was obtained for the crystal structures of AfoD and TropB, with an overall Cα RMSD of 1.528 Å for TropB ( Table S1 ) and 1.525 Å for AfoD. This suggests that our modifications of the AF2 prediction pipeline with limited sequences and templates are robust. 
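The Cα RMSD values above are computed after structural superposition; a minimal numpy sketch of the align-then-measure step (using Kabsch superposition rather than TM-align itself) might look like:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between coordinate sets P and Q (each N x 3, e.g. Calpha atoms)
    after optimal rigid-body superposition of P onto Q."""
    # Remove translation by centering both structures on their centroids
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix (Kabsch algorithm)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))      # guard against improper rotation
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))
```

A rotated and translated copy of a structure should give an RMSD of essentially zero, which is a convenient sanity check for the alignment step.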
Predictions for AfoD and TropB were on par with other protein structure prediction web servers, despite our smaller preconditioned library for MSA generation. This suggests that the initial sequences and templates found from constructing the ancestral tree or hits from the consensus sequence can be repurposed for rapid structure prediction aided by GPU-based model inference and AMBER ( Case et al. 2005 ) relaxation. 3.2 Predicted structures show high predicted accuracy with consensus MSA library Because of the increased speed of prediction resulting from the consensus library method we implemented, obtaining structural models for thousands of ancestral and extant proteins is possible with modeling on the order of minutes for each structure. To judge the quality of these structures we utilize the per-residue predicted Local Distance Difference Test (pLDDT) ( Mariani et al. 2013 ) score from AF2. The pLDDT score represents a measure of model confidence for the prediction. The average pLDDT score across the 830 structural models predicted for the ASR sequence library was 93.2, indicating confidence in high-quality structures ( Supplementary Fig. S2 ). It is worth noting that in some of the most distant ancestors, e.g. ancestor 278 at the root of the tree, very few binding site residues are in common, and an overall pLDDT of 64.1 suggests a structural model of somewhat lower quality is predicted in this case ( Supplementary Fig. S3 ). With the original AF2 MSA search pipeline, we observed an overall pLDDT of 69.3 for ancestor 278 ( Supplementary Fig. S4 ), indicating that the consensus library is not a key contributor to the lower confidence. The consensus structural model for ancestor 278 still maintains higher accuracy in the more conserved central core ( Supplementary Fig. S3 ), suggesting that even lower-scoring models are useful in providing mechanistic insight. 
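AF2 writes the per-residue pLDDT into the B-factor column of its output PDB files, so library-wide averages like those above can be computed with a simple parser. A sketch, assuming standard fixed-column ATOM records:

```python
def mean_plddt(pdb_lines):
    """Average pLDDT over Calpha atoms of an AlphaFold2 output PDB,
    read from the B-factor field (columns 61-66 of each ATOM record)."""
    scores = []
    for line in pdb_lines:
        # Atom name occupies columns 13-16; keep only CA atoms (one per residue)
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            scores.append(float(line[60:66]))
    return sum(scores) / len(scores)
```

Averaging over Cα atoms gives one value per residue, matching how AF2 reports per-residue confidence.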
3.3 Positioning cofactor FAD in its binding pocket In the family of fungal FDMOs the cofactor FAD is known to exist in both an IN and an OUT conformation ( Rodríguez Benítez et al. 2019 ), related to the mechanistic role of FAD in both catalyzing the chemistry on substrate molecules in the IN conformation and recycling its oxidative state by interacting with external cofactor NADPH (or NADH) in the OUT conformation. Consequently, the FAD cofactor was positioned in the binding pocket in the IN conformation (see Methods for details) because it is most similar to the catalytically active conformation with C4α-hydroperoxyflavin ( Tweedy et al. 2019 ). Docking ligands based on the IN conformation should capture near-native poses that are poised to adopt the correct stereochemistry in their products. After FAD incorporation, the interaction energy between the FAD cofactor and the AF2 structure of TropB was -198.90 kcal/mol, suggesting a highly favorable interaction and correct modeling of the FAD pocket. The majority of the sequence library had the FAD cofactor pocket successfully modeled and incorporated into the structures, based on our observations of similar interaction energies between the proteins and the FAD cofactor. An average of -210.8 kcal/mol across all structures was obtained ( Supplementary Fig. S5 ). Hydrogen bonding interactions dominate between FAD and the protein, with an average electrostatic interaction energy of -160.4 kcal/mol and an average van der Waals interaction energy of -50.4 kcal/mol. Thus, incorporation of the cofactor into our predicted models was relatively straightforward and gave us confidence in docking ligands into the structures predicted from our AF2 pipeline. 3.4 Fast rigid receptor docking and stereochemistry prediction The ligands (ligands 2–5 illustrated in Fig. 2e ) were docked into our predicted cofactor-integrated AF2 protein structures utilizing the GPU-accelerated FFTDock protocol ( Ding et al. 
2020 ) available in CHARMM. In a few seconds, thousands of poses can be generated and scored. Because of the use of a soft grid (see Methods), many of these poses had some overlap with protein backbone atoms and needed further refinement. We explored various strategies for rescoring these poses: minimizing them further in the FFTDock grid, minimization in an explicit protein representation, and using the starting poses as conformers for simulated annealing-based flexible ligand docking ( Ding et al. 2020 ). Simulated annealing allows for further exploration of ligand conformational space, while the explicit protein representation is the most accurate but computationally expensive. To benchmark different rescoring strategies, we compared predicted stereochemistry from docking against known stereochemistries of 2–5 in the 67 enzymes of the ancestral FDMO library ( Chiang et al. 2023 ) (see also Fig. 2e ), using the angle of the clustered poses relative to the FAD cofactor (see Methods, Supplementary Fig. S6 ). We calculated the overall predicted stereochemistry for an enzyme-ligand pair by treating the system as a Boltzmann-weighted ensemble of R and S mesostates (see Eqn 1 in Methods). This strategy allows multiple top hits to contribute to the final stereochemistry prediction (i.e., a consensus), reducing the influence of highly ranked outlier clusters. Simulated annealing and minimization in a fixed protein environment achieved similar stereochemical accuracies, ranging from 55% to 70% across all experimentally active (non-zero conversion) protein-ligand pairs, similar in performance to traditional docking methods in recapitulating native poses ( Wang et al. 2016 ) ( Supplementary Fig. S7 ). A general trend we observed was that using a lower dielectric constant improved prediction accuracy and Matthews correlation coefficient (MCC) ( Baldi et al. 2000 , Gorodkin 2004 ), as illustrated in Supplementary Fig. S8 . 
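The Boltzmann-weighted mesostate treatment can be sketched as follows; the exact form of Eqn 1 is given in the Methods, so the kT value and the racemic threshold used here are illustrative assumptions:

```python
import math

def fraction_R(poses, kT=0.593):
    """Boltzmann-weighted probability of the R mesostate from a list of
    (energy_kcal_per_mol, stereo_label) pose clusters. kT defaults to
    ~0.593 kcal/mol (298 K); the paper's Eqn 1 may differ in detail."""
    e_min = min(e for e, _ in poses)                       # shift for numerical stability
    weights = [(math.exp(-(e - e_min) / kT), s) for e, s in poses]
    total = sum(w for w, _ in weights)
    return sum(w for w, s in weights if s == "R") / total

def call_stereo(poses, kT=0.593, tol=0.1):
    """Label the ensemble R, S, or racemic from the R-mesostate probability."""
    p = fraction_R(poses, kT)
    if abs(p - 0.5) <= tol:
        return "racemic"
    return "R" if p > 0.5 else "S"
```

Because the weights are summed over all clusters, several well-scoring poses of one mesostate can outvote a single highly ranked outlier of the other, which is the point of the consensus treatment.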
The top-performing docking approach was minimization of FFTDock poses in explicit protein with an R-dependent dielectric constant of 0.75, giving an accuracy for stereochemistry prediction across all protein-ligand pairs of 73% and an MCC of 0.51 ( Table S2 ). The MCC score indicates the successful prediction of both R and S poses. Finding the consensus stereochemistry by taking the mode across ligands 2–5 (R/S in case of multiple modes) leads to slightly increased robustness, with this selected rescoring strategy yielding an overall accuracy of 77% and an MCC of 0.65 ( Supplementary Fig. S9 ). As is evident in Fig. 3 , the optimal scoring/stereochemistry prediction protocol just discussed yields very good agreement between the predicted and observed screening results. Moreover, where reliable docked poses are found for proteins that appear as unreactive in the experimental screen, which may be indicative of substrates that fail to convert but may also result from sub-optimal conditions used in screening for a given substrate-protein pair, we are able to predict the anticipated stereochemistry of the product. While predicting the stereochemistry of product formation is useful, the reactivity of an enzyme is a critical metric in selecting potential ancestors for resurrection and testing. Therefore, we developed a machine learning model to predict substrate-protein pair reactivity in the specific screen being employed. 3.5 Prediction of reactivity with docking energy and ML rescoring Docking metrics representing possible features correlating with enzyme reactivity were extracted from the top-ranked ligand pose. FAD distance and FAD angle were measures of the ligand proximity and orientation necessary for reacting with the C4α-hydroperoxyflavin ( Tweedy et al. 2019 ). Anion distance was chosen to represent the importance of R206 and Y239 in positioning the substrate in TropB ( Rodríguez Benítez et al. 2019 ). 
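The geometric features named above (FAD distance and FAD angle) reduce to simple vector operations on the top-ranked pose. A sketch follows; the reference atoms (flavin C4a and N5) and the use of the ligand centroid are illustrative choices, not taken from the paper's Methods:

```python
import numpy as np

def fad_metrics(ligand_center, fad_c4a, fad_n5):
    """Distance and angle of a docked pose relative to the flavin.
    Inputs are 3-vectors; atom choices here are illustrative."""
    lig = np.asarray(ligand_center, float)
    c4a = np.asarray(fad_c4a, float)
    n5 = np.asarray(fad_n5, float)
    dist = np.linalg.norm(lig - c4a)            # "FAD distance"
    v1 = n5 - c4a                               # flavin reference axis
    v2 = lig - c4a                              # direction from flavin to ligand
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))  # "FAD angle"
    return dist, angle
```

Clipping the cosine before `arccos` guards against floating-point values marginally outside [-1, 1] for nearly collinear geometries.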
Docking energy efficiency and Pafnucy ( Stepniewska-Dziubinska et al. 2018 ) predicted pK d efficiency were chosen as measures of the overall binding affinity of the protein and ligand, utilizing information from the CHARMM36 ( Huang et al. 2017 ) and CGenFF ( Vanommeslaeghe et al. 2010 ) force fields and a convolutional neural network (CNN) trained on binding affinities from PDBBind ( Wang et al. 2005 ). FAD distance was determined to be non-predictive of reactivity, with a p-value greater than 0.10 in 3 out of 4 ligand-specific logistic regression models. The ligand-specific logistic regression models achieved an accuracy of 75.4% and an MCC of 0.50 across all ancestral FDMO protein-ligand pairs ( Table S4 ). We explored averaging metrics across ligands to create a consensus predictor of reactivity. The consensus logistic model obtained an accuracy of 79.1% and an MCC of 0.57, based on predicting conversion in a protein that displayed any reactivity with ligands 2–5 . The predicted pK d from Pafnucy and the docking energy were identified as the top predictors of reactivity ( Table S3 ), demonstrating the utility of ML-based scoring functions as rescoring methods in combination with docking scores. We have established the fidelity and accuracy of each of the components of our pipeline in the above discussion, showing that very good predictions can rapidly be obtained for the structure of the protein and the pose of the ligand, and that from these the prediction of stereochemistry and reactivity can be achieved. Thus, combining the components of our sequence-structure-function pipeline, we are now in a strong position to “annotate” sequences of unknown structure and function and to inform experimental studies seeking to discover novel protein sequences as a basis for biocatalyst design (or redesign). 
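A consensus logistic model of this kind can be sketched with a tiny gradient-descent fit; the single feature used below is a hypothetical stand-in for the ligand-averaged docking metrics (docking energy, predicted pKd, etc.), not the paper's actual feature set:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Minimal logistic-regression fit by batch gradient descent.
    X: (n_samples, n_features) metric matrix; y: 0/1 reactivity labels."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend bias column
    y = np.asarray(y, float)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # mean cross-entropy gradient
    return w

def predict_reactive(w, x):
    """Classify a feature vector as reactive when p > 0.5 (i.e. z > 0)."""
    return w[0] + np.dot(w[1:], x) > 0.0
```

In practice one would use a standard library implementation with regularization and cross-validation; this sketch only shows the shape of the metrics-to-reactivity mapping.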
We also note that in principle, while not directly explored in this study, this pipeline model can be applied to the “functional annotation” of any collection of protein sequences for which high-throughput predictions of protein-bound ligand structures can be utilized to establish metrics from which reactivity and function can be inferred. We move on to explore the annotation of the constructed ancestral phylogeny of fungal FDMOs related to TropB, AfoD, and AzaH ( Chiang et al. 2023 ) and to identify those residues within the active/binding site and in the second sphere of residues around this set that are key determinants to the functional outcome for each sequence. 3.6 Annotation of all sequences To simplify further analysis, we utilize a consensus stereochemistry, the mode of the predicted stereochemistries across ligands 2–5 (R/S in the case of bimodal R and S predictions). The consensus reactivity is defined as the prediction from the consensus logistic regression model. Consensus stereochemistry and consensus reactivity predictions were generated for the full ASR sequence library and are shown in Fig. 4 and available in Supplementary Data 1 . Our predictions suggest that clades are grouped by similar stereochemistry and reactivities, indicating the structure-function pipeline can capture the connection between sequence and functional similarity. The TropB, AfoD, and AzaH clades were classified as reactive, and the reactivity predictor identified other potential clades to be reactive for substrates 2–5, which provides starting points for further exploration of the phylogenetic tree. The ancestral FDMO library was used to experimentally demonstrate a stereochemistry shift from S to R in the TropB (R) clade, and an R to S transition in the AfoD (S) clade. The shift in stereochemistry is controlled by the identity of residue 239, with F239 promoting S stereochemistry, and Y239 promoting R stereochemistry ( Chiang et al. 2023 ). 
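The consensus stereochemistry defined above (the mode across ligands 2–5, reporting R/S on ties) can be sketched as:

```python
from collections import Counter

def consensus_stereochemistry(per_ligand):
    """Mode of the per-ligand stereochemistry calls (e.g. for ligands 2-5);
    ties between R and S are reported as 'R/S' per the text."""
    counts = Counter(per_ligand)
    best = counts.most_common(1)[0][1]                    # highest count
    modes = sorted(s for s, c in counts.items() if c == best)
    return "/".join(modes) if len(modes) > 1 else modes[0]
```

Sorting the tied labels makes the output deterministic regardless of input order.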
The TropB clade is predicted to contain two subfamilies of R and S enzymes consistent with the residue 239 F/Y switch. An intermediate clade between TropB and AzaH, containing ancestor 455, exhibits R stereochemistry and reactivity while possessing phenylalanine at residue 239 like AzaH. This demonstrates how the pipeline can be used to identify novel sequences to guide exploration; future studies could focus on ancestors near 455 as a bridge to the unique stereo-control mechanism of AzaH ( Chiang et al. 2023 ), probing both that mechanism in further detail and the capacity of the proposed pipeline to capture it. 3.7 Predictions in the AfoD, TropB, and AzaH clades The observed binding pose in the TropB clade across all substrates is similar to previous work characterizing TropB’s native substrate utilizing varying docking approaches ( Rodríguez Benítez et al. 2019 , Tweedy et al. 2019 ). The R-binding pose in ancestors 333, 334, 335, and 346 with Y239 is consistent with the previously observed binding pose of TropB ( Supplementary Fig. S11 ). The S-binding pose in ancestors with F239 is found to involve a rotation of the resorcinol core around the anion axis, maintaining the anion hydrogen bonding interaction with R206 ( Supplementary Fig. S12 ). For the AfoD clade, in which we observe the stereochemistry switch from R to S ( Supplementary Fig. S13 ), ancestor 369 has an R-binding pose consistent across all the R-producing enzymes. For AfoD and its S-producing ancestors with the F239 mutation, a V250F substitution can be found that blocks R206 from accessing the substrate and creates an active site surrounded by three phenylalanine residues ( Fig. 2d ). Thus, the binding pose is entirely flipped but still leads to the S-product. This unique orientation of ligands in AfoD and its S-producing ancestors informs the exploration of AfoD-specific design targets. 
AzaH produces R stereochemistry but possesses phenylalanine at residue 239, implying a separate stereo-control mechanism. The AzaH clade is predicted to be predominantly S, and AzaH on average is predicted to be racemic, with R stereochemistry for 2/4 ligands ( Supplementary Fig. S14 ). The top ligand clusters of AzaH for 3 are dominated by low-scoring S-oriented representative poses, most likely favored by F239. Simulated annealing and protein minimization at various values of epsilon (the dielectric constant used in the refinement and scoring of ligand poses) still lead to identification of ligands binding to AzaH as S or racemic. This suggests that the more complex stereo-control mechanism of AzaH is not fully captured by the modeling protocol and may need to be explored further using flexible docking or docking to an ensemble of AF2 structures to better delineate the conformational space of the protein receptor. From the results for AzaH, one may argue that the protocol simply captures the role of residue 239 alone. However, through mutations to residue 239 and sequence-function modeling, we demonstrate the capacity of the model to explore a more complex stereo-control mechanism across the rest of the sequence library. 3.8 Prediction of residue 239 mutations Residue 239 plays a key role in stereochemistry determination, where F239 promotes S stereochemistry and Y239 promotes R stereochemistry. F239Y in AfoD promotes the catalysis of a racemic product distribution ( Chiang et al. 2023 ), and the functional screen should capture this effect. We tested the functional screen in predicting the effect of mutation to this key residue by running the full screen on F239Y and Y239F variants of the whole ASR library. In the ancestral FDMO library (the 67 enzymes assayed with 2–5 ), we observed 65.6% of protein-ligand pairs change consensus stereochemistry from the wildtype stereochemistry to racemic or the opposite stereochemistry ( Supplementary Fig. S15 a-b ). 
61.4% of proteins in the ancestral FDMO library with the F239Y mutation changed stereochemistry, and 73.9% with the Y239F mutation. This indicates that the functional screen captures the significant role residue 239 plays in stereochemistry, and reproduces the observation that mutation of residue 239 alone is insufficient to fully change stereochemistry. In the entire sequence library, sequences with mutation Y239F resulted in a 22.7% decrease in the number of predicted R-class sequences, and sequences with mutation F239Y had a 50.3% decrease in the number of predicted S-class sequences ( Supplementary Fig. S15c and d ). This suggests that mutation to residue 239, while key in a majority of the tree, is not able by itself to fully control stereochemistry. We then applied ML models to identify potential residues besides 239 that contribute to the stereo-control mechanism. 3.9 Sequence-function modeling with random forest and gradient boosted trees We trained multiple sequence-function models on all sequence-function pairs to predict consensus stereochemistry and consensus reactivity. An MSA of binding site and previously unexplored second-shell residues of the 830 sequences in the full ASR library was used as an input to the model ( Supplementary Fig. S16 ), in order to predict the previously obtained consensus stereochemistry and consensus reactivity with ligands 2–5. To determine the best ML architecture for this problem we used the mljar AutoML framework ( Plonska and Plonski 2021 ) to train models ranging in complexity from linear models to ensemble-based decision tree methods, support vector machines, and feedforward neural networks. Ensemble tree-based methods were most successful in predicting stereochemistry and reactivity and demonstrated that the sequence-structure-function pipeline predicted properties that could be mapped to the original sequence ( Supplementary Figs S17 and S18 ). 
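Tree-based models operate on a numeric encoding of the MSA columns. A sketch of the usual one-hot encoding over the 20 amino acids plus gap is shown below; the paper's exact featurization may differ:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY-"   # 20 amino acids plus the gap character

def one_hot_msa(msa):
    """One-hot encode aligned sequences (equal-length strings) into a
    (n_sequences, n_columns * 21) feature matrix for tree-based models."""
    index = {a: i for i, a in enumerate(AA)}
    n, L = len(msa), len(msa[0])
    X = np.zeros((n, L * len(AA)))
    for i, seq in enumerate(msa):
        for j, aa in enumerate(seq):
            # One block of 21 indicator columns per alignment position
            X[i, j * len(AA) + index[aa]] = 1.0
    return X
```

Each alignment column maps to a contiguous block of 21 indicator features, so per-residue importances (e.g. SHAP values) can later be summed back to the originating column.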
We then used mljar in perform mode to create multiple hyperparameter-tuned models for the CatBoost ( Dorogush et al. 2018 ), XGBoost ( Chen and Guestrin 2016 ), and Random Forest ( Pedregosa et al. 2011 ) algorithms, which slightly increased performance relative to the default models ( Supplementary Figs S19-S20 ). The best-performing model for stereochemistry prediction was a hyperparameter-tuned CatBoost model that yielded an accuracy of 74.3% and a macro average F1 score of 0.547. The best-performing model for reactivity prediction was a hyperparameter-tuned XGBoost model with an accuracy of 87.7% and an F1 score of 0.88. 3.10 Key features from SHAP analysis The hyperparameter-tuned models did not have significantly different performances but varied in agreement among the top residues by mean absolute SHAP value ( Supplementary Figs S21 and S23 ). To overcome selection bias from choosing a top model, we used the consensus among all folds of all models by averaging the normalized mean absolute SHAP values to suggest consensus top residues ( Supplementary Figs S22 and S24 ). We observed that for both stereochemistry and reactivity prediction, the highest-ranked residues were in the binding pocket, but multiple unexplored second-shell residues in a specific region also played a key role in the prediction ( Fig. 5a, c ). Residue 239 was scored as the highest contributor to stereochemistry prediction, suggesting that it plays a significant role not only in the AfoD and TropB clades but across the entirety of the sequence library. The SHAP dependence plot for residue 239 ( Fig. 5b ) allows for easy interpretation of the F/Y switch, and dependence plots can be used for other top features to guide design strategies. Interestingly, M54, a previously unexplored residue, was scored as the highest contributor to reactivity prediction across all sequences, with residue 239 scored as the second most important feature. 
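The fold- and model-averaged consensus described above reduces to normalizing each model's mean-|SHAP| vector and averaging. A numpy sketch operating on precomputed importance vectors (the residue names are hypothetical):

```python
import numpy as np

def consensus_ranking(mean_abs_shap, feature_names):
    """Rank features by consensus importance. Each row of mean_abs_shap is
    one model/fold's mean absolute SHAP values over the same features."""
    M = np.asarray(mean_abs_shap, float)
    M = M / M.sum(axis=1, keepdims=True)   # normalize each model to sum to 1
    consensus = M.mean(axis=0)             # average across models and folds
    order = np.argsort(consensus)[::-1]    # descending importance
    return [(feature_names[i], float(consensus[i])) for i in order]
```

Normalizing before averaging keeps models with larger raw SHAP magnitudes from dominating the consensus, which is the stated motivation for this scheme.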
The side chain of residue 54 is located near the FAD cofactor, and changes at position 54 would indirectly affect ligand binding through the interaction with FAD. According to the dependence plot in Fig. 5d , V54 significantly negatively impacts reactivity, suggesting an I/V reactivity switch, as the library has 48.6% V and 35.9% I. V54 is associated with a lower average docking score and Pafnucy pK d ( Supplementary Fig. S25a and b ). This points to a novel approach to engineering selectivity, modifying residues around the FAD cofactor to reshape the ligand pocket, an interesting avenue to explore.
3 Results and discussion Our objective in the current work is to describe and demonstrate a high-throughput framework for exploring and optimizing biocatalytic enzyme function through the combination of modern structure prediction, ligand docking and refinement, and ML-based sequence-function modeling. This framework and workflow for sequence-structure-function prediction is illustrated in Fig. 1 , where we show the components of our predictive scheme comprising two basic elements: structure-based function prediction (reactivity and stereoselectivity) followed by sequence-based ML to identify key protein residues responsible for determining reactivity and stereoselectivity. These two components constitute a general method to guide rational design of novel biocatalysts, to rationalize observed sequence-function screens, and to direct and inform rational design approaches based on directed evolution. To effectively utilize our pipeline for a target enzyme system, we require a functionally rich related sequence-function landscape to explore, a test set of enzymes with experimental annotations to benchmark docking-based predictions, and the ability to reasonably predict an enzyme function from the docked ligand structure. While protein families and previous directed evolution campaigns provide a wealth of knowledge from which further mutations can be suggested, the most important requirement is finding systems for which docking-derived metrics can recapitulate enzymatic function. We suggest focusing on systems with previous success in molecular modeling and rational design, of which there are a wide array of examples ( Song et al. 2023 ). In the following, we will illustrate this methodology through application to the rational exploration of an ancestral sequence reconstruction (ASR) of a family of fungal FDMOs in which we: i) create structural models of extant sequences, predicted ancestor sequences, and the first-alternative sequences ( Eick et al. 
2017 ) from the ASR, including the incorporation of cofactor FAD into all structures using a modified AF2 prediction methodology ( Jumper et al. 2021a ) and molecular modeling, ii) dock a panel of four ligands, which are part of the experimental screen of these proteins for catalytic activity and stereochemistry outcome ( Chiang et al. 2023 ), using FFT-based ligand-receptor docking methods ( Ding et al. 2020 ), iii) create classifiers based on these structural models and docking approach to predict both reactivity and stereochemistry ( Jumper et al. 2021a ) and molecular modeling, ii) dock a panel of four ligands, which are part of the experimental screen of these proteins for catalytic activity and stereochemistry outcome ( Chiang et al. 2023 ), using FFT-based ligand-receptor docking methods ( Ding et al. 2020 ), iii) create classifiers based on these structural models and docking approach to predict both reactivity and stereochemistry. We then use this framework in conjunction with sequence-based ML methods to identify key binding/active site and second-sphere residues responsible for the control of stereochemical switching of products. The elements i-iii are tested and evaluated by comparison of predictions for known structures (although unknown to the AF2 prediction framework), assessment of docking poses compared with known ligand positioning within the family of extant proteins, and through comparison of predictions for a set of 67 enzymes previously screened experimentally for reactivity and stereochemistry of products. The final component of our workflow is evaluated by comparison of identified key residues with experimental mutational studies. In what follows we discuss the elements described above and present our findings in both assessing the methods and in their application to annotating the reactivity and stereochemistry of the predicted ASR. 
3.1 Fast High-Fidelity models with Alphafold2 using consensus sequence hits Although GPU-based inference through AF2’s Evoformer and Structure modules are on the order of minutes, the most expensive part of AF2’s pipeline is the construction of MSAs as input to the Evoformer module. Because our family of sequences is highly related, we hypothesize that the sequence search space can be greatly constrained while maintaining high-fidelity predictions. We pass the consensus sequence of the extant phylogenetic tree into AF2’s MSA construction pipeline and limit future MSA generation to use these consensus sequence hits (see Methods). The AF2 ( Jumper et al. 2021a ) models built using MSAs from the consensus sequence hits showed good agreement with TropB and AfoD crystal structures ( Fig. 2 ). After alignment with TM-align ( Zhang and Skolnick 2005 ), the TropB AF2 model and crystal structure had a C root mean square deviation (RMSD) of 0.91 Å ( Fig. 2a ) and AfoD had a C RMSD of 0.99 Å ( Fig. 2b ). For consistency we use the residue numbering and amino acid lettering of TropB to refer to residues across enzymes (see Supplementary Data 1 for list of extant and ASR enzymes). Key binding site residues R206 and Y239 are identically positioned with the crystal structure in TropB ( Fig. 2c ) and AfoD ( Fig. 2d ). R206 and Y239 do not significantly change position compared to the QM/MM refined model ( Rodríguez Benítez et al. 2019 ), suggesting that docking to the apoprotein can recapitulate the correct pose and stereochemistry. While we did not limit the template space searched, we observed that even with dummy templates a high agreement was obtained for the crystal structures of AfoD and TropB, with an overall C RMSD of 1.528 Å for TropB ( Table S1 ) and 1.525 Å for AfoD. This suggests that our modifications of the AF2 prediction pipeline with limited sequences and templates are robust. 
Predictions for AfoD and TropB were on par with other protein structure prediction web servers, despite our smaller preconditioned library for MSA generation. This suggests that the initial sequences and templates found from constructing the ancestral tree or hits from the consensus sequence can be repurposed for rapid structure prediction aided by GPU-based model inference and AMBER ( Case et al. 2005 ) relaxation. 3.2 Predicted structures show high predicted accuracy with consensus MSA library Because of the increased speed of prediction resulting from the consensus library method we implemented, obtaining structural models for thousands of ancestral and extant proteins is possible with modeling on the order of minutes for each structure. To judge the quality of these structures we utilize the per residue predicted Local Distance Difference Test (pLDDT) ( Mariani et al. 2013 ) score from AF2. The pLDDT score represents a measure of model confidence for the prediction. The average pLDDT score across the 830 structural models predicted for the ASR sequence library was 93.2, indicating confidence in high-quality structures ( Supplementary Fig. S2 ). It is worth noting that in some of the most distant ancestors, e.g. ancestor 278 at the root of tree, very few binding site residues are in common, and an overall pLDDT of 64.1 suggests a structural model of somewhat lower quality is predicted in this case ( Supplementary Fig. S3 ). With the original AF2 MSA search pipeline, we observed an overall pLDDT of 69.3 for ancestor 278 ( Supplementary Fig. S4 ), indicating that the consensus library is not a key contributor to the lower confidence. The consensus structural model for ancestor 278 still maintains higher accuracy in the more conserved central core ( Supplementary Fig. S3 ), suggesting that even lower-scoring models are useful in providing mechanistic insight. 
3.3 Positioning cofactor FAD in its binding pocket In the family of fungal FDMOs the cofactor FAD is known to exist in both an IN and an OUT conformation ( Rodríguez Benítez et al. 2019 ), related to the mechanistic role of FAD in both catalyzing the chemistry on substrate molecules in the IN conformation and recycling its oxidative state by interacting with external cofactor NADPH (or NADH) in the OUT conformation. Consequently, the FAD cofactor was positioned in the binding pocket for the IN conformation (see Methods for details) because it is most like the catalytically active conformation with C4α-hydroperoxyflavin ( Tweedy et al. 2019 ). Docking ligands based on the IN conformation should capture near-native poses that are poised to adopt the correct stereochemistry in their products. After FAD incorporation, the interaction energy between the FAD cofactor and the AF2 structure of TropB was -198.90 kcal/mol suggesting a highly favorable interaction and correct modeling of the FAD pocket. The majority of the sequence library had the FAD cofactor pocket successfully modeled and incorporated into the structures, based on our observations of similar interaction energies between the proteins and the FAD cofactor. An average of -210.8 kcal/mol across all structures was obtained ( Supplementary Fig. S5 ). Hydrogen bonding interactions dominate between FAD and the protein with an average electrostatic interaction energy of -160.4 kcal/mol and an average van der Waals interaction energy of -50.4 kcal/mol. Thus, incorporation of the cofactor into our predicted models was relatively straightforward and gave us confidence in docking ligands into the structures predicted from our AF2 pipeline. 3.4 Fast rigid receptor docking and stereochemistry prediction The ligands (ligands 2–5 illustrated in Fig. 2e ) were docked into our predicted cofactor-integrated AF2 protein structures utilizing the GPU-accelerated FFTDock protocol ( Ding et al. 
2020 ) available in CHARMM ( Ding et al. 2020 ). In a few seconds, thousands of poses can be generated and scored. Because of the use of a soft grid (see Methods), many of these poses had some overlap with protein backbone atoms and needed further refinement. We explored various strategies for rescoring these poses, by minimizing them further in the FFTDock grid, minimization in an explicit protein representation, and using the starting poses as conformers for simulated annealing-based flexible ligand docking ( Ding et al. 2020 ). Simulated annealing allows for further exploration of ligand conformational space, while explicit protein gives the most accurate but computationally expensive representation of the protein. To benchmark different rescoring strategies, we compared predicted stereochemistry from docking against known stereochemistries of 2–5 in the 67 enzymes of the ancestral FDMO library ( Chiang et al. 2023 ) (see also Fig. 2e ), using the angle of the clustered poses relative to the FAD cofactor (see Methods, Supplementary Fig. S6 ). We calculated the overall predicted stereochemistry for an enzyme-ligand pair by treating the system as a Boltzmann-weighted ensemble of R and S mesostates (see Eqn 1 in Methods). This strategy allows multiple top hits to contribute to the final stereochemistry prediction (ie, a consensus), reducing the influence of highly ranked outlier clusters. Simulated annealing and minimization in a fixed protein environment achieved similar stereochemical accuracies ranging from 55–70% across all experimentally active (non-zero conversion) protein-ligand pairs, similar in performance to traditional docking methods in recapitulating native poses ( Wang et al. 2016 ) ( Supplementary Fig. S7 ). A general trend we observed was that using a lower dielectric constant improved prediction accuracy and Matthew’s correlation coefficient (MCC) ( Baldi et al. 2000 , Gorodkin 2004 ) as is illustrated in Supplementary Fig. S8 . 
The top-performing docking approach was minimization of FFTDock poses in explicit protein with an R-dependent dielectric constant of 0.75, giving an accuracy for stereochemistry prediction across all protein-ligand pairs of 73% and an MCC of 0.51 ( Table S2 ). The MCC score indicates successful prediction of both R and S poses. Finding the consensus stereochemistry by taking the mode across ligands 2–5 (R/S in the case of multiple modes) leads to slightly increased robustness, with this rescoring strategy yielding an overall accuracy of 77% and an MCC of 0.65 ( Supplementary Fig. S9 ). As is evident in Fig. 3 , the optimal scoring/stereochemistry prediction protocol just discussed gives very good agreement between the predicted and observed screening results. Moreover, where reliable docked poses are found for proteins that appear unreactive in the experimental screen (which may indicate substrates that fail to convert, but may also result from sub-optimal screening conditions for a given substrate-protein pair), we are able to predict the anticipated stereochemistry of the product. While predicting the stereochemistry of product formation is useful, the reactivity of an enzyme is a critical metric in selecting potential ancestors for resurrection and testing. Therefore, we developed a machine learning model to predict substrate-protein pair reactivity in the specific screen being employed. 3.5 Prediction of reactivity with docking energy and ML rescoring Docking metrics representing possible features correlating with enzyme reactivity were extracted from the top-ranked ligand pose. FAD distance and FAD angle were measures of the ligand proximity and orientation necessary for reacting with the C4α-hydroperoxyflavin ( Tweedy et al. 2019 ). Anion distance was chosen to represent the importance of R206 and Y239 in positioning the substrate in TropB ( Rodríguez Benítez et al. 2019 ).
Docking energy efficiency and Pafnucy ( Stepniewska-Dziubinska et al. 2018 ) predicted pK d efficiency were chosen as measures of the overall binding affinity of the protein and ligand, utilizing information from the CHARMM36 ( Huang et al. 2017 ) and CGenFF ( Vanommeslaeghe et al. 2010 ) forcefields and a convolutional neural network (CNN) trained on binding affinities from PDBBind ( Wang et al. 2005 ). FAD distance was determined to be non-predictive of reactivity, with a p-value greater than 0.10 in 3 out of 4 ligand-specific logistic regression models. The ligand-specific logistic regression models achieved an accuracy of 75.4% and an MCC of 0.50 across all ancestral FDMO protein-ligand pairs ( Table S4 ). We also explored averaging metrics across ligands to create a consensus predictor of reactivity. The consensus logistic model, based on predicting conversion in a protein that displayed any reactivity with ligands 2–5 , obtained an accuracy of 79.1% and an MCC of 0.57. The predicted pK d from Pafnucy and the docking energy were identified as the top predictors of reactivity ( Table S3 ), demonstrating the utility of ML-based scoring functions as rescoring methods in combination with docking scores. We have established the fidelity and accuracy of each of the components of our pipeline in the above discussion, showing that very good predictions can rapidly be obtained for the structure of the protein and the pose of the ligand, and that from these the prediction of stereochemistry and reactivity can be achieved. Thus, combining the components of our sequence-structure-function pipeline, we are now in a strong position to “annotate” sequences of unknown structure and function and to inform experimental studies seeking to discover novel protein sequences as a basis for biocatalyst design (or redesign).
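As an illustration of the kind of ligand-specific logistic model described above, the following sketch fits a logistic regression to synthetic docking-derived features. The feature columns mirror those named in the text, but the data, the planted signal (reactivity driven mainly by the two affinity features), and all numbers are fabricated for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
# columns: FAD angle, anion distance, docking-energy efficiency, predicted pKd efficiency
X = rng.normal(size=(200, 4))
# synthetic labels: conversion driven mainly by the two affinity-related features
y = (0.9 * X[:, 2] + 1.1 * X[:, 3] + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)
mcc = matthews_corrcoef(y, model.predict(X))  # in-sample MCC on the toy data
```

In this toy setting the fitted coefficients for the affinity features dominate, mimicking the finding that predicted pK d and docking energy were the top predictors of reactivity.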
We also note that, while not directly explored in this study, this pipeline can in principle be applied to the “functional annotation” of any collection of protein sequences for which high-throughput predictions of protein-bound ligand structures can be used to establish metrics from which reactivity and function can be inferred. We move on to explore the annotation of the constructed ancestral phylogeny of fungal FDMOs related to TropB, AfoD, and AzaH ( Chiang et al. 2023 ) and to identify those residues within the active/binding site, and in the second sphere of residues around it, that are key determinants of the functional outcome for each sequence. 3.6 Annotation of all sequences To simplify further analysis, we utilize a consensus stereochemistry: the mode of the predicted stereochemistries across ligands 2–5 (R/S in the case of bimodal R and S predictions). The consensus reactivity is defined as the prediction from the consensus logistic regression model. Consensus stereochemistry and consensus reactivity predictions were generated for the full ASR sequence library and are shown in Fig. 4 and available in Supplementary Data 1 . Our predictions suggest that clades are grouped by similar stereochemistries and reactivities, indicating that the structure-function pipeline can capture the connection between sequence and functional similarity. The TropB, AfoD, and AzaH clades were classified as reactive, and the reactivity predictor identified other potential clades as reactive with substrates 2–5, providing starting points for further exploration of the phylogenetic tree. The ancestral FDMO library was used to experimentally demonstrate a stereochemistry shift from S to R in the TropB (R) clade, and an R to S transition in the AfoD (S) clade. The shift in stereochemistry is controlled by the identity of residue 239, with F239 promoting S stereochemistry and Y239 promoting R stereochemistry ( Chiang et al. 2023 ).
The TropB clade is predicted to contain two subfamilies of R and S enzymes consistent with the residue 239 F/Y switch. An intermediate clade between TropB and AzaH, containing ancestor 455, exhibits R stereochemistry and reactivity while possessing phenylalanine at residue 239, like AzaH. This demonstrates how the pipeline can be used to identify novel sequences to guide exploration: future studies could focus on ancestors near 455 to probe the unique stereo-control mechanism of AzaH ( Chiang et al. 2023 ) in further detail and to test the capacity of the proposed pipeline to capture such mechanisms. 3.7 Predictions in the AfoD, TropB, and AzaH clades The observed binding pose in the TropB clade across all substrates is similar to previous work characterizing TropB’s native substrate using varying docking approaches ( Rodríguez Benítez et al. 2019 , Tweedy et al. 2019 ). The R-binding pose in ancestors 333, 334, 335, and 346 with Y239 is consistent with the previously observed binding pose of TropB ( Supplementary Fig. S11 ). The S-binding pose in ancestors with F239 involves a rotation of the resorcinol core around the anion axis, maintaining the anion hydrogen bonding interaction with R206 ( Supplementary Fig. S12 ). For the AfoD clade, in which we observe the stereochemistry switch from R to S ( Supplementary Fig. S13 ), ancestor 369 has an R-binding pose consistent across all the R-producing enzymes. In AfoD and its S-producing ancestors with the F239 mutation, a V250F substitution blocks R206 from accessing the substrate and creates an active site surrounded by three phenylalanine residues ( Fig. 2d ). Thus, the binding pose is entirely flipped but still leads to the S-product. This unique orientation of ligands in AfoD and its S-producing ancestors informs the exploration of AfoD-specific design targets.
AzaH produces R stereochemistry but possesses phenylalanine at residue 239, implying a separate stereo-control mechanism. The AzaH clade is predicted to be predominantly S, and AzaH itself is on average predicted to be racemic, with R stereochemistry for 2 of the 4 ligands ( Supplementary Fig. S14 ). The top ligand clusters of AzaH for 3 are dominated by low-energy S-oriented representative poses, most likely favored by F239. Simulated annealing and protein minimization at various values of epsilon (the dielectric constant used in the refinement and scoring of ligand poses) still identify ligands binding to AzaH as S or racemic. This suggests that the more complex stereo-control mechanism of AzaH is not fully captured by the modeling protocol and may need to be explored further using flexible docking, or docking to an ensemble of AF2 structures, to better delineate the conformational space of the protein receptor. From the results for AzaH, one may argue that the protocol simply captures the role of residue 239 alone. However, with mutations to residue 239 and with sequence-function modeling we demonstrate the capacity of the model to explore a more complex stereo-control mechanism through exploration of the rest of the sequence library. 3.8 Prediction of residue 239 mutations Residue 239 plays a key role in stereochemistry determination, where F239 promotes S stereochemistry and Y239 promotes R stereochemistry. F239Y in AfoD promotes the catalysis of a racemic product distribution ( Chiang et al. 2023 ), and the functional screen should capture this effect. We tested the functional screen's ability to predict the effect of mutation to this key residue by running the full screen on F239Y and Y239F variants of the whole ASR library. In the ancestral FDMO library (the 67 enzymes assayed with 2–5 ), we observed 65.6% of protein-ligand pairs change consensus stereochemistry from the wildtype stereochemistry to racemic or to the opposite stereochemistry ( Supplementary Fig. S15 a-b ).
Of the proteins in the ancestral FDMO library, 61.4% changed stereochemistry with the F239Y mutation and 73.9% with the Y239F mutation. This indicates both that the functional screen captures the significant role residue 239 plays in stereochemistry and that mutation of residue 239 alone is insufficient to fully change stereochemistry. In the entire sequence library, the Y239F mutation resulted in a 22.7% decrease in the number of sequences predicted as R, and the F239Y mutation resulted in a 50.3% decrease in the number of sequences predicted as S ( Supplementary Fig. S15c and d ). This suggests that residue 239, while key across a majority of the tree, is not able by itself to fully control stereochemistry. We then applied ML models to identify potential residues besides 239 that contribute to the stereo-control mechanism. 3.9 Sequence-function modeling with random forest and gradient boosted trees We trained multiple sequence-function models on all sequence-function pairs to predict consensus stereochemistry and consensus reactivity. An MSA of binding-site and previously unexplored second-shell residues of the 830 sequences in the full ASR library was used as input to the models ( Supplementary Fig. S16 ), in order to predict the previously obtained consensus stereochemistry and consensus reactivity with ligands 2–5. To determine the best ML architecture for this problem, we used the mljar AutoML framework ( Plonska and Plonski 2021 ) to train models ranging in complexity from linear models and ensemble-based decision tree methods to support vector machines and feedforward neural networks. Ensemble tree-based methods were most successful in predicting stereochemistry and reactivity, demonstrating that the properties predicted by the sequence-structure-function pipeline could be mapped back to the original sequence ( Supplementary Figs S17 and S18 ).
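A minimal sketch of this type of sequence-function model is shown below, assuming a one-hot encoding of aligned residue columns. The toy three-column alignment, the stand-in position carrying an F/Y-like stereochemistry signal, and the use of scikit-learn's random forest (rather than the mljar-tuned models used in this work) are all illustrative simplifications.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AA = "ACDEFGHIKLMNPQRSTVWY-"  # 20 amino acids plus gap

def one_hot(msa_rows):
    """One-hot encode aligned binding-site/second-shell columns."""
    n_cols = len(msa_rows[0])
    X = np.zeros((len(msa_rows), n_cols * len(AA)))
    for i, row in enumerate(msa_rows):
        for j, aa in enumerate(row):
            X[i, j * len(AA) + AA.index(aa)] = 1.0
    return X

# toy alignment: the middle column is a stand-in for the residue 239 F/Y switch
msa = ["AYV", "AYI", "AYL", "GYV", "GYI", "GYL",   # Y -> R
       "AFV", "AFI", "AFL", "GFV", "GFI", "GFL"]   # F -> S
labels = ["R"] * 6 + ["S"] * 6

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(one_hot(msa), labels)
```

With real data the input would be the MSA columns of the 830-sequence library and the targets the consensus stereochemistry or reactivity calls from the docking screen.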
We then used mljar in perform mode to create multiple hyperparameter-tuned models for the CatBoost ( Dorogush et al. 2018 ), XGBoost ( Chen and Guestrin 2016 ), and Random Forest ( Pedregosa et al. 2011 ) algorithms, which slightly increased performance relative to the default models ( Supplementary Figs S19-S20 ). The best-performing model for stereochemistry prediction was a hyperparameter-tuned CatBoost model that yielded an accuracy of 74.3% and a macro-average F1 score of 0.547. The best-performing model for reactivity prediction was a hyperparameter-tuned XGBoost model with an accuracy of 87.7% and an F1 score of 0.88. 3.10 Key features from SHAP analysis The hyperparameter-tuned models did not differ significantly in performance but varied in their agreement among the top residues by mean absolute SHAP value ( Supplementary Figs S21 and S23 ). To overcome selection bias from choosing a single top model, we used the consensus among all folds of all models, averaging the normalized mean absolute SHAP values to suggest consensus top residues ( Supplementary Figs S22 and S24 ). We observed that for both stereochemistry and reactivity prediction the highest-ranked residues were in the binding pocket, but multiple previously unexplored second-shell residues in a specific region also played a key role in the predictions ( Fig. 5a, c ). Residue 239 was scored as the highest contributor to stereochemistry prediction, suggesting that it plays a significant role not only in the AfoD and TropB clades but across the entirety of the sequence library. The SHAP dependence plot for residue 239 ( Fig. 5b ) allows for easy interpretation of the F/Y switch, and dependence plots for other top features can be used to guide design strategies. Interestingly, M54, a previously unexplored residue, was scored as the highest contributor to reactivity prediction across all sequences, with residue 239 scored as the second most important feature.
The side chain of residue 54 is located near the FAD cofactor, and changes at position 54 would indirectly affect ligand binding through the interaction with FAD. According to the dependence plot in Fig. 5d , V54 has a significant negative impact on reactivity, suggesting an I/V reactivity switch, as the library contains 48.6% V and 35.9% I at this position. V54 is associated with a lower average docking score and Pafnucy pK d ( Supplementary Fig. S25a and b ). This suggests a novel approach to engineering selectivity, in which residues around the FAD cofactor are modified to reshape the ligand pocket, and represents an interesting avenue to explore.
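The consensus feature-ranking strategy used in this section, averaging normalized mean absolute SHAP values across the folds of all models, can be sketched as follows. The function assumes per-fold SHAP matrices have already been computed (e.g. with a SHAP tree explainer); the toy values are invented for illustration.

```python
import numpy as np

def consensus_importance(shap_values_per_model):
    """Average normalized mean-|SHAP| profiles across models/folds.

    shap_values_per_model: list of arrays, each of shape
    (n_samples, n_residues), holding per-sample SHAP values from
    one model fold.  Returns one importance value per residue,
    summing to 1, so each fold contributes equally.
    """
    profiles = []
    for sv in shap_values_per_model:
        mean_abs = np.abs(sv).mean(axis=0)          # mean |SHAP| per residue
        profiles.append(mean_abs / mean_abs.sum())  # normalize within the fold
    return np.mean(profiles, axis=0)

# toy example: two folds agree that the middle residue dominates
rank = consensus_importance([np.array([[0.1, 0.8, 0.05]]),
                             np.array([[0.2, 0.7, 0.1]])])
```

Normalizing within each fold before averaging prevents a single model with large-magnitude SHAP values from dominating the consensus ranking.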
Conclusion AlphaFold2 has revolutionized protein structure prediction, not only in terms of accuracy but also in speed and accessibility. Together with state-of-the-art docking approaches and machine learning frameworks, we are moving closer to the successful prediction and understanding of protein function in silico and, clearly, to a facile means of exploring key problems in the engineering of enzymes. In the work presented here, we have developed a framework for the establishment of a sequence-structure-function pipeline for the prediction of protein structures, function, and key residues. The approach is generalizable to many protein systems, and we demonstrated for the case of fungal flavin-dependent monooxygenases that this framework, and the specific pipeline we developed for this application, can not only recapitulate enantioselectivity and reactivity with good accuracy but can also guide new approaches to engineering. The protocol can annotate the stereochemistry and reactivity of unexplored enzymes and enable a more informed selection of novel enzymes to explore. In addition, through the sequence-function model we are able to capture the roles of more distant residues that would normally elude a first pass in which only active/binding-site adjacency is considered. We anticipate that sequence-structure-function frameworks based on the ideas we present and discuss here will play a significant role in informing future studies aimed at the engineering and design of new proteins for specified functional purposes.
Abstract Motivation Protein engineering techniques are key in designing novel catalysts for a wide range of reactions. Although approaches vary in their exploration of the sequence-structure-function paradigm, they are often hampered by the labor-intensive steps of protein expression and screening. In this work, we describe the development and testing of a high-throughput in silico sequence-structure-function pipeline using AlphaFold2 and fast Fourier transform docking that is benchmarked with enantioselectivity and reactivity predictions for an ancestral sequence library of fungal flavin-dependent monooxygenases. Results The predicted enantioselectivities and reactivities correlate well with previously described screens of an experimentally available subset of these proteins and capture known changes in enantioselectivity across the phylogenetic tree representing ancestral proteins from this family. With this pipeline established as our functional screen, we apply ensemble decision tree models and explainable AI techniques to build sequence-function models and extract critical residues within the binding site and the second-sphere residues around this site. We demonstrate that the top-identified key residues in the control of enantioselectivity and reactivity correspond to experimentally verified residues. The in silico sequence-to-function pipeline serves as an accelerated framework to inform protein engineering efforts from the vast informative sequence landscapes contained in protein families, ancestral resurrections, and directed evolution campaigns. Availability Jupyter notebooks detailing the sequence-structure-function pipeline are available at https://github.com/BrooksResearchGroup-UM/seq_struct_func
Supplementary Material
Acknowledgements The authors thank Chang-Hwa (Chad) Chiang and Professor Alison Narayan for their efforts to resurrect and screen protein sequences from the family of fungal flavin-dependent monooxygenases in their laboratory, and Dr Troy Wymore for his work in constructing the ASR for proteins in this family. Supplementary data Supplementary data are available at Bioinformatics online. Conflict of interest None declared. Funding This work has been supported by the National Institutes of Health [GM130587]. Data availability The data underlying this article are available in the article and in its online supplementary material.
CC BY
Bioinformatics. 2024 Jan 9; 40(1):btae002
PMC10789324
37927271
INTRODUCTION Duchenne muscular dystrophy (DMD) is an X-linked, severe progressive muscular dystrophy affecting 15.9 to 19.5 per 100,000 live births [ 1, 2 ]. DMD typically comprises proximal muscle weakness in the early stages of the disease, leading to loss of ambulation around 8–14 years, and more distal weakness, also affecting arm and hand function, in the later disease stages [ 3 ]. Due to the introduction of mechanical ventilation, scoliosis surgery and corticosteroids, and the improvement of multidisciplinary care, survival beyond twenty years is common [ 4 ]. The extended life expectancy provides more opportunities for education, work and other activities, which continue to demand arm and hand function. However, arm function decreases due to progressive muscle weakening in people with DMD, and with this also the ability to perform activities with the upper extremity (UE) [ 3 ]. Besides the decrease in muscle strength, muscle shortening is common in DMD. The typical pattern of upper extremity muscle shortening in DMD includes decreased supination, ulnar deviation, and shortening of the long finger flexors (flexor digitorum profundus, FDPs) [ 5 ]. FDP shortening results in a decreased ability to extend the wrist with extended fingers, which is crucial in positioning the hands during activities, for example when typing [ 6 ]. In current clinical practice, hand orthoses are advised to delay FDP shortening. However, evidence for the effectiveness of these orthoses is limited. Weichbrodt et al. [ 7 ] studied 8 people with DMD with passive wrist extension of less than 50 degrees (with extended fingers), and found that hand orthoses could delay the development of contractures. In general, a wrist extension of 40 degrees is needed for the performance of precision tasks [ 8 ].
From our clinical experience and previous research [ 9 ], we know that compliance with wearing hand orthoses is limited; people with DMD already have extensive care rituals, and orthoses can cause discomfort and further limit functionality. On the other hand, we found that participants were motivated to preserve their hand function and that a personalized wearing schedule was helpful [ 9 ]. Currently, knowledge on the course of FDP shortening during the different disease stages (i.e. early ambulant, late ambulant, early non-ambulant, late non-ambulant) is lacking [ 10 ]. More insight into the course of FDP shortening is needed to decide on the most appropriate timing of interventions and to be able to evaluate the effect of preventive measures, such as hand orthoses, on the delay of FDP shortening. We aim to investigate the longitudinal course of FDP shortening during different disease stages in both hands, focusing on timing, symmetry and decline of the length of the FDPs.
METHODS Data collection A retrospective longitudinal multicenter study was carried out using clinical data registered in the Dutch Dystrophinopathy Database (DDD). Data were collected between January 2014 and March 2022 in the two national reference centers in the Netherlands, Radboud university medical center and Leiden university medical center. Both centers are part of the Duchenne Center Netherlands. Within this collaboration, outcome measures are aligned and neuromuscular therapists are jointly trained in the use of these outcome measures. Parameters were derived as part of the standards of care during annual visits to the outpatient clinics, where patients were assessed by trained physiotherapists and occupational therapists. The inclusion criterion was a DMD diagnosis confirmed genetically and/or by muscle biopsy. An additional criterion for the current study was that at least one FDP measurement was available. Females were excluded, as DMD mutations in females show great variability in phenotype [ 11 ]. Furthermore, people with an intermediate phenotype (being able to walk 10 meters independently after 16 years of age) were excluded. The study was approved by the local medical ethical committee (no. 2019-5760). Clinical parameters From the DDD, the following data were retrieved for each visit: date of visit, age at time of visit, functional status, and the reported goniometric data of the upper extremity, including the FDP length. Functional status was assessed with the Brooke scale, which reports the functional abilities of the upper extremities on a 6-point scale [ 12, 13 ]. The Vignos scale was used to categorize the functional abilities of the lower extremities on a 10-point scale [ 14, 15 ]. Disease stages were defined according to the guidelines developed by Bushby et al.
[29]: the early ambulatory stage (EAS) (Vignos 1–3), the late ambulatory stage (LAS) (Vignos 4–8), the early non-ambulatory stage (ENAS) (Vignos 9–10, Brooke 1–3), and the late non-ambulatory stage (LNAS) (Vignos 9–10, Brooke > 4). In both centers, the length of the long finger flexors, further referred to as the FDP outcome, was determined by measuring the maximal passive wrist extension with the fingers fully extended using a manual goniometer, expressing the FDP outcome in degrees [ 7 ]. Longitudinal analyses excluded people who were not able to extend their fingers, because in that case the FDP outcome could not be measured. Statistical analysis Statistical analysis was conducted using SPSS version 25.0 (IBM SPSS, Inc., Armonk, New York) and Stata/SE 16.0 for Windows (StataCorp LLC, Texas). First, descriptive statistics were used to summarize enrollment characteristics at the first visit. Means and standard deviations were used for continuous variables; frequencies (percentages) were used for categorical variables. Longitudinal graphs were used to depict the longitudinal course of the FDP outcome over time in the different disease stages and to explore differences between the right and left hand. Second, mixed model analyses were used, according to the restricted maximum likelihood estimation procedure, to quantify the FDP outcome over time and per disease stage and, lastly, to determine the FDP outcome per year for each Brooke score. All mixed model analyses were corrected for age, and ‘age squared’ was used to correct for a non-linear relation with age, if this was significant ( P < 0.02).
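As an illustration only (the analyses in this study were performed in SPSS and Stata), a random-intercept mixed model of this kind can be sketched in Python with statsmodels on synthetic data. The patient count, visit ages, noise levels, and the planted 3.5-degree annual decline below are fabricated for the example and are not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(60):                    # 60 hypothetical patients
    base = rng.normal(75, 8)             # patient-specific intercept (degrees)
    for age in rng.choice(np.arange(5, 30), size=3, replace=False):
        rows.append({"patient": pid, "age": int(age),
                     "fdp": base - 3.5 * age + rng.normal(scale=4)})
df = pd.DataFrame(rows)

# random-intercept mixed model: FDP outcome ~ age, grouped by patient (REML by default)
fit = smf.mixedlm("fdp ~ age", df, groups=df["patient"]).fit()
slope = fit.params["age"]                # fixed-effect estimate of decline per year
```

Grouping repeated visits by patient lets the model separate between-patient differences in baseline FDP outcome from the within-patient decline with age, which is what the fixed-effect β-estimate describes.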
RESULTS Enrollment data and explorative longitudinal analyses Data on 534 visits of 197 males were included, with on average 2.7 visits per male (range 1–5 visits). Patient characteristics and enrollment FDP outcome data of the total group and per disease stage are displayed in Table 1 . The disease stage at enrollment could be defined for 146 males, based on the available Brooke and Vignos scales. No significant differences in FDP outcome were found between the right and left hands. Cross-sectional data in the different disease stages showed a decrease in FDP outcome with disease progression. Furthermore, in the ambulatory disease stages and the early non-ambulatory disease stage, all people were able to passively fully extend the fingers. In the late non-ambulatory stage, full passive extension of the finger joints was not possible in 18 (34%) of the right hands and 19 (31%) of the left hands. For these hands the FDP outcome was missing, and they were excluded from the longitudinal analysis. Figure 1 displays the longitudinal trajectories of the FDP outcome per hand. Before the age of 10, this range is generally above 40 degrees, after which a declining trend is seen. The variation in FDP outcome between males increases with age. Figure 2 demonstrates the difference between the right and left FDP outcome per disease stage. The variation in differences between left and right increases with increasing disease stage, with the most variation in the late non-ambulatory stage. Still, the mean difference between the right and left hands is around 0 in all disease stages. In Fig. 3 the longitudinal data of both hands are set out in a box plot for the different Brooke scale scores, which shows that with Brooke 1 and 2 only a few FDP outcomes are below 40 degrees. In Brooke 4, 41% of the measured FDP outcomes are less than 40 degrees (for both sides). In Brooke 5 and 6, the majority of the FDP outcomes are below 40 degrees.
Mixed model analyses Mixed model analyses showed an overall β-estimate of –3.38 ( P < 0.001) (right hand) and –3.57 ( P < 0.001) (left hand) for age in relation to the FDP outcome, which means that overall the FDPs tend to shorten by around 3.5 degrees per year. As this relation is not expected to be exactly linear, we summarized the β-estimates for the FDP outcomes for the right and left hand per disease stage ( Table 2 ) and per year during each Brooke score ( Table 3 ). Table 2 shows that, when progressing from the early non-ambulatory stage to the late non-ambulatory stage, there is a significant decline in FDP outcome in both hands (resp. right/left: –11.4, –20.1 degrees, P < 0.001). Table 3 displays the difference in FDP outcome per year for the different Brooke scores. The biggest decline per year is seen in Brooke 5: –15.84 degrees ( P < 0.001) for the right hand and –15.22 degrees ( P = 0.002) for the left hand. Additional mixed model analyses showed a high correlation between the right and left FDP outcome (0.93, P < 0.001), indicating symmetry in the longitudinal decline. The correlation in the LAS is 0.88 ( P < 0.001).
DISCUSSION This is the first longitudinal study to describe in detail the course of the decline in long finger flexor length in patients with DMD. The results show that the decline in FDP outcome is largely symmetrical; however, in the late non-ambulatory disease stage more variability occurs between the two hands. The decline in the length of the long finger flexors is largest during the Brooke 5 score (resp. right/left: –15.8, –15.2 degrees per year). During the Brooke 4 disease stage, however, already 41% of the measured FDP outcomes were below 40 degrees, which is generally accepted as a threshold for being able to perform manual precision tasks [ 8 ]. The longitudinal course of the FDP outcome shows that for the majority of people with DMD, FDP shortening is not an issue until the age of 10, which is in line with previous cross-sectional findings by McDonald et al. [ 16 ] and with the longitudinal MRI abnormalities in the FDPs investigated by Brogna et al. [ 17 ]. However, hand weakness already exists at a young age [ 18 ], and limitations in upper extremity activity have previously been found in young people with DMD [ 3, 19 ]; both can limit active use of the full range of motion and lead to shortening of muscles in the longer term. Active use of the upper limbs in the early stages of DMD, especially extension of the wrist and fingers, may delay the shortening of the FDPs. After the age of 10, a decline in FDP outcome is seen in both hands, which is significantly correlated with age and functional status. Preventive measures, such as stretching and wearing orthoses, should be started before hand function declines. As almost half of the FDP outcomes were below 40 degrees in Brooke 4, we recommend starting preventive measures before the transition to this stage. Moreover, attention is needed for people with DMD within Brooke score 5, as the FDP outcomes decline very rapidly.
However, the variability increases in the later disease stages, which means that interventions need to be personalized according to annual follow-up of the FDP outcome in combination with the Brooke score. The majority of the FDP outcomes are symmetrical: no significant differences were seen between the right and left hand in the enrollment data, a high correlation existed between the two sides in the longitudinal analyses, and the mean difference in FDP outcome between the right and left hands was zero in all disease stages. Symmetric decline in the upper limbs has been observed before, in the study of Janssen et al. [ 3 ]. However, for some participants a difference of up to 40 degrees did exist, which confirms that personalized care is important. Future research on preventive measures can benefit from the largely symmetrical decline, as it is possible to use the contralateral side as a control. The FDP outcome could not be measured in all cases in the late non-ambulatory stage, owing to the inability to passively fully extend the fingers because of contractures in the metacarpal and/or finger joints. Another cause of missing values was that measurements were too painful. This has to be taken into account when interpreting these results. Hopefully, preventive measures can reduce pain and limitations in finger mobility in the late non-ambulatory disease stage in the future. Hand function is influenced by FDP contractures, as wrist extension is important in the conduction of fine motor tasks [ 8 ], and grip strength is also higher with wrist extension compared to wrist flexion [ 20 ]. With already existing hand weakness, it is even more important to maintain wrist extension range of motion. The maintenance of hand function and the ability to conduct activities with the upper extremity may make a large difference in being able to participate in different life roles. Janssen et al.
[ 21 ] found associations between upper extremity function and living an active life through participation in school and work-related activities. Timely interventions, such as prevention of FDP contractures, but also support of active use of the upper extremity, including wrist and finger extension, may enhance participation, which is increasingly important in the context of the longer life expectancy of people with DMD [ 22 ]. This study needs to be interpreted in the light of its strengths and weaknesses. The strengths of this study are that we were able to longitudinally investigate the FDP outcome in a large number of patients. Second, the results were interpreted after correction for age and age squared, which means that the results can be interpreted in the light of the disease stages, independent of age, and that non-linearity was taken into account. The weaknesses of this study include the retrospective design, which always entails missing data and the possibility of entry errors. Precise data analysis and the large study population nevertheless allow the interpretation of the data to be very valuable. Second, data on corticosteroid use and on preventive measures such as stretching, wearing orthoses, and positioning of the hands were not included, as information on dosage and compliance was lacking. Although this additional information could possibly have differentiated the population into subgroups, the results of the present study still provide a good impression of the overall course of long finger flexor length, irrespective of treatment modalities. Prospective analyses of the FDP outcome in people with DMD wearing and not wearing orthoses would be very useful in future research. Third, only a few people were in the late ambulatory stage, probably because this stage is often very short and, in the Netherlands, the use of long leg braces is scarce. Lastly, the measurement of the FDP outcome is not yet validated and measurement errors can exist [ 23 ].
Although both centers have a dedicated neuromuscular team that has been involved in these measurements for six years now, further research into the reliability of this measurement is needed. In conclusion, our retrospective exploration of the FDP decline showed that in our DMD population the largest decline occurred within Brooke score 5. In persons with Brooke score 4, 41% of the FDP outcomes were already below 40 degrees. We recommend considering preventive measures from Brooke score 4 onwards for persons with DMD who show a decline in FDP length. In addition, this article highlights the need for a prospective study of FDP outcomes, including data on preventive measures, corticosteroid use, and functional outcome measures, to improve understanding of contracture prevention.
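As a simplified illustration of the age correction used in the analyses (age plus age squared), the following sketch fits an ordinary least-squares quadratic of FDP outcome on age. The data points are invented for illustration, and the study itself used longitudinal mixed models to handle repeated visits; this is only a cross-sectional stand-in.

```python
# Illustrative sketch (not the study's mixed-model analysis): OLS fit of
# FDP outcome on age and age squared via the normal equations.

def solve3(A, b):
    """Solve a 3x3 linear system with Gaussian elimination (partial pivoting)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def quad_fit(age, y):
    """OLS coefficients (b0, b1, b2) for y = b0 + b1*age + b2*age^2."""
    X = [[1.0, a, a * a] for a in age]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    return solve3(XtX, Xty)

ages = [6, 8, 10, 12, 14, 16, 18]
fdp = [80, 79, 76, 72, 65, 55, 42]   # hypothetical degrees of wrist extension
b0, b1, b2 = quad_fit(ages, fdp)
```

The quadratic term is what lets the fitted course bend with disease stage rather than assuming a constant yearly decline.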
Several authors of this publication are members of the Radboudumc Center of Expertise for neuromuscular disorders (Radboud-NMD), the Netherlands Neuromuscular Center (NL-NMD), and the European Reference Network for rare neuromuscular diseases (EURO-NMD). BACKGROUND: Shortening of the long finger flexors (Flexor Digitorum Profundus, FDPs) in Duchenne Muscular Dystrophy (DMD) causes reduced hand function. Until now, longitudinal studies on the natural course of the shortening of the FDPs have been lacking, which impedes recommendations on the timing and evaluation of preventive measures. OBJECTIVE: To investigate the longitudinal course of the FDP length during different disease stages, focusing on symmetry, timing, and decline of the FDP length. METHODS: A retrospective, longitudinal multicenter study was conducted in the Radboud University Medical Center and the Leiden University Medical Center. The FDP outcome was measured using goniometry, and gross motor function was assessed using the Brooke score. Longitudinal mixed-model analyses were used to describe the course of the FDP outcome and to investigate symmetry between the two hands. RESULTS: Data on 534 visits of 197 males (age range 4–48 years) showed that in the ambulatory stages the FDP outcome was within the normal range. The mean decline in FDP outcome was 3.5 degrees per year; the largest decline was seen in Brooke 5 (>15 degrees per year). In Brooke 4, 41% of the FDP outcomes were <40 degrees. No significant differences were found between right and left. CONCLUSIONS: This study supports the consideration of preventive measures to delay shortening of the FDPs in DMD patients transitioning to a Brooke score of 4 or higher. In addition, the natural history of the FDP outcome has been established, which provides a basis for evaluating (preventive) interventions.
ACKNOWLEDGMENTS We would like to thank all therapists from our neuromuscular centers who measured and reported the FDP outcome: M. Pelsma, L. Merkenhof, Y. van den Elzen-Pijnenburg, T. Popping, Janneke van Egmond-van Dam, Pieteke van Weperen, and Jules van Benthem. We would also like to thank Y. D. Meijer-Krom and J. Bongers for their support with database queries. DECLARATION OF INTEREST None.
J Neuromuscul Dis.; 11(1):17-23
INTRODUCTION Neurodegenerative diseases, such as Alzheimer's disease (AD), develop gradually, and in their early phase they are often not easy to distinguish from normal aging. In patients with very mild cognitive symptoms, it is difficult to predict the individual disease course and the risk of progression to dementia. An accurate and personalized diagnosis is critical to provide appropriate care and guidance to people seeking help for their cognitive complaints [ 1 ]. Verbal memory tests, such as word-list recall tests, are widely endorsed for measuring verbal declarative memory impairment in the early diagnostics of cognitive impairment and dementia. An example of such a word-list recall test is the verbal learning test (VLT), a well-validated, reliable, examiner-administered instrument widely used to measure verbal episodic memory. It has high sensitivity and specificity for distinguishing participants with mild cognitive impairment (MCI) and dementia from controls, and dementia from MCI [ 2, 3 ]. For example, Hamel and colleagues have shown that deterioration of memory performance on the VLT could be detected about 7 years before the dementia diagnosis [ 4 ]. Specifically, process scores such as serial position effects and semantic clustering have been shown to increase sensitivity and enable earlier detection of cognitively intact older adults at risk for cognitive decline, and may reveal differences in performance between individuals with different subtypes of MCI [ 5–7 ]. Computerized advancements in neuropsychological assessment offer more adaptive and sensitive measures for detecting cognitive impairment [ 8 ]. They also provide the practical advantages of automated speech recognition (ASR) and automated reporting, such as ease of language adjustment and a reduced need for trained professionals, which in turn enable efficient and scalable administration for large-scale screening [ 9 ]. 
Existing literature has shown that complementary, more fine-grained variables (e.g., speech breaks and semantic relatedness), rather than clinical total scores alone, aid automatic and early detection of cognitive impairment [ 10–13 ]. However, the clinical and diagnostic merit of related, novel VLT features (such as total serial clusters, peak learning slope, and constancy learning index) remains to be explored. A meta-analysis of clinical total scores of the VLT has shown low to moderate correlations with other cognitive tests such as story recall tasks, the semantic verbal fluency (SVF) task, and the digit symbol substitution test (DSST) [ 14 ]. However, the relationship between automatically derived, more fine-grained VLT features and other cognitive tests, as well as disease severity, remains unknown. Accurate automatically derived features in the early diagnostics of cognitive disorders such as AD could provide more insight into verbal memory and lead to a non-invasive, cost-effective tool for diagnostics and prescreening in clinical trial designs. In the present study, we investigated the accuracy of automated processing of the VLT compared to clinical scoring. Additionally, we were interested in the diagnostic accuracy and added value of automatically derived VLT speech features in clinical practice for distinguishing people with subjective cognitive decline (SCD) from those with MCI and dementia, compared to the gold standard represented by the clinical VLT total scores (total immediate recall, delayed recall, and recognition count) used in clinical practice. Lastly, we investigated the relationship between these automatically derived VLT verbal memory features and other cognitive tasks, as well as disease severity and functioning in daily living.
METHODS Participants As part of the DeepSpA (Deep Speech Analysis) project, 138 participants from the BioBank Alzheimer Centre Limburg (BBACL) study were included between 2019 and 2021 ( Table 1 ). The BBACL study is an ongoing prospective cohort study that includes patients who were all referred to the memory clinic of the Maastricht University Medical Center+ (MUMC+). Of these 138 participants, 69 were diagnosed with SCD and 69 with MCI or dementia (56 with MCI and 13 with mild dementia). Of the 138 participants, 137 had an MRI/CT scan available (including measures of medial temporal lobe atrophy (MTA), white matter abnormalities (WMA, i.e., Fazekas), and global cortical atrophy (GCA)). Inclusion criteria were a total score of ≥20 on the Mini-Mental State Examination (MMSE) [ 15, 16 ] and a Clinical Dementia Rating scale (CDR) [ 17 ] global score of ≤1. Exclusion criteria were non-degenerative neurological diseases, a recent history of severe psychiatric disorders, the absence of a reliable informant, and the clinical judgment that a follow-up assessment after one year would not be feasible. Experiments on human subjects were performed in accordance with the ethical standards of the Committee on Human Experimentation of our institution, which is in accordance with the Helsinki Declaration of 1975. The local Medical Ethical Committee (METC azM/UM) approved the study (MEC 15-4-100). Each participant had given written informed consent before the assessments. Clinical assessment Each participant underwent a standardized assessment including medical history taking, a neurological and psychiatric assessment, and several questionnaires to measure disease severity (CDR), functioning in daily living (Disability Assessment for Dementia, DAD) [ 18, 19 ], and depressive symptomatology (Geriatric Depression Scale-15 items, GDS-15) [ 20 ]. 
In addition, participants underwent an extensive neuropsychological assessment, consisting of tests measuring global cognition (MMSE), episodic memory (15-word Verbal Learning Test [ 21, 22 ] and story recall of the RBMT (Rivermead Behavioral Memory Test)), semantic memory (Semantic Verbal Fluency, SVF) [ 23 ], and attention and executive functioning (Concept Shift Test, CST) [ 23 ] (or, if not available, the Trail Making Test [ 24 ] and Stroop [ 25 ]). The multidisciplinary clinical diagnosis was based on the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR, DSM-5) criteria for MCI (cognitive disorder not otherwise specified (NOS) in DSM-IV-TR; mild neurocognitive disorder in DSM-5) and dementia (DSM-IV-TR; major neurocognitive disorder in DSM-5) [ 26, 27 ]. AD dementia diagnoses were made according to McKhann's [ 28 ] core clinical criteria, meaning that a diagnosis of AD implied an amnestic memory profile, insidious onset, and a history of deterioration of cognition by report or observation [ 28 ]. When cognitive impairments could not be objectified, participants were classified as having SCD [ 29 ]. 15-Word Verbal Learning Test (15-VLT) This study used the Dutch 15-word Verbal Learning Test (VLT) [ 22 ], which is an adaptation of the commonly used Rey Auditory Verbal Learning Test (RAVLT) [ 30 ]. Trained clinical psychodiagnostic test administrators visually presented 15 unrelated monosyllabic nouns. After this presentation, the participants had to recall each word they remembered. The VLT consists of five learning trials, resulting in the total number of correctly remembered words (total immediate recall). After 20 minutes of nonverbal and non-memory tasks, individuals were (unexpectedly) asked again to recall all words they could remember (delayed recall). 
Finally, a list of 30 words was presented in which the 15 stimulus words were intermixed with 15 non-target words, and the participant had to recognize the words from the stimulus list (recognition) [ 21 ]. Three parallel list versions of the Dutch 15-VLT were used. Clinical scores include total immediate recall (sum of trials 1 to 5), delayed recall, and recognition count (true positives). Speech data recording and processing The VLT was audio recorded, scored, and processed using a mobile application provided by ki:elements GmbH (iOS iPad version; ki:elements, 2022). The application recorded participants' speech responses while they performed the neuropsychological assessments in the clinic. The application used the iPad's standard internal microphone, which was placed in front of the participant. After the speech responses were recorded, they were sent to the ki:elements backend for preprocessing (such as cutting recordings into relevant parts and audio transformation), automatic speech recognition, and feature extraction [ 31 ]. This resulted in two different measurements of both the total immediate recall and the delayed recall: the automatically derived ASR score and the clinician's independent score. Based on the automatically derived application scores, 102 VLT-specific performance metrics, such as serial-position effects, slopes, subjective organization, and serial clustering, were automatically calculated. Note that the clinical recognition count was added to the application manually. See Supplementary Table 1 for a complete listing of the VLT features. Statistical analyses The data were analyzed using IBM SPSS Statistics for Mac (version 27) and R 4.1.2 (R Core Team, 2021). Group differences were analyzed with independent t-tests for continuous variables and with chi-square tests for categorical variables. When a variable was not normally distributed, a Mann-Whitney U test was performed. 
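To make the kind of performance metrics mentioned above concrete, the following is a minimal, dependency-free sketch of simplified serial-position counts and a learning slope for one 15-word trial. The primacy/midlist/recency split (4/7/4 words) and the placeholder word list are illustrative assumptions, not the ki:elements definitions, which are listed in Supplementary Table 1.

```python
# Illustrative sketch (not the ki:elements implementation): simplified
# serial-position metrics for a 15-word list-learning trial.

STIMULUS_LIST = [f"word{i}" for i in range(1, 16)]  # placeholder 15-word list

def trial_features(recalled, stimulus=STIMULUS_LIST):
    """Count correct recalls overall and per serial-position region."""
    # Assumed split for illustration: first 4 / middle 7 / last 4 items.
    primacy, midlist, recency = stimulus[:4], stimulus[4:11], stimulus[11:]
    hits = set(recalled) & set(stimulus)        # intrusions/repeats ignored
    return {
        "correct": len(hits),
        "primacy": len(hits & set(primacy)),
        "midlist": len(hits & set(midlist)),
        "recency": len(hits & set(recency)),
    }

def learning_slope(correct_per_trial):
    """Simple learning slope: mean gain in correct words per trial."""
    gains = [b - a for a, b in zip(correct_per_trial, correct_per_trial[1:])]
    return sum(gains) / len(gains)

recall = ["word1", "word2", "word7", "word15", "banana"]  # "banana" = intrusion
print(trial_features(recall))   # {'correct': 4, 'primacy': 2, 'midlist': 1, 'recency': 1}
print(learning_slope([5, 7, 9, 10, 11]))  # 1.5
```

Note that the set intersection silently drops intrusions and repetitions; the discussion below explains why handling these properly matters for ASR-based scoring.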
Educational level was categorized into low (at most primary education), mid (junior vocational training), and high (senior vocational or academic training) according to a Dutch grading system [ 32 ], which is comparable to the Standard Classification of Education [ 33 ]. The intraclass correlation coefficient (ICC) of the total scores was calculated to examine the agreement between the ASR-based total immediate recall and delayed recall scores and the independent clinical total immediate recall and delayed recall scores, based on a mean-rating (k = 2), absolute-agreement, two-way mixed-effects model. Effect sizes for the verbal memory features were calculated using the Z-statistic of the Mann-Whitney U test (|Z|/√N), a non-parametric test chosen because of the skewness of most of the verbal memory features. To visualize correlations between features and other cognitive tests, a correlation matrix in the form of a heatmap was constructed using the R software (version 3.6) and the corrplot package. Age-, sex-, and education-adjusted Z-scores of the cognitive tests were based on published normative data for the Dutch population [ 23 ]. Correlation strength was interpreted based on Akoglu [ 34 ]. In Python 3.9.7, machine learning models (Extra Trees classifiers) were trained to differentiate between the two groups (SCD versus MCI/dementia) using the sklearn Python package [ 35 ]. Extra Trees is an ensemble, tree-based machine-learning approach. Due to the limited sample size, no held-out test set could be maintained. Instead, models were evaluated using leave-one-out cross-validation, a procedure in which one sample at a time is removed from the training set and used as a test case. This procedure is repeated for each sample, and the average of the model's performance is calculated. 
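As a rough illustration of the agreement statistic described above, the following pure-Python sketch computes a two-way, absolute-agreement, mean-of-two-raters ICC (ICC(A,k) in the McGraw and Wong convention) from an ANOVA decomposition. The score vectors are invented; a real analysis would typically use SPSS, R's irr package, or Python's pingouin.

```python
# Minimal sketch of ICC(A,k): mean-rating, absolute-agreement, two-way
# model for k = 2 raters (here, clinician vs. ASR).

def icc2k(rater1, rater2):
    """ICC(A,k) from a two-way ANOVA decomposition, k = 2 raters."""
    data = list(zip(rater1, rater2))
    n, k = len(data), 2
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(col) / n for col in zip(*data)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ms_r = ss_rows / (n - 1)                    # between-subjects
    ms_c = ss_cols / (k - 1)                    # between-raters
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (ms_c - ms_e) / n)

clinical = [52, 40, 33, 47, 28]   # hypothetical total immediate recall scores
asr = [49, 38, 30, 45, 26]        # ASR systematically misses a few words
print(round(icc2k(clinical, asr), 3))
```

Because this is the absolute-agreement form, a systematic offset between the two raters (the ASR consistently missing words) lowers the ICC even when the rankings agree perfectly.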
The area under the receiver operating characteristic curve (AUC-ROC), which summarizes the possible trade-offs between sensitivity and specificity, was calculated for three models (model 1, crude: VLT-ASR total score; model 2: model 1 plus age; model 3: model 2 plus ASR-based verbal memory features) for each VLT subtest separately, as well as for the full VLT (total immediate recall, delayed recall, and recognition count). Confidence intervals (CI), p-values (DeLong method), and F1-scores for all AUC-ROCs were calculated using the sklearn Python package.
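The evaluation scheme described above, leave-one-out cross-validation producing one held-out score per participant, followed by AUC-ROC over those scores, can be sketched without dependencies as follows. A 3-nearest-neighbour vote stands in for the Extra Trees classifier purely to keep the example self-contained (the study used sklearn's ExtraTreesClassifier), and the data are invented.

```python
# Dependency-free sketch of LOOCV + AUC-ROC. The classifier is a simple
# k-NN vote standing in for Extra Trees; the data are illustrative only.

def knn_score(train_X, train_y, x, k=3):
    """Fraction of positive labels among the k nearest training points."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(row, x)), y)
                   for row, y in zip(train_X, train_y))
    return sum(y for _, y in dists[:k]) / k

def loocv_scores(X, y, k=3):
    """One held-out prediction score per sample (leave-one-out)."""
    scores = []
    for i in range(len(X)):
        tr_X = X[:i] + X[i + 1:]
        tr_y = y[:i] + y[i + 1:]
        scores.append(knn_score(tr_X, tr_y, X[i], k))
    return scores

def auc(scores, labels):
    """AUC-ROC via pairwise comparisons (ties count 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: one feature (e.g., a delayed recall count); label 1 = impaired.
X = [[14], [12], [11], [10], [6], [5], [4], [2]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
print(auc(loocv_scores(X, y), y))  # 1.0 for this cleanly separated toy data
```

The key point of LOOCV is that each score is produced by a model that never saw that participant, so the AUC is computed over genuinely out-of-sample predictions despite the small cohort.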
RESULTS Sociodemographic and clinical data Demographic information for participants with SCD and MCI/dementia is presented in Table 1 . As expected, the MCI/dementia group was older than the SCD group, had lower performance on all cognitive tests, and had a higher CDR Sum of Boxes score. No significant group differences were found for sex, education level, or GDS-15 score. The most common etiology in the MCI/dementia group was AD (48%). Remaining etiologies included vascular etiology (30%), mixed (AD and vascular) etiology (6%), and other non-cognitive disorders (16%) such as MCI due to Parkinson's disease. There were no clear signs of evident medial temporal lobe atrophy in 93% of the SCD group. Inter-rater reliability between automatic and manual scoring The ICC for the inter-rater reliability between the clinical score and the ASR score of the total immediate recall was 0.87 (95% CI 0.28–0.95; Fig. 1 a). The mean difference between the clinical score and the ASR score was 7 words, with a range from –1 to 38 words. Except in one case, the ASR detected fewer words than the clinical score. In 13 out of 138 (9.5%) people, the ASR missed more than 14 words. A sensitivity analysis showed that the ICC was lower for the SCD group (0.77, 95% CI 0.09–0.91) than for the MCI/dementia group (0.88, 95% CI 0.10–0.96). When separating the MCI/dementia group into participants with MCI and participants with dementia, the ICC was better in participants with MCI (0.87, 95% CI 0.08–0.96) than in participants with dementia (0.82, 95% CI –0.15–0.96). The ICC for the inter-rater reliability between the clinical score and the ASR score of the delayed recall of the VLT was 0.94 (95% CI 0.88–0.97; Fig. 1 b). The mean difference between the clinical score and the ASR scoring was 1 word, with a range from –2 to 10 words. In general, the ASR detected fewer words than the clinical score. In 7 out of 138 (5%) people, the ASR missed more than 4 words. 
A sensitivity analysis showed that the ICC was lower for the SCD group (0.86, 95% CI 0.64–0.93) than for the MCI/dementia group (0.95, 95% CI 0.91–0.97). When separating the MCI/dementia group into participants with MCI and participants with dementia, the ICC was comparable in both groups (MCI: 0.94, 95% CI 0.89–0.97; dementia: 0.95, 95% CI 0.85–0.98). In a post hoc analysis, we analyzed how many words were mentioned in the first 10 s of each of the 5 immediate trials. We found that about half of the recalled words were mentioned in the first 10 s. Looking at the ICC for each immediate recall trial and group individually, we saw that the ICCs per trial for the group with MCI/dementia stayed quite stable compared to the group with SCD, for which trial 1 started with a high ICC that declined to a lower ICC for all the remaining trials (see Supplementary Table 2 for post hoc results). Diagnostic classification The ROC curves for the differentiation between the SCD group and the MCI/dementia group for the total immediate recall are shown in Fig. 2 a. The full model including the total immediate recall, age, and verbal memory features (model 3) was able to differentiate between the SCD group and the MCI/dementia group (AUC = 0.77, 95% CI 0.70–0.85, F1-score = 0.65). The full model including the verbal memory features had a slightly higher AUC compared to the age-corrected total immediate recall (model 2) (AUC = 0.75, 95% CI 0.68–0.84, F1-score = 0.74) and the total immediate recall only (model 1) (AUC = 0.72, 95% CI 0.64–0.81, F1-score = 0.73). When comparing whether the models differ from each other, no significant differences were found. Figure 2 b shows the differentiation between the SCD group and the MCI/dementia group for the delayed recall. The full model including the delayed recall, age, and verbal memory features was able to differentiate between both groups (AUC = 0.82, 95% CI 0.75–0.89, F1-score of 0.70). 
The full model (model 3), including the verbal memory features, had a slightly higher AUC compared to the age-corrected delayed recall (model 2) (AUC = 0.79, 95% CI 0.71–0.87, F1-score of 0.71) and the delayed recall only (model 1) (AUC = 0.79, 95% CI 0.71–0.86, F1-score of 0.65). When comparing whether the models differ from each other, no significant differences were found. Figure 2 c shows the differentiation between the SCD group and the MCI/dementia group for the recognition count. The full model including the recognition count, age, and verbal memory features (model 3) was able to differentiate between both groups (AUC = 0.79, 95% CI 0.72–0.88, F1-score of 0.72). The full model including the speech features had a substantially higher AUC compared to the age-corrected recognition count (model 2) (AUC = 0.70, 95% CI 0.61–0.79, F1-score of 0.69) and the recognition count only (model 1) (AUC = 0.47, 95% CI 0.38–0.58, F1-score of 0.61). When comparing the models with each other, significant differences were found for all comparisons (model 1 versus model 2, p < 0.001; model 1 versus model 3, p < 0.001; and model 2 versus model 3, p < 0.01). Figure 2 d shows the differentiation between the SCD group and the MCI/dementia group for the full VLT. The full model including the total immediate recall, delayed recall, recognition count, age, and verbal memory features (model 3) was able to differentiate between both groups (AUC = 0.80, 95% CI 0.73–0.87, F1-score of 0.72). The full model including the verbal memory features had a slightly higher AUC compared to the age-corrected model (model 2) (AUC = 0.79, 95% CI 0.72–0.87, F1-score of 0.71) and the model with the clinical scores only (model 1) (AUC = 0.76, 95% CI 0.67–0.84, F1-score of 0.68). When comparing whether the models differ from each other, no significant differences were found. 
A sensitivity analysis showed that after excluding the participants with dementia (N = 13), the AUC of the total immediate recall decreased slightly from 0.68 (model 1) to 0.67 (model 3). For delayed recall, the AUC increased from 0.72 to 0.77. For recognition count, the AUC increased from 0.39 to 0.72. Lastly, for the full model including total immediate recall, delayed recall, and recognition count, the AUC increased from 0.73 to 0.79. In comparison to the results of the analysis including patients with dementia, the crude models performed slightly worse when patients with dementia were excluded. Effect sizes of the automatically derived verbal memory features and discriminative power Table 2 shows the 10 features, of all 102 ASR-based VLT features including total scores, with the best ability to discriminate between the SCD group and the MCI/dementia group (see Supplementary Table 1 for a more detailed description of the features). The highest effect sizes were found for 1) delayed recall, 2) midlist item counts trial 3, 3) delayed recall midlist items, 4) late learning slope, 5) immediate total midlist items, 6) immediate count trial 5, 7) total immediate recall, 8) immediate count trial 4, 9) delayed recall recency items, and 10) immediate count trial 3. A sensitivity analysis, in which participants with dementia were excluded from the cognitively impaired group, resulted in the same 10 best differentiating features, albeit in a different rank order. Correlations between the 10 best ASR VLT features and other cognitive tests Of 138 participants, 99 had all cognitive test performances available. All 10 VLT features were significantly correlated with each other, ranging from a moderate correlation, r(136) = 0.57, p < 0.01 (immediate count trial 4 and delayed recall midlist items), to a very strong correlation, r(136) = 0.92, p < 0.01 (immediate count trial 3 and immediate total) ( Fig. 3 ). 
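The effect-size computation behind this feature ranking (r = |Z|/√N from the Mann-Whitney U test, as described in the statistical analyses) can be sketched as follows. This simplified version omits the tie and continuity corrections a statistics package such as SPSS would apply, and the example data are invented.

```python
# Simplified sketch of the Mann-Whitney effect size r = |Z| / sqrt(N).
# No tie or continuity correction (a real package would apply these).

import math

def midranks(values):
    """Ranks 1..N with ties sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

def mann_whitney_r(group_a, group_b):
    """Effect size r = |Z|/sqrt(N) from the Mann-Whitney U test."""
    n1, n2 = len(group_a), len(group_b)
    ranks = midranks(list(group_a) + list(group_b))
    r1 = sum(ranks[:n1])
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    return abs(z) / math.sqrt(n1 + n2)

scd = [12, 11, 10, 9]      # hypothetical delayed-recall counts, SCD group
impaired = [5, 4, 3, 6]    # hypothetical counts, MCI/dementia group
print(round(mann_whitney_r(scd, impaired), 4))  # 0.8165 (complete separation)
```

Because the statistic is rank-based, it is insensitive to the skewed feature distributions mentioned in the methods, which is exactly why it was preferred over a parametric effect size.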
Regarding cognitive functioning in general, all 10 VLT features were significantly positively correlated with the MMSE, ranging from a moderate correlation of r(135) = 0.46, p < 0.01 (delayed recall recency) to a moderate correlation of r(135) = 0.56, p < 0.01 (total immediate recall). All 10 VLT features were significantly positively correlated with semantic verbal fluency, ranging from r(133) = 0.38, p < 0.01 (immediate count trial 4) to r(133) = 0.48, p < 0.01 (delayed recall). Regarding executive functioning, none of the VLT features were correlated with the Stroop-III, and only weak correlations were found between the VLT features and the TMT-B. For story recall, all 10 VLT features were significantly positively correlated with the RBMT delayed recall, from r(117) = 0.30, p < 0.01 (immediate midlist trial 3) to r(117) = 0.50, p < 0.01 (delayed recall). The 10 VLT features were also positively correlated with the immediate recall score of the RBMT. Concerning disease severity, all 10 VLT features had significant weak to moderate negative correlations with the CDR-SOB, ranging from r(129) = –0.35, p < 0.01 (immediate midlist trial 3) to r(129) = –0.44, p < 0.01 (delayed recall).
DISCUSSION The current study evaluated the reliability and clinical validity of ASR technology compared with clinical scoring of the 15-VLT in a memory clinic setting. Our results show that the ASR scores of the commonly used total immediate recall and delayed recall were comparable to the clinically retrieved scores, with good reliability (based on [ 36 ]) for total immediate recall and excellent reliability for delayed recall. The higher ICC of the delayed recall can be explained by the difference in range between the two measures (total immediate recall 0–75 versus delayed recall 0–15 words), resulting in a lower probability of error in the delayed recall. Note that we have to take the wide confidence intervals into account, which indicates that further investigation is warranted and that current results should be optimized before individual diagnostic use. Our results showed that in 9.5% of participants, the ASR missed more than 14 words in the total immediate recall, and in 5% of participants, the ASR missed more than 4 words in the delayed recall. Looking into those cases in more detail, we determined that words were missed by the ASR when participants recalled words very quickly and without pauses between words, or spoke very quietly. In a post hoc analysis, we analyzed how many words were mentioned in the first 10 seconds of each of the 5 immediate trials. We found that about half of the recalled words were mentioned in the first 10 seconds. Interestingly, looking at the ICC for each immediate recall trial and group individually, we saw that the ICCs per trial for the group with MCI/dementia stayed quite stable compared to the group with SCD, for which trial 1 started with a high ICC that declined to a lower ICC for all the remaining trials (see Supplementary Table 2 for post hoc results). 
Thus, as the total number of recalled words increases with each trial and words are listed quickly after the start of the recall phase, the ICC decreases with each subsequent trial. In general, this could indicate that the ICC depends on the number of words recalled: the higher the word count, the lower the ICC. Accordingly, we reason that participants who are instructed to recall words find it easier to recall them as quickly as possible so as not to forget words, possibly resulting in mumbling and recalling words without pausing in between. Additionally, some individuals might enumerate some words not to increase their total word count, but rather to rehearse the order of words they learned, i.e., a learning-efficiency strategy related to attention and learning [ 6 ]. The ASR might see these as repetitions when in reality they were not. In general, we expected these effects to interfere more with the ASR in the SCD group, which is confirmed by the lower ICC for the SCD group compared to the MCI/dementia group. However, these noise effects may be less applicable to delayed recall. Delayed recall is characterized as a measure of consolidation, thus reducing immediate recall strategies. To mitigate such limitations of the ASR technology in the future, recall instructions should be optimized to emphasize that participants should speak clearly and separate recalled words with small pauses in between. Note that this in itself might be challenging, as it can interfere with recall strategies, especially in individuals with cognitive impairment. In the evaluation of the added value and accuracy of the additional automatically derived verbal memory features, we found that the full model including the total immediate recall, age, and verbal memory features was able to accurately distinguish between the SCD and the MCI/dementia group. The discriminative value of the full model is slightly higher than that of the model based on total immediate recall only. 
This reflects a slight increase in the differentiating value of these automatically derived verbal memory features compared to the traditionally used clinical score. It suggests a 77% chance that the ASR of the 15-VLT and its verbal memory features can distinguish between both diagnostic groups, compared to a 72% chance with total immediate recall only [ 37 ]. The full model including delayed recall, age, and automatically derived verbal memory features was also able to accurately distinguish between the SCD group and the MCI/dementia group. This suggests an 82% chance that the ASR of the VLT and its verbal memory features can distinguish between both diagnostic groups, compared to a 79% chance with delayed recall only. Interestingly, there was a notable increase in the discriminative power of the prediction models for the VLT recognition features. The full model including the recognition count, age, and the automatically derived features was able to distinguish between the SCD group and the MCI/dementia group with good discrimination ability. The AUC increased from a 47% chance with recognition count only, which is deemed unacceptable and close to chance level, to a 79% chance. When analyzing the ROC of the three commonly used clinical scores (total immediate recall, delayed recall, and recognition count) together, we found a discrimination ability of 76%. Adding all verbal memory features resulted in a 4% increase. Note that these results need to be interpreted carefully, as the increase was not statistically significant for any model except the recognition sub-task, and the clinical value is therefore questionable. Looking at the F1-scores, all models were deemed acceptable, ranging from the lowest, 65% for model 3 of the immediate recall and model 1 of the delayed recall, to the highest, 73% for model 2 of the immediate recall. 
Interestingly, model 3 of the immediate recall, which refers to the clinical score plus age correction and features, resulted in a lower F1-score than model 1, the clinical score alone. This did not apply to the delayed recall, the recognition count, and all sub-tasks combined. In general, adding the verbal memory features increased discrimination ability in all separate subtasks and when performing the whole task including all subtasks. The increase in the AUCs cannot be explained by the age differences between groups, because age was included in the second model for all AUCs. Accordingly, the results from this study suggest that automatic processing of the 15-VLT provides additional information beyond a clinician-rated total word count alone, with no additional effort required. The sensitivity analysis excluding participants with dementia from the cognitively impaired group showed that participants with SCD and MCI could still be differentiated, indicating that the 15-VLT is sensitive enough to detect differences in the early stages of cognitive deterioration. Out of the 102 additional automatically derived verbal memory features, we were interested in which ones differentiate most between participants with SCD and MCI/dementia. The best differentiating features were delayed recall, immediate midlist items trial 3, delayed recall midlist items, late learning slope, and the immediate total midlist items. Previous research has already demonstrated good to excellent diagnostic accuracy and sensitivity/specificity for list-learning tests, mainly related to delayed recall and total immediate recall [ 3, 38, 39 ], with recognition count having poorer diagnostic accuracy than immediate or delayed recall scores [ 38 ]. Our results are in line with these findings and add evidence on the discriminative power of ASR-retrieved verbal memory features. The 10 best differentiating features were moderately to strongly correlated with each other. 
This is not unexpected, as the features represent different sub-parts of the VLT, all of which measure episodic memory. The highest correlation was seen between the immediate correct count in trial 3 and the total immediate recall, which could indicate a ceiling effect of the VLT after trial 3. Although ceiling effects are more common in a younger population, a study by Davis et al. [ 40 ] showed that when only 3 trials were administered, the age-related decline in the delayed recall and recognition test was comparable to administering 5 trials, thus reducing ceiling effects [ 40 ]. The 15-VLT features also correlated with other cognitive measures. The total immediate recall correlated moderately with the MMSE score. Although the MMSE measures global cognitive functioning, it includes an immediate one-trial 3-word list recall, delayed recall, and orientation in time and place, and thus contains subtests measuring episodic memory, which could explain the moderate correlation [ 41 ]. Correlations between the 15-VLT features and other memory tests, such as the RBMT and the SVF, resulted in low to moderate associations. These results are in line with a previous study, which also found low to moderate correlations between the VLT and the RBMT or SVF [ 14 ]. Looking at executive function, we did not find any significant correlations between the VLT features and inhibition effects (Stroop III), and only very weak correlations between the VLT and mental switching (TMT-B). Interestingly, these results are in line with Abulafia et al. [ 42 ] and Magalhães, Malloy-Diniz & Hamdan [ 43 ], who also did not find significant correlations between the VLT and the Stroop III or TMT-B [ 42, 43 ]. This confirms that the VLT indeed measures cognitive domains other than those captured by the Stroop III and TMT-B, i.e., executive function. The exact association between executive function and memory performance needs further attention, as other studies found evidence for this relation [ 44 ]. 
Lastly, all best distinguishing features were negatively correlated with the CDR-SOB, i.e., disease severity, with the highest correlation for delayed recall. This might be caused by the relative weight of the memory domain included in the CDR. Accordingly, this suggests that the more severe the disease, the poorer the delayed recall. Taken together, the delayed recall in particular, including its verbal memory features, offers high diagnostic ability to distinguish cognitive impairment. In general, more research is needed regarding the extent to which the measures of the VLT and its automatically derived features correlate with other cognitive tests. Our current study was conducted in a face-to-face assessment at a memory clinic setting by recording and automatically processing the VLT. This speech-based analysis offers opportunities for remote neuropsychological testing [ 45 ]. Accordingly, it would be of great interest to investigate whether the VLT could be administered and processed remotely, e.g., by phone or video conferencing platforms, to facilitate screening, participation of clinical trial participants, or monitoring of disease progression, e.g., for people living in medical deserts with less access to care facilities or patients who do not want to travel due to health precautions or limited mobility. Remote neuropsychological testing adapted to this specific population could have future benefits such as reduced (travel) costs and increased flexibility and comfort. This study also has some limitations. It was performed in a Dutch memory clinic setting, which in this case means that all participants were Caucasian and Dutch-speaking. Accordingly, findings cannot be generalized to the (healthy) general population. Further, this study consists of a relatively small sample. 
We had significant differences in age between groups, which is in line with other clinical studies; however, future studies could make use of an age-matched control design to overcome this limitation. As participants were recruited via the memory clinic of the MUMC+, Dutch clinical guidelines for the diagnostics of cognitive impairment and dementia were followed [ 46 ]. Accordingly, no PET scans are available, and cerebrospinal fluid is only available in a limited subset (N = 6). Further, we did not use a hold-out validation set, i.e., models were not evaluated on data unseen during training. Although cross-validation is state of the art and provides sound validation for machine learning models, we suggest that in the future an independent validation cohort from another study would be valuable to check robustness across studies and cohorts. Additionally, intrusions could not be identified by the ASR technology. Intrusions refer to participants recalling words that were not on the list, i.e., inaccurate memory. In general, susceptibility to intrusion effects has been associated with a higher probability of underlying cognitive decline in the prodromal phase of MCI [ 47, 48 ]. Thus, improving language detection in ASR to discern intrusions would be beneficial for future ASR studies that include the VLT as a measure of verbal memory. In conclusion, the VLT and its associated ASR-derived verbal memory features can distinguish participants with SCD from those with MCI and dementia. Current results present ASR scores that are close to being consistent with clinical scores regarding discrimination ability in diagnosing cognitive impairment. Thus, the ASR and associated verbal memory features of the VLT could potentially be used in clinical diagnostics or as a non-invasive tool to screen participants. 
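The hold-out validation discussed here, in which the classifier is scored only on data it never saw during fitting, can be sketched as follows. This is a minimal illustration on synthetic data, not the study's pipeline:

```python
# Minimal hold-out evaluation sketch: fit on one split, report performance
# only on the held-out split the model never saw. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

model = LogisticRegression().fit(X_tr, y_tr)   # model sees training data only
heldout_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {heldout_auc:.2f}")
```

An independent cohort from another study plays the same role as `X_te` here, but additionally probes robustness to differences in recruitment, recording conditions, and scanners or devices.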
An integrated approach using ASR in semi- or fully automated (telephone) assessments might improve efficiency and accelerate recruitment in clinical trials, as no clinician would be needed to score the neuropsychological tests.
Background: Previous research has shown that verbal memory accurately measures cognitive decline in the early phases of neurocognitive impairment. Automatic speech recognition (ASR) applied to the verbal learning task (VLT) can potentially be used to differentiate between people with and without cognitive impairment. Objective: To investigate whether ASR of the VLT is reliable and able to differentiate between subjective cognitive decline (SCD) and mild cognitive impairment (MCI). Methods: The VLT was recorded and processed via a mobile application. Subsequently, verbal memory features were automatically extracted. The diagnostic performance of the automatically derived features was investigated by training machine learning classifiers to distinguish between participants with SCD versus MCI/dementia. Results: The ICC for inter-rater reliability between the clinical and automatically derived scores was 0.87 for the total immediate recall and 0.94 for the delayed recall. The full model including the total immediate recall, delayed recall, recognition count, and the novel verbal memory features had an AUC of 0.79 for distinguishing between participants with SCD versus MCI/dementia. The ten best differentiating VLT features correlated weakly to moderately with other cognitive tests such as logical memory tasks, semantic verbal fluency, and executive functioning. Conclusions: The VLT with automatically derived verbal memory features showed in general high agreement with the clinical scoring and distinguished well between participants with SCD and those with MCI/dementia. This might be of added value in screening for cognitive impairment.
Supplementary Material
ACKNOWLEDGMENTS The authors have no acknowledgments to report. FUNDING This work was supported by the European Institute for Innovation and Technology (EIT) - Health (Grant number: 19249). CONFLICT OF INTEREST Johannes Tröger, Alexandra König, and Nicklas Linz are employed by ki:elements, which developed the mobile application and calculated the verbal memory features. Johannes Tröger, Alexandra König, and Nicklas Linz also own shares in ki:elements. DATA AVAILABILITY The data supporting the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
J Alzheimers Dis.; 97(1):179-191
PMC10789351
38073386
INTRODUCTION The multifactorial [ 1 ] etiology of Alzheimer’s disease (AD) is often associated with several comorbidities [ 2 ], thus complicating the identification of effective treatment strategies. Although AD is characterized by the presence of amyloid-β (Aβ) containing extracellular plaques and tau-containing intracellular neurofibrillary tangles, decreased synaptic density is an early feature of AD [ 3 ]. Recent studies expanded the concept of brain plasticity to include activity-dependent changes in white matter structure and organization [ 4, 5 ] and suggested restoring myelin or preventing myelin loss as one treatment strategy [ 6 ]. White matter degeneration can affect brain function and connectivity, as myelin ensheathes axons and facilitates local and distant communication. Due to the crucial role of myelin in electrical impulse conduction within neuronal networks, longitudinal evaluation of white matter volume and myelin content could provide valuable insight into AD progression. Previous studies have evaluated myelin content through routinely collected magnetic resonance imaging (MRI) to facilitate clinical implementation, especially when access to advanced quantitative MRI techniques may not be readily available. This approach utilizes T1-weighted (T1w) and T2-weighted (T2w) MRI acquisition sequences. T1w sequences are used to study anatomical structures of the brain. Due to the short relaxation time, T1w sequences are characterized by better contrast-to-noise ratio in white matter. The intensities of white matter in T1w sequences are provided by the spatial distribution of myelin-bound cholesterol, which contributes the most to the contrast in T1w images of the brain. 
On the other hand, given that T2 relaxation time is associated with proton transfers, molecular exchange, and diffusion of water, T2w sequences can be applied to better differentiate structural differences in regions with high water content, which is important for the diagnosis of many disease processes. Since molecular motion of protons is constrained by the hydrophobic properties of the lipid bilayer in myelin, relatively larger myelin content results in relatively lower intensity on T2w images. Therefore, T1w/T2w ratio images can provide a proxy estimate of myelin content and are used as a practical option to study myelin content. Previous studies have demonstrated positive effects of 40 Hz gamma-sensory stimulation in AD transgenic mouse models [ 7–10 ] and in patients across the AD spectrum [ 11–13 ]. However, the potential impact of gamma stimulation therapy on white matter volume and myelin content has not been studied. Of special interest are white matter changes in the entorhinal region, which contains bidirectional connections that support information transfer between cortical and hippocampal networks impacted in the early stages of AD, as shown in histological and neuroimaging studies [ 14 ]. Herein, we describe the effect of combined visual and auditory 40 Hz gamma-sensory stimulation on changes in white matter volume and myelin content in patients with mild cognitive impairment (MCI) or mild-moderate AD using volumetric MRI (T1w) and T1w/T2w ratios to evaluate white matter atrophy and regional differences in myelin content, respectively. We hypothesize that daily 1-h combined visual and auditory 40 Hz gamma-sensory stimulation therapy for a 6-month period may prevent oligodendrocyte damage or modify other pathological processes that may lead to a reduction in myelin loss, protect axons, and attenuate white matter atrophy in patients with MCI or mild-moderate AD.
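At its core, the T1w/T2w proxy described above is a voxelwise division of co-registered, intensity-calibrated volumes. The sketch below is a rough illustration only: synthetic arrays stand in for preprocessed images, and the mask and intensity ranges are hypothetical, not taken from the study.

```python
# Illustrative numpy sketch of the voxelwise T1w/T2w myelin proxy.
# Real pipelines first co-register T2w to T1w, correct intensity
# inhomogeneity, and calibrate intensities; synthetic arrays stand in
# here for already-preprocessed volumes.
import numpy as np

rng = np.random.default_rng(1)
shape = (16, 16, 16)
t1w = rng.uniform(200, 400, shape)   # myelin-rich tissue: brighter on T1w
t2w = rng.uniform(100, 200, shape)   # myelin-rich tissue: darker on T2w
wm_mask = rng.random(shape) > 0.5    # hypothetical white matter mask

ratio = np.zeros(shape)
valid = t2w > 0                      # guard against division by zero
ratio[valid] = t1w[valid] / t2w[valid]   # voxelwise T1w/T2w myelin proxy

# Per-structure summary analogous to the paper's "sum of T1w/T2w ratios"
wm_ratio_sum = ratio[wm_mask].sum()
print(f"mean WM T1w/T2w: {ratio[wm_mask].mean():.2f}")
```

Higher ratio values correspond to tissue that is bright on T1w and dark on T2w, the signature expected of myelin-rich white matter.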
MATERIALS AND METHODS Study population and design This analysis is based on data from the OVERTURE, Phase I/II randomized, placebo-controlled clinical trial (NCT03556280), https://clinicaltrials.gov/ct2/show/NCT03556280 . The study evaluated the safety, tolerability, adherence, and efficacy of combined visual and auditory gamma-sensory stimulation treatment in participants with MCI and mild-moderate AD (Mini-Mental State Examination (MMSE) scores 14–26). The study was reviewed and approved by Advarra IRB (FDA IORG#0000635, OHRA IRB Registration #00000971). Informed consent was obtained from all participants. In cases where subjects were not competent to provide informed consent, a Legally Authorized Representative provided it and the individual participant assented to the research. At screening, participants with confounding pathology, such as ischemic stroke, intracerebral macro-hemorrhages, or more than four micro-hemorrhages, were excluded. Exclusion criteria also included profound hearing or visual impairments, seizure history, and the use of anti-seizure/anti-epileptic medication. During the trial, cholinesterase inhibitors were permitted at a stable dose. Memantine was not permitted. During OVERTURE, 46 participants were assigned to active treatment, of which 33 completed the study, and 28 participants were assigned to sham treatment, of which 20 completed the study. Active treatment group participants received daily, 1-hour 40 Hz simultaneous audio-visual sensory stimulation for a 6-month period while sham group participants received sham stimulation for the same period. For each participant, the volume of the auditory stimulation and the intensity of the visual stimulation were set within a range that was comfortable. 
Analyses were conducted on all OVERTURE participants who met none of the exclusion criteria. The exclusion criteria for white matter volumetric analyses were as follows: (a) participants who declined > 4 standard deviations from the mean in multiple efficacy measures (1 sham participant excluded); (b) participants who did not have both baseline and end of study (i.e., Month 6) data for structural MRI (22 participants excluded, of whom 21 did not complete the trial); (c) participants who did not have sufficient T1w image quality, including excessive motion artifact and insufficient gray matter-white matter contrast (13 participants excluded). Overall, thirty-eight participants (25 Treatment and 13 Sham) were included in white matter volumetric assessments. Of the thirty-eight participants, two failed T2w image quality due to reconstruction error, and a total of thirty-six participants (24 Treatment and 12 Sham) were included for longitudinal T1w/T2w white matter myelin content assessments. Therapy device The device used in this study was a gamma-sensory stimulation device (Figure 1B of [ 13 ]) developed by Cognito Therapeutics, Inc. It consisted of a handheld controller, an eye-set for visual stimulation, and headphones for auditory stimulation. During the therapy, participants could adjust the brightness of the visual stimulation and the volume of the auditory stimulation using push buttons on the controller. If assistance was needed, they could communicate with a care partner. The device captured usage information and adherence data. All the information was uploaded to a secured cloud server for remote monitoring. MRI data acquisition Structural MRI was acquired at Baseline, Month 3 and Month 6 using 1.5 Tesla MRI scanners. The study adopted an ADNI1-comparable standardized MRI scan protocol. 
For T1w, it included 1.25×1.25 mm in-plane spatial resolution, 1.2 mm thickness, TR 2400 ms and TE 3.65 ms for the Siemens Espree scanner; 0.94×0.94 mm in-plane spatial resolution, 1.2 mm thickness, TR ∼3.9 ms and TE 1.35 ms for the General Electric Signa HDxt scanner; and 0.94×0.94 mm in-plane spatial resolution, 1.2 mm thickness, TR 9.5 ms and TE ∼3.6 or 4 ms for the Philips Ingenia or Philips Achieva scanners. For T2w, it included 1×1 mm in-plane spatial resolution, 4 mm thickness, TR 3000 ms and TE 96 ms for the Siemens and GE scanners, and 1×1 mm in-plane spatial resolution, 4 mm thickness, TR 3000 ms and TE 92 ms for the Philips scanner [ 15 ]. Parcellation FreeSurfer’s standard reconstruction pipeline (“recon-all”, version 7.2.0) was used to process and automatically parcellate T1 MRI data into 68 predefined white matter structures, separated into left and right hemispheres [ 16–23 ]. Per-hemisphere white matter structures were joined to obtain 34 white matter structures combining the two hemispheres. FreeSurfer was also used to automatically generate 12 lobar white matter structures, separated into left and right hemispheres. Similarly, per-hemisphere lobar white matter structures were joined to obtain 6 lobar white matter structures combining the two hemispheres. We focus on these 52 white matter structures (the 12 hemisphere-specific lobar structures and the 34 + 6 joined structures) to assess changes in volume and myelin. Myelin-reflecting contrast To acquire a myelin-reflecting contrast, non-invasive imaging sensitive to the estimation of myelin content was employed by using the T1w/T2w ratio [ 24–26 ]. This process included co-registration of the T2w images to the T1w images using rigid transformation, inhomogeneity correction for both T1w and T2w images, and linear calibration of image intensity using non-brain tissue masks to create T1w/T2w ratio images corresponding to myelin content [ 27, 28 ]. T1w/T2w ratios were processed using MRTool (v. 
1.4.3, https://www.nitrc.org/projects/mrtool/ ), a toolbox implemented in the SPM12 software (University College London, London, UK, http://www.fil.ion.ucl.ac.uk/spm ). Statistical methods Demographic and biomarker data of the active treatment group and the sham group were compared using two-sample t -tests for numerical data or chi-square tests for categorical data. To evaluate efficacy, we calculated the change in volumetric data and myelin content in each of the white matter structures using the formula Change = 100 × (V_follow-up / V_baseline − 1) %, where V is the volume or the myelin content. The changes were then assessed using a Bayesian linear mixed effects model. Non-informative priors were used for all effects of the model, which includes total intracranial volume, baseline MMSE score, baseline age, visit (as number of days from the start of the treatment), group, baseline MRI measures (volume for white matter atrophy assessment and sum of the T1w/T2w ratios across each studied white matter structure for myelin content assessment), group-visit interaction, and baseline MRI measure-visit interaction. Random effects of the model include subject and site information. The Kenward-Roger approximation of the degrees of freedom was used. For volumetric analysis, volume change (% change from baseline), and for myelination analysis, sum of T1w/T2w ratio change (% change from baseline), were assessed for each studied white matter structure. All statistical analyses were conducted using R (version 4.1.1).
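The percent-change metric defined above can be written out directly. The structure names and volumes below are made up for illustration only:

```python
# The paper's change metric, Change = 100 * (V_follow-up / V_baseline - 1) %,
# applied per white matter structure. Values are hypothetical.
baseline = {"entorhinal": 1.62, "cingulate": 9.80}   # volumes in cm^3 (made up)
followup = {"entorhinal": 1.70, "cingulate": 9.55}

def pct_change(v_followup: float, v_baseline: float) -> float:
    """Percent change from baseline; negative values indicate atrophy."""
    return 100.0 * (v_followup / v_baseline - 1.0)

changes = {k: pct_change(followup[k], baseline[k]) for k in baseline}
print(changes)  # entorhinal grows (~+4.9%), cingulate shrinks (~-2.6%)
```

Expressing each structure's change relative to its own baseline puts small regions (e.g., entorhinal white matter) and large lobar structures on a comparable scale before they enter the mixed effects model.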
RESULTS Thirty-eight participants (25 Active Treatment, 13 Sham) who completed the 6-month study and whose MRI met the study criteria were evaluated in the volumetric analysis, whereas thirty-six (24 Active Treatment, 12 Sham) who met the study criteria were evaluated in the T1w/T2w myelin content analysis (T1w and T2w MRI images of a sample active treatment participant and a sham participant can be found in Supplementary Figure 1 ). At baseline, the sham-treated group was older than the active treatment group (76.62±9.97 versus 68.36±7.69, p = 0.02); otherwise, there were no significant differences in sex, MMSE, Alzheimer’s Disease Cooperative Study - Activities of Daily Living, APOE4 status, white matter volume, or T1w/T2w between the two groups ( Table 1 ). A baseline comparison of Fazekas scores was conducted for the population in which the T1w/T2w myelin analysis was performed: the active group had 1 participant with grade 0, 18 participants with grade 1, and 5 participants with grade 2; the sham group had 8 participants with grade 1, 3 participants with grade 2, and 1 participant with grade 3. There was no statistically significant difference between the two groups. In the participants receiving active treatment, headaches and tinnitus were the most reported adverse events. There were no observations of ARIA-E (vasogenic edema and sulcal effusions) or ARIA-H (hemosiderin deposits). Compared to baseline MRI values, we observed that the active treatment group demonstrated a 0.17±1.08% (1.06±5.35 cm 3 ) increase and the sham group demonstrated a –2.54±1.38% (–12.37±6.81 cm 3 ) decrease in total cerebral white matter volume after a 6-month period, representing a statistically significant difference ( p < 0.038) ( Fig. 1A ). 
Furthermore, a statistically significant ( p < 0.025) difference was also observed in the myelin-reflecting T1w/T2w ratio between groups; the active treatment group demonstrated a –1.42±2.35% decrease and the sham group demonstrated a –6.19±2.63% decrease ( Fig. 1B ). Fifty-two distinct white matter structures were analyzed based on volume ( Supplementary Table 1 ) and myelin-reflecting T1w/T2w ratio ( Supplementary Table 2 ) changes from baseline following a 6-month daily gamma visual and auditory sensory stimulation. We observed that all statistically significant changes favored the active treatment group: Compared to the sham group, significant ( p < 0.05) attenuation in volume loss was found in 12 of 52 structures: entorhinal region, left cingulate lobe, pars triangularis region, cuneus region, lateral occipital region, postcentral region, left occipital lobe, left frontal lobe, left parietal lobe, occipital lobe, left temporal lobe and caudal middle frontal region (sorted in ascending order by p value) for the active treatment group after 6 months of treatment ( Fig. 2A , see Fig. 2B for maps of T-statistics depicting the differences between the two groups in lobar white matter volume). Forty Hz gamma-sensory stimulation therapy for a 6-month period prevented white matter atrophy in the entorhinal region: The active treatment group demonstrated a 5.14±3.66% (0.08±0.06 cm 3 ) increase, while the sham group demonstrated a –7.60±4.35% (–0.13±0.07 cm 3 ) decrease in volume. The difference between these two groups was statistically significant ( p < 0.002). The treatment also trended in the direction of preventing volume loss (0.05≤ p < 0.1) in the precentral region, paracentral region, lingual region, fusiform region, frontal lobe, rostral anterior cingulate region, inferior temporal region, right occipital lobe, parietal lobe, rostral middle frontal, precuneus region, medial orbitofrontal region, and temporal lobe (sorted in ascending order by p value) ( Fig. 
2C ). The full extended results of volume changes for the 52 white matter structures from baseline over the 6-month period are shown in Table 2 . Compared to the sham group, significantly less myelin content loss (smaller T1w/T2w ratio change) was observed in the entorhinal region, pars triangularis region, postcentral region, left parietal lobe, lateral occipital region, paracentral region, rostral middle frontal region, supramarginal region, precentral region, parietal lobe, right occipital lobe, fusiform region, occipital lobe, left frontal lobe, cuneus region, precuneus region, inferior parietal region, frontal lobe, lingual region, left occipital lobe, left temporal lobe, right parietal lobe and pars orbitalis region ( Fig. 3A , white matter structures sorted in ascending order by p value), indicating significant differences ( p < 0.05) between the active treatment group and the sham group (see Fig. 3B for maps of T-statistics depicting the differences between the two groups in lobar white matter myelin content). Within the 52 studied white matter structures, the most significant myelin content T1w/T2w ratio change was also in the entorhinal region. The active treatment group participants exhibited a 2.78±4.97% increase from baseline in T1w/T2w ratio, while the sham group participants exhibited a –10.59±5.63% decrease from baseline in sum of T1w/T2w ratio ( p < 0.003), suggesting that 40 Hz gamma-sensory stimulation therapy for a 6-month period may significantly protect myelination. The treatment may also trend towards slowing down demyelination (0.05≤ p < 0.1) in the right frontal lobe, caudal middle frontal region, rostral anterior cingulate region, superior frontal region, temporal lobe, medial orbitofrontal region, posterior cingulate region, superior parietal region, left cingulate lobe, superior temporal region, cingulate lobe, and temporal pole region ( Fig. 3C , white matter structures sorted in ascending order by p value). 
All myelin content T1w/T2w ratio changes in the 52 white matter structures from baseline over the 6-month period are shown in Table 3 . Although the sham-treated group was older at baseline, age was included as a covariate in our models and did not contribute significantly to any of the changes we observed over the course of six months.
DISCUSSION Our data suggest that daily, 1-hour 40 Hz combined visual and auditory gamma-sensory stimulation therapy for a 6-month period in participants with AD resulted in reduced white matter atrophy and myelin content loss compared to sham treatment. We also consistently observed reduced myelin content loss (T1w/T2w ratio) across the same brain regions. Overall, the effect of gamma-sensory stimulation on decreasing white matter atrophy and myelin loss was greatest in the entorhinal region. Although we observed a statistically significant difference in change of white matter volume and myelin content between active and sham arm participants after six months of treatment, and a positive change in some regions in the active group, owing to the small sample size, it is unclear whether 40 Hz gamma-sensory stimulation increases white matter volume or simply prevents atrophy. While the extracellular accumulation of amyloid plaques and intra-neuronal presence of tau-containing neurofibrillary tangles may lead to neuron and synapse loss in AD, there is evidence that myelin damage in preclinical AD independently contributes to disease progression [ 29, 30 ]. Studies have revealed that white matter degeneration and myelin damage may be associated with neuronal dysfunction and cognitive decline [ 31–33 ]. Subsequent studies suggest that white matter atrophy and myelin loss may be a mechanistically important AD treatment target [ 34 ] and may identify individuals at high risk of disease progression. Based on experimental results, myelin preservation has recently been suggested as a therapeutic strategy for improving cognition in AD [ 6 ]. Furthermore, AD amyloid pathology may be affected by myelin alterations, which precede the onset of amyloid and tau pathological changes [ 29, 35 ]. 
The potential link between white matter degeneration and clinical disease progression [ 34 ] provides an important perspective to view results from gamma-sensory stimulation therapy as a potential disease modifying approach to slow AD progression. Preservation of entorhinal white matter by this innovative treatment may be particularly relevant to AD given its afferent connections into the hippocampus and the entorhinal cortex [ 14, 36 ]. Due to densely concentrated axons of the perforant path in the entorhinal white matter, stimulation of white matter in the entorhinal region recruits many of these axons and benefits subsequent memory during learning [ 36, 37 ]. Preservation of white matter volume and myelin content by 40 Hz combined visual and auditory gamma-sensory stimulation may help protect the existing connections and prevent further damage to this region. The white matter lobar regions, particularly in the left hemisphere, that showed statistically significant differences between active and sham treatment in our analysis, are known to be affected in AD. Studies have demonstrated that damage to the temporal lobe affects memory and that temporal lobe predominant damage, specifically atrophy of the medial temporal lobe, is the most predictive structural brain biomarker for AD [ 38, 39 ]. Early-onset AD patients exhibit bilateral posterior myelin loss spreading to the temporal areas, particularly the left temporal area, while late-onset AD patients exhibit distributed bilateral myelin loss affecting the temporal and cingulate areas [ 40, 41 ]. A separate diffusion MRI study additionally demonstrated directional diffusivity changes, with a decrease in axial diffusivity and an increase in radial diffusivity in temporal white matter of AD patients, suggesting loss of entire myelinated axons, as observed in Wallerian degeneration [ 42 ]. 
Severe white matter occipital atrophy has been observed in the posterior cortical atrophy variant of early age-of-onset AD [ 43 ], and there is evidence that neuropathological abnormalities of the occipital lobe may lead to visual hallucinations in AD [ 44 ]. Although myelin sheath structural integrity deteriorates with normal aging, particularly in regions of late myelination like the frontal lobe, it has been shown that this degradation is more severe in AD patients [ 45 ]. One AD development model that aims to characterize the chain of pathological events leading to AD pathology and diagnosis highlights the importance of the parietal lobe [ 38 ]. In this model, amyloid accumulation crosses a threshold because of myelin breakdown and disconnection between the posterior cingulate gyrus/precuneus and the medial temporal lobe and leads to a cascade of events. Given the practical clinical implementation and the direct correspondence to regional differences in myelin, the myelin-reflecting T1w/T2w ratio has been applied to non-invasively study white matter pathology in other neurological disorders. In schizophrenia, the regions identified in white matter by using the T1w/T2w ratio are consistent with previous studies linking cerebellar deficits to neurological signs [ 28 ]. Based on their findings, the authors concluded that the T1w/T2w ratio can yield better differentiation from healthy controls than studying T1w and T2w images alone. In multiple sclerosis, the T1w/T2w ratio has been used to characterize microstructural changes in myelin and neuroaxonal integrity [ 46 ]. T1w/T2w ratios are lower in lesioned white matter regions compared to non-lesioned white matter regions, consistent with known disease-related reduced myelin content, while no differences have been observed between the T1w/T2w ratio in normal-appearing white matter regions of multiple sclerosis subjects and the white matter of controls. 
Nonetheless, another study found that in patients with clinically isolated syndrome, some of whom later developed multiple sclerosis, a decrease in T1w/T2w was demonstrated prior to the onset of lesion formation [ 47 ]. The specific mechanisms that contribute to preservation of white matter and myelin by 40 Hz combined visual and auditory sensory stimulation are unclear, although synaptic and non-synaptic effects on oligodendrocytes (OL) may ultimately lead to positive effects within myelin structures. For example, non-invasive gamma stimulation may lead to increased axon-glia signaling at functional synapses between neurons and oligodendrocyte precursor cells (OPC) and hence influence OPC [ 48 ]. In addition, release of neurotransmitter-filled vesicles of neurons at non-synaptic junctions with OPC may promote myelination. These hypotheses are consistent with in-vivo studies where proliferation of OPC and their development into OL, extension and stabilization of myelin sheaths, and regulation of the myelinating capacity of OL are achieved by changing neuronal activity via external stimulation or by placing animals into an enriched environment [ 49 ]. The relatively new understanding of white matter as a non-static, dynamic, adaptive structure, extending into adulthood, has led to increased interest in examining glial contributions to disease onset and progression. Stimulation-induced neuronal activity can change white matter properties, i.e., white matter plasticity, which can lead to changes in myelin density, affecting the speed, precision, and timing of axonal signal conduction, leading to optimal synchronization of spike-time arrival, which plays a crucial role in optimizing neuronal network function. The regulation of myelination and adaptive myelination plays an important role in the temporal structure of neural interactions as a means to achieve self-organization and influence the dynamics of network function. 
It also shifts the focus from a synaptic-strength-only model to one that also incorporates the role of glial-mediated self-organizing networks, which can reorganize the timing of neural interactions to preserve specific network target dynamics and achieve brain homeostasis [ 5 ]. Understanding that white matter properties may be modified, and myelin may be regulated in an activity-dependent manner, may advance our understanding of white matter plasticity and its role in achieving optimal network dynamics. Our results demonstrate the positive effects of combined visual and auditory gamma-sensory stimulation on white matter atrophy and myelin content. This may ultimately contribute to restoring neural network function in AD and in other neurodegenerative disorders. To the best of our knowledge, the T1w/T2w ratio in white matter has not been previously studied in AD patients. Despite its merits, there are limitations to consider. The first is that MRI acquired with very different pulse sequences at different imaging centers may generate variations in image contrast. The originally introduced MRI scan protocol [ 26 ] used isotropic voxels for both T1w and T2w images. In our MRI scan protocol, like the ADNI1 protocol, the voxels of T2w images were not isotropic. Secondly, in addition to myelin alteration, other pathological changes such as edema, inflammation, iron accumulation, free water or fiber density may also change the T1w/T2w ratio in individuals with central nervous system disorders [ 28, 50 ]. It is critically important for future studies to consider confounding effects including iron and inflammation and assess the T1w/T2w ratio in both healthy and pathological tissue histologically. 
Furthermore, it is important for future studies to add other advanced MRI modalities such as diffusion tensor imaging, neurite orientation dispersion and density imaging, quantitative susceptibility mapping, multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT), and magnetization transfer ratio imaging to improve both the sensitivity and the specificity of myelin content quantification. Our research represents a retrospective evaluation of a Phase II study. Our foremost aim was to pinpoint specific endpoints, with the anticipation of their prospective utilization in future studies underpinned by a more robust sample size. Our results suggest that 40 Hz combined visual and auditory gamma-sensory stimulation therapy may have beneficial effects on MRI pathophysiological features of AD and other neurodegenerative diseases. Six months of treatment significantly attenuated total and regional white matter volume loss and significantly reduced myelin content loss, consistently across the same brain regions. Specifically, white matter structures with statistically significant differences in volume or myelin content were better preserved in the active treatment group, with the most significant difference in the entorhinal region, an important structure relevant to AD pathology (see a comparison of p values and adjusted p values for white matter volume in Supplementary Table 3 and for myelin content in Supplementary Table 4 ). Further, larger studies (such as the HOPE pivotal study (NCT05637801)) may provide additional insights into this promising therapeutic strategy to improve function and cognition in AD and possibly other neurodegenerative diseases that are vulnerable to white matter abnormalities.
Background: Patients with Alzheimer’s disease (AD) demonstrate progressive white matter atrophy and myelin loss. Restoring myelin content or preventing demyelination has been suggested as a therapeutic approach for AD. Objective: Herein, we investigate the effects of non-invasive, combined visual and auditory gamma-sensory stimulation on white matter atrophy and myelin content loss in patients with AD. Methods: In this study, we used the magnetic resonance imaging (MRI) data from the OVERTURE study (NCT03556280), a randomized, controlled, clinical trial in which active treatment participants received daily, non-invasive, combined visual and auditory, 40 Hz stimulation for six months. A subset of OVERTURE participants who met the inclusion criteria for detailed white matter (N = 38) and myelin content (N = 36) assessments was included in the analysis. White matter volume assessments were performed using T1-weighted MRI, and myelin content assessments were performed using T1-weighted/T2-weighted MRI. Treatment effects on white matter atrophy and myelin content loss were assessed. Results: Combined visual and auditory gamma-sensory stimulation treatment was associated with reduced total and regional white matter atrophy and myelin content loss in active treatment participants compared to sham treatment participants. Across the white matter structures evaluated, the most significant changes were observed in the entorhinal region. Conclusions: The study results suggest that combined visual and auditory gamma-sensory stimulation may modulate neuronal network function in AD in part by reducing white matter atrophy and myelin content loss. Furthermore, the entorhinal region MRI outcomes may have significant implications for early disease intervention, considering the crucial afferent connections of the entorhinal cortex to the hippocampus.
Supplementary Material
ACKNOWLEDGMENTS We would like to thank all patients and their caregivers who participated in OVERTURE. We also would like to thank Paul Solomon, Elizabeth Vassey, Michelle Papka, and Mark Brody for their collaboration, Karen Martin and Holly Mrozak for data management, and Khalil Saikali for his edits. We also would like to thank Brent Vaughan for his ongoing support during this study. FUNDING Financial support for this work is provided by Cognito Therapeutics, Inc. CONFLICT OF INTEREST Mr. Xiao Da is an employee of and owns stock options in Cognito Therapeutics, Inc. and has patent applications assigned to Cognito Therapeutics, Inc. Mr. Evan Hempel is an employee of and owns stock options in Cognito Therapeutics, Inc. Dr. Yangming Ou receives grant support from Abbott Inc. Ms. Olivia Elizabeth Rowe is an employee of and owns stock options in Cognito Therapeutics, Inc and receives consulting fees from the MGH Athinoula A Martinos Center for Biomedical Imaging. Mr. Zach Malchano is an employee of and holds stocks and options in Cognito Therapeutics, Inc and has issued patents and applications assigned to Cognito Therapeutics, Inc. Dr. Mihály Hajós is an employee of and owns stocks in Cognito Therapeutics, Inc, has patent applications assigned to Cognito Therapeutics, Inc, and is a shareholder of Biogen and Pfizer. Dr. Ralph Kern is an employee of Cognito Therapeutics, Inc, and is on the Scientific Advisory Board of Brainstorm Cell Therapeutics. Dr. Jonathan Thomas Megerian received payments from Cognito Therapeutics for his services as Acting Chief Medical Officer. Dr. Aylin Cimenser is an employee of and owns stock options in Cognito Therapeutics, Inc. and has patent applications assigned to Cognito Therapeutics, Inc, and is a shareholder of Boston Scientific. DATA AVAILABILITY The data supporting the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
J Alzheimers Dis.; 97(1):359-372
Introduction Work interruptions come in various forms, such as e-mails, instant messages, or colleagues looking for a conversation partner. Research demonstrates that work interruptions are considered one of the most common work stressors [ 1, 2 ]. Information workers spend on average more than two hours per day dealing with work interruptions [ 3 ] and often get caught up in “distraction chains” before resuming their main tasks. These interruptions incur recovery costs (i.e., the time it takes to resume work after an email interruption), which typically amount to several minutes per interruption [ 4 ]. This makes it particularly necessary to investigate the challenges and especially the effects of work interruptions for information workers. Work interruptions can be defined as a temporary interruption of goal-directed actions [ 5 ]. Although work interruptions can be externally or internally initiated, this research focuses on externally initiated interruptions caused by unplanned tasks related to the completion of a main task [ 6 ]. According to action regulation theory, interruptions can disrupt the sequential action regulation process and can thus be regarded as a regulation obstacle [ 7 ]. A higher workload is therefore to be expected, as the interrupted person must engage in a new task and return to the former task later; this change of task requires mental regulation [ 5 ], shifting attention, and adjusting the goal of action [ 8 ]. The resulting overload can lead to the individual's inability to cope with the job demands, leading, for example, to slower work rates, including slower responses to critical events, as well as higher error rates [ 9 ]. In the long term, this increases the risk of serious health issues, long-term sick leave, or early retirement [ 10 ]. Looking at previous research on this topic, it is striking that only a limited number of studies have investigated work interruptions in the context of office work. 
Most of the research to date has been conducted in other occupational contexts (particularly healthcare) or in laboratory experiments [ 11, 12 ]. However, office workplaces represent a significant economic factor. In Germany, 71% of all employees work at least some of the time at an office workplace [ 13 ]. Office work refers to work at a desk workstation and is often accompanied by the intensive use of digital information and communication technologies. The work tasks of office work can be diverse, with predominantly routine-based requirements or, conversely, knowledge-based requirements, or a combination of both. This makes it difficult to analyze the effects of work interruptions at these workplaces, as it can be assumed that these different main tasks lead to different effects of interruptions [ 11, 12 ]. A review of the previous literature reveals studies primarily in the context of knowledge work [ 14–17 ] as well as among IT professionals [ 18–21 ] and call-center employees [ 22 ], but a comparison of results is difficult due to the differing study designs and variables collected. In addition, most studies lack an overview of work tasks, which prevents more detailed conclusions regarding the influence of work interruptions on office workers. Taking laboratory findings into account, however, it may be assumed that the complexity of the interrupted task (i.e., the primary work task) serves as a key moderator, since interruptions of more complex tasks lead to a greater information processing burden and higher mental effort [ 23, 24 ]. Highlighting this, previous research has even demonstrated that interruptions during routine-based work tasks can actually have positive effects, as they allow workers to engage in activities that are important for emotional well-being, job satisfaction, and continued productivity [ 25 ]. 
Moreover, it has been shown that in monotonous activities, interruptions that divert attention can contribute to the activity's variety and provide intellectual stimulation, thus serving as a source of work enrichment [ 26 ]. Accordingly, it must be expected that interruptions can have different effects even within office workplaces, depending on the primary work tasks. With regard to the operationalization of work interruptions in field research, studies in this area usually adopt a subjective approach, often focusing on the frequency of interruptions and having participants subjectively estimate the number of work interruptions (e.g., [ 6, 27 ]). However, research already shows that the individual evaluation of work demands in general is an important mediating pathway between work demands and their effects on work attitudes and mental workload. Furthermore, it has been confirmed that certain organizational variables influence the individual appraisal of work interruptions and thus the effects of work interruptions on the working person [ 28 ]. An important distinction from physical stressors is that psychosocial stressors are determined entirely, or at least in part, by the way people perceive them (i.e., they require cognitive appraisal) [ 29 ]. Regarding the individual evaluation of work interruptions, initial research has addressed the perception of work interruptions in the context of technostress research under the term interruption overload [ 14, 30 ]. Following these authors, interruption overload describes the extent to which individuals perceive that they receive more interruptions than they can effectively handle. Interruption overload originates in the cognitive resources needed when a person switches focus between tasks and is grounded in the literature on cognitive workload, which concerns the ability of people to achieve a given level of performance when limited mental resources are available [ 31 ]. 
It can be assumed that when a primary task is interrupted by an external stimulus, attention is consciously or unconsciously diverted from this primary task. At this point, a decision must be made whether to focus on the new task, divide attention between tasks, or ignore the interruption. Even if the decision is made not to pay attention to the interrupting stimulus, this decision is itself a decision point about whether or not the stimulus is worth attending to, which may increase cognitive load. To gain more detailed insight into the influencing factors and effects of work interruptions at office workplaces, the present study conducted a one-day diary survey on work interruptions among office workers to analyze whether the complexity of the primary work task leads to different effects of work interruptions. Two hypotheses were formulated for this research. The first hypothesis concerns the influence of perceived interruption overload as a mediating variable in the relationship between interruption frequency and subjective workload. Here, work interruptions are operationalized by the objectively counted number of interruptions rather than a subjectively estimated number. Further, it is assumed that professionals who are interrupted during more complex primary work tasks are more negatively affected by work interruptions. The second hypothesis therefore focuses on the moderating influence of the complexity of the primary work task: Hypothesis 1: A higher frequency of work interruptions is significantly positively related to a higher subjective workload; this relationship is mediated by perceived interruption overload. Hypothesis 2: A higher complexity of primary work tasks strengthens the positive relationship between frequency of work interruptions and subjective workload.
Method Procedure The present study was conducted as a one-day diary study in January 2022 to examine the association between the frequency of work interruptions and the subjective workload of the working person. Data were collected via an online survey from participants who were recruited through a survey panel provider. The survey panel was accessed through Bilendi GmbH , a service provider with access within its panel to registered natural persons who voluntarily participate in surveys. Contact with the participants in this study was therefore established by the panel provider through its standardized process of inviting potential participants. A random sample was drawn from the entire survey panel, provided the inclusion criteria of the survey were met. The inclusion criteria required that respondents were at least 18 years old, no older than 67 years (retirement age in Germany), worked exclusively at an office workstation, and were employed full-time (at least 35 working hours per week). Office work was operationalized with two items; only participants who work predominantly at a desk workstation with a computer were included in this survey. Participation was voluntary, and anonymity and confidentiality were guaranteed. Upon full participation, respondents received monetary compensation from the panel provider. As the study fulfilled a list of standard criteria (e.g., anonymized participation, adult participants, no intrusive measures, no deception), further ethical approval was waived. In a first step, the subjects were informed about the content and procedure of the study as well as the data protection regulations. This information was provided to the participants in written form via the questionnaire platform; for potential questions, a designated contact person was named. In addition, the demographic data were verified against the inclusion criteria. 
In this first step, 985 participants were recruited who agreed to take part in the study. After giving their consent, subjects could choose any day within a two-week period on which they wanted to participate. On this day, the subjects received the first part of the questionnaire in the morning before their workday, after which all work interruptions during the workday were noted. The second part of the survey, assessing perceived interruption overload and overall workload, was completed at the end of the workday. In this phase of the study, 615 participants took part (response rate: 62.44%), of whom 492 could be included in the analysis (49.95%) after the data quality check. Sample The sample consisted of 492 full-time office employees working in Germany. The sample included 45.5% female and 53.9% male participants, representative of the German working population. The age of participants ranged from 21 to 67 years, with an average age of 43.9±11.9 years. The age grouping also took into account the representativeness of the German working population. The majority of participants were regular employees (65.6%), a further 27.0% were team leaders or middle managers, and 5.7% were senior managers. A complete sample description is provided in Table 1 . Measures After screening potential participants for the inclusion criteria, the first part of this survey was to be completed in the morning before the start of the workday (pre-work measurements), the second part during the workday, and the third part at the end of the workday (post-work measurements). The pre-work and post-work measurements are described in more detail below; the measurements during the workday consisted of noting all interruptions in a previously distributed template. Screening and pre-work measures The first part of the survey included a screening of potential participants to test for inclusion criteria. 
Demographic data of the participants were collected (gender, age, position, work experience, weekly working hours, type of workplace, and questions about work equipment). Participants who met the inclusion criteria received an additional questionnaire designed to elicit psychosocial demands and resources, which, however, are not part of this analysis. In addition, the procedure of the study was explained in more detail, especially what constitutes a work interruption and when and how participants should note them. This information was provided to the participants in written form via the questionnaire platform; for potential questions, a designated contact person was named. An overview of the collected information relevant to this analysis is given in Table 2 . Post-work measures In the evening, the participants were asked to rate their perceived interruption overload, i.e., the extent to which they had received more interruptions than they could effectively process and manage, using the corresponding scale [ 14 ]. Since there is currently no validated German version of this scale, the translation was produced by the researchers of this work. In the translation process, three independent translation drafts were created, from which the final version was developed in joint consultation. The scale was rated on a 5-point Likert scale with values ranging from “ Not at all ” (1) to “ Fully agree ” (5), whereby a higher value indicates a higher perceived interruption overload. Furthermore, the complexity of the day's work tasks was assessed using two items of the Work Design Questionnaire (WDQ), for which an already validated German translation was used [ 32 ]. Response options ranged from “ Not at all ” (1) to “ Fully agree ” (5), with a higher value indicating more complex work tasks. 
In addition, participants were asked about their subjective workload during the workday using the Raw TLX scale, which is based on the NASA-TLX and uses its six subscales (mental, physical, and temporal demands, frustration, effort, and performance) without pairwise comparisons [ 33 ]. The total workload is calculated as the mean value of the subscales; values range between low (0) and high (100) total workload ( Table 3 ). Analyses First, a multi-stage screening of the responses received was carried out to ensure sufficient data quality. Participants with implausible completion times were excluded, using the relative speed index with a lenient cut-off of 2.0 as the criterion [ 34 ]. In addition, two attention check items were included, which participants had to pass [ 35 ]. Finally, the counted work interruptions were checked for outliers using boxplot diagrams. Values that were more than 2.5 times the interquartile range above the third quartile were checked for further use in the analysis by examining all data for plausibility, for example with regard to the stated occupation and the open answers given to describe the interruptions. In order to ensure that a possibly existing effect could be found with sufficient probability, the necessary sample size was determined in advance. For this purpose, a simulation-based calculation was chosen, since it takes into account that both the a-path and the b-path have to be interpreted. A power of at least 0.80 was assumed, and it was taken into account that percentile bootstrapping would be used. Expecting a medium effect size for both the a-path and the b-path, a sample size of N = 78 is necessary, whereas expecting a small effect size for both paths, a sample size of N = 558 is necessary [ 36 ] (for an explanation of the a-path and b-path, see Fig. 1 ). For this study, the number of subjects was therefore targeted at N = 558 participants. 
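The Raw TLX scoring (unweighted mean of the six subscales) and the interquartile-range outlier rule described above can be sketched as follows; function names and the illustrative values are ours, not the study's code:

```python
import statistics

def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Raw TLX total workload: unweighted mean of the six subscales (0-100),
    i.e., no pairwise-comparison weighting as in the full NASA-TLX."""
    return statistics.mean([mental, physical, temporal, performance, effort, frustration])

def flag_outliers(counts, k=2.5):
    """Return interruption counts lying more than k * IQR above the third
    quartile (the screening rule used before plausibility checking)."""
    q1, _, q3 = statistics.quantiles(counts, n=4)
    upper = q3 + k * (q3 - q1)
    return [c for c in counts if c > upper]

# Illustrative daily interruption counts; one extreme value gets flagged
daily_counts = [5, 8, 10, 12, 15, 14, 9, 7, 11, 200]
print(flag_outliers(daily_counts))
```

Note that flagged values were not automatically dropped in the study; they were checked against the stated occupation and free-text interruption descriptions before a decision on exclusion was made.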
However, due to dropouts and the multi-stage screening procedure to ensure the quality of the data, the final sample size of N = 492 fell just short of this target. For hypothesis 1, a mediation model was calculated. Mediation, or an indirect effect, occurs when a mediator (M) transmits the causal effect of a predictor (X) on a criterion (Y). Mediation is moderated when the indirect effect of X on Y through one or more mediators depends on a moderator (W); this can be called moderated mediation or, following more recent research, a conditional indirect effect [ 37, 38 ]. Such a conditional indirect effect model was calculated to answer hypothesis 2 ( Fig. 1 ). Both models were analyzed using the PROCESS procedure for R, version 4 [ 38 ], which uses ordinary least squares regression, yielding unstandardized coefficients for all effects. In order to calculate a more robust model independent of possible violations of normality and heteroscedasticity, bootstrapping with 5000 samples together with heteroscedasticity-consistent standard errors (HC3; [ 39 ]) was employed to compute the confidence intervals. Effects were deemed significant when the confidence interval did not include zero. The prerequisites for calculating mediation and conditional indirect effect models were checked, whereby the assumption of linearity was confirmed by visual inspection of the scatter plots after LOESS smoothing. To assess the mediation effect, the indirect relationship between frequency of work interruptions and subjective workload through perceived interruption overload was first calculated, along with the confidence interval (CI), using PROCESS Model 4. This first step includes an assessment of the signs and significance levels of the direct paths between frequency of work interruptions and perceived interruption overload and between frequency of work interruptions and subjective workload. 
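The study ran these models with PROCESS for R; the underlying percentile-bootstrap logic for the indirect effect a*b can be sketched in plain Python/NumPy (a minimal illustration with our own function names and simplified OLS paths, without the covariate and HC3 machinery PROCESS adds):

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y, covar=None):
    """OLS slope of y on x, optionally controlling for one covariate."""
    cols = [np.ones_like(x), x] if covar is None else [np.ones_like(x), x, covar]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta[1]

def bootstrap_indirect(x, m, y, n_boot=5000, alpha=0.05):
    """Percentile-bootstrap estimate and CI for the indirect effect a*b:
    a-path (X -> M) times b-path (M -> Y, controlling for X)."""
    n = len(x)
    ab = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample cases with replacement
        a = ols_slope(x[idx], m[idx])                 # a-path
        b = ols_slope(m[idx], y[idx], covar=x[idx])   # b-path given X
        ab[i] = a * b
    lo, hi = np.quantile(ab, [alpha / 2, 1 - alpha / 2])
    return ab.mean(), (lo, hi)                        # significant if CI excludes zero
```

As in the paper's criterion, the indirect effect is deemed significant when the percentile CI does not include zero.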
In line with the proposed theoretical framework, PROCESS Model 7 was used to estimate the moderating effect of primary work task complexity on the path between frequency of work interruptions and perceived interruption overload. To test for the presence of moderated mediation, the effect sizes of the conditional relationships were compared with the moderator one standard deviation below its mean (–1 SD), at its mean (M), and one SD above its mean (+1 SD).
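Because the interruption-overload scale was newly translated, its internal consistency was additionally verified with Cronbach's alpha (reported in the Results). The standard computation can be sketched as follows (a generic textbook formula, not the specific software the authors used):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' sum scores
    return k / (k - 1) * (1 - item_var / total_var)
```

Values approaching 1 indicate that the items covary strongly, i.e., they measure the construct consistently; perfectly redundant items yield exactly 1.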
Results Descriptive statistics and correlation analyses The participants stated that they were interrupted 25±29.42 times during their workday, of which 12±15.94 interruptions were due to emails and around 7±10.57 due to phone calls. The remaining interruptions were due to instant messages, meetings, and personal contacts ( Table 4 ). The subjects rated their perceived interruption overload during the past working day with M = 2.10±0.99. To verify the translation, internal consistency was checked: Cronbach's alpha was 0.94 for this scale, indicating very good internal consistency. The complexity of primary work tasks was rated with M = 3.44±0.10 (items were answered on a scale from not at all (1) to fully agree (5)), and subjective workload was rated with M = 48.15±15.06 (items were answered on a scale from low (0) to high (100)). The correlation analysis shows that the age of the participants was significantly negatively correlated with perceived interruption overload and significantly positively correlated with subjective workload. The complexity of primary work tasks, in turn, is significantly positively correlated with frequency of work interruptions, perceived interruption overload, and subjective workload. The frequency of work interruptions is significantly positively correlated with both perceived interruption overload and subjective workload. A detailed overview is given in Table 5 . Based on these results, age was included as a covariate in the further analyses. To check whether gender also had to be included as a covariate in the following models, t-tests for independent samples were calculated for all relevant variables. Based on the sample distribution, only female and male participants were compared. 
There was no statistically significant difference between female and male participants with regard to interruption frequency (t(487) = 0.808, p = 0.419), perceived interruption overload (t(487) = 0.169, p = 0.866), complexity of primary work tasks (t(487) = –1.841, p = 0.06), or subjective workload (t(487) = –0.357, p = 0.177). Accordingly, gender was not included in the further analyses. Mediating influence of the perception of work interruptions as overload A mediation model was calculated to analyze whether frequency of work interruptions predicts subjective workload and whether the direct path is mediated by the perception of work interruptions as overload. Based on the analysis above, age was included as a covariate in the model. An effect of frequency of work interruptions on subjective workload was observed (β= 0.164, p < 0.001). After entering the mediator into the model, frequency of work interruptions predicted perceived interruption overload significantly (β= 0.397, p < 0.001), which in turn predicted subjective workload significantly (β= 0.424, p < 0.001). The relationship between frequency of work interruptions and subjective workload is partially mediated by perceived interruption overload, indirect effect ab = 0.168, 95%-CI [0.126, 0.216] ( Table 6 ). Moderating influence of the complexity of primary work tasks A conditional indirect effect model was performed to analyze whether frequency of work interruptions predicts subjective workload, whether the direct path is mediated by perceived interruption overload, and further, whether the complexity of primary work tasks moderates the relationship between interruption frequency and perceived interruption overload. Based on the analysis above, age was included as a covariate in the model. 
The results show a positive, significant effect of the interaction term (frequency of interruptions × complexity of work tasks) ( b = 0.003, p = 0.01), with ΔR² = 0.007. Taking the conditional effects of the focal predictor into account, the analysis reveals that the relationship between frequency of work interruptions and perceived interruption overload is stronger when primary work tasks are more complex. Figure 2 illustrates the influence of the moderator, shown at the mean value (M) and at one standard deviation below and above the mean (±1 SD) of primary task complexity. Even with only a few work interruptions, a clear difference in perceived interruption overload between less complex (–1 SD) and more complex (+1 SD) primary tasks can be seen. This difference becomes even more pronounced with many interruptions. Thus, the moderator strengthens the positive relationship between interruption frequency and perceived interruption overload. However, the formal test of moderated mediation, which assesses the index of moderated mediation and the corresponding confidence intervals, did not reach significance. A conditional indirect effect can therefore not be demonstrated ( Table 7 ).
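The pick-a-point probing reported above (simple slopes at the moderator's mean and ±1 SD) can be sketched with a plain moderated regression; variable names and the closed-form probe are ours, simplified from what PROCESS computes:

```python
import numpy as np

def conditional_effects(x, w, m):
    """Fit m = b0 + b1*x + b2*w + b3*(x*w) by OLS and return the simple slope
    of x, i.e., b1 + b3*w0, probed at w0 = mean - 1 SD, mean, mean + 1 SD of w."""
    X = np.column_stack([np.ones_like(x), x, w, x * w])
    beta, *_ = np.linalg.lstsq(X, m, rcond=None)
    b1, b3 = beta[1], beta[3]
    probes = (w.mean() - w.std(), w.mean(), w.mean() + w.std())
    return {w0: b1 + b3 * w0 for w0 in probes}
```

With a positive interaction coefficient b3, the slope of interruption frequency grows with task complexity, which is exactly the pattern Figure 2 shows: a steeper overload line for the +1 SD (more complex) condition than for the –1 SD condition.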
Discussion Focusing on office workplaces, the present study aimed to answer the question of whether perceived interruption overload serves as a mediator between work interruption frequency and subjective workload and, further, whether the direct relationship between interruption frequency and perceived interruption overload is moderated by the complexity of the primary work tasks. Results indicate a significant partial mediation by perceived interruption overload and a significant moderation by work task complexity; however, no conditional indirect effect was measurable. The findings are discussed in more detail below. First, the mediation model is discussed. Following the authors [ 14, 30 ], perceived interruption overload was examined as a mediator, and partial mediation was shown. The mediation is partial because the direct relationship between interruption frequency and overall strain remains significant despite the addition of the mediator. Thus, perceived interruption overload, i.e., the individual evaluation of work interruptions, has a significant and strong influence on subjective workload. The first hypothesis can therefore be confirmed. It can be inferred that the individual evaluation of work interruptions is of crucial importance for the overall subjective strain. This result can be considered decisive when it comes to identifying interventions and measures for long-term healthy working. It is only partly a matter of eliminating interruptions in the work context; rather, it is a matter of identifying the factors that make the perception of work interruptions particularly severe. In other words, it is about finding out which factors and characteristics of a work interruption cause employees to perceive it as overwhelming. Previous studies have provided some evidence of such factors and characteristics, but it is very clear that mainly experimental studies have addressed this research topic [ 11, 12 ]. 
These have manipulated the modality of interruptions, the complexity and similarity of tasks, the timing of interruptions, and the resumption delay (e.g., [ 40–42 ]), whereas in quantitative field research a rather unidimensional view of interruptions can be observed, as mostly only the frequency of work interruptions was queried. With regard to the results of the experimental research, however, a number of other characteristics can be considered significant, although they still need additional validation in field research. The goal should be to use these results to identify such factors and characteristics on the one hand, and to develop appropriate interventions and health-related measures for dealing with incoming interruptions on the other. Previous intervention studies have focused primarily on reducing work interruptions, but not on how to deal with interruptions or what an interruption itself should look like. In addition to focusing on merely reducing interruptions, previous studies have also concentrated heavily on health occupations. A systematic literature review shows that of 36 identified intervention studies, 35 were conducted in healthcare settings, primarily in hospital workplaces; of these, 20 studies focused exclusively on interruption reduction. These findings are, however, not transferable to the context of the office workplace [ 11 ]. In summary, there is some evidence of successful interventions to reduce interruptions in medical and nursing work. The research field still has potential to be expanded, on the one hand to offer insight into how interruptions are perceived in the first place, and on the other hand to determine what an interruption might look like in order to cause less overload. Such findings must also be validated by further research in the context of office work, since the results of intervention measures in the healthcare sector are most likely not directly applicable. 
In the second part of the analysis, a conditional indirect effect model was calculated, which included the complexity of primary work tasks as a moderating variable in the mediation model. The results show that the complexity of primary work tasks strengthens the positive relationship between interruption frequency and perceived interruption overload. Accordingly, at the same interruption frequency, higher complexity of the primary work tasks leads to higher perceived interruption overload. The second hypothesis can therefore be confirmed, even though it must be noted that the effect size is very small and no conditional indirect effect is measurable (i.e., the complexity of the primary work tasks, as a moderator of the a-path, has no measurable effect on the overall strain on the working person). It should be noted that an existing small effect may have gone undetected because the required sample size was not met. With a power of at least 0.80 and percentile bootstrapping, a sample size of N = 558 would be required to test for a small effect [ 36 ]. Due to dropouts and the multistage screening procedure to ensure data quality, this target was missed with a final sample size of N = 492, meaning that small effects may not be detectable. Nevertheless, the results indicate that interruptions are perceived differently depending on characteristics of the interrupted task. The characteristics of the interrupted task form a group of previously studied moderators, with the complexity of the interrupted task being a key moderator. The findings are consistent with previous results suggesting that work interruptions during more complex primary tasks have a more negative effect on the interrupted person [ 23, 24 ].
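To make the analytic logic concrete, the moderated-mediation structure described above can be sketched in a few lines. This is an illustrative sketch on synthetic data, not the authors' analysis (which used bootstrapping on the survey sample); all variable values and coefficients below are hypothetical, and ordinary least squares stands in for the full estimation procedure.

```python
# Illustrative sketch (not the authors' analysis): a conditional indirect
# effect model estimated by ordinary least squares on synthetic data.
# X = interruption frequency, W = task complexity (moderator),
# M = perceived interruption overload, Y = subjective workload.
# The a-path is moderated by W; the indirect effect at a given w is
# (a1 + a3*w) * b. All variable names and numbers are hypothetical.

def ols(predictors, y):
    """Least squares via normal equations and Gaussian elimination.
    predictors: list of columns; an intercept column is prepended."""
    cols = [[1.0] * len(y)] + predictors
    k = len(cols)
    A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    rhs = [sum(u * yi for u, yi in zip(cols[i], y)) for i in range(k)]
    for p in range(k):                      # forward elimination with pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        rhs[p], rhs[piv] = rhs[piv], rhs[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            rhs[r] -= f * rhs[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):          # back substitution
        beta[p] = (rhs[p] - sum(A[p][c] * beta[c]
                                for c in range(p + 1, k))) / A[p][p]
    return beta                              # [intercept, slopes...]

X = [1, 2, 3, 4, 5, 6, 7, 8]
W = [0, 1, 0, 1, 0, 1, 0, 1]
XW = [x * w for x, w in zip(X, W)]
M = [2.0 * x + 0.5 * w + 0.3 * xw for x, w, xw in zip(X, W, XW)]  # moderated a-path
Y = [1.0 * x + 3.0 * m for x, m in zip(X, M)]                     # c' = 1, b = 3

_, a1, a2, a3 = ols([X, W, XW], M)       # a-path model with interaction
_, c_prime, b = ols([X, M], Y)           # b-path and direct effect
indirect_low = (a1 + a3 * 0) * b         # indirect effect at low complexity
indirect_high = (a1 + a3 * 1) * b        # indirect effect at high complexity
print(indirect_low, indirect_high)
```

With these synthetic coefficients, the indirect effect at high task complexity (W = 1) is larger than at low complexity (W = 0), mirroring the moderated a-path reported in the study.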
The previous studies in this branch of research can therefore be substantiated in the context of office work and even extended: it is not only the primary task actually interrupted at that moment that matters, but more generally the type of tasks that a person works on over the course of the working day. The results are particularly interesting when considering possible positive effects of interruptions. According to [ 25 ], interruptions during routine work may actually have positive effects because they allow employees to engage in activities that are important for emotional well-being, job satisfaction, and continued productivity. Moreover, it has been shown that distractions of attention caused by work interruptions provide variety and intellectual stimulation in monotonous activities and thus serve as a source of work enrichment [ 26 ]. These results suggest positive effects that were not investigated in the present study but could bring further clarity to the understanding of work interruptions and their handling in the context of office work in the future. Limitations Due to the design of the study, there are some limitations that should be taken into account when interpreting the results. One limitation is that this study examined only one occupational context and the perspective of one country; the transferability of the results to other countries and other occupational groups is therefore limited and requires further research. Indeed, as the results show, effects already differ within this one occupational group, which means that transferability to other occupational groups can be expected only for similar work tasks. Furthermore, because only one day was considered for measuring work interruptions, longitudinal effects and causal inferences between work interruptions and the (negative) outcomes are not possible.
Future studies attempting to model this causal chain should therefore use more advanced methodology, such as prospective designs and diary studies over a longer period of time. In addition, the method used required respondents to count their work interruptions themselves, which can lead to errors such as overlooking interruptions or inflating the count. Focusing on work interruptions may also cause them to be perceived differently than they normally would be if attention were not drawn to them. Moreover, the survey itself could be considered a work interruption, which is why the questionnaire was kept as short as possible so that the reported effects are unlikely to be caused by the extra effort associated with participation.
Conclusion In today’s dynamic workplaces, work interruptions are common and unavoidable, exacerbated by changes in the world of work. This study has highlighted the importance of the individual evaluation of work interruptions. It must be assumed that work interruptions do not always have the same effects and may have different consequences depending, among other things, on the person interrupted and the task interrupted. It is therefore crucial to consider work interruptions not only in terms of the possibility of reducing them, but also in terms of their nature and characteristics. In summary, the findings underline the importance of moving beyond a “reductionist” approach to work interruptions; rather, organizations need to focus on better managing interruptions in the workplace. However, this requires a better understanding of the effects of interruptions on employees and the circumstances in which they occur.
BACKGROUND: Research demonstrates that work interruptions are considered one of the most common work stressors. Understanding the mechanisms of work interruptions is therefore vital to reducing worker stress and maintaining performance. OBJECTIVE: The aim of this research is to investigate the influence of the frequency of work interruptions on subjective workload in the context of office work. Specifically, the mediating influence of interruption perception as well as the moderating influence of the complexity of the primary task are examined. METHOD: The work interruptions of 492 office workers in Germany were collected by means of a one-day diary study. A mediation model and a conditional indirect effect model were calculated to examine the influence of interruption frequency on subjective workload, mediated by the individual perception of these interruptions as well as moderated by the complexity of the primary work tasks. RESULTS: The analyses indicated a significant mediation and moderation. This implies that, on the one hand, the perception of work interruptions significantly mediates the relationship between the frequency of work interruptions and subjective workload. On the other hand, more complex primary work tasks seem to strengthen the positive relationship between interruption frequency and perceived interruption overload. CONCLUSION: The study underlines that work interruptions need to be considered in a much more differentiated way than is currently the case. Both in research and in terms of intervention measures in the work context, the various influencing factors need to be identified for an assessment of the effects on the working person to be possible.
Ethical considerations Participation was anonymous, all participants were adults, no intrusive measures were used, and participants were not deceived. A detailed explanation of the procedure is given in the methods section. Informed consent The data were collected anonymously. The participants consented to the use of the anonymized data for scientific purposes. A detailed explanation of the procedure is given in the methods section. Reporting guidelines Reporting is in accordance with the STROBE Statement for cross-sectional studies listed in the EQUATOR Network for Reporting Guidelines.
Acknowledgments The authors have no acknowledgments. Conflict of interest The authors declare that they have no conflict of interest. Funding This survey was conducted within the project WorkingAge, funded by the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement No. 826232. The analysis was conducted within the project AKzentE4.0, funded by the German Federal Ministry of Education and Research (BMBF) within the “The Future of Value Creation – Research on Production, Services and Work” program and managed by the Project Management Agency Karlsruhe (PTKA), grant 02L19C400.
Work.; 77(1):185-196
INTRODUCTION Multiple lines of evidence suggest that synaptic dysfunction is a key early pathological feature of Alzheimer’s disease (AD) and that it correlates with the emergence and progression of cognitive impairment [ 1–5 ]. Synaptic dysfunction includes dendrite abnormalities, enlarged presynaptic terminals and synaptic vesicles, as well as overall synapse loss [ 3, 6–9 ]. Synapse loss occurs in patients with mild cognitive impairment and progresses with disease severity [ 3, 5 ], concomitant with alterations in the levels and function of synaptic proteins [ 4 ]. Both events are presumably triggered by the pre- and postsynaptic accumulation of pathological forms of tau and amyloid-β [ 9–13 ], eventually leading to dysfunctional synapses [ 14–16 ]. Substantial heterogeneity in regional synaptic protein alterations and overall synapse loss was suggested early on [ 17 ]. In a recent review of 3D electron microscopy studies, associations between synapse and neuron loss differed between brain regions, suggesting varying synapse vulnerability [ 18 ]. Indeed, the hippocampus seems to be the most affected structure, while neocortical regions show synapse loss only at later disease stages and the entorhinal cortex presents little to no loss of synapses even at advanced stages [ 3, 19, 20 ]. A meta-analysis confirmed that presynaptic proteins were consistently more affected than postsynaptic proteins [ 21 ]. In this publication, we have sought to further characterize these changes in AD and provide an update of the literature published since the previous meta-analysis, with a focus on presynaptic proteins as the most strongly affected early synaptic markers. Thus, a systematic literature search of publications from 2015–2022 measuring proteins with function at the presynapse, as defined by SynGo ( https://www.syngoportal.org/ ) [ 22 ], in AD and control tissue was conducted, and a meta-analysis was performed on data from 22 studies.
Due to the paucity of data, individual subcortical regions could not be distinguished, and so region-specific results are limited to cortical structures. The analysis here provides further support that presynaptic protein changes in AD are highly heterogeneous. While there was an overall reduction in presynaptic proteins in AD patients, areas such as temporal and frontal cortex were more severely affected than others. Synaptic proteins and functional categories also showed heterogeneity.
MATERIALS AND METHODS Search strategy Medline, Embase, and PubMed databases were searched for articles reporting brain presynaptic protein levels in AD patients and animal models compared to healthy controls. Here, only the results on human patients are reported. Databases were searched for the following keywords in abstract and title: “presynaptic marker” or “presynaptic protein” or “synaptic marker” or “synaptic protein” or “proteome”, combined with “AD” or “Alzheimer”. The search was restricted to publications since 2015 and filters were used to remove non-English publications, reviews, and conference abstracts. Database search in February 2022 resulted in 2,565 matches ( Fig. 1 ). The systematic review tool Rayyan ( https://www.rayyan.ai ) [ 23 ] was used for screening. Duplicates from Medline and Embase were removed automatically ( n = 769) and further duplicates with PubMed recognized by Rayyan were removed manually ( n = 864). Titles and abstracts were screened for eligibility in Rayyan according to predefined inclusion and exclusion criteria. Inclusion criteria were comparison of an AD population to healthy controls, and quantification of proteins in brain with functions at the presynapse, as defined by SynGo ( https://www.syngoportal.org/ ) [ 22 ]. Reviews, conference abstracts, and publications not including an AD population, lacking controls, or only measuring gene or mRNA expression were excluded. Full texts for eligible studies were retrieved in PDF format; full texts for eight studies were inaccessible and not provided by authors upon request. Relevant studies were searched for cross-references, resulting in the identification of a further five studies that were then included. The database search was repeated in August 2022, with no additional studies on patient cohorts being identified. For patient analyses, only reports quantifying presynaptic protein levels in postmortem brain tissue from AD patients and healthy controls were included.
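As a quick consistency check on the screening numbers above (and those reported later in the Results), the record flow can be tallied directly; this is simple bookkeeping, not part of the authors' pipeline:

```python
# Bookkeeping sketch of the record flow described in the text.
database_hits = 2565          # Medline + Embase + PubMed, February 2022
cross_references = 5          # additional studies found via cross-referencing
dup_auto = 769                # Medline/Embase duplicates removed automatically
dup_manual = 864              # PubMed duplicates removed manually in Rayyan

screened = database_hits + cross_references - dup_auto - dup_manual
excluded_title_abstract = 635
no_full_text = 8
full_text_assessed = screened - excluded_title_abstract - no_full_text
print(screened, full_text_assessed)   # matches the 937 and 294 in the Results
```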
Outcomes not suitable for this analysis, such as postsynaptic protein quantification or quantification of proteins in tissues other than brain, were excluded. Global proteomic analyses of whole brain tissue were not considered, while those on synaptosomal or synaptic-enriched fractions were included. Studies on diseases other than AD and those lacking healthy controls were not considered. If summary statistics of presynaptic protein expression in one or both groups were not available and could not be obtained from authors, such studies were removed. Data extraction Twenty-two publications fulfilled all inclusion criteria [ 24–45 ] ( Fig. 1 ). Study characteristics were extracted, including proteins measured, method of quantification, brain area, tissue source, sample sizes, age, gender distribution, and postmortem interval ( Supplementary Table 2 ). Not all studies reported all relevant study characteristics ( Supplementary Table 1 ). Where individual group demographics could not be extracted, overall sample characteristics were used. Numerical data on protein expression were extracted from full texts or, if not available there, from figures via WebPlotDigitizer ( https://automeris.io/WebPlotDigitizer/ ) [ 46 ] ( Supplementary Table 1 ). Missing data were requested from authors and included if provided. Note that Hesse et al. [ 32 ] pooled their samples before quantification according to group status (AD or control) and APOE genotype ( APOE ɛ 3/ ɛ 3 or APOE ɛ 3/ ɛ 4), which resulted in two measurements per group for all proteins and brain structures. For the meta-analysis, n for each group, AD and control, was thus two. Data analysis Most studies included here reported multiple effect sizes, such as analysis of several proteins or brain areas in multiple groups or application of various methods. This effect size multiplicity can create challenges for meta-analyses, as the same subjects contribute to multiple effect sizes.
Several strategies for addressing this were applied depending on the source of multiplicity, including selecting one effect size based on decision rules, averaging effect sizes, and conducting multilevel analyses with nested effect sizes (see [ 47 ] for an overview of strategies). Several studies analyzed multiple groups or did not separate their sample into dichotomous categories of AD and non-AD/controls. In such cases, decision rules were applied to select the relevant outcomes for analysis. For studies in which neurodegenerative diseases other than AD were also included, only control and AD groups were extracted for analysis. When subjects were grouped according to Braak stages, the group with the lowest stage (maximum Braak II) was considered as control and the cohort with the most severe Braak stage reported (minimum Braak IV) was included as the AD group. For studies reporting multiple AD groups, such as familial and sporadic AD, the group with demographic characteristics best matching the remaining study cohorts was selected. Buchanan et al. [ 26 ] reported results for their full sample as well as after removing five controls with overt non-AD-related pathologies. Here, only the latter results were considered. In cases reporting different methods to quantify the same proteins in the same sample, only one method was included for analysis. For instance, in Bereczki et al. [ 24 ] enzyme-linked immunosorbent assay (ELISA) and western blotting were applied for protein quantification. We selected the ELISA data as control and AD group means and standard deviations were available in the text. By contrast, Kurbatskaya et al. [ 35 ] reported protein levels quantified by western blotting using two different loading controls, neuron-specific enolase (NSE) and β-actin. Only expression data using NSE were included for analysis here.
While Nyarko and colleagues [ 37 ] reported expression of solute carrier family 18 member A2 (SLC18A2) in different glycosylation states, we chose total SLC18A2 expression per subject. For several reports, expression data were aggregated by computing averages of the effect sizes with inverse-variance weighting. Our main goal was to estimate the brain-wide effect of AD on presynaptic protein levels as well as to investigate overall effects for main brain areas and proteins where applicable. Therefore, it was deemed acceptable to aggregate data to one overall effect size per area or protein where expression was analyzed for subregions and protein isoforms. Haytural and colleagues [ 31 ] measured five proteins in ten hippocampal subregions, which would result in 50 individual effect sizes. This inflation of the number of effect sizes was reduced by combining effect sizes for each protein within the dentate gyrus and cornu ammonis . Similarly, Hoshi et al. [ 33 ] and Yamazaki et al. [ 45 ] analyzed subregions in temporal and frontal cortex, respectively, and their individual effect sizes were combined into a single effect size. Ramos-Miguel et al. [ 39 ] measured long and short splice variants of syntaxin-binding protein 1 (STXBP1), and these were aggregated to one overall effect size for STXBP1. The same approach was taken for the proteomic analyses of Carlyle et al. [ 27 ] and Hesse et al. [ 32 ], where multiple isoforms were reported. A sensitivity analysis was conducted for several of these approaches to determine whether they would alter the meta-analysis results; it revealed that they did not. Finally, remaining effect size multiplicity was due to analyses of several proteins and/or several brain regions within one study, and this was accounted for by conducting multilevel meta-analyses where multiple effect sizes were nested within one study. This approach has the advantage of allowing estimation of variance of effect sizes within (Level 2) and between studies (Level 3) [ 48 ].
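The inverse-variance weighted aggregation used to collapse subregion or isoform effect sizes into one effect per protein can be sketched as follows. The study itself performed this step in R; the sketch below is a minimal Python equivalent, and the numbers are hypothetical:

```python
# Minimal sketch of inverse-variance weighted aggregation: several effect
# sizes (e.g., one per hippocampal subregion) are combined into a single
# effect size per protein. Values below are hypothetical, not from the paper.

def inverse_variance_average(effects, variances):
    """Weighted mean with weights 1/variance, plus the variance of the mean."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * g for w, g in zip(weights, effects)) / total
    return pooled, 1.0 / total  # (combined effect, its variance)

# Hypothetical Hedges' g values for one protein in three subregions
g_sub = [-1.2, -0.8, -1.0]
v_sub = [0.10, 0.20, 0.10]
g_combined, v_combined = inverse_variance_average(g_sub, v_sub)
print(round(g_combined, 3), round(v_combined, 3))
```

Note that the more precise subregions (smaller variance) pull the combined estimate toward their values, and the combined variance is smaller than any single subregion's variance.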
Primary analysis The primary analysis was performed with the metafor [ 49 ], meta [ 50 ], and altmeta [ 51 ] packages in RStudio [ 52 ]. The standardized mean difference adjusted for small sample sizes (Hedges’ g) was used as a measure of effect size [ 53 ]. When more than one protein or more than one brain structure were investigated, each effect size was added for analysis individually. A multilevel meta-analysis was conducted on all available effect sizes for an overall effect of AD on presynaptic proteins across the whole brain. As substantial between-study heterogeneity had been predicted, a random-effects approach was applied for this meta-analysis [ 54 ]. Heterogeneity between studies was calculated using the Q-test and I 2 statistic [ 55, 56 ]. Secondary analysis For secondary analyses, effects for different brain areas, individual proteins, and presynaptic functions were considered. A multilevel random-effects meta-analysis was performed if five or more independent studies were available, as recommended previously [ 57 ]. Despite reduced statistical power, some relevant exploratory analyses were also performed on brain regions, proteins, and functional groups with lower study numbers. Multilevel random-effects meta-analyses were performed on cortical regions as subcortical regions were only reported in one study. For analysis of specific presynaptic functions, proteins were annotated with their respective functional term extracted from SynGo [ 22 ]. Function annotations included regulation of presynaptic cytosolic calcium levels, regulation of presynaptic membrane potential, presynaptic endocytosis, synaptic vesicle cycle, presynaptic dense core vesicle exocytosis, neurotransmitter uptake, neurotransmitter reuptake, presynaptic chaperone-mediated protein folding, and presynaptic signaling pathways.
As many proteins have multiple presynaptic functions and would therefore contribute to the analysis several times, separate multilevel random-effects meta-analyses were performed for each functional term if data from five or more independent studies were available. Sensitivity analysis Effect sizes within studies might be correlated, especially since they stem from the same subjects. The extent of this correlation was not known; therefore, a sensitivity analysis was performed by running multilevel meta-analyses with values for within-study effect size correlation between 0 and 0.99. In the primary analysis, one study had a remarkably high negative effect size; a repeat analysis was performed after outlier removal. Additionally, several reports came from the same research team; to account for this, a multilevel meta-analysis was conducted where individual effect sizes were nested within ‘research team’ instead of within ‘individual study’. Publication bias When using the standardized mean difference as the effect size proxy, effect size and its standard error are not independent. This would cause funnel plot distortion when plotting effect size against standard error [ 58, 59 ]. Therefore, to assess publication bias, funnel plots were generated by displaying effect size against sample size as a measure of precision, as recommended in Zwetsloot et al. [ 59 ]. Furthermore, the formula suggested by Pustejovsky and Rogers [ 58 ] to conduct Egger’s test using a modified version of the standard error was applied to reveal funnel plot asymmetry. Quality assessment The 22 included studies were examined according to case and control definition, comparability of groups, methodology, and outcome reporting. For case and control definition, a maximum of two points each could be achieved if definitions were based on both clinical and neuropathological assessment. One point was given if at least neuropathology was assessed and none when information was lacking.
A maximum of two points was awarded if criteria were implemented consistently across all subjects including AD diagnostic criteria or neuropathology scales as well as exclusion criteria such as absence of other diseases. One point was given if at least exclusion criteria were consistently used. For group comparability 0–2 points were awarded depending on how well-matched AD and control groups were as well as one point if groups were overall comparable other than the presence or absence of AD. One point each was awarded for appropriate methods of quantification, appropriate statistical analysis, blinding of samples for protein quantification and reporting of data in sufficient detail to allow extraction of group mean and standard deviation from text or figures. Scores were visualized in a color coded chart and percentage of points achieved was calculated.
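Pulling the Methods together, the core effect-size computation and pooling can be sketched as a simplified two-level random-effects meta-analysis: Hedges' g with its small-sample correction, then DerSimonian-Laird pooling with the Q-test and I 2. This is not the three-level model used in the paper (which nests effect sizes within studies and was fitted with metafor in R), and all summary statistics below are hypothetical.

```python
import math

# Illustrative sketch of the core computations named in the Methods:
# Hedges' g per study, then DerSimonian-Laird random-effects pooling
# with the Q-test and I^2. Simplified two-level model, not the paper's
# three-level analysis; the per-study numbers below are hypothetical.

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with small-sample correction J."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * df - 1)
    v_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return j * d, j**2 * v_d  # (g, variance of g)

def random_effects_pool(gs, vs):
    """DerSimonian-Laird pooling; returns pooled g, Q, and I^2 (%)."""
    w = [1 / v for v in vs]
    g_fixed = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)
    q = sum(wi * (gi - g_fixed)**2 for wi, gi in zip(w, gs))
    df = len(gs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_re = [1 / (v + tau2) for v in vs]
    g_re = sum(wi * gi for wi, gi in zip(w_re, gs)) / sum(w_re)
    return g_re, q, i2

# Hypothetical per-study summaries: (mean_AD, sd_AD, n_AD,
#                                    mean_ctrl, sd_ctrl, n_ctrl)
studies = [(8.0, 2.0, 20, 10.0, 2.0, 20),
           (7.0, 3.0, 15, 10.5, 3.0, 15),
           (9.0, 2.5, 30, 10.0, 2.5, 30)]
effects = [hedges_g(*s) for s in studies]
g_pooled, q_stat, i2 = random_effects_pool([g for g, _ in effects],
                                           [v for _, v in effects])
print(round(g_pooled, 2), round(q_stat, 2), round(i2, 1))
```

Each hypothetical study has lower means in the AD group, so every g is negative and the pooled estimate is a decrease, as in the primary analysis of the paper.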
RESULTS To determine changes in protein levels at the presynapse in AD, databases were searched to identify publications quantifying presynaptic proteins in brain samples from AD patients and healthy controls, and a meta-analysis was conducted on studies matching inclusion criteria. Database searches on PubMed, Medline, and Embase returned 2,565 records ( Fig. 1 ). An additional five articles were identified through cross-referencing. After removing duplicates from search results, 937 records remained for title and abstract screening; 635 articles did not meet inclusion criteria and a further eight articles had to be excluded as no full text could be retrieved. The full texts of the remaining 294 articles were assessed for eligibility. For human studies, 22 met all inclusion criteria and were selected for meta-analysis. Overall, presynaptic protein measurements from 17 different brain areas and 223 individual proteins were included ( Supplementary Table 2 ). One publication analyzed subcortical areas and the remaining 21 studies included only cortical structures. Western blotting was most frequently used for protein quantification, followed by immunohistochemistry and ELISA. Three studies used mass spectrometry-based approaches including IP-MS (immunoprecipitation mass spectrometry), LC-MS/MS (liquid chromatography tandem mass spectrometry), and LC-MS 3 (liquid chromatography tandem mass spectrometry cubed). The latter two were conducted on synaptoneurosomal and synaptic-enriched fractions, respectively. When considering all included publications, over 400 control and AD samples were analyzed. However, some studies acquired their samples from the same brain banks; therefore, the subjects contributing their brain tissue may overlap across studies. Primary analysis For the primary analysis, a random-effects meta-analysis of the 22 human studies was performed including all individual proteins and brain regions.
As most studies provided more than one presynaptic protein measurement, three-level analysis was performed with effect size nested within studies. Meta-analysis showed a significant decrease of presynaptic proteins in AD subjects compared to healthy controls ( Fig. 2 ; effect size: –1.01; 95% Confidence Interval (CI): –1.55, –0.47; p < 0.001). Heterogeneity was very high, with an I 2 of 90.55%. However, intra-study heterogeneity was low (Level 2 I 2 : 5.44%) and most of the overall heterogeneity came from between-study variation (Level 3 I 2 : 85.11%). Although the magnitude of effect sizes varied, all studies apart from one reported a decrease in levels of presynaptic protein in AD patients. Sensitivity analysis As it was not known whether effect sizes within studies were correlated and to what extent, a sensitivity analysis with multiple values for effect size correlation was conducted. When using values between 0.1 and 0.99 for intra-study effect size correlation, the overall result of the meta-analysis varied between –1.01 and –1.17 but the effect remained significant for all analyses ( p < 0.001 for all) ( Supplementary Table 3 ). A sensitivity analysis was conducted to account for multiple studies from the same research team, whereby instead of clustering effect sizes at the study level, they were clustered within publications by the same research teams. The outcome was similar and showed a significant decrease of presynaptic proteins in AD (effect size: –1.02; 95% CI: –1.64, –0.4; p < 0.001) ( Supplementary Table 3 ). Outlier removal When removing the study by Jia et al. [ 34 ] with a very large negative effect size (see Fig. 2 ), the result indicated a smaller overall decrease of presynaptic proteins in AD, but the effect remained significant (effect size: –0.72; 95% CI: –0.93, –0.52; p < 0.001) ( Supplementary Table 3 ). 
The heterogeneity was much lower and was made up of similar amounts of within- and between-study heterogeneity (Total I 2 : 58.73; Level 2 I 2 : 23.97%; Level 3 I 2 : 34.75%). This indicates that in the remaining 21 studies heterogeneity was moderate on all levels and that the outlier study not only affected the overall result but also added substantial inter-study heterogeneity. Quality assessment and publication bias Overall, nine categories were defined and rated for each publication according to a scale from 0–2 (see Methods). Most studies achieved a moderate to high score in all categories ( Fig. 3 A). However, only three studies reported that outcome assessors were blinded to group status. Overall scores were moderate to high for all studies with no publication receiving less than 60%. No study was excluded due to low quality. Funnel plots including all effects sizes from each study as well as one aggregated effect size per study were generated and assessed for publication bias ( Fig. 3 B, C). Neither showed asymmetry on visual inspection and this was confirmed by the linear regression analysis for funnel plot asymmetry suggesting absence of publication bias (all effect sizes: p = 0.12; aggregated data: p = 0.34). Global changes in presynaptic proteins are region-specific To determine whether presynaptic protein loss in AD was region-specific, a separate meta-analysis was conducted for available cortical areas. Data from more than five independent datasets was only available for frontal and temporal cortex. Frontal cortex included data on 19 proteins measured across nine studies. In temporal cortex 162 proteins were quantified in six studies. Presynaptic protein loss in AD compared to healthy subjects was higher in the temporal cortex ( Fig. 4 ; effect size: –1.04; 95% CI: –1.19, –0.88; p < 0.001) than in frontal cortex ( Fig. 4 ; effect size: –0.75; 95% CI: –1.05, –0.45; p < 0.001). 
Although the magnitude of protein loss varied, especially in the frontal cortex, all reports indicated a decrease of presynaptic protein levels in AD. In the temporal cortex, most reports (five out of six) showed a prominent decrease of protein levels in AD of one standardized mean difference or more relative to controls. Heterogeneity in the temporal cortex was not significant. Meanwhile, heterogeneity remained high in the frontal cortex analysis (Total I 2 : 86%), which was largely due to within-study variation (Level 2 I 2 : 80.93%; Level 3 I 2 : 5.07%). We next performed an exploratory meta-analysis for the remaining cortical structures despite the low number of reports. The magnitude of presynaptic protein loss was similar to temporal cortex in parietal cortex ( Table 1 ; effect size: –0.92; 95% CI: –1.65, –0.19; p = 0.014) and cingulate gyrus ( Table 1 ; effect size: –1.07; 95% CI: –1.67, –0.47; p < 0.001). Data were available from three studies each, including 209 proteins measured for parietal cortex and four proteins measured for cingulate gyrus. Heterogeneity was not significant in cingulate gyrus ( p = 0.09). In parietal cortex, heterogeneity was low within studies (Level 2 I 2 : 15.18%) and moderate between studies (Level 3 I 2 : 71.66%). Meanwhile, effects in entorhinal cortex, occipital cortex, and hippocampal formation (HPF, including cornu ammonis and dentate gyrus) were not significant ( Table 1 ). Reduced levels of presynaptic proteins in AD cohorts are protein-specific To reveal whether there are protein-specific effects, proteins and protein families with effect sizes from at least five independent studies were assessed. Synaptosome associated proteins (SNAPs) were quantified in five brain areas across seven studies. Meta-analysis confirmed a significant decrease in SNAP proteins ( Supplementary Figure 1 ; effect size: –0.90; 95% CI: –1.4, –0.4; p < 0.001). The overall effect was strongest for SNAP25 alone ( Fig.
5 ; effect size: –1.06; 95% CI: –1.49, –0.63; p < 0.001). Both analyses showed moderate heterogeneity with I 2 of 66% and 52.46% respectively. However, for SNAP25 heterogeneity was very similar within- and between studies (Level 2 I 2 : 25.01%; Level 3 I 2 : 27.45%) whereas in the analysis of all SNAP proteins heterogeneity was mainly due to between-study variation (Level 2 I 2 : 13.55%; Level 3 I 2 : 51.32%). All but one report [ 32 ] showed a moderate to large decrease of SNAP25 levels in AD patients. The syntaxin (STX) family showed no significant loss in AD ( Supplementary Figure 2 ). Included in this analysis were six studies measuring syntaxin proteins in five areas. Syntaxin 1 (STX1) was the most frequently analyzed protein in this family and was mentioned in five publications across five brain regions. Syntaxin 1, including isoforms STX1A and STX1B also showed no significant loss in AD subjects compared to controls ( Fig. 6 ). Synaptotagmins (SYTs) were measured in five publications, but no overall change was found ( Table 1 ). The most frequently analyzed presynaptic protein was synaptophysin (SYP) with data available from nine studies. Synaptophysin was measured in seven cortical and five subcortical regions. Meta-analysis confirmed an overall decrease of SYP ( Fig. 7 : effect size: –0.76; 95% CI: –1.11, –0.41; p < 0.001). Overall heterogeneity was moderate due to between-study heterogeneity (Total I 2 : 54%, Level 2 I 2 : 0.74%, Level 3 I 2 : 53.36%) and only one report showed no decrease of SYP levels in AD patients. For exploratory analysis, protein families with low study numbers were also analyzed but none showed significant overall effects ( Table 1 ). Function-specific changes in presynaptic proteins Proteins were grouped according to their presynaptic function and separate meta-analyses were performed. When analyzing specific presynaptic functions, proteins involved in the synaptic vesicle cycle showed the strongest decrease in AD ( Fig. 
8A; effect size: –0.98; 95% CI: –1.51, –0.45; p < 0.001). This functional group also contained the largest number of measurements, with 168 proteins in 12 areas from 20 publications. Heterogeneity was high and stemmed mainly from inter-study variation (Total I²: 89.38%; Level 2 I²: 6.62%; Level 3 I²: 82.76%). While most studies reported decreases in protein levels in AD, a few showed barely any difference from controls. Only two other functional groups showed significantly reduced levels in AD: dense core vesicle (DCV) exocytosis (Fig. 8B; effect size: –0.90; 95% CI: –1.27, –0.53; p < 0.001) and neurotransmitter reuptake (Fig. 8C; effect size: –0.31; 95% CI: –0.62, 0; p = 0.05). DCV exocytosis had low within-study and moderate between-study heterogeneity (Total I²: 61.88%; Level 2 I²: 10.19%; Level 3 I²: 51.69%), and all reports indicated a moderate to large decrease of protein levels in AD. For neurotransmitter reuptake, heterogeneity was moderate and due to intra-study variation (Total I²: 51.75%; Level 2 I²: 51.75%; Level 3 I²: 0%). While a small to moderate decrease of protein expression in AD was most frequently reported, only one study [33] indicated a large loss of proteins compared to controls. Sixteen proteins involved in DCV exocytosis were measured in five brain regions and reported in nine publications; data for neurotransmitter reuptake came from five publications measuring six proteins in four brain areas. The largest functional group comprised proteins involved in the synaptic vesicle (SV) cycle; individual functions within this group of proteins were also analyzed. Here, proteins involved in synaptic vesicle exocytosis were highly represented, with data from 62 proteins in 14 brain areas across 19 studies.
Meta-analysis yielded an overall loss in AD subjects (Table 1; effect size: –0.86; 95% CI: –1.16, –0.56; p < 0.001) and moderate heterogeneity, due to low within-study and moderate between-study heterogeneity (Total I²: 72.88%; Level 2 I²: 22.37%; Level 3 I²: 50.51%). Twenty-one proteins regulating the synaptic vesicle cycle were quantified in six brain regions across six publications. There was a small but reliable overall loss of proteins in this group in AD (Table 1; effect size: –0.44; 95% CI: –0.82, –0.05; p = 0.03). Heterogeneity was greatest within studies but overall moderate (Total I²: 60.11%; Level 2 I²: 41.36%; Level 3 I²: 18.75%). No other functional subgroup showed significant overall effects (Table 1). Exploratory analysis was also conducted where study numbers were low, and the results are presented in Table 1.
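The standardized mean differences and I² statistics reported above can be illustrated with a minimal sketch. The paper fits a three-level random-effects model; as a simplified stand-in, the sketch below computes Hedges' g for a single protein comparison and then pools hypothetical study-level effects with the classical DerSimonian-Laird estimator, reporting one overall I². All input numbers are invented for illustration and are not taken from the studies analyzed here.

```python
import math

def hedges_g(m_ad, sd_ad, n_ad, m_ctrl, sd_ctrl, n_ctrl):
    """Standardized mean difference (AD minus control) with small-sample correction."""
    sp = math.sqrt(((n_ad - 1) * sd_ad**2 + (n_ctrl - 1) * sd_ctrl**2)
                   / (n_ad + n_ctrl - 2))        # pooled SD of the two groups
    d = (m_ad - m_ctrl) / sp                     # Cohen's d
    return (1 - 3 / (4 * (n_ad + n_ctrl) - 9)) * d  # Hedges' correction

def dl_pool(effects, variances):
    """DerSimonian-Laird random-effects pool; returns (pooled, 95% CI, I2 in %)."""
    w = [1 / v for v in variances]               # fixed-effect weights
    sw = sum(w)
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi**2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = [1 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Invented immunoblot means/SDs: AD below control yields a negative g,
# matching the sign convention of the effect sizes reported in the text.
g = hedges_g(m_ad=0.6, sd_ad=0.2, n_ad=10, m_ctrl=1.0, sd_ctrl=0.25, n_ctrl=10)

# Pool five invented study-level SMDs with equal sampling variance 0.05.
pooled, ci, i2 = dl_pool([-1.1, -0.9, -0.4, -1.3, -0.7], [0.05] * 5)
```

With equal sampling variances the random-effects pool reduces to the simple mean of the effects, while Q still drives the I² estimate.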
DISCUSSION

Twenty-two publications on presynaptic protein changes in AD patients compared to healthy controls were scrutinized here. Our meta-analysis showed an overall loss of presynaptic proteins in AD. The most prominent protein loss occurred in the frontal and temporal cortices. Individual proteins that were highly affected were SNAP25 and synaptophysin, both showing a strong decrease in AD. On a functional level, the most significant decline was observed in proteins involved in exocytosis of dense core vesicles and synaptic vesicles. No evidence for publication bias was observed, and all publications were of moderate to high quality. This meta-analysis therefore provides a critical update on the evidence for disruptions of the presynaptic machinery in AD. Two previous publications have reviewed changes in presynaptic proteins in AD compared to healthy controls: a review by Honer and colleagues [60] and a meta-analysis of synapse counts and synaptic proteins by de Wilde et al. [21]. This latter analysis included 83 studies quantifying synaptic proteins, a large proportion of which reported levels of at least one presynaptic marker, resulting in a much higher number of studies contributing to the analysis. The primary analysis included all available protein measurements in all brain regions and revealed a significant loss of presynaptic proteins in AD. Very high between-study heterogeneity was largely due to one study; omission of the data from Jia et al. [34] did not affect the overall outcome but highlighted the low heterogeneity between and within the remaining studies included in this meta-analysis. The overall lowering of presynaptically expressed proteins in cortical structures therefore confirms the preceding meta-analysis of de Wilde and colleagues and further highlights that this decline expresses protein and brain region specificity [21, 60].
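The Level 2 / Level 3 I² figures quoted throughout the Results split total heterogeneity into within-study and between-study shares, as in Cheung's three-level framework. A minimal sketch of that decomposition follows; the variance components are chosen purely for illustration (they are not estimates from the paper) and are scaled so the shares reproduce the frontal-cortex figures quoted above.

```python
def multilevel_i2(tau2_within, tau2_between, typical_sampling_var):
    """Percentage of total variance attributable to within-study (Level 2)
    and between-study (Level 3) heterogeneity in a three-level model."""
    total = tau2_within + tau2_between + typical_sampling_var
    return 100 * tau2_within / total, 100 * tau2_between / total

# Illustrative variance components summing to 1.0, so the shares match the
# frontal-cortex report (Level 2: 80.93%, Level 3: 5.07%).
level2, level3 = multilevel_i2(0.8093, 0.0507, 0.1400)
```

The remaining share (here about 14%) corresponds to sampling error, which is why Level 2 and Level 3 I² need not sum to the total I².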
As for regional specificity, it appears anomalous that neither the entorhinal cortex nor the hippocampus was confirmed as expressing lower levels of presynaptic proteins, despite a wide range of studies providing compelling evidence for protein pathology at early onset that increases in severity during late-stage AD [61, 62]. This lack of effect is likely due to the low number of studies and their high heterogeneity, since previous reviews reported some of the strongest effects for these structures based on much higher numbers of reports [21, 60]. Alternative explanations may concern the method of protein quantification, the age and disease severity of the study cohorts, and postmortem intervals. An alternative approach would have been to include older publications predating 2015 in order to provide a more complete analysis of the evidence available to date; however, this would have been against the set limits of this approach, and a more complete analysis for all proteins would then have had to be pursued. Suffice it to say that the lack of significant synaptic protein loss in the hippocampus in this study is due to the small study number and is considered an anomaly of our analysis. In general, some of the discrepancies between these older meta-analyses/reviews and this work are due to the much richer pool of literature included in their analyses. Nevertheless, the global effects in terms of region and protein specificity were very similar and therefore seem robust and reproducible. Following on from the primary analysis, several separate meta-analyses were performed to explore protein-specific changes and to determine which presynaptic functions are most affected. SNAP25 was the most affected protein and showed a consistent decline across studies.
Apart from some individual proteins engaged in presynaptic structural and transmitter release functions whose levels were lower in AD patients, the SNAP protein family was the only protein family significantly affected by the disease. This is intriguing given that other members of the SNARE complex, including VAMP and syntaxin, which are strongly involved in the docking of vesicles and fusion of the vesicular membrane enabling transmitter release, were not reduced. In some cases (e.g., VAMP) this may be related to the high heterogeneity between studies, warranting further examination; for others the mechanism remains to be explained. When categorized according to presynaptic function, however, proteins involved in the cycling of synaptic vesicles also presented with an overall lowering of levels. While this supports our contention that SNARE proteins might be more globally affected by AD pathology, not all elements of vesicle cycling seem equally sensitive to the disease. Most strongly reduced were proteins complexing for exocytosis, such as SNARE proteins, complexins, and synaptotagmin, but also those engaged in vesicle regulation, like synapsins and synaptogyrins. However, more studies are needed to confirm these data. It is difficult to determine to what extent the observed decrease in presynaptic protein levels simply represents global neuronal loss and/or genuine synapse loss. Scheff and colleagues [41] used electron microscopy to quantify synapse numbers and, while a global loss of synapses was confirmed in the posterior cingulate cortex, there was additional protein loss at surviving synapses. Although based on only two presynaptic proteins, synaptophysin and synapsin-1, the difference between AD and controls was larger than that for synapse numbers. Also, there was no significant synapse loss in patients with mild cognitive impairment compared to controls, whereas protein loss was of similar significance to that in the AD cohort.
When combining synaptic protein measurements with additional technical approaches (measurement of the glutamate transporter VGLUT1, selective axonal labelling), Poirel et al. [38] and Haytural et al. [31] found that the two were not affected equally. This suggests there may be protein loss that is not simply due to synapse loss, as both processes should otherwise be affected in similar ratios. While this may suggest that the presynaptic protein loss could be independent of gross synapse loss, this is difficult to ascertain, especially since no publications were available that directly compared synapse numbers to protein quantity.

Limitations

The current meta-analysis includes a number of potential confounders. 1) Uncertainty about how strongly effect sizes within one study correlated with each other. To address this, sensitivity analysis with different values for effect size correlation was performed; the direction and magnitude of the overall effect remained very similar and was significant across the whole range of values. We therefore take this as evidence that variable effect size correlations are not a critical limit to the viability of our data and do not play a major role in our subgroup analyses. 2) As multiple publications obtained samples from the same brain banks, there may be additional effect size correlations due to the same subjects being analyzed in multiple studies. These could not be confirmed, as no sample codes were given in the respective publications, leaving no information on sample overlap. 3) It is not known how far the use of different methods for protein quantification has influenced the analysis. Even among studies using the same analysis method (for example, immunoblotting), differences in loading control markers were frequently observed. Moreover, markers for housekeeping proteins varied between methods, which may have affected outcomes.
Likewise, the effects of age and of differences in postmortem interval between groups within each study and between studies may also affect the findings. These factors could not be accounted for here, since epidemiological detail (comorbidities, cause of death) and treatment status (symptomatic AD medications, others) were incompletely reported for each study. However, most publications reported that AD and control subjects were free of overt non-AD-related neuropathology or psychiatric disorders. 4) Caution needs to be exercised in generalizing these data to all areas of the brain. Extrapolation from the cortex (almost exclusively studied here) to subcortical regions may be particularly problematic, although Yamazaki et al. [45] also found a decrease in synaptic proteins (both pre- and postsynaptic) in most of the subcortical regions of AD patients that were scrutinized. 5) Protein-specific effects should also be interpreted with caution, as sufficient data from independent studies were available for only a very small subset of proteins, which makes it likely that effects of less frequently studied proteins are missed. These issues need to be taken into account when interpreting the findings of the analyses in this study.

Conclusions

Collectively, our data confirm and extend previous meta-analyses/reviews on the level and distribution of synaptic proteins in postmortem tissue from confirmed advanced-stage AD patients. The majority of the cortex presents with a lowering of presynaptic proteins prior to and independent of frank synapse loss. More fine-grained analyses of the affected transmitter systems and cortical layers are still required. Not all presynaptic proteins are altered equally. SNARE complex proteins and vesicle cycling and recycling peptides are among those most severely reduced, and these are the most likely to be functionally compromised in AD.
Background: A key aspect of synaptic dysfunction in Alzheimer's disease (AD) is the loss of synaptic proteins. Previous publications showed that the presynaptic machinery is more strongly affected than postsynaptic proteins. However, it has also been reported that presynaptic protein loss is highly variable and shows region and protein specificity. Objective: The objective of this meta-analysis was to provide an update on the available literature and to further characterize patterns of presynaptic protein loss in AD. Methods: A systematic literature search was conducted for studies published between 2015 and 2022 that quantified presynaptic proteins in postmortem tissue from AD patients and healthy controls. Three-level random-effects meta-analyses of the twenty-two identified studies were performed to characterize overall presynaptic protein loss and changes in specific regions, proteins, protein families, and functional categories. Results: Meta-analysis confirmed an overall loss of presynaptic proteins in AD patients. Subgroup analysis revealed region specificity of protein loss, with the largest effects in the temporal and frontal cortex. Results concerning different groups of proteins were also highly variable. The family of synaptosome associated proteins, especially SNAP25, was the most strongly and consistently affected. Among the most severely affected were proteins regulating dense core vesicle exocytosis and the synaptic vesicle cycle. Conclusions: The results confirm previous literature on presynaptic protein loss in AD patients and provide a further in-depth characterization of the most affected proteins and presynaptic functions.
Supplementary Material
ACKNOWLEDGMENTS The authors have no acknowledgments to report. FUNDING This work was funded by TauRx Therapeutics Ltd., Singapore. CONFLICT OF INTEREST The authors declare no conflict of interest. DATA AVAILABILITY All data are available within the paper and its supplementary material.
J Alzheimers Dis.; 97(1):145-162
Results

Between January 1, 2011 and December 31, 2018, 2,378 patients underwent surgery. The final total number of cases was 66 patients. Nine cases were matched with only one control, because no equivalent diagnosis was available for matching within the period or because of another concomitant infection. The final number of controls after randomization and application of the criteria was 123 patients. The annual incidence of surgical site infection in patients aged one to 19 years undergoing cardiac surgery for congenital heart disease between 2011 and 2018 ranged from 2% to 3.8%. The surgical site infection diagnoses were: 29 cases (44%) of superficial SSI, 14 (21%) of deep SSI, and 23 (35%) of organ/space SSI, comprising 7 cases of mediastinitis, 5 of osteomyelitis, 5 of endocarditis, and 6 cases with two associated diagnoses, namely osteomyelitis and mediastinitis (3 cases), osteomyelitis and endocarditis (2 cases), and 1 patient with mediastinitis associated with endocarditis. Surgical site secretion samples were collected from 50 patients (76%), and an agent was identified in 37 cases (74%): Staphylococcus aureus in 26 patients (70.3%), Staphylococcus epidermidis (6 patients, 16.2%), Staphylococcus cohnii (1 patient, 2.7%), Staphylococcus hominis (1 patient, 2.7%), Enterococcus faecium (1 patient, 2.7%), Acinetobacter haemolyticus (1 patient, 2.7%), and Enterobacter cloacae complex (1 patient, 2.7%). Regarding the susceptibility profile, 29% of the staphylococci (10/34) were resistant to oxacillin and 71% (24/34) were susceptible. Enterococcus faecium was resistant to vancomycin. Eight patients (21%) also had positive blood cultures. The etiological agents identified in the blood cultures were: Staphylococcus aureus (four deep SSIs and two cases of mediastinitis), Staphylococcus epidermidis (one case of osteomyelitis), and Enterococcus faecium (one case of mediastinitis).
There were 6 deaths (3.2%), all among infected patients with a diagnosis of organ/space SSI, and all had positive blood cultures. The etiological agents identified in the blood cultures were S. aureus, S. epidermidis, and E. faecium. In the assessment of antibiotic prophylaxis, data were missing for 6% (4/66) of the case group and 13% (16/123) of the control group. Compliance of antimicrobial prophylaxis with the institutional protocol exceeded 90%, with no statistically significant difference between cases and controls (p=0.144). The potential risk factors for SSI are described in Table 1, and the significant factors obtained in the univariate analysis are described in Table 2. Infants, patients with a genetic syndrome, patients in RACHS-1 categories 3 and 4, patients with a history of surgery in previous years, and patients reoperated during the same hospital stay were at higher risk of developing surgical site infection. Conversely, patients with higher CRP values at 48 hours postoperatively were at lower risk of this infection (Table 2). Figure 1 illustrates the course of CRP values in patients aged 1 to 19 years undergoing cardiac surgery in the pre-, intra-, and postoperative periods. The risk factors for surgical site infection obtained in the multivariate analysis are described in Table 3. The central illustration shows the incidence of SSI and the risk factors found in the study.
Review editor: Alexandre Colafranceschi

Potential conflict of interest: there is no conflict related to this article.

Abstract

Background: Surgical site infection (SSI) is an important postoperative complication of pediatric cardiac surgery, associated with increased morbidity and mortality. Objectives: To identify risk factors for SSI after cardiac surgery for the correction of congenital malformations. Methods: This case-control study included 189 patients aged between one year and 19 years and 11 months who underwent cardiac surgery at a tertiary university cardiology hospital from January 2011 to December 2018. Pre-, intra-, and postoperative data were recorded and analyzed. For each case, two controls were selected according to the diagnosis of the heart disease and a surgery performed within an interval of up to 30 days, to minimize pre- and/or intraoperative differences. Binary logistic regression was used for the risk factor analysis. Statistical significance was defined as p<0.05. Results: The study included 66 cases and 123 controls. The incidence of SSI ranged from 2% to 3.8%. Risk factors identified: infant age group (OR 3.19, 95% CI 1.26–8.66, p=0.014), genetic syndrome (OR 6.20, 95% CI 1.70–21.65, p=0.004), and RACHS-1 categories 3 and 4 (OR 8.40, 95% CI 3.30–21.34, p<0.001); the C-reactive protein (CRP) value at 48 hours postoperatively was shown to be a protective factor against this infection (OR 0.85, 95% CI 0.73–0.98, p=0.023). Conclusion: The identified risk factors are not modifiable variables. Continuous surveillance and preventive measures are essential to reduce infection. The role of elevated postoperative CRP as a protective factor needs further study.
Introduction

Congenital heart disease is considered a relevant public health problem, especially in developing countries. Despite advances in pediatric cardiac surgery, the demand for specialized services and the limitations of human and financial resources are challenging for these countries. 1,2 Surgical site infection (SSI) is an important complication associated with increased morbidity, higher antibiotic consumption, longer intensive care unit and total hospital stays, higher costs for the health system, and increased mortality. 1-6 According to published data, the incidence of SSI after cardiac surgery in the pediatric population ranges from 0.2% to 4.8%. 7 Studies focused on identifying risk factors in the population over 1 year of age are scarce, as the focus has been on the neonatal period. There are no studies with this specific approach in the pediatric population in Brazil, and this study aims to help expand knowledge on the subject. Risk factors for surgical site infection after pediatric cardiac surgery described in previous publications were: age under one month, genetic syndrome, high American Society of Anesthesiologists (ASA) score, cyanotic heart disease, intraoperative hypothermia, preoperative hospitalization longer than 48 hours, duration of surgery and of cardiopulmonary bypass (CPB), use of multiple procedures during surgery, number of red blood cell transfusions, and keeping the sternum open after the end of the surgical procedure. 1,3,8 The primary objective of this study was to identify risk factors for surgical site infection after cardiac surgery for the correction of congenital malformations, with and without cardiopulmonary bypass (CPB), in children over 1 year of age; as secondary objectives, the incidence and microbiology of the infections were assessed.
Methods

This study was approved by the Scientific Committee and by the Research Ethics Committee of the tertiary university cardiology hospital. Informed consent was waived.

Patients

A retrospective 1:2 case-control design was used to identify risk factors. The study included 189 patients aged between one year and 19 years and 11 months who underwent cardiac surgery at a university referral center specialized in high-complexity pediatric cardiovascular surgery, from January 1, 2011 to December 31, 2018: 66 cases and 123 controls. According to the World Health Organization (WHO), adolescence spans the ages of 10 to 19 years. 9 This was the standard adopted in the study.

Inclusion criteria

Case definition: patient with congenital heart disease aged between one year and 19 years and 11 months, with surgical site infection after cardiac surgery. Control definition: patient with congenital heart disease aged between one year and 19 years and 11 months, undergoing cardiac surgery without surgical site infection.

Exclusion criteria

Neonates (up to 28 days) and infants up to the first year of life (29 days to 11 months and 29 days). Patients undergoing cardiac surgery for diagnoses other than congenital heart disease, such as: cardiomyopathies, pericardial diseases, cardiac tumors, chronic rheumatic disease, patients listed for heart transplantation, and patients scheduled for placement of an electronic device or circulatory assist device in the absence of structural congenital heart disease.
Selection of cases and controls

All SSI diagnoses were confirmed by the Hospital Infection Control Unit of the tertiary university cardiology hospital, according to the defining diagnostic criteria of ANVISA and the CDC. 10-12 For each case, two controls were selected; matching was based on the diagnosis of the heart disease and a surgery performed within an interval of up to 30 days, to minimize pre- and/or intraoperative differences. Controls were selected by random draw using Excel. The congenital heart disease diagnoses were grouped into four categories according to pathogenesis, pathophysiology, and arterial oxygen saturation: group 1) obstructive acyanotic congenital heart disease; group 2) acyanotic congenital heart disease with left-to-right shunt; group 3) cyanotic congenital heart disease with reduced pulmonary blood flow; and group 4) cyanotic congenital heart disease with increased pulmonary blood flow. 13 Demographic, clinical, and laboratory exposure variables from the pre-, intra-, and postoperative periods were recorded and analyzed, selected according to the literature and to clinical and biological relevance (Table 1). Considering that transfusion of blood products may be an important risk factor, receipt of at least one unit of any blood product was treated as a potential risk factor.

Antimicrobial prophylaxis recommendations

Intravenous cefuroxime (50 mg/kg) was administered at anesthetic induction and repeated every 4 hours during surgery. A dose at the end of CPB is not recommended. After the end of surgery, 30 mg/kg is given every six hours up to 24 hours postoperatively (four doses in the surgical ICU).
For children over 30 kg, cefuroxime 1.5 g is used at anesthetic induction, followed by 750 mg every 4 hours during surgery and every 6 hours postoperatively for 24 hours. Electronic and paper medical records were consulted for the analysis of antimicrobial prophylaxis.

Statistical analysis

Statistical analysis was performed using SPSS version 23.0 (SPSS Inc., Chicago, IL, USA). Numerical variables were expressed as median and interquartile range (25th and 75th percentiles). Categorical variables were presented as absolute and relative frequencies. Differences between the two groups were analyzed with the Mann-Whitney test for numerical variables, after non-normality was verified with the Shapiro-Wilk test, and with the chi-square or Fisher's exact test for categorical variables, as appropriate. Binary logistic regression was used to analyze risk factors for surgical site infection. For this model, numerical variables were analyzed in deciles. Exposure variables with p<0.1 in the univariate analysis were selected for the multivariate analysis, and the forward LR procedure was used for variable selection in this final model. For each potential predictor, the odds ratio and its 95% confidence interval were calculated. Statistical significance was defined as p<0.05.

Discussion

The identification of the infant age group as a risk factor for surgical site infection is biologically plausible, since the development of the immune system begins in fetal life and continues into adolescence. Newborns and infants have a lower capacity to respond to antigens than older children, adolescents, and adults. This finding corroborates the predictive role of young age for SSI already described in the literature.
1,3,4,6,14 In this study, the presence of a genetic syndrome was found to be a risk factor for SSI. Previous studies, such as those by Costello et al., 3 Sen et al., 4 and Hatachi et al., 15 showed chromosomal abnormalities to be predictors of surgical site infection after cardiac surgery for congenital heart disease. Down syndrome is the genetic syndrome most recognized as associated with immune alterations in the cellular, humoral, and phagocytic compartments, and it was the most prevalent in our population. The lytic capacity of polymorphonuclear cells is exerted through the activity of superoxides and other radicals, which cause oxidative cell damage and eliminate fungi and bacteria such as Candida spp. and Staphylococcus spp. The enzyme copper-zinc superoxide dismutase 1 (Cu-Zn-SOD-1), which converts superoxides into hydrogen peroxide, is encoded by the SOD1 gene, located on chromosome 21. The extra genetic load resulting from the trisomy is associated with elevated Cu-Zn-SOD-1 levels, which reduces the amount of superoxides in polymorphonuclear cells of individuals with trisomy 21. 16,17 Costello et al. 3 and Sen et al. 4 described the association between the RACHS-1 classification of surgical complexity for congenital heart disease and the risk of SSI. In our study, RACHS-1 category 3 or higher was identified as an independent risk factor for SSI. Surgical procedures for more complex anatomies may involve longer time in the operating room and more intense, prolonged tissue handling, with a greater chance of surgical site contamination and cell damage. In this study, surgery and cardiopulmonary bypass times were not shown to be predictors of infection.
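The odds ratios quoted in this article come from the binary logistic regression described in the Methods: an OR is the exponential of a fitted coefficient, and its Wald 95% CI exponentiates the coefficient plus or minus 1.96 standard errors. A minimal sketch, where the standard errors are back-solved from the published intervals and are therefore illustrative rather than values reported by the authors:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression coefficient."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# RACHS-1 categories 3-4: OR 8.40 (se back-solved from the published CI).
or_rachs, (lo, hi) = odds_ratio_ci(beta=math.log(8.40), se=0.476)

# 48-hour CRP, per decile: OR 0.85, i.e., a 15% odds reduction per decile.
or_crp, _ = odds_ratio_ci(beta=math.log(0.85), se=0.071)
reduction_per_decile = 1 - or_crp
```

With se=0.476 the sketch reproduces approximately the published RACHS-1 interval of 3.30 to 21.34, which shows how the interval width follows directly from the coefficient's standard error.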
Another consideration is that patients with more complex heart disease may have a baseline clinical condition with greater potential for hemodynamic decompensation than patients with simpler anatomical abnormalities. Changes in cardiac output can reduce tissue perfusion and contribute to infection. When a history of surgery in previous years was analyzed, these patients were found to be at higher risk of SSI (OR=1.86, 95% CI 1.01–3.41, p=0.046). A history of previous surgery has been suggested in the literature as a risk factor for SSI, 14 but this exposure variable did not remain a risk factor for surgical site infection in the subsequent multivariate analysis. The significantly higher C-reactive protein values at 48 hours after surgery in the non-infected group were an unexpected finding. In the multivariate analysis, CRP proved to be a protective factor (OR=0.85, 95% CI 0.73–0.98, p=0.023). For each one-decile increment, the risk of surgical site infection decreased by 15%. CRP is an acute-phase inflammatory protein. Inflammation is the protective humoral and cellular response of the human body to injury. It involves the activation of different cascades, such as the complement system, cytokines, and coagulation. In the context of cardiac surgery, it is already triggered during anesthesia, increased by the surgical skin incision and sternotomy, and finally robustly amplified by cardiopulmonary bypass (CPB).
18 In the postoperative period after cardiac surgery, serum CRP levels must be interpreted with caution and evaluated together with clinical and epidemiological data, other complementary tests, and other biomarkers, since elevated serum levels may be interpreted as an infectious complication and lead to the introduction of empirical antimicrobial therapy or prolongation of antimicrobial prophylaxis. Antibiotic prophylaxis was not prolonged in patients with elevated CRP on account of these values, and all control-group patients who received any antimicrobial for a suspected infectious diagnosis in the postoperative period were excluded from the study. The course of serum CRP levels shown in Figure 1 is consistent with literature data on the CRP peak on the second postoperative day. Jaworski et al. 19 studied the kinetics of C-reactive protein in children with congenital heart disease undergoing cardiac surgery with CPB and observed that the highest CRP levels occurred on the second postoperative day and that values were high even in the absence of infectious complications. 19 Traditionally used as a marker of infection and of cardiovascular events, C-reactive protein is now being identified by new evidence as a protein with an active and relevant role in inflammation and in the host response to infection, including the complement pathway, apoptosis, phagocytosis, nitric oxide release, and cytokine production, particularly interleukin 6 and tumor necrosis factor alpha. In the presence of calcium, CRP binds to polysaccharides on microorganisms and activates the classical complement pathway, which promotes the opsonization of pathogens. CRP has been reported to mediate the host response to Staphylococcus aureus by enhancing bacterial phagocytosis. Sproston et al. 20 demonstrated the action of CRP on the bacterial polysaccharide wall.
20 Ponderando-se que Staphylococcus aureus é o principal microrganismo identificado nas infecções do sítio cirúrgico na maioria dos estudos, o que também foi constatado em nossos casos, o valor mais elevado da PCR nas 48 h pós-operatórias nos pacientes do grupo controle demonstra a possibilidade de a mesma exercer um papel de opsonina, fator protetor para a ISC. Os valores menores da PCR nos pacientes infectados do grupo caso em relação aos do grupo controle enfatizam a importância de não interpretarmos esta proteína de forma isolada e apenas como marcador de infecção. D’Souza et al. 21 realizaram estudo observacional prospectivo do valor preditivo de biomarcadores como a PCR, procalcitonina, lactato, neutrófilos e linfócitos, plaquetas para o diagnóstico de infecção bacteriana após cirurgia cardíaca na população pediátrica. Foram incluídos 368 pacientes e foi descrito como sendo o maior estudo com foco neste assunto publicado até o momento. Apesar disto, eles concluíram que a diferenciação entre infecção e estado inflamatório pós-operatório permanece difícil nesta faixa etária. As medidas longitudinais da PCR e procalcitonina e monitoramento das alterações clínicas que ocorrem na evolução do paciente no período perioperatório são informações valiosas e devem ser consideradas na decisão sobre o uso racional de antimicrobianos no pós-operatório. 21 A utilização de biomarcadores como PCR e procalcitonina no pós-operatório de cirurgia cardíaca infantil requer uma análise detalhada e atrelada a uma situação clínica para evitar diagnósticos equivocados de infecções, prescrição indiscriminada de antimicrobianos e seleção de microrganismos multirresistentes. São necessários estudos prospectivos multicêntricos para confirmação e consolidação dos achados. O presente estudo apresenta limitações relacionadas ao número total de pacientes infectados e ao desenho retrospectivo unicêntrico. 
O fato da infecção do sítio cirúrgico ser um desfecho raro pode prejudicar a análise de fatores preditores. A análise retrospectiva de dados em prontuários eletrônicos e físicos apresenta dificuldades quanto à acurácia das informações. A população de um centro único de referência pode ter características peculiares. A realização de estudos multicêntricos e a ampliação das linhas de investigação poderão validar os achados deste estudo. Conclusão Neste estudo, foram identificados fatores de risco para infecção do sítio cirúrgico que não são modificáveis antes da cirurgia (idade, doença genética, por exemplo). Deste modo, a prevenção desta infecção requer o cumprimento rigoroso das medidas de prevenção de infecção, tais como reduzir tempo de internação pré-operatório, utilizar antibioticoprofilaxia individualizada, manipulação cuidadosa de sondas, cateteres e curativos no pós-operatório. Outro ponto a destacar foi o valor mais elevado da PCR nas primeiras 48 hs após a cirurgia ter sido demonstrado como fator protetor para a ocorrência da ISC. O provável papel imunomodulador da proteína C reativa no período pós-operatório da cirurgia cardíaca requer maior investigação, evitando que seu resultado seja interpretado exclusivamente como marcador de infecção e levando ao uso inapropriado de antimicrobianos.
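The per-decile odds ratio for 48-hour CRP reported above (OR=0.85, ie, a 15% reduction in the odds of SSI per one-decile increment) compounds multiplicatively across increments under the usual logistic-regression assumption that the decile term enters the model linearly. A minimal arithmetic sketch, illustrative only and not part of the study's analysis:

```python
# Interpreting a per-unit odds ratio from logistic regression.
# With a linear term, the OR for a k-unit increase is OR_per_unit ** k.
def cumulative_or(or_per_unit: float, units: int) -> float:
    """Odds ratio accumulated over `units` equal increments."""
    return or_per_unit ** units

# One-decile increase in 48-hour CRP: 15% lower odds of SSI
print(cumulative_or(0.85, 1))  # 0.85
# Three-decile increase: 0.85**3, ie, roughly 38.6% lower odds
print(round(cumulative_or(0.85, 3), 3))  # 0.614
```

Note that the 15% figure describes a reduction in the odds of infection per decile, which approximates a risk reduction only when the outcome is rare, as SSI is here.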
Arq Bras Cardiol. 2023 Dec 7; 120(12):e20220592
Introduction Background Reports and analyses highlight that approximately 60% of health-related tweets contain links to health-related websites [ 1 ]. Twitter users frequently share health information related to chronic diseases, mental health, and general wellness topics [ 2 ]. Twitter is a popular platform for health-related conversations because it allows users to share their thoughts, experiences, and information in real time [ 3 ]. This platform can be particularly useful for sharing information about health events, such as disease outbreaks or public health campaigns. In addition, Twitter can be used to connect with others with similar health concerns or interests and to access information from health care professionals and organizations [ 4 ]. Twitter is a social media platform with a character limit, which makes sharing detailed information about health issues difficult. Moreover, the information shared on Twitter may not always be accurate or reliable, as it is not always fact-checked or verified. However, Twitter is a common platform for people to share opinions, discuss health-related topics, and converse with a wide audience. According to a survey conducted by the Pew Research Center, 21% of Twitter users have used the platform to share information about a health condition [ 5 ]. The survey also found that 20% of Twitter users have followed a health organization or medical professional on the platform, and 15% have searched for information about a health condition on Twitter. Users often express their views about health policies, medical breakthroughs, health care services, and public health issues [ 6 ]. Health professionals, researchers, advocacy groups, and patients actively participate in these discussions, contributing diverse perspectives and sharing evidence-based information. This open and rapid exchange of ideas allows health information dissemination and facilitates conversations that can influence public opinion and policy decisions [ 7 ]. 
Sharing health information on Twitter raises privacy concerns as it involves sharing personal and sensitive data on a public platform [ 8 ]. Information privacy refers to individuals’ control over the collection, use, and disclosure of their personal information. Regarding sharing personal health information, web-based information privacy refers to protecting sensitive health data from unauthorized access, secondary use, or disclosure [ 9 ]. It involves ensuring that individuals can make informed decisions about how their health information is shared and used and that appropriate safeguards are in place to protect the confidentiality and security of this information. The potential privacy risks associated with sharing health information on Twitter fall into 3 categories. First, potential identification and disclosure of personal information—sharing health information on Twitter can inadvertently lead to disclosing personally identifiable information. A study found that anonymized data from health-related tweets could be reidentified to reveal the identity of users [ 10 ]. Researchers were able to reconstruct personal health stories and connect them to specific individuals, highlighting the potential privacy risks involved. Second, data mining and analytics—third parties can analyze and use health-related tweets for various purposes, including targeted advertising or creating consumer profiles. Researchers analyzed tweets related to mental health and found that the content could be used to predict users’ self-reported diagnoses, medication use, and other personal information [ 11 ]. This demonstrates the potential for extracting sensitive health-related data from Twitter. In addition, a study used deep neural networks to identify personal health experience tweets, highlighting the potential for using Twitter as a data source for health surveillance studies [ 12 ]. 
Third, public disclosure of sensitive health information—sharing health information on Twitter might inadvertently expose individuals to public scrutiny and judgment. A study examined tweets related to mental health and found that users often disclosed personal experiences, symptoms, and treatments [ 13 ]. Although this sharing can provide support, it can also expose individuals to potential stigma, discrimination, or unwanted attention. Previous studies suggest that individuals may be comfortable with sharing general information that is not sensitive on social media [ 14 ]. However, people may not be likely to share personal information, especially health-related data, owing to privacy concerns [ 15 ]. According to previous studies, privacy concerns can arise from companies’ information collection and use policies in the age of medical big data [ 16 ] and web-based social interactions that may threaten information privacy [ 17 ]. Twitter is reported as an important data set for vendors, researchers, and medical companies to collect health-related information [ 18 ]. Many medical companies collect health information and patient experiences from Twitter for big data analysis to find patterns for public health management [ 19 ]. Although big data collection and data mining techniques could help generate intelligence for monitoring public health issues, they can cause privacy concerns. Reports highlight that many Twitter users have experienced invasion of privacy owing to companies’ collection, sharing, and analytics practices that use information from their tweets, including private health information [ 20 ]. Although there are various studies of vendor-related privacy concerns [ 21 ] and peer-related privacy concerns [ 22 ], little is known about whether these 2 aspects of privacy concerns may collectively influence information-sharing behaviors. 
As privacy violations can be related to peers (such as inappropriate comments and unauthorized retweeting) and companies (sharing personal information with third parties), more studies are required to examine whether information disclosure can be affected equally by vendor-related and peer-related privacy concerns. In this study, we aimed to determine whether both aspects of privacy concerns (ie, concern for information privacy [CFIP] and peer privacy concern [PrPC]) can mutually change health-related information-sharing decisions or whether one’s effects can dominate or overshadow the impact of the other. For instance, we examined whether the nature of the relationships and contexts involving 2 different information trustees (ie, vendors and peers) can influence information dissemination behavior. Thus, we argue that both aspects of privacy concerns should be considered in a model to better characterize information privacy on social media. Investigating the importance of privacy concerns related to companies and vendors (such as Twitter analytics) and web-based peers (such as retweeting a focal user’s health information without permission) when disclosing public and private health information on Twitter is the main contribution that makes this study different from others. This argument is built on 4 reasons. First, although interactions on social media are mainly peer oriented, vendors can still collect a lot of personal data (such as health information) without authorization and use it for unconsented purposes [ 23 ]. There have been instances of social media platforms, including Twitter, being used by organizations for health-related data mining and analysis [ 24 ]. Twitter data can provide valuable insights into health-related trends for health care organizations through analytics [ 25 ]. 
Researchers and companies (such as pharmaceutical manufacturers) have used Twitter data to track and analyze health-related trends, including disease outbreaks, medication use, and public health concerns [ 26 ]. For example, a study found that Twitter data could be used to track the spread of influenza and predict outbreaks [ 27 ]. Another study uncovered that Twitter data could be used to monitor adverse drug reactions and identify potential safety concerns [ 28 ]. Pharmaceutical companies have also used Twitter data to monitor medication use and patient experiences. For example, a study reported that Twitter data could be used to monitor patient experiences with antidepressant medications [ 29 ]. Moreover, it is common for organizations and vendors, including those in the health care industry, to monitor social media platforms to gather insights about consumer opinions, preferences, and trends [ 30 ]. Twitter, as a popular social media platform, has been used for these purposes [ 31 ]. Health care organizations and vendors may collect health-related information, such as discussions about medical conditions, treatment experiences, and patient preferences, from public Twitter profiles [ 19 ]. These insights can be valuable for marketing and market research purposes. Second, although the primary purpose of peer-to-peer (P2P) interactions on social media is to maintain social connections with peers and there are no explicit business-to-consumer interactions, companies can still use social media analytics to investigate published health information. Companies can leverage various analytics tools to gather and find meaning and patterns in data collected from social channels to support business decisions (such as predicting the risk factors to manage public health) [ 32 ]. 
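The kind of keyword-based trend monitoring described above can be sketched in a few lines. All tweets, topics, and keywords below are invented for illustration; real surveillance pipelines pull data from the Twitter API and use far richer natural language processing:

```python
from collections import Counter

# Hypothetical tweet records; real systems would fetch these from the API.
tweets = [
    {"user": "u1", "text": "Flu symptoms again this week: fever and chills"},
    {"user": "u2", "text": "Started a new antidepressant, mild nausea so far"},
    {"user": "u3", "text": "Flu shot clinic open downtown today"},
]

# Naive topic lexicon mapping each topic to trigger keywords.
topics = {
    "flu": ["flu", "fever"],
    "medication": ["antidepressant", "nausea"],
}

def topic_counts(tweets, topics):
    """Count how many tweets mention each topic at least once."""
    counts = Counter()
    for tweet in tweets:
        text = tweet["text"].lower()
        for topic, keywords in topics.items():
            if any(k in text for k in keywords):
                counts[topic] += 1
    return counts

print(topic_counts(tweets, topics))  # flu appears in 2 tweets, medication in 1
```

Aggregating such counts per day or per region is the basic mechanism behind the outbreak-tracking and adverse-reaction studies cited above, and it is also what makes publicly shared health tweets attractive to vendors.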
Third, individuals share a variety of information (such as about lifestyle, health status, chronic issues, and medication) on social media, which can be more sensitive than conventional e-commerce information (such as transaction records). Disclosing a wide range of information across social media platforms could raise concerns about whether companies and peers misuse the shared data (eg, health information). Fourth, Web 2.0, a fundamental technology supporting social media, mainly focuses on bilateral relationships between peers. However, it does not remove traffic between companies and social media users. Thus, peers can comment on conversations about a user’s health condition and share others’ personal health information on their own channels. In contrast, companies can use big data analytical tools to collect, use, or share users’ personal health data with third parties. Study Objectives The main objective of this study was to investigate the concept of information privacy concerns in the context of social media based on both vendor-related and peer-related aspects. To do so, we used the survey research methodology and Twitter as the empirical context. We also relied on theories discussing 2 dimensions of information privacy (ie, CFIP and PrPC) as the theoretical foundation of our proposed model. In this study, CFIP, emphasizing both consumer perspectives and company responsibilities, represented privacy concerns related to web-based practices of companies and vendors (such as collection and sharing of self-shared information), and PrPC referred to privacy concerns about losing control over digital communications and web-based interactions with peers. Thus, we suggest that information privacy on social media can be multidimensional, focusing on privacy violations associated with companies’ (vendors’) practices and sharing behaviors of peers. 
The integration of CFIP and PrPC can comprehensively present the entirety of privacy concerns about web-based health information. This study contributes to both theory and practice. We shed more light on information privacy conceptualization in the context of social media. This study also provides an interactive outlook and practical recommendations for handling privacy issues by explaining how web-based vendors and peers may cause privacy violations when dealing with health information (general and specific) shared over the web. Variable Conceptualization, Theoretical Foundation, and Research Hypotheses General and Specific Health Information Individuals can use web-based channels to share general health information, such as information about treatments, medications, side effects, hospitals, medical costs, and healthy behaviors [ 33 ]. For instance, people are likely to tweet about general obesity-related topics, such as the relationship between fast food and weight gain [ 34 ]. Another study identifies general tobacco-related tweets (such as information about smoking, cigarette risks, and quitting) as the primary conversational data sets for health-related topics on Twitter [ 35 ]. Moreover, people can use tools such as Twitter to share specific health-related information, including past medical history, allergies, personal medications, private health issues, and signs and symptoms. For example, a study indicates that people disseminate information about diagnoses, advice based on personal experience, use of specific medications, side effects, negative reactions, and treatments on Twitter [ 31 ]. Another study highlights that people use Twitter to share their COVID-19–related symptoms and personal health issues during the early stages of the pandemic [ 36 ]. Sharing public and private health information can be valuable for web-based peers and affect their health-related decisions. 
General information can enable web-based users to find some facts about hospitals, physicians, and diseases. Disseminating specific information can share important insights and advice based on personal health conditions, medical treatments, care planning, and medical experiences with chronic diseases. General health information can be publicly available regardless of personal experiences. However, specific health information can be unpleasant to share because it may contain more private information. As health information dissemination has 2 sides, questions still remain as to what dimensions of information privacy may strongly affect sharing behaviors on Twitter. CFIP Constructs There is evidence suggesting that companies use tweets to collect health information. For example, reports show that public health researchers use Twitter data to study the world’s health. A recent study indicates that the amount of textual health-related data, which could be personal, collected by various organizations is growing (especially during the COVID-19 pandemic) [ 37 ]. Another study argues that health care researchers and research companies have used social media data sources such as Twitter to study public health [ 19 ]. Owing to the importance of the Twitter database, the Centers for Disease Control and Prevention (CDC) designed a document to guide employees and contractors on using Twitter to disseminate health information and engage with individuals and partners [ 38 ]. A study indicates that companies increasingly use Twitter to share public health information and collect real-time health data using crowdsourcing methods [ 39 ]. Information privacy, which refers to people’s ability to control their information, is essential in e-commerce and social media [ 40 ]. Several studies explain the privacy concerns specific to the mobility data collection context [ 41 ]. 
Thanks to emerging technology (such as Web 2.0), protecting personal information has become a growing concern for web-based users. CFIP is a general concern about how organizations can use and protect consumers’ information [ 21 ]. CFIP explains concerns about organizations’ information collection practices, use policies, and access to consumers’ personal information [ 42 ]. Previous studies indicate that examining consumers’ concerns about how companies (vendors) may use their personal information significantly affects their willingness to engage in web-based transactions actively [ 43 ]. In this study, following most previous studies, CFIP is posited as a multidimensional construct with 4 dimensions to measure individuals’ concerns about organizations’ information privacy practices [ 44 ]. Collection pertains to individuals’ concerns about what web-based information is collected and whether such information is stored properly. Unauthorized secondary use explains individuals’ concerns about whether the information collected for a consented purpose may be unethically and illegally used for other purposes without obtaining authorization. Improper access implies individuals’ concerns about whether unauthorized people (entities) can access, view, and share their information. Finally, concerns about errors reflect whether individuals’ information is appropriately protected to minimize accidental or intentional errors [ 44 ]. Therefore, the multidimensional scale of CFIP reflects the complexity of individuals’ privacy concerns [ 21 ]. According to Stewart and Segars [ 40 ], CFIP is developed as a second-order construct with 4 reflective first-order factors. In this study, we also considered CFIP as a high-order construct with reflective factors. 
The logic behind conceptualizing this construct as reflective was that the privacy concerns related to companies are reflective of the 4 dimensions (ie, collection, improper access, errors, and unauthorized secondary use) and the expected interactions among them. Therefore, these dimensions can reflect the same theme and may covary. Although sharing information on Twitter is more oriented toward interactions with web-based peers, privacy concerns about the collection and misuse of digitized health information by vendors and companies still remain significant. Previous studies provide strong evidence suggesting that web-based users of Twitter are concerned about several aspects of their information privacy, from collection of a lot of data to misuse [ 45 ]. Our study focused on individuals’ perceptions about general CFIP owing to policies and practices of vendors and organizations that may collect, access, and use health information shared on Twitter rather than concerns about a particular vendor. According to the 4 dimensions of the CFIP construct, individuals who demonstrate high privacy concerns believe that (1) a lot of health information is collected by organizations from users’ Twitter accounts, (2) such health information is not appropriately protected against possible errors, (3) various organizations may use health-related information on Twitter for other purposes without authorization (such as data mining, surveillance, research, and business intelligence), and (4) there is a lack of visibility into accurate security measures to control who can access and use health information from tweets. Thus, the CFIP construct can be extended to privacy concerns about a wide range of vendors and companies accessing and using tweets containing health information. This concern is not the same as privacy issues owing to interactions with a specific vendor in the context of e-commerce (such as retail platforms). 
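One routine check when treating each CFIP dimension as a reflective scale is the internal consistency of its items, commonly reported as Cronbach's alpha. The sketch below uses fabricated Likert responses for a single hypothetical dimension; actual studies would estimate the full second-order measurement model in dedicated SEM software rather than compute alpha by hand:

```python
from statistics import variance

# Fabricated 5-point Likert responses: 5 respondents x 4 items for one
# hypothetical CFIP dimension (eg, "collection" concerns).
responses = [
    [5, 4, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    items = list(zip(*rows))                        # column-wise item scores
    item_var = sum(variance(col) for col in items)  # sample variances
    total_var = variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(responses), 3))  # high alpha: items covary as one theme
```

A high alpha is consistent with the reflective assumption that the items of a dimension express one underlying theme; the analogous expectation at the second-order level is that the 4 dimension scores themselves covary.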
In these conventional interactions, privacy concerns may focus on personal, factual information shared in web-based transactions and services (such as demographic information). However, CFIP in the social media domain deals with concerns associated with the following uncertainty: which organizations collect personal posts, which unauthorized entities can view and share information, why and how the information is used (for instance, data mining), and how the information is protected from internal and external errors and misuse. Therefore, we argue that CFIP cannot be ignored in examining information privacy in social media because users may not have direct relationships with organizations on these digital platforms, but they are still concerned about how their posts can be collected and misused by various companies. Sharing general health information could indicate a user’s rich medical information and wealth of medical knowledge. In contrast, sharing specific health information can show that the user may want to contribute or seek informational and emotional support by disseminating personal experiences and medical history. However, when privacy concerns about the collection and misuse of shared data by organizations are not addressed, users are not likely to disseminate general or specific health information on Twitter. Moreover, we can expect that because specific health information is more sensitive and private, web-based users may generally become more cautious about sharing it. Therefore, we hypothesized the following: Hypothesis 1A (H1A): CFIP negatively influences general health information dissemination on Twitter. Hypothesis 1B (H1B): CFIP negatively influences specific health information dissemination on Twitter. Hypothesis 1C (H1C): CFIP has a more negative effect on specific health information dissemination than on general health information sharing on Twitter. 
PrPC Constructs Owing to the nature of Web 2.0, users can communicate, create content, and share it via communities, social networks, and virtual worlds [ 46 ]. Web-based users can share a wide range of information and experiences on social media. The information can be objective (based on factual data) or subjective (based on personal interpretation, feelings, tastes, or opinions) [ 47 ]. The range can start with demographic information (eg, age, gender, and race); continue with political views, humanitarian opinions, and health information; and end with comments on others’ posts [ 48 ]. People can use different formats, such as text, pictures, and videos, to disseminate information. Peers are important components of social networks; however, they can threaten information privacy through inappropriate sharing behaviors and unintended consequences of web-based interactions [ 49 ]. Web-based transactions with peers on social media affect users’ decisions about whether they want to reveal their personal information (such as feelings and likes) and create an image consistent with their personal identity [ 50 ]. In this study, peers could be web-based friends who may have long-lasting and affect-laden connections with a user and any web-based users who interact through social media channels. Previous studies highlight the importance of PrPCs in the context of web-based interpersonal relationships where other peers can access and view a user’s web-based information [ 51 ]. Peer-related privacy refers to possible risks of privacy invasion because of direct and indirect web-based interactions with peers [ 17 ]. Social bots and fake and spam accounts can also raise privacy violation risks by potentially exposing several peers to a focal user’s posts using machine learning algorithms [ 52 ]. Previous studies indicate the threat of using social bots on social networks, increasing the likelihood of privacy breaches where even more private user data are exposed [ 53 ]. 
Understanding who can access web-based information (such as a post related to signs and symptoms of depression) and with whom such information is shared can significantly raise privacy concerns. For example, a study shows that users who shared information with only selected friends on social networking services perceived higher control than those who shared information with all friends [ 54 ]. Thus, information-sharing behaviors on social media may erode the ability of users to control their virtual space and personal boundaries. Leaving an inappropriate comment for a user who posted about seeking ways to lose weight can increase privacy concerns about the user’s lack of control over the privacy of their Twitter space. A study posits that managing the privacy of virtual territory refers to defining the level of access to and interaction others can have within a user’s territory (eg, allowing peers to see or comment on the post) [ 55 ]. Peers can also play a bilateral role in web-based social interactions. They can intentionally or unintentionally share a user’s personal health information with others and expose the user to others’ personal information that they might not like to view. A user may reason that if others’ personal health information has been shared with them, their own posts can also be revealed to others. Thus, communication privacy can significantly affect how individuals and relational parties share private information on social media [ 56 ]. A recent study defines PrPC as the sense of inability to control personal boundaries in web-based interactions owing to web-based peers’ behaviors [ 22 ]. It describes this term using 4 reflective dimensions: peer-related information privacy, psychological privacy, virtual territory privacy, and communication privacy. Peer-related information privacy denotes concerns about who can see what type of information and when and how such information is disclosed to other web-based peers. 
For posts shared by a user, the main concern is unauthorized access and secondary use of data by other peers. On Twitter, this can happen through retweeting and commenting. Peers can also initiate posts or conversation threads to disclose a user’s personal information without authorization. A privacy concern is about the accuracy of personal information shared by peers. Thus, peers’ sharing can be a source of private information leaks. Psychological privacy explains the control over input information coming from others to shape feelings, opinions, and beliefs. Information sharing is 2-way traffic in social media (ie, from a user to peers and from peers to a user) [ 57 ]. As people are exposed to posts shared by celebrities, business magnates, politicians, and other web-based users, their behaviors and opinions are increasingly affected by input information from peers. Peers on social media can influence users’ behavior by applying social influence through public comments on posts [ 58 ]. Privacy concerns become more intense when users’ opinions and psychological independence are intentionally manipulated by social bots [ 59 ]. In this situation, users are not able to make a decision independent of other web-based peers’ ideas. Moreover, receiving a lot of unwanted information from peers may influence value systems, attitudes, identities, and choices. Virtual territory privacy represents concerns about an individual’s inability to achieve control over other peers’ interactions with their virtual properties (such as Twitter accounts) and shared conversations (postings). Previous studies suggest that the sense of ownership and emotional attachment to personal territory can be generalized to the social media domain [ 60 ]. Similar to other personal belongings, virtual properties are seen as private. Thus, any unwanted addition to or revision of personal information can be considered as an intrusion, which may increase privacy violation risks [ 45 ]. 
Finally, communication privacy reflects an individual’s lack of control over how and when other peers can initiate direct web-based conversations. For example, peers may use various communication tools to engage individuals in a group conversation about potentially embarrassing or stigmatizing health-related topics. Users may then feel pressured when involved in such undesirable conversations with unfamiliar people. Individuals may become more likely to share general or specific health information on Twitter when they think it is useful for other web-based peers (eg, they can make better medical decisions). However, peer-related concerns may prevent them from disseminating such information. Peers are participants in social media and can freely collect and share information that is sometimes considered unwanted interference. For instance, if peers retweet a post containing personal information about postsurgery recovery plans without authorization or tag a user who posted general educational content about HIV, these web-based interactions may violate privacy needs and raise privacy concerns. In turn, users may change the pattern of health information dissemination and become more cautious in sharing medical facts or personal experiences. Thus, we formulated the following hypotheses: Hypothesis 2A (H2A): PrPC negatively influences general health information dissemination on Twitter. Hypothesis 2B (H2B): PrPC negatively influences specific health information dissemination on Twitter. Hypothesis 2C (H2C): PrPC has a more negative effect on specific health information dissemination than on general health information sharing on Twitter. CFIP and PrPC Are Privacy Concerns for Twitter Users Although tweets are publicly accessible by default, users likely expect some degree of privacy and control over their personal health information shared on the platform. 
Previous literature has found that even when posting content publicly on social media, individuals still have privacy interests and concerns about how their data might be used or accessed [ 61 ]. General health information shared publicly on Twitter, such as mentions of hospitals, physicians, and common diseases, is not considered protected or private. However, more specific personal health details, such as past medical history, allergies, medications, and current symptoms, could reveal private information about an individual’s health status. Although these details may be shared publicly by default on Twitter, users likely still have privacy concerns about this content being widely disseminated or used without their consent. The concepts of CFIP and PrPC capture these types of privacy concerns. Although users are voluntarily sharing health information publicly on Twitter, they may still desire control over how these data are accessed and used. CFIP reflects concerns about using or sharing personal health data by third parties such as researchers or companies without the user’s knowledge or permission. Even if users willingly post health information publicly, they may still desire control over how that data are collected, analyzed, or shared by entities such as researchers, pharmacies, insurance companies, and so on. PrPC represents concerns about controlling boundaries around health disclosures and limiting exposure to certain audiences, such as employers or insurers, who could misuse the information. Users must balance sharing personal details with managing social risks if the information reaches unintended viewers such as employers, family members, or friends. Thus, although Twitter data are technically public, users are likely to have nuanced privacy interests surrounding their health disclosures. 
Therefore, concepts such as CFIP and PrPC are useful for quantifying expectations regarding control, anonymity, and audience boundaries that persist even when posting health care–related content openly over the web.

Difference Between the Conceptualization of CFIP and PrPC

Overview

We used an interactive approach to provide a holistic view of information privacy in the context of sharing health information on Twitter. Using this approach, this study actively engaged with the 2 aspects of privacy concerns (CFIP and PrPC) in a dynamic way, considering the interplay between them, as opposed to treating them as isolated, independent factors. Therefore, we examined how these 2 aspects of privacy concerns interact with each other and how this interaction affects individuals’ behavior on Twitter. It should be mentioned that the dimensions used for CFIP and PrPC may differ because of the different nature of the relationships and contexts involved. Although the underlying concept of privacy concerns remains the same, the specific dimensions or factors that contribute to CFIP and PrPC may vary owing to the distinct characteristics of vendors and peers as information trustees.

Role and Control

Vendors typically have a professional or business relationship with individuals, where they are entrusted with handling personal information for specific purposes (eg, health care providers and web-based retailers). In this context, individuals may be concerned about vendors’ control over their information; how it is collected, used, and shared; and the potential for data breaches or unauthorized access.

Trust and Reputation

CFIP dimensions often include factors related to trust and reputation, such as trustworthiness, perceived reliability, and credibility of vendors. As individuals rely on vendors to handle their personal information responsibly, dimensions related to trust and reputation become important for CFIP measurement.
Legal and Ethical Considerations

CFIP dimensions may also include factors related to legal and ethical considerations, such as compliance with privacy laws, informed consent, and transparency in data practices. Individuals may be concerned about whether vendors meet the legal requirements and ethical standards in protecting their health information. In contrast, peers, who are individuals within an individual’s social network or community, may have different dimensions of privacy concerns. Social interactions, trust, reciprocity, and the potential for social consequences typically characterize peer relationship dynamics. Some factors that could influence PrPC dimensions include the following.

Social Norms and Expectations

PrPC dimensions may reflect concerns about social norms and expectations related to privacy within the peer group. Individuals may worry about how their health information might be perceived, shared, or used by their peers and the potential impact on their social relationships or reputation.

Social Influence and Peer Pressure

PrPC dimensions may capture the influence of peer pressure or the fear of negative social consequences. Individuals may be concerned about potential judgment, stigma, or discrimination based on their health information within their peer group.

Personal Boundaries and Intimacy

PrPC dimensions may include factors related to personal boundaries and the level of intimacy within peer relationships. Individuals may be concerned about the extent to which personal health information should be shared with peers and the potential impact on their privacy, autonomy, and self-disclosure. Although the underlying concept of privacy concerns is present in both CFIP and PrPC, the dimensions may differ owing to the distinct characteristics and dynamics of the relationships involved.
Thus, considering these differences when developing measurement instruments is important to accurately capture individuals’ concerns regarding privacy in different trust relationships.

Research Model

The model focuses on health information and Twitter (as the research context). There are a few critical differences between the privacy concerns around health information and those around other types of information. First, health information is considered to be very sensitive and private. It can reveal details about medical conditions, treatments, prescriptions, family history, and so on. Other types of information, such as social media posts or shopping habits, are generally not as sensitive. Second, health information has strict legal protections, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the European Union. These laws place restrictions on how health data can be collected, shared, and used. Other information does not have the same level of legal safeguards. Third, health information could potentially be used to discriminate against people in areas such as employment and insurance. This type of discrimination is legally prohibited, but the risk remains owing to the sensitive nature of the data. Other data, such as social media posts, have less potential for this type of discrimination. Finally, a breach of health information is considered very serious, given the sensitivity of the data. Strong security protections are needed, and breaches can carry heavy penalties. Breaches of other types of data may not have the same level of severity. Regarding privacy on social media, there are some key characteristics of the concerns around Twitter compared with other platforms. First, most Twitter content is public by default, whereas other platforms such as Facebook allow more privacy controls. This can raise concerns about a lack of control over dissemination.
Second, tweets are often archived and searchable indefinitely; therefore, there are concerns about permanent availability even for “deleted” content. Other platforms may have more ephemeral sharing. Third, the open nature of Twitter makes it easy for tweets to spread rapidly and become viral compared with platforms such as Instagram, where sharing can be more controlled. This raises concerns about loss of context and lack of containment. Finally, the ability to create anonymous accounts on Twitter is greater than that on platforms such as Facebook that require real identities. This raises concerns about harmful speech, misinformation, and so on. We proposed the following research framework for disclosing general and specific health information on Twitter by integrating 2 aspects of information privacy concerns ( Figure 1 ). As several studies have found empirical evidence related to the hypotheses proposed here, we need to clarify what is new in our study. First, this study integrated both aspects of privacy concerns for the first time in a single model. Previous studies examined either privacy concerns related to companies’ practices with web-based information (CFIP) [ 62 ] or concerns related to the web-based behaviors of peers (PrPC) [ 63 ]. However, as mentioned in the previous section, individuals may be concerned about disseminating their health information on Twitter because both companies’ collection practices and web-based peers’ behaviors could violate their privacy. In this study, we wanted to examine whether both aspects of privacy concerns (ie, CFIP and PrPC) can collectively change health-related information-sharing decisions or whether one can dominate the other; for instance, whether the nature of the relationships and contexts with 2 different information trustees (ie, vendors and peers) can shape information dissemination behavior.
Second, as Twitter is considered a rich database for collecting individual health-related information to examine sentiments and manage public health [ 64 ], and as reports highlight that individuals may be concerned about web-based interactions with peers [ 65 ], Twitter is an apt research context for meeting the goals of this study. Third, this study distinguished between general and specific health information. Thus, we could offer more insights into privacy concern levels and disclosure behaviors related to the 2 types of health information on Twitter. These 3 features distinguish our study from previous studies in the privacy literature. In addition, we controlled for several variables, such as age, gender, education, Twitter experience, privacy violation experience, and misrepresentation of identity on Twitter. According to previous studies in the privacy concern domain, some demographics, such as age [ 66 ], gender [ 67 ], and education level [ 68 ], can affect people’s intention to disclose information on social media. Moreover, the impacts of these variables have been examined in previous studies investigating individuals’ perceptions about sharing eHealth-related information [ 69 , 70 ]. The effects of these variables are often controlled in previous studies in the field of information privacy threats [ 71 ]. Thus, we assumed that individuals of different ages, genders, and educational levels engage in different disclosure behaviors because they have diverse backgrounds, individual characteristics, and personal differences. Therefore, we considered these demographics to be control variables in the proposed research model. Moreover, the effects of misrepresentation of identity, experience with technology, and privacy violation experiences were controlled in previous studies examining relationships between privacy concerns and self-disclosure [ 22 , 42 ].
Thus, individuals with different privacy violation experiences, previous identity misrepresentation, and levels of experience with Twitter are likely to demonstrate different disclosure behaviors. Therefore, we treated these experience-related variables as control variables in our model.
Methods

Research Approach and Survey Development

We administered a web-based survey questionnaire to achieve the defined objectives and test the proposed model and research hypotheses. The survey consisted of 4 sections. In the first section, the purpose of the study was described clearly, and a qualifying question was used to select respondents. This screening question identified individuals with a Twitter account; individuals without a Twitter account were excluded from data collection and analysis. In the second section, respondents were asked to express their perceptions about privacy concerns associated with companies and third parties, peer-related privacy concerns, and health information dissemination behaviors. In the third section, demographic questions (ie, age, gender, education, income, and race) were asked. Finally, the last section focused on personal privacy experiences (ie, Twitter experience, privacy violation experience, and misrepresentation of identity). Questions to measure each construct were adapted from validated instruments available in the existing literature. Slight changes in wording were made to fit the context of this study. We adapted items to measure CFIP (as a second-order construct with 4 dimensions) from the study by Stewart and Segars [ 40 ]. Following Zhang et al [ 22 ], we also conceptualized and measured PrPC as a second-order reflective construct with 4 dimensions. Previously defined scales to measure general and specific health information disclosure were adapted from the study by Hsu et al [ 72 ]. Respondents rated all the measurement items included in the survey using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Multimedia Appendix 1 shows the questions used in the web-based survey.

Data Collection and Data Analysis

Data were collected in April 2022 by uploading the questionnaire to Amazon’s Mechanical Turk (MTurk).
MTurk is a crowdsourcing platform that enables researchers to access data from potential target samples to conduct a study. MTurk has been recognized as an acceptable web-based means of collecting individual-level data. The health care analytics literature shows a growing number of studies using MTurk for health-related research [ 73 ]. Previous studies highlight that MTurk can measure individual perceptions in various domains, such as social media [ 74 ]. As the target population of this study was US citizens who use Twitter for web-based interactions, we limited the respondents’ location to the United States. Moreover, 2 attention-check questions were used to remove participants who chose answers without correctly replying to reverse-coded filler items [ 75 ]. The filtering questions were as follows: (1) It does not bother me that my peers may try to influence me through comments on my health-related postings on Twitter and (2) I am not concerned that I have little control over who can start a health-related conversation with me on Twitter. We received 364 questionnaires and excluded 35 (9.6%) that were either incomplete or failed the response quality questions, resulting in 329 (90.4%) valid and usable responses. The average time to complete the questionnaire was 12 minutes. Descriptive statistics for demographics were computed using SPSS (version 26; IBM). The research model was tested using AMOS (version 26; IBM) within the structural equation modeling framework.

Ethical Considerations

The institutional review board of Florida International University reviewed and approved the study (approval 112755). In accordance with the institutional review board approval, written informed consent to participate in the study was obtained from all participants. Moreover, the data collected in this study were anonymous. We offered US $1 as an incentive for each respondent to participate in the study.
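The attention-check screening described above can be sketched as a simple filter. This is a hedged illustration only: the record layout, field names, and pass rule (expecting "disagree" responses, ie, 1 or 2 on the 5-point scale, for the reverse-coded items) are assumptions, not the study's actual procedure.

```python
# Minimal sketch of attention-check screening on survey responses.
# Field names and the pass rule are hypothetical assumptions.

def passes_attention_checks(resp):
    """A respondent passes if both reverse-coded filler items were
    answered on the 'disagree' side (1 or 2 on a 5-point scale)."""
    return (resp["check_peer_influence"] <= 2
            and resp["check_conversation_control"] <= 2)

respondents = [
    {"id": 1, "check_peer_influence": 1, "check_conversation_control": 2},
    {"id": 2, "check_peer_influence": 4, "check_conversation_control": 1},  # fails
    {"id": 3, "check_peer_influence": 2, "check_conversation_control": 2},
]

# Keep only respondents who pass both checks.
valid = [r for r in respondents if passes_attention_checks(r)]
```

In practice, this filter would be applied alongside completeness checks before computing the 329 usable responses reported above.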
Results

Instrument Validation

We used confirmatory factor analysis to assess convergent and discriminant validity. Table 1 shows the results of the convergent validity test. The standardized factor loadings for all constructs exceeded 0.7, the commonly accepted threshold for factor loadings [ 76 ]. The composite reliability values and Cronbach α values were above the recommended value of .7, demonstrating the adequate reliability of the constructs [ 77 ]. All the values of average variance extracted (AVE) exceeded the cutoff value of 0.5 [ 78 ]. These measures indicated the acceptability of the measurement model’s convergent validity. Table 2 shows the discriminant validity of the constructs. All diagonal values (square roots of the AVEs) were >0.7 and greater than the off-diagonal values (correlations) between any pair of constructs [ 79 ]. Thus, the discriminant validity requirements were satisfied for the research model. Moreover, we checked the convergent and discriminant validity of the second-order constructs. The composite reliability, Cronbach α, and AVE values for CFIP were 0.91, .88, and 0.64, respectively, and these measures for PrPC were 0.94, .89, and 0.72, respectively. The correlation between the second-order variables (ie, CFIP and PrPC) was 0.58. Finally, the square roots of the AVEs for both constructs were >0.7 and higher than the correlation between the constructs. These results confirm acceptable convergent and discriminant validity for both second-order constructs in the model.

Respondents’ Characteristics

Table 3 shows the participants’ characteristics. The descriptive statistics demonstrate that respondents were fairly evenly distributed across gender: 52.9% (174/329) were men and 47.1% (155/329) were women. The age distribution was positively skewed, indicating that most participants were young; the largest group was aged 25 to 34 years (155/329, 47.1%), followed by those aged 35 to 44 years (102/329, 31%).
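The validity criteria applied above (composite reliability, AVE, and the Fornell-Larcker comparison of each construct's √AVE against its correlations) can be sketched in a few lines. The loadings and correlations below are hypothetical illustrations, not the study's actual estimates.

```python
import math

def composite_reliability(loadings):
    # CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)), from standardized loadings λ
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    # AVE = mean of squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for one construct's 4 items.
loadings = [0.82, 0.78, 0.85, 0.80]
cr = composite_reliability(loadings)
v = ave(loadings)

# Fornell-Larcker criterion: sqrt(AVE) must exceed the construct's
# correlations with every other construct (hypothetical values).
corr_with_others = [0.52, 0.58]
discriminant_ok = all(math.sqrt(v) > c for c in corr_with_others)
```

With these illustrative loadings, CR ≈ 0.89 and AVE ≈ 0.66, both clearing the 0.7 and 0.5 thresholds cited above, and √AVE ≈ 0.81 exceeds both correlations, satisfying the Fornell-Larcker check.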
More than half (178/329, 54.1%) of the respondents had an undergraduate or graduate education level, which aligns with previous studies highlighting that people with high education levels tend to search more often for web-based health information [ 80 ]. Annual income was fairly evenly distributed, with the largest group (135/329, 41%) earning between US $60,000 and US $79,999. Most respondents were White (174/329, 52.9%), followed by Hispanic and African American individuals. Approximately half (174/329, 52.9%) of the respondents reported using Twitter for 4 to 6 years. Overall, 52% (171/329) of the respondents indicated that they had experienced a privacy violation at least once (for instance, their account was hacked), and 38.9% (128/329) mentioned that they had tried to use a fake account on Twitter at least once.

Analysis of the Dimensions

When implementing second-order variables in a measurement model, there are 2 common approaches: the repeated items approach and the 2-step approach [ 81 ]. This study used the repeated items approach to measure the reflective second-order constructs. In the repeated items approach, the indicators used to measure the second-order construct are included in the measurement model twice: once as indicators of the second-order construct and once as indicators of the corresponding first-order constructs [ 82 ]. This approach allows for a direct assessment of both the second-order and underlying first-order constructs in a single measurement model. The repeated items approach provides a holistic view of the measurement model by simultaneously assessing the second-order construct and its underlying dimensions [ 83 ]. Using the repeated items approach provides a more integrated perspective on how CFIP and PrPC are influenced by their respective first-order constructs. It allows for a direct examination of the relationships between the second-order construct and its underlying factors.
Both CFIP and PrPC are conceptualized as second-order reflective constructs, consistent with the existing literature. A reflectively measured construct shares a common theme across subdimensions; the dimensions are therefore expected to be highly correlated [ 84 ]. Table 2 shows that, consistent with reflective measurement, the 4 dimensions of CFIP (ie, collection, unauthorized secondary use, improper access, and errors) are highly correlated with each other. As expected, the 4 dimensions of PrPC (ie, psychological, communication, virtual territory, and peer-related information privacy concerns) are also highly correlated. The results show that the 4 dimensions of CFIP, as first-order factors, load significantly on the second-order construct. The loadings were 0.91 for collection, 0.80 for unauthorized secondary use, 0.95 for improper access, and 0.88 for errors. Therefore, the interaction among the 4 dimensions reflects CFIP, which shares a common theme of losing control over information privacy owing to companies’ sharing behaviors. Furthermore, the 4 dimensions of PrPC also load significantly on the second-order construct. The loadings were 0.90 for psychological privacy concerns, 0.92 for communication privacy concerns, 0.86 for virtual territory privacy concerns, and 0.95 for peer-related information privacy concerns. Thus, the interactions among these 4 dimensions represent PrPC, which exhibits a shared theme of privacy concerns about losing personal control owing to web-based peer behaviors.

Structural Model and Path Analysis

Consistent with the privacy literature, we controlled for variables such as age, gender, education, years of experience, privacy violation experience, and misrepresentation of identity in the structural model [ 42 ]. The findings demonstrate that when the control variables are present, the coefficients and R² change significantly.
Specifically, age (β=−.12; P=.01), education (β=.19; P=.003), and privacy violation experience (β=−.58; P=.008) significantly influenced health information disclosure. Thus, the findings confirm that young people with high education levels who have not experienced privacy violations are more likely to disclose health information on Twitter. However, no effects of gender, years of experience, or misrepresentation of identity on health information–sharing behaviors were found. We used structural equation modeling to analyze the factors affecting health information disclosure on Twitter. The model exhibited a good fit (χ²/df=2.2, df=353; goodness-of-fit index=0.84; adjusted goodness-of-fit index=0.81; comparative fit index=0.90; normed fit index=0.91; incremental fit index=0.90; standardized root mean square residual=0.03; and root mean square error of approximation=0.04), with all indexes meeting their recommended cutoff values [ 85 ]. Table 4 depicts the summary of the path analysis for 4 hypotheses (ie, H1A, H1B, H2A, and H2B). Figure 2 shows the standardized path coefficients of the structural model. Support was not found for H1A, which proposes that CFIP significantly influences general health information disclosure on Twitter (β=−.07; P=.16). In contrast, the findings support H1B by confirming that CFIP significantly attenuates sharing behaviors when disclosing specific health information on Twitter (β=−.43; P<.001). H2A, which posits that PrPC directly affects the disclosure of general health information on Twitter, was supported (β=−.38; P<.001). The analysis also shows that PrPC negatively shapes the sharing of specific health information on Twitter (β=−.72; P<.001), and this significant relationship supports H2B. Regarding H1C and H2C, an alternative model was created for each hypothesis, and the 2 relationships in that hypothesis were constrained [ 86 ].
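The fit-index screening reported above can be expressed as a simple check against commonly cited cutoffs. Two assumptions are made here: the reported value 2.2 is read as the normed chi-square (χ²/df), and the relatively lenient ≥0.80 convention is used for the GFI and AGFI (thresholds for these indexes vary across sources).

```python
# Reported fit indexes from the structural model (as in the text).
fit = {"chi2_df": 2.2, "GFI": 0.84, "AGFI": 0.81, "CFI": 0.90,
       "NFI": 0.91, "IFI": 0.90, "SRMR": 0.03, "RMSEA": 0.04}

# Commonly cited cutoffs; conventions differ between sources, and the
# GFI/AGFI thresholds below are the lenient >=0.80 variant.
cutoffs = {
    "chi2_df": lambda x: x < 3.0,    # normed chi-square
    "GFI":     lambda x: x >= 0.80,
    "AGFI":    lambda x: x >= 0.80,
    "CFI":     lambda x: x >= 0.90,
    "NFI":     lambda x: x >= 0.90,
    "IFI":     lambda x: x >= 0.90,
    "SRMR":    lambda x: x <= 0.08,
    "RMSEA":   lambda x: x <= 0.06,
}

# Which indexes meet their cutoff under these conventions.
acceptable = {name: cutoffs[name](value) for name, value in fit.items()}
```

Under these conventions, every reported index clears its threshold, which is consistent with the good-fit conclusion drawn above.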
Next, a 2-tailed t test was used to compare the difference between the alternative and the original model. H1C posits a significant difference between the impact of CFIP on general and specific information–sharing behaviors. As the t value was significant (t165=3.45; P<.001), we confirm that CFIP imposes a more negative effect on specific health information dissemination than on general health information sharing on Twitter. In addition, H2C proposes that people may disclose more general health information than specific health information owing to peer-related privacy concerns. The t value was significant (t165=4.72; P<.001); thus, the effect of PrPC was more prominent in specific health information sharing than in general health information disclosure on Twitter. Finally, the model explains 41% of the variance in general health information disclosure and 67% of the variance in specific health information sharing on Twitter. The R² values suggest that the 2 aspects of information privacy concerns (ie, concerns about the web-based practices of companies and peers’ behaviors) provide reliable explanatory power for predicting the variance in sharing general and specific health information.
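As a rough check on the reported t statistics, a 2-tailed P value can be approximated from each t value. With df=165 the t distribution is very close to normal, so a normal approximation via the standard library's math.erf is assumed here rather than an exact t CDF (eg, scipy.stats.t would give nearly identical values at this sample size).

```python
import math

def two_tailed_p(t):
    """Two-tailed p-value using the normal approximation to the
    t distribution (reasonable for large df such as 165)."""
    phi = 0.5 * (1 + math.erf(abs(t) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

p_h1c = two_tailed_p(3.45)  # CFIP: general vs specific sharing
p_h2c = two_tailed_p(4.72)  # PrPC: general vs specific sharing
```

Both approximate P values fall below .001, consistent with the significance reported for H1C and H2C.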
Discussion

Principal Findings

Information sharing is one of the most important objectives of social media. People use Twitter for conversation, and information sharing can initiate a web-based exchange of ideas about an issue. As health information is more sensitive than other types of personal information, disclosing such data can raise privacy concerns. Most previous studies have focused mainly on privacy concerns related to companies and vendors, as these entities may collect and use individuals’ personal information for other purposes or may not properly protect the collected information [ 87 ]. A few studies have also examined privacy concerns related to the web-based behaviors of peers [ 22 ]. However, previous literature has not considered both sides of information privacy concerns in a single model in the context of social media. Moreover, disclosure behaviors on social media can be contingent upon the type of health information owing to differing sensitivity levels. Few studies have examined sharing behaviors based on the unique characteristics of general and specific health information [ 86 ]. Although both antecedents (CFIP and PrPC) have been examined separately in previous studies, this study’s findings offer scientific novelty. The study differs from previous research in this field because we integrated 2 aspects of privacy concerns (ie, those related to companies and those related to peers) to investigate the disclosure of general and specific health information on Twitter. In this study, we examined whether both aspects of information privacy concerns can jointly influence sharing decisions related to health-related information or whether the effect of one aspect can be overshadowed by the other; for instance, whether the nature of the relationships and contexts with 2 different information trustees (ie, vendors and peers) can influence people to share their health information on Twitter.
The findings indicate that privacy concerns related to companies play a more significant role in predicting specific health information disclosure than general health information disclosure. Privacy concerns related to companies’ practices reflect the collection and misuse of health information by vendors, such as concerns about using health information for data mining and research purposes [ 88 ]. Our findings demonstrate that Twitter users are more concerned about vendors and companies using or sharing their personal health information than their general health information. Thus, the dimensions of CFIP (ie, collection, unauthorized secondary use, improper access, and errors) become salient only when people want to disclose specific health information (such as information about their chronic diseases, signs and symptoms, or personal health status). Sharing public health information about hospitals, medical costs, and medications is not significantly affected by concerns about how companies may use such information. A plausible justification is that general health information does not reveal any personal information associated with an individual, and even if it is used for data mining or big data analysis, it will not violate the user’s privacy needs. Consistent with previous studies [ 89 ], individuals may deliberately share general health information on social media to increase public awareness and knowledge about a medical situation, such as COVID-19 symptoms and vaccination. Regardless of information accuracy or misinformation, users may share their general medical knowledge and public information about treatment options with minor or no privacy concerns related to companies’ and vendors’ collection and use practices. Our results also show that peer-related privacy concerns can significantly shape both general and specific health information sharing on Twitter.
Although Twitter is not the same as web-based health communities designed for sharing health information, many individuals use tweets to share personal and public health information [ 90 ]. Web-based interactions with peers may affect users’ sharing behaviors, as they may feel unable to control who can see, comment on, or exchange the health information they share on Twitter. The dimensions of PrPC (ie, psychological, communication, virtual territory, and peer-related information privacy concerns) are important factors in predicting how users may disclose public and personal health information. However, peer-related privacy concerns are more intense for sharing personal than general health information. This finding indicates that when users want to share public information about a disease (for instance, cancer, COVID-19, or HIV), they are still concerned about how peers might relate such general information to their profile. This concern becomes more salient when users decide to reveal personal health information about that disease, such as what treatments or medications they are taking daily or what medical procedures they will undergo in the future. Previous studies associate sharing personal health information related to physical health problems or mental disorders on web-based P2P networks with stigma [ 91 ]. The more sensitive the health information, the more stigma is attached to sharing it with peers. Thus, the prospect of being judged by peers (close friends and other web-based users) for sharing personal health information may deter users from disclosing such content on Twitter. Although the results show significant impacts of both aspects of privacy concerns on sharing specific health information on Twitter, peer-related privacy concerns are the leading factors in shaping personal health information disclosure, more so than privacy concerns associated with companies and third parties.
This result confirms the importance of web-based interactions with peers on social media and of managing their sharing behaviors, such as commenting or tagging [ 60 ]. This finding implies the critical effects of Twitter friends and the circle of people who can see and share tweets about private health information. Individuals may first be concerned about their Twitter circle and how peers would react to shared personal information about their health status and only then become worried about how many companies may access such data and how they would use or share them. Thus, secondary dissemination of personal health information by web-based peers through liking, reposting, retweeting, or commenting on posts poses a greater challenge to the maintenance of privacy controls than secondary use of data or unauthorized access to such private information by companies and vendors. Our finding that peer-related privacy concerns have a stronger impact on health information sharing than privacy concerns associated with companies and third parties offers a counternarrative to prevalent assumptions in digital privacy research. This could be attributed to Twitter’s highly interactive and public nature, which might accentuate peer-related concerns. Previous studies, mainly those conducted in the context of web-based shopping or general social media use, might have overestimated the role of concerns associated with companies and third parties owing to the commercial and private nature of these web-based activities.

Theoretical Implications

This study offers several theoretical contributions. First, our findings have implications for information privacy research in social media by integrating the existing privacy concern perspectives. This study opens up discussions through which privacy needs related to companies, third parties, and peers can be addressed together; this comprehensive mechanism may strongly affect users’ sharing behavior patterns.
Second, this study distinguishes between sharing public and private health information on Twitter. Although disclosing specific health information may help share personal experiences related to various medical situations that could be useful for peers, it is more challenging than disseminating general health information. The findings demonstrate how company-related and peer-related privacy concerns could shape the 2 types of information-sharing behaviors. Third, this study investigates the effects of information privacy mechanisms on health information sharing in the context of Twitter. The findings can promote discussions about health information disclosure in other P2P networks, such as other social media platforms, virtual worlds, or the metaverse. Fourth, exploring the determinants of information sharing with regard to different types of privacy concerns can expand our current understanding of knowledge acquisition. As sharing both general and specific health information on social media can contribute to people’s medical knowledge, addressing the barriers to specific health information sharing and removing the privacy challenges can substantially support medical knowledge acquisition through web-based interactions with peers. The study contributes significantly to the theoretical understanding of privacy concerns in web-based health information sharing. The evidence that peer-related privacy concerns exert a stronger influence than those related to companies and third parties highlights a potential oversight in theoretical perspectives. Current theories largely view companies as the predominant source of digital privacy concerns, and this view may need re-evaluation. The results extend existing theories by emphasizing the role of peer interactions in privacy concerns, particularly in public and highly interactive web-based environments such as Twitter.
This recognition of the social dimension of privacy concerns could be integrated into existing theoretical models to provide a more comprehensive framework for web-based privacy behavior. Furthermore, although our study is specific to Twitter and health information, the insights gained may have broad applicability. The potential role of peer-related privacy concerns could be a valuable area of exploration in other social media contexts and in the sharing of other types of sensitive information. Thus, our findings open up new avenues for theoretical exploration and suggest a need for further studies to fully understand the complexities of privacy behavior in the digital age. Unlike research approaches such as experiments, observational data, or qualitative interviews, our study used a quantitative survey to assess privacy concerns and information sharing. This allowed us to capture data from a large and diverse sample, providing a more robust and generalizable understanding of privacy concerns in web-based health information sharing. The strength of our quantitative approach lies in its ability to establish clear patterns and relationships among the various factors influencing privacy concerns. This enabled us to derive a more comprehensive and systematic understanding of the factors that significantly influence privacy concerns and health information sharing on Twitter. In terms of comparison, our findings offer a novel perspective on the role of peer-related privacy concerns in shaping web-based health information–sharing behaviors. Previous studies have predominantly focused on company-related and third party–related privacy concerns. However, our study highlighted the paramount influence of peer-related privacy concerns, thus suggesting a reorientation of focus in subsequent research efforts in this area.
Our study also provides quantifiable evidence about the relative influence of peer-related privacy concerns and privacy concerns associated with companies and third parties on Twitter users’ health information–sharing behaviors. Such quantifiable insights could serve as valuable benchmarks for future studies seeking to measure and compare similar variables in different contexts or on different platforms. Our survey methodology, coupled with a comparative analysis of the findings, underscores the contribution of our study to the field, offering both nuanced insights and broad trends that enrich our understanding of privacy concerns and health information sharing on social media platforms. Practical Contributions This study also provides several practical and technical implications for promoting privacy protection on Twitter. To promote the sharing of specific health information, it is essential to address privacy concerns related to both companies and peers. However, addressing peer-related privacy concerns is vital to encourage the disclosure of general health information, because concerns related to companies and third parties do not significantly predict general health information sharing. Thus, a robust privacy policy cannot be developed without regard to the type of information. As the 2 types of health information require different ways of satisfying privacy needs, the mechanisms and regulations facilitating general and specific health information sharing cannot be the same. Depending on the type of health information, it is essential to customize Twitter users’ ability to control their self-presentation and to meet different privacy protection requirements. General procedures and privacy policies regulating the dissemination and use of personal posts are not sufficient to address information privacy concerns. Twitter should allay users’ privacy concerns about sharing specific health information using advanced technology and management mechanisms.
For example, Twitter can enable individuals to restrict access to their shared personal health information. Punitive regulations can be established for inappropriate behaviors (such as retweeting without consent) that may discourage sharing specific health information. All controlling mechanisms and privacy protection functionalities should be easy to understand and use and should not be an additional burden on the users. As the 2 aspects (company-related and peer-related aspects) of privacy concerns manifest in several dimensions, different features can be developed to address the need for effective protection mechanisms. Twitter can add a new feature to tweets, enabling users to identify the sensitivity of posts related to health information. The content will be recognized as a private post if the sensitivity score (eg, calculated based on a scale ranging from 1 to 10) is more than average. Then, that post is automatically restricted from exposure to everyone, and users can share their thoughts and experiences only with a small crowd. Users can also define terms and conditions for peers who want to retweet sensitive posts. For instance, a “request for share” button can appear for each sensitive post, and peers can only share the posts when they get approval from the focal users. Given our findings, Twitter could introduce a feature that allows users to select the audience for their health-related posts, thereby addressing peer-related privacy concerns. They could also introduce a “Health Information” mode that automatically applies high privacy settings for tweets marked as health related. In addition, given the significant role of knowledge in shaping privacy concerns, Twitter should consider educational campaigns or prompts to inform users about these features and the importance of privacy when sharing health information. 
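As a rough illustration of the mechanisms proposed above (a 1–10 sensitivity score that flags posts as private, restricted visibility, and a “request for share” approval step), the following is a minimal sketch. The class, threshold, and field names are hypothetical and not part of any real Twitter API:

```python
# Hypothetical sketch of the proposed sensitivity-gating feature.
# HealthPost, DEFAULT_THRESHOLD, etc. are illustrative names only.
from dataclasses import dataclass, field

DEFAULT_THRESHOLD = 5  # midpoint of the hypothetical 1-10 sensitivity scale


@dataclass
class HealthPost:
    text: str
    sensitivity: int                       # 1 (public) .. 10 (highly sensitive)
    approved_viewers: set = field(default_factory=set)

    def is_private(self) -> bool:
        # Posts scoring above the scale's midpoint are treated as private
        return self.sensitivity > DEFAULT_THRESHOLD

    def can_view(self, user: str) -> bool:
        # Private posts are visible only to the approved circle
        return (not self.is_private()) or user in self.approved_viewers

    def request_share(self, peer: str, owner_approves: bool) -> bool:
        # "Request for share": peers may reshare only with explicit approval
        if owner_approves:
            self.approved_viewers.add(peer)
        return owner_approves
```

In this sketch, a tweet marked sensitivity 8 would be hidden from everyone outside the owner's small circle, and a peer's reshare attempt would trigger an approval request rather than an immediate retweet.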
In May 2022, Elon Musk called for further investigation into the accuracy of spam and fake account estimates, which Twitter had announced to be &lt;5%. Fake and spam accounts can lead to undesirable social interactions with peers, unwanted peer-shared information, and an unpleasant web-based social environment. Twitter needs new procedures to detect spam and fake accounts and to better control the functionality of Twitter bots in order to provide a more appropriate web-based social platform. Such a mechanism could automatically limit the visibility of private posts containing highly sensitive health information, even to people whom the user follows. Under the current Twitter privacy policy, users can specify who can reply to a specific tweet. However, it is hard to confirm who can actually see the posts because of bots and recommendation agents. Social bots use computer algorithms to artificially create content and interact with people on social media [ 92 ]. Twitter bots can be manipulative and purposely change people’s attitudes and opinions about a topic [ 93 ]. For instance, bots can share posts with peers who are not following a user but usually read posts with health information content. The existence of bots may be useful for sharing general information but can be very harmful because it can increase exposure to private posts with sensitive health information. A plausible recommendation is to add a new category for sensitive content (such as specific health information) besides the photo, graphics interchange format (GIF), and poll categories. Users could then create a new circle of people who are allowed to see, reply to, or share these sensitive posts. Users could also customize the configuration and limit possible unwanted interactions by selecting who can see the posts but cannot share them. This small crowd can be saved for future use and easily modified later.
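The per-post circle described above (peers who may view a sensitive post versus the smaller subset who may also share it) could be modeled as follows. This is an illustrative sketch of the idea, not an actual platform feature:

```python
# Illustrative sketch of a per-post permission "circle": some peers may
# view a sensitive post, and only a smaller subset may also share it.
class SensitiveContentCircle:
    def __init__(self, viewers, sharers=None):
        self.viewers = set(viewers)
        # Sharing rights require viewing rights; default is view-only
        self.sharers = set(sharers or ()) & self.viewers

    def can_view(self, user: str) -> bool:
        return user in self.viewers

    def can_share(self, user: str) -> bool:
        # "See but not share": viewing does not imply sharing rights
        return user in self.sharers
```

Keeping the sharer set as a subset of the viewer set enforces the "can see the posts but cannot share them" option directly in the data model, so a user accidentally listed only as a sharer gains no access.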
The advantage of this new category is that people are notified to customize their Twitter circle depending on different content (eg, highly sensitive, semisensitive, and nonsensitive). For instance, a user can select everyone to see and comment on posts containing information about cryptocurrency, high-technology companies, or humanitarian issues. In contrast, the user can select a group of followers to see their thoughts about general health information and choose only a few close friends to see and comment on posts with sensitive health information. Stringent privacy policies are required to enable Twitter users to limit who (ie, peers) can view, comment on, and share web-based content. People should be able to easily edit with whom they can share health information to exercise control over their personal digital information. Spambots on Twitter should also be controlled, modified, or filtered because they can involve potentially deceptive, harmful, or annoying activities. A more transparent policy is required to detect and deactivate invasive Twitter bots that can automatically like or retweet users’ postings without consent. Finally, the insights from our study are not only limited to Twitter but also have implications for other social media platforms where users might share health information. Such platforms should recognize the significant role of peer-related privacy concerns and consider introducing similar audience control features. Health professionals and health-related organizations using social media for patient engagement should also be aware of these concerns and take steps to ensure that their communication respects patient privacy. Policy makers should consider our findings when developing regulations for health information sharing on social media to ensure that they address the most significant privacy concerns. Limitations and Future Studies Our study also has some limitations that can be considered as opportunities for future studies. 
First, a web-based survey through MTurk was used to collect data, which may be biased toward people familiar with crowdsourcing platforms. Future studies can use other data collection and sampling strategies, such as collecting data directly from Twitter. Second, we collected data from 329 Twitter users, a sample that may not be representative of Twitter users. Future studies can increase the sample size to reduce sampling bias and improve the generalizability of the findings. Third, we did not consider the effects of cultural dimensions (such as individualism and uncertainty avoidance) on sharing health information on Twitter. It would be interesting for future studies to explore the effect of culture on disclosing different types of health information on social media. Fourth, our study tests a model that analyzes health information sharing from the perspective of privacy concerns. However, there may be other essential variables. More studies are required to examine other factors inhibiting and promoting sharing behaviors on social media, such as reputation, incentives, trust, stigma, and social support. Fifth, this study did not examine the accuracy of the health information shared on Twitter or the risks of misinformation, as these are beyond its scope. Future studies could expand upon our results to investigate the role of misinformation risks in information-sharing behaviors. Finally, our study focused on Twitter as the study context. We encourage future studies to extend the proposed model to other social media platforms (eg, Facebook, TikTok, and Instagram) where web-based interactions with peers are essential. Conclusions This study provides insights into health information sharing on Twitter from a privacy perspective. The findings propose that including CFIP and PrPC constructs can help in better conceptualization of information privacy concerns in the context of social media.
The integration of these 2 aspects of information privacy can expand the discussion about internet privacy by addressing the privacy needs associated with the practices of companies, such as collection, unauthorized secondary use, improper access, and errors. It also considers psychological privacy concerns, communication privacy concerns, peers’ sharing behaviors, and territory privacy concerns related to peers in such interpersonal interactions. This interactive approach provides a more comprehensive analysis of information privacy (related to web-based vendors and web-based peers) and adds a more substantial explanation of privacy needs on social media channels (such as Twitter). Privacy concerns may not always prohibit disclosure behaviors on Twitter; it depends on the type of health information. The findings demonstrate that peer-related privacy concerns are more salient in predicting general and specific health information sharing on Twitter than privacy concerns related to companies and third parties. The results offer practical contributions by shedding more light on the negative impacts of web-based peer behaviors on the loss of personal control over digital communications and information access. Privacy policies should focus on companies’ practices, such as sharing users’ information with third parties for big data analytics. We suggest mitigating privacy concerns and promoting health information sharing on Twitter by creating policies that tailor privacy needs to the type of health information shared (ie, general or specific).
Background Twitter is a common platform for people to share opinions, discuss health-related topics, and engage in conversations with a wide audience. Twitter users frequently share health information related to chronic diseases, mental health, and general wellness topics. However, sharing health information on Twitter raises privacy concerns as it involves sharing personal and sensitive data on a web-based platform. Objective This study aims to adopt an interactive approach and develop a model consisting of privacy concerns related to web-based vendors and web-based peers. The research model integrates the 4 dimensions of concern for information privacy that express concerns related to the practices of companies and the 4 dimensions of peer privacy concern that reflect concerns related to web-based interactions with peers. This study examined how this interaction may affect individuals’ information-sharing behavior on Twitter. Methods Data were collected from 329 Twitter users in the United States using a web-based survey. Results Results suggest that privacy concerns related to company practices might not significantly influence the sharing of general health information, such as details about hospitals and medications. However, privacy concerns related to companies and third parties can negatively shape the disclosure of specific health information, such as personal medical issues (β=−.43; P&lt;.001). Findings show that peer-related privacy concerns significantly predict sharing patterns associated with general (β=−.38; P&lt;.001) and specific health information (β=−.72; P&lt;.001). In addition, results suggest that people may disclose more general health information than specific health information owing to peer-related privacy concerns (t165=4.72; P&lt;.001). The model explains 41% of the variance in general health information disclosure and 67% in specific health information sharing on Twitter.
Conclusions The results can contribute to privacy research and propose some practical implications. The findings provide insights for developers, policy makers, and health communication professionals about mitigating privacy concerns in web-based health information sharing. It particularly underlines the importance of addressing peer-related privacy concerns. The study underscores the need to build a secure and trustworthy web-based environment, emphasizing the significance of peer interactions and highlighting the need for improved regulations, clear data handling policies, and users’ control over their own data.
Abbreviations AVE: average variance extracted; CDC: Centers for Disease Control and Prevention; CFIP: concern for information privacy; GIF: graphics interchange format; H1A: hypothesis 1A; H1B: hypothesis 1B; H1C: hypothesis 1C; H2A: hypothesis 2A; H2B: hypothesis 2B; H2C: hypothesis 2C; MTurk: Mechanical Turk; P2P: peer-to-peer; PrPC: peer privacy concern. Data Availability The data sets generated and analyzed during this study are available from the corresponding author upon reasonable request.
JMIR Form Res. 2024 Jan 12; 8:e45573
Results Sample characterization The flowchart of patient screening, eligibility, and assessment is shown in Figure 2. Table 1 presents the participants’ baseline characteristics. Six-minute walk test Acceleration and deceleration – CO, HR, and SV CO acceleration differed significantly between groups (HFrEF: 1.89 ± 1.39 L·min⁻¹·s⁻¹; CG: 4.59 ± 2.75 L·min⁻¹·s⁻¹, p&lt;0.01). In contrast, CO deceleration did not differ between groups (HFrEF: 0.62 ± 1.39 L·min⁻¹·s⁻¹; CG: 1.94 ± 2.11 L·min⁻¹·s⁻¹, p=0.07) (Figure 3). HR acceleration also differed significantly between groups (HFrEF: 12 ± 12 bpm·s⁻¹; CG: 24 ± 15 bpm·s⁻¹, p=0.039), whereas HR deceleration did not (HFrEF: 9 ± 8 bpm·s⁻¹; CG: 11 ± 9 bpm·s⁻¹, p=0.385) (Figure 4). In contrast, neither SV acceleration nor SV deceleration differed between groups (HFrEF: 15.51 ± 14.38 mL·s⁻¹; CG: 25.12 ± 15.65 mL·s⁻¹, p=0.110; and HFrEF: 3.29 ± 9.01 mL·s⁻¹; CG: 8.85 ± 16.98 mL·s⁻¹, p=0.304). Conventional measures Table 2 presents the outcomes traditionally measured during the 6MWT. Compared with CG participants, patients with HFrEF walked a shorter distance, with a similar HR at rest and during the test, but showed a lower HR during the first minute of recovery. Patients with HFrEF had significantly lower peak SV and CI (Table 3). 6MWT distance, hemodynamic variables, and functional class The greater the CO acceleration, the greater the distance walked during the 6MWT (r=0.49, p=0.01). Baseline CI (r=0.60, p&lt;0.01), peak CI (r=0.67, p&lt;0.01), ΔCI (r=0.63, p&lt;0.01), CI during the first minute of recovery (r=0.68, p&lt;0.01), and SV during the first minute of recovery (r=0.50, p&lt;0.01) were significantly correlated with the distance walked during the 6MWT.
The distance walked during the 6MWT correlated with HR in the first minute of recovery (r=0.68, p&lt;0.01) and with NYHA class (r=0.62, p&lt;0.01). In addition, baseline SV and peak SV correlated significantly with the distance walked during the test (r=0.51, p=0.01 and r=0.60, p&lt;0.01, respectively). Baseline CO, peak CO, and ΔCO also correlated significantly with the 6MWT distance (r=0.52, p&lt;0.01, r=0.67, p&lt;0.01, and r=0.61, p&lt;0.01, respectively).
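For readers unfamiliar with the statistic, the Pearson coefficients reported above can be computed as in this minimal sketch. The paired values below are synthetic and purely illustrative, not the study’s data:

```python
import numpy as np


def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))


# Illustrative paired data: 6MWT distance (m) vs peak cardiac index
distance = [300, 350, 400, 450, 500, 550]
peak_ci = [2.0, 2.4, 2.3, 2.9, 3.1, 3.4]
```

A value near +1 would correspond to the pattern reported here: the larger the hemodynamic response, the greater the distance walked.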
Editor responsible for the review: Carlos E. Rochitte. Potential conflict of interest: there is no conflict with the present article. Abstract Background The six-minute walk test (6MWT) is commonly used to assess patients with heart failure. However, several clinical factors can influence the distance patients cover in the test. Signal-morphology impedance cardiography (ICG) is a useful tool for noninvasive assessment of cardiac hemodynamics. Objective This study aimed to compare the acceleration and deceleration responses of cardiac output (CO), heart rate (HR), and stroke volume (SV) to the 6MWT between individuals with heart failure with reduced ejection fraction (HFrEF) and healthy controls. Methods This is an observational cross-sectional study. CO, HR, SV, and cardiac index (CI) were assessed before, during, and after the 6MWT by ICG. The significance level adopted in the statistical analysis was 5%. Results Twenty-seven participants were included (13 with HFrEF and 14 healthy controls). CO and HR acceleration differed significantly between groups (p&lt;0.01 and p=0.039, respectively). We found significant between-group differences in SV, CO, and CI (p&lt;0.01). Linear regression showed a deficient contribution of SV to the change in CO in the HFrEF group (22.9% versus 57.4%). Conclusion The main finding of this study was that individuals with HFrEF showed lower CO and HR acceleration during the submaximal exercise test than healthy controls. This may indicate an imbalance in the autonomic response to exercise in this condition.
Introduction Heart failure is a complex syndrome that can be the final consequence of most cardiovascular diseases. Reduced contractility is one of the main features of heart failure with reduced ejection fraction (HFrEF), in which deficient cardiac output (CO) results in systemic hypoperfusion. This, combined with pulmonary, peripheral, and neurohumoral alterations, contributes to poor tolerance of physical activity. 1,2 This reduced capacity for physical activity is one of the hallmarks of the disease, common to most individuals with HFrEF. 3 The six-minute walk test (6MWT) is a widely used method for assessing acute responses to self-limited exercise, in which the distance walked is a proven prognostic marker. 4 It is a simple, low-cost test that is easy to perform and requires no specialized training. 5 However, studies have shown that certain comorbidities, such as HFrEF, significantly reduce the distance walked (depending on cardiac performance, that is, on disease severity), requiring further assessment with other tools for better clinical and prognostic information. 6 Signal-morphology impedance cardiography (ICG) is a noninvasive method that accurately measures CO, stroke volume (SV), heart rate (HR), and cardiac index (CI). Clinical researchers can use it to evaluate both healthy individuals and those with conditions such as HFrEF during the 6MWT, obtaining a range of information about health status. 7-10 Literature data show that a commercially available ICG device, the PhysioFlow®, can accurately assess peak CO (compared with the Fick and thermodilution methods), both at rest and during exercise.
11-13 Moreover, signal-morphology ICG can add predictive information about peak oxygen consumption (peak VO2), with a strong correlation between measured and predicted peak VO2 (r = 0.931; p&lt;0.001) based on the CO, SV, and HR values obtained during the 6MWT. 14 ICG provides useful hemodynamic data during the 6MWT in different clinical conditions. 15 However, the acceleration and deceleration responses of CO, HR, and SV to the 6MWT have yet to be demonstrated in individuals with HFrEF; these variables may represent autonomic imbalances during effort and recovery. 7,11 Thus, this study aimed to assess the acceleration and deceleration responses of CO, HR, and SV to the 6MWT in patients with HFrEF and in healthy controls. Our secondary objective was to assess hemodynamic behavior (through CO, HR, SV, and CI) before, during, and after the 6MWT, which has not yet been described in the literature. Methods Study design This is an observational cross-sectional study with two groups, one composed of healthy individuals (control group, CG) and the other of patients with HFrEF (institutional ethics committee approval number 180651). Our sample was selected by convenience. The inclusion criteria were clinical signs and symptoms of HF, assessed by echocardiography, and an ejection fraction &lt;40% in patients on adequate standard pharmacological treatment. The included patients had been stable for at least three months (no hospital admission, emergency visits for decompensated HFrEF, or changes in drug therapy) and were regularly followed at a specialized HF clinic. Patients with pulmonary or vascular diseases were excluded from our sample. Control group participants had no heart disease and had been sedentary for at least six months. The included controls were age-matched to the participants with HFrEF.
Patients were invited to participate in the study, and the procedures were explained to them. 15 HF group participants were screened and recruited through an active search of the medical records of the hospital’s HF outpatient clinic. The control group was composed of hospital workers, contacted by telephone using a pre-established list. All participants invited to take part in the study read and signed an informed consent form before data collection. Signal-morphology impedance cardiography Impedance (Z) is a measure of resistance to an electrical current. ICG is a measurement method for assessing thoracic fluid. From the determined current and voltage, changes in impedance reflect changes in the blood volume passing through the thorax. 16 The voltage variation (Z) is filtered by the ICG device’s software to avoid the influence of variations in inspired and expired volume, thoracic fluids, or other factors (such as obesity or electrode position) that affect conventional ICG. 11 Previous studies have shown strong correlations between ICG measurements and invasive hemodynamic assessment methods, 17 both at rest and during exercise. 11,12 ICG can be employed as a diagnostic tool 18 and has been used as a predictor of cardiovascular prognosis. 19 Each participant’s weight and height were measured. Each participant’s skin was prepared (shaved with a disposable razor and abrasive gel, cleaned with alcohol, and dried) for electrode placement. In total, six previously unused cardiac monitoring electrodes (FS-50 Skintact, Skintact®, Austria) were connected by wires to a portable ICG device (PhysioFlow® PF07 Enduro™, Paris, France; 11.5 × 8.5 × 1.8 cm; weight 200 g), which was connected to a Bluetooth adapter.
The electrodes were placed on the left lateral region of the participant’s neck, at the center of the sternum, in standard V1 and V6 positions, and parallel to the spine at the level of the xiphoid process (Figure 1). The device and wires were secured with a nylon strap around the patient’s waist to reduce noise. The system was calibrated from 30 consecutive beats measured at rest, thereby establishing the patient’s baseline signal morphology and resting hemodynamic values. Six-minute walk test The 6MWT was performed according to American Thoracic Society guidelines. 20 The test was conducted in a 30-meter corridor. Participants were instructed to walk the greatest distance possible within six minutes. The distance walked was recorded and expressed in meters. Hemodynamic measurements ICG data were recorded continuously, beat by beat. Spurious values were excluded manually. CO was measured in liters per minute (L·min⁻¹), SV in milliliters (mL), HR in beats per minute (bpm), and CI in liters per minute per body surface area in square meters (L·min⁻¹·m⁻²). For the analysis, we used baseline values (defined, for practical reasons, as the mean of the measurements obtained in the two minutes preceding the assessment, with patients standing), maximum values obtained during the 6MWT, deltas (difference between maximum and baseline values), and values in the first minute of recovery. Acceleration and deceleration of CO, HR, and SV Acceleration was defined as the difference between the resting value and the mean of all values obtained during the first minute of the 6MWT, whereas deceleration was defined as the difference between the value measured at the end of the test and the mean of all values obtained during the first minute of recovery.
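The acceleration and deceleration definitions above amount to simple differences between single values and one-minute means of the beat-by-beat series. A minimal sketch of the computation, using synthetic, illustrative values only:

```python
import numpy as np


def acceleration(rest_value, first_minute_walk):
    """CO/HR/SV acceleration: difference between the resting value and the
    mean of all beat-by-beat values from the first minute of the 6MWT."""
    return np.mean(first_minute_walk) - rest_value


def deceleration(end_of_test_value, first_minute_recovery):
    """Deceleration: difference between the end-of-test value and the mean
    of all values from the first minute of recovery."""
    return end_of_test_value - np.mean(first_minute_recovery)


# Synthetic cardiac output series (L/min); values are illustrative only
co_rest = 4.0
co_first_minute = np.array([4.5, 5.0, 5.5, 6.0, 6.2])   # rising during walk
co_end_of_test = 7.0
co_recovery = np.array([6.0, 5.5, 5.0, 4.8, 4.5])        # falling afterwards
```

With these numbers, acceleration is 5.44 − 4.0 = 1.44 L·min⁻¹·s⁻¹-scale units and deceleration is 7.0 − 5.16 = 1.84; a blunted autonomic response, as reported in HFrEF, would show up as smaller values of both.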
These variables were collected during the first minute of walking (acceleration) and the first minute of recovery (deceleration), since these are the moments of the 6MWT at which the most pronounced hemodynamic changes occur. Acceleration represents the variability of the hemodynamic responses to the first minute of exercise, that is, to the increased cardiovascular demand. Deceleration represents the variability of the response immediately after exercise and during early recovery. All participants were monitored for 18 minutes: six minutes standing, six minutes of the walk test, and six minutes of recovery. Statistical analysis Data were presented as mean and standard deviation (SD) or median and interquartile range (IQR), according to the normality test. Categorical variables were presented as frequencies (absolute and relative), and the chi-square test was used to assess between-group differences for these variables. Data distribution was assessed with the Shapiro-Wilk test. For between-group comparisons, the independent t-test or the Mann-Whitney test was used, as appropriate. Pearson correlation was applied to assess the strength of association between variables. Multivariate linear regression, with the change in CO as the dependent variable, was used to identify the contribution of changes in HR and SV. All five assumptions required for linear regression analysis were verified (linear relationship, multivariate normality, little or no multicollinearity, absence of autocorrelation, and homoscedasticity). Alpha was set at &lt;0.05 to indicate statistical significance. Statistical analyses were performed in SPSS, version 20.0 (IBM, Armonk, NY, USA). Contribution of variables to the change in cardiac output Changes in HR explained 64.3% of ΔCO in the control group and 70.3% in patients with HFrEF.
The contribution of ΔSV was 57.4% in the control group and only 22.9% in the HFrEF group. According to the β regression coefficients, for each unit change in HR, ΔCO changed by 1.121 units in the control group and 0.92 unit in the HFrEF group. Regarding ΔSV, for each unit change in SV, ΔCO changed by 1.162 units in the control group and 0.91 unit in the HFrEF group. Discussion In the present study, we compared cardiodynamic responses to the 6MWT between patients with HFrEF and healthy individuals. Our main finding is that patients with HFrEF showed diminished hemodynamic responses to the 6MWT, mainly in CO and HR acceleration, compared with controls. In addition, we found significant differences between groups in the distance walked during the 6MWT, reinforcing the expected reduction in functional capacity in individuals with HFrEF. Moreover, during recovery, the HR response was lower in the HFrEF group, suggesting the impairment of the parasympathetic autonomic nervous system that indeed occurs in these patients (Central Figure). Autonomic imbalance, characterized by sympathetic predominance, is a classic feature of HFrEF, with clinically relevant consequences, including disease progression, development or worsening of exercise intolerance, ventricular remodeling and arrhythmias, and premature death. The mechanisms underlying these processes and their time course have yet to be described. To our knowledge, several studies have assessed the hemodynamic profile of different diseases during the 6MWT. 15,21-23 One study 22 assessed the acceleration and deceleration of CO (but not of SV) in pulmonary hypertension. These variables are important in HFrEF, since they represent an imbalance in autonomic responses to effort and recovery.
24 , 25 Encontramos uma diferença significativa na aceleração do DC entre os grupos (p<0,01) mas não na sua desaceleração (p=0,07), e um comportamento similar na aceleração (p=0,039) e na desaceleração (p=0,385). A aceleração e a desaceleração tanto do DB como da FC foram mais baixas no grupo com ICFEr em comparação aos controles; esses pacientes apresentam um déficit cronotrópico devido à doença em si, e ao efeito farmacológico dos betabloqueadores (todos os pacientes estavam recebendo tratamento com betabloqueadores). Uma vez que a aceleração e a desaceleração podem representar uma ativação simpática e parassimpática, respectivamente, a ICFEr apresenta uma maior ativação simpática e um desequilíbrio simpatovagal, o que corrobora os achados de nosso estudo, mostrando uma aceleração tanto do DC como da FC no grupo de pacientes com ICFEr. 26 , 27 Modelos animais mostraram que a estimulação simpática e um reflexo cardiovascular anormal contribuem para ativar o sistema nervoso simpático na ICFEr. 28 Por outro lado, pouco se sabe sobre o papel da atividade nervosa parassimpática nessa condição. Hu et al. 26 mostraram que a desaceleração da FC é um preditor independente de infarto agudo do miocárdio e morte súbita na ICFEr, constituindo um preditor mais forte que a fração de ejeção ventricular esquerda e medidas convencionais da variabilidade da FC. Ainda, os mesmos autores 26 avaliaram somente as respostas de aceleração e desaceleração da FC. Assim, nosso estudo é o primeiro a avaliar a aceleração e a desaceleração do DC, da FC, e do VS na ICFEr. Apesar de não termos conseguido demonstrar uma diferença significativa na desaceleração do DC entre os grupos, encontramos uma nítida tendência (p=0,07) de perda nas respostas no grupo ICFEr. 
Regarding HR regulation, ventricular dysfunction in HFrEF can trigger distinct compensatory mechanisms, which initially increase neurohormonal activation of the sympathetic nervous system and the renin-angiotensin-aldosterone system. 29 However, prolonged exposure to sympathetic activation can inhibit beta-adrenergic receptor activity, impairing inotropic responses, which may hinder HR recovery after exercise. 30 These data are corroborated by our results. 2

We found lower baseline and peak SV values in patients with HFrEF than in controls. Both the resting and the exercise behavior in the HFrEF group reflect impaired ventricular contraction, that is, a reduction in SV with each systole. 31 In healthy individuals, the Frank-Starling principle describes a physiological mechanism that increases SV to compensate for an initial reduction in ventricular contraction. 32 Individuals with HFrEF, in contrast, show failure of this mechanism. The consequent reduction in cardiovascular reserve impairs and reduces ventricular contractility, decreasing SV. 33 This phenomenon also corroborates the findings of our study. Moreover, we observed significant between-group differences in baseline and peak values of cardiac index (CI) and CO, which may be explained by compensatory HR and SV mechanisms. 21 We found that ventricular dysfunction reduced CI and CO in patients with HFrEF, triggering mechanisms that initially increase these variables to maintain end-organ perfusion. 34 However, after long exposure to them, the myocardium undergoes remodeling, reducing ventricular inotropic capacity and SV, consequently affecting both CI and CO. 2 When submitted to submaximal exercise, individuals with HFrEF show blunted responses for both the increase and the decrease in VO2 compared with healthy individuals. 35 These patients may also have mild pulmonary hypertension, which may explain the behavior of CO and SV. 22,36 The behavior of CO may thus be due to the imbalance in autonomic responses, affecting SV. 37

We tested whether the hemodynamic parameters assessed by impedance cardiography (ICG) correlated with the distance walked during the 6MWT, and found that the greater the CO acceleration, the longer the distance walked (r=0.49; p=0.01). Furthermore, the individuals with the fastest HR recovery after the test were those who walked the longest distances. Although this was not the main objective of the study, we found that hemodynamic assessment by ICG during the 6MWT can provide interesting results that directly reflect functional capacity. Besides the 6MWT distance, we found that NYHA functional class was well associated with maximal CO, CO deceleration, and HR during the first minute of recovery after the test. 38 As expected, we found between-group differences in the distances walked in the 6MWT (p<0.01), corroborating a previous study. 39 Finally, linear regression showed a deficient contribution of SV (22.9%) to the changes in CO in patients with HFrEF, and normal values in healthy controls (57.4% contribution of SV to the change in CO). Indeed, oxygen pulse, an indicator of SV, was lower in patients with HFrEF than in healthy controls, as described in previous studies. SV dysfunction, as represented by low oxygen pulse, may reflect insufficient systemic oxygen delivery during exercise and/or deficient oxygen utilization due to reduced mitochondrial function. 40,41

The study had some limitations. First, the small sample size may have limited our ability to detect significant differences. However, the main objective of this physiological study was to assess the hemodynamic behavior of individuals with HFrEF during the 6MWT, compare it with that of healthy individuals, and determine the relative contribution of variations in CO, SV, and HR during the phases of the walk test. Second, our study was not designed to perform correlation or association tests of ICG parameters with distance walked, NYHA functional class, or ejection fraction. Although the 6MWT is safe, inexpensive, and easily used to assess functional capacity in patients with HFrEF, 20,39 it has limitations (for example, the test does not provide direct data on hemodynamic behavior). Thus, new technologies would be important to add information to the 6MWT findings and help in the management of this devastating syndrome. Indeed, technological advances have allowed the development of a portable device to measure, noninvasively and in real time, a wide range of hemodynamic parameters, such as CO, SV, HR, and CI.

Conclusion
This is the first study to demonstrate the acceleration and deceleration hemodynamic responses of CO, HR, and SV by ICG in patients with HFrEF during the 6MWT. Individuals with HFrEF showed deficient CO and HR acceleration during submaximal exercise compared with healthy controls, which may represent an imbalance in the autonomic response to effort. Further studies are needed to test whether changes in CO, HR, SV, and CI during the 6MWT can provide information on disease prognosis.
CC BY
Arq Bras Cardiol. 2023 Dec 14; 120(12):e20230087
PMC10789370
38126569
Results
The total sample (n = 201) comprised participants aged 36 to 92 years, 72% male. Of the participants, 30% (n = 58) had heart failure (HF) and 70% (n = 143) had coronary artery disease (CAD); of the latter, 58% (n = 81) had undergone revascularization. NYHA functional class I predominated in the total sample (53%); it accounted for 60% (n = 69) of participants with CAD and 35% (n = 17) of those with HF. In the total sample, mean timed up and go (TUG) time was 7 ± 2.5 seconds, and mean VO2 peak obtained in the cardiopulmonary exercise test (CPET) was 17 ± 6 mL·kg⁻¹·min⁻¹. Stratified by sex, TUG performance was 6.86 ± 0.20 seconds for men and 7.23 ± 0.33 seconds for women. Mean VO2 peak obtained in the CPET was 18.25 ± 0.50 mL·kg⁻¹·min⁻¹ for men and 15.22 ± 0.57 mL·kg⁻¹·min⁻¹ for women (Table 1). The distribution of participants in the total sample and in the derivation and validation groups is shown in Table 2.

Derivation group
The derivation group comprised 134 participants with a mean age of 69 ± 13 years, 72% male. Regarding functional class, 52% were NYHA I and 37% NYHA II. TUG performance was 7 ± 2.5 seconds, and mean VO2 peak obtained in the CPET was 17 ± 6 mL·kg⁻¹·min⁻¹ (Table 1).

Validation group
The validation group comprised 67 participants with a mean age of 62 ± 13 years, 72% male. Regarding functional class, 56% were NYHA I and 37% NYHA II. TUG performance was 6 ± 2 seconds, and mean VO2 peak obtained in the CPET was 18 ± 6 mL·kg⁻¹·min⁻¹ (Table 1).
Derivation of the predictive model
The correlation performed in the derivation group (n = 134) to verify the relationship between the TUG and VO2 peak identified a correlation coefficient of r = −0.54 (95% confidence interval: −0.65 to −0.41; p < 0.001) and an R² of 0.30 (Figure 1). Multiple linear regression was performed with data from the derivation group (n = 134) to identify independent predictors and develop the model for estimating VO2 peak from the TUG. The constructed predictive equation is given in Table 3, with male sex coded as 0 and female sex as 1. In the final model, r was 0.643 and adjusted R² was 0.400, as described in Table 3.

Validation of the predictive equation
The data from the validation group (n = 67) were entered into the predictive equation, yielding a mean estimated VO2 peak of 18.81 mL·kg⁻¹·min⁻¹. Mean VO2 peak determined by CPET in this sample was 18.18 mL·kg⁻¹·min⁻¹, and a paired t test comparing the means of the equation-estimated and CPET-determined VO2 peak found no statistically significant difference between the methods.

Agreement analysis
The Bland-Altman plot showed that only 3 (4.4%) patients in the validation sample (n = 67) fell outside the upper and lower limits of agreement. These 3 patients were male: one aged 68 years, with HF and a BMI of 24 kg/m²; a second aged 65 years, with CAD and a BMI of 25 kg/m²; and a third aged 44 years, with CAD and a BMI of 24 kg/m²; all 3 also had dyslipidemia. No proportional bias was found in these analyses (Figure 2).
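The derivation/validation workflow above (2/3–1/3 split, least-squares fit, Bland-Altman agreement) can be sketched as follows. All data here are synthetic and only shaped like the study's sample; the coefficients are placeholders, not the published equation.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 201

# Hypothetical cohort: TUG time (s), age (yr), sex (0 = male, 1 = female),
# and measured VO2 peak (mL·kg⁻¹·min⁻¹) with invented coefficients.
tug = rng.normal(7.0, 2.5, n).clip(3, 20)
age = rng.normal(67, 13, n).clip(36, 92)
sex = rng.integers(0, 2, n)
vo2 = 35 - 1.3 * tug - 0.12 * age - 2.5 * sex + rng.normal(0, 3, n)

# 2/3 derivation group, 1/3 validation group, as in the article.
X = np.column_stack([np.ones(n), tug, age, sex])
Xc, yc = X[:134], vo2[:134]
Xv, yv = X[134:], vo2[134:]

coef, *_ = np.linalg.lstsq(Xc, yc, rcond=None)  # ordinary least squares
pred = Xv @ coef                                 # estimated VO2 peak

# Bland-Altman: bias and 95% limits of agreement between methods.
diff = pred - yv
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
outside = int(np.sum(np.abs(diff - bias) > loa))
print(f"bias {bias:.2f}, limits ±{loa:.2f}, {outside} subjects outside")
```

A near-zero bias with few points outside the limits of agreement is what supports interchangeability of the estimated and measured VO2 peak, as reported in the article.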
Determination of the best cutoff point
ROC curve analysis was performed with the total sample (n = 201) and found an area under the curve of 0.80 (95% confidence interval: 0.74 to 0.86) for predicting a VO2 peak ≥ 20 mL·kg⁻¹·min⁻¹. The TUG cutoff point for predicting a VO2 peak ≥ 20 mL·kg⁻¹·min⁻¹ was 5.47 seconds, with a sensitivity of 82.8% and a specificity of 66.5% (Figure 3).
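Choosing a cutoff that balances sensitivity and specificity on a ROC curve can be done by sweeping candidate thresholds and maximizing Youden's J (sensitivity + specificity − 1). The sketch below uses invented TUG times, not the study's data; note that a subject "tests positive" for good cardiorespiratory fitness when the TUG time is at or below the cutoff, since shorter times indicate better fitness.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical TUG times: subjects with good CRF (VO2 peak >= 20) vs the rest.
tug_good = rng.normal(5.0, 1.0, 60)
tug_poor = rng.normal(7.5, 2.0, 141)
times = np.concatenate([tug_good, tug_poor])
label = np.concatenate([np.ones(60), np.zeros(141)])  # 1 = good CRF

best = None
for c in np.unique(times):                 # each observed time is a candidate cutoff
    pos = times <= c                       # predicted "good CRF"
    sens = pos[label == 1].mean()          # true-positive rate
    spec = (~pos)[label == 0].mean()       # true-negative rate
    youden = sens + spec - 1               # Youden's J balances sens/spec
    if best is None or youden > best[0]:
        best = (youden, c, sens, spec)

j, cutoff, sens, spec = best
print(f"cutoff <= {cutoff:.2f} s  (sens {sens:.1%}, spec {spec:.1%})")
```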
Review editor: Ricardo Stein. Potential conflict of interest: there is no conflict related to this article.

Abstract

Background: The use of the timed up and go (TUG) test to assess cardiorespiratory fitness in patients with heart disease is not well established in the literature.

Objectives: To test the association between the TUG and peak oxygen consumption (VO2 peak), build a TUG-based equation to predict VO2 peak, and determine a cutoff point for estimating a VO2 peak ≥ 20 mL·kg⁻¹·min⁻¹.

Methods: Cross-sectional study of 201 individuals with coronary artery disease or heart failure, aged 36 to 92 years, who underwent the TUG and cardiopulmonary exercise testing. Correlation, ROC curve, multiple linear regression, and Bland-Altman analyses were performed. A p < 0.05 was considered significant.

Results: Mean age of the total sample was 67 ± 13 years, and 70% of participants were male. Mean VO2 peak was 17 ± 6 mL·kg⁻¹·min⁻¹, and mean TUG performance was 7 ± 2.5 seconds. The correlation between VO2 peak and the TUG was r = −0.54 (p < 0.001), with an R² of 0.30. A TUG-based equation was developed, with male sex coded as 0 and female sex as 1 (adjusted R: 0.41; adjusted R²: 0.40). VO2 peak estimated by the equation was 18.81 ± 3.2 mL·kg⁻¹·min⁻¹, and that determined by cardiopulmonary exercise testing was 18.18 ± 5.9 mL·kg⁻¹·min⁻¹ (p > 0.05). The best cutoff point for a VO2 peak ≥ 20 mL·kg⁻¹·min⁻¹ was ≤ 5.47 seconds (area under the curve: 0.80; 95% confidence interval: 0.74 to 0.86).

Conclusions: The TUG and VO2 peak showed a significant association. The VO2 peak predictive equation was developed and internally validated with good performance. The TUG cutoff point for predicting a VO2 peak ≥ 20 mL·kg⁻¹·min⁻¹ was ≤ 5.47 seconds.
Introduction
Cardiovascular diseases are the leading causes of death worldwide, accounting for 17.9 million deaths in 2019, corresponding to 32% of all deaths. 1 Cardiovascular diseases are highly disabling, reducing functional capacity (FC), a condition that may signal serious cardiovascular risk and indicates worse patient prognosis. 2-5 FC is the ability to perform daily activities independently and is considered an important health indicator, as it is associated with quality of life. 6 FC can be assessed by maximal oxygen consumption at peak effort (VO2 peak), the determinant of cardiorespiratory fitness (CRF) in the general population; VO2 peak values ≥ 20 mL·kg⁻¹·min⁻¹ are related to better prognosis. Cardiopulmonary exercise testing (CPET) is the gold-standard method for measuring CRF; however, it is not widely accessible, as it requires expensive equipment, appropriate facilities, and supervision by a specialist physician, making it a costly procedure out of reach for most of the population. 4,7 Currently, validated submaximal tests, such as the 6-minute walk test (6MWT) and the 6-minute step test, are viable alternatives to CPET for assessing CRF. 5 Periodic measurement of FC is recommended in patients with heart disease, as it is an indicator of functional and clinical prognosis and, consequently, of mortality. 5,7 Therefore, when CPET or the other functional tests cannot be performed, other instruments capable of providing this functional assessment are employed. The timed up and go (TUG) test assesses functional mobility based on lower-limb muscle strength, balance, and agility. 8-10 It is a simple test whose performance is the time, in seconds, for the individual to rise from a chair, walk in a straight line for three meters as quickly as possible, turn around, and return to the chair, sitting down again. 11 Data on the use of and performance on the TUG in patients with heart disease are still scarce. Thus, the main objective of this study was to build a VO2 peak predictive equation based on the TUG performance of individuals with heart disease, as well as to analyze the association between the TUG and VO2 peak and to determine a TUG cutoff point identifying patients with better CRF.

Methods
This is a cross-sectional study based on the analysis of data from participants in a cardiac rehabilitation program, from August 2017 to March 2020, who, following clinical guidelines, underwent CPET and the TUG at a referral cardiology hospital in the city of Salvador, Brazil. Patients with coronary artery disease (CAD) and/or heart failure (HF) were included; diagnoses were established from clinical history (acute myocardial infarction, stable CAD, angioplasty or revascularization procedures, or the presence of angina or dyspnea) and the presence of electrocardiographic or echocardiographic abnormalities, with the Simpson method used to measure ejection fraction. Participants who did not undergo both the CPET and the TUG were excluded. At the initial assessment, clinical and sociodemographic data were collected and the CPET was performed. The CPET was carried out on a Micromed treadmill (São Paulo), model Centurion 300, with a Cortex gas analyzer (Leipzig, Germany), model Metalyzer 3B, capable of breath-by-breath measurement. Each patient's functional class determined the ramp protocol used, aiming to standardize tests lasting 8 to 12 minutes. Ventilatory data were analyzed in 10-second intervals, and VO2 peak was expressed in mL·kg⁻¹·min⁻¹. The modified Borg scale was used to assess perceived exertion.

The TUG was performed under the supervision of a trained health professional, 2 to 7 days after the CPET. A chair with the seat 46 cm above the floor, with a backrest and without armrests, was used. In the starting position, the participant sat in the chair, leaning back, with feet flat on the floor. To perform the TUG, participants were instructed that, at the command "get up and go," when the stopwatch was started, they should stand up without using their arms, walk as quickly as possible, and, upon crossing a line positioned 3 meters from the chair, turn around and return to the chair, sitting down again, at which point the stopwatch was stopped. TUG performance corresponded to the time, in seconds, needed to complete this process, determined by a stopwatch operated by an evaluator trained in the protocol. The study protocol was submitted to the Celso Figueirôa Research Ethics Committee at Hospital Santa Izabel and approved under CAAE number 57813016.0.3001.5533, in compliance with the Declaration of Helsinki for clinical research and resolution 466/12 of the National Health Council. All study participants signed an informed consent form.

Statistical analysis
Data normality was determined with the Shapiro-Wilk test and inspection of histograms, and a parametric analysis was adopted. Continuous variables were expressed as mean ± standard deviation and categorical variables as number or percentage. Pearson's correlation test was used to verify the correlation between the TUG and VO2 peak.
In building the predictive model, Pearson correlation analysis was performed to verify which variables were related to VO2 peak. The following were analyzed: TUG, age, sex, body mass index (BMI), presence of CAD and/or HF, heart rate, ejection fraction, systolic blood pressure, and waist circumference. With all assumptions met, multiple linear regressions were run with variables admitted by statistical significance or biological plausibility, and the TUG-based predictive model was controlled for age, sex, BMI, waist circumference, and systolic blood pressure, in order to identify predictors of VO2 peak. The stepwise-backward method was used as the criterion for including and excluding variables. To build the predictive model, data from 2/3 of the total sample were used, forming group 1 (derivation), admitted after the eligibility criteria and corresponding to the first 134 participants on the list; group 2 (validation) comprised the remaining 1/3 of the total sample, the 67 remaining participants on the list. The paired Student t test was used to compare the mean VO2 peak determined (CPET) and estimated (predictive model) in the validation group. Agreement between methods was assessed with Bland-Altman analysis. The best cutoff point for predicting a VO2 peak ≥ 20 mL·kg⁻¹·min⁻¹ was determined by ROC curve analysis, considering the balance between sensitivity and specificity at the point of the area under the curve closest to 1. Analyses were performed in the Statistical Package for Social Sciences (SPSS), version 26.0. A p < 0.05 was adopted as the threshold for statistical significance.

Discussion
The data found in this study indicate that the TUG showed a good association with the VO2 peak of the patients with heart disease participating in a cardiac rehabilitation program.
A TUG cutoff point capable of identifying patients with heart disease and better CRF was determined, and the analyses of the predictive equation also showed the TUG to be a test with adequate predictive capacity for assessing CRF in this population. The VO2 peak obtained with the predictive equation developed in this study from TUG performance agreed with the VO2 peak determined by CPET in the same sample, showing it to be an adequate method for estimating the CRF of patients with heart disease. In a meta-analysis of healthy adults, Kodama et al. 12 suggested that CRF is an important predictor of mortality and cardiovascular events. Although their sample differed in characteristics from ours, it can be inferred that better CRF is associated with a lower risk of cardiovascular complications. It should be emphasized that studies relating the TUG test to populations with heart disease are still scarce in the literature. CRF established by VO2 peak is an important component of health assessment because, according to Carvalho et al. 3 and Ritt et al., 5 it is a determinant that should be measured periodically in patients with heart disease in order to monitor FC in activities of daily living and instrumental activities. CPET may not always be accessible to the general population, especially in settings with limited material, structural, and trained-professional resources. Validated indirect alternatives with lower operational complexity, greater speed, and lower cost, 3 such as the predictive model developed in this study, can make CRF assessment more widely available and are therefore of great relevance to clinical practice. In a study of older adults undergoing preoperative evaluation for various reasons, Boereboom et al. 13 stated that the TUG could be a useful test to replace CPET when the latter is unavailable.

However, we believe caution is warranted in suggesting that TUG performance alone is sufficient to replace CPET for assessing CRF, especially in patients with heart disease. The predictive equation developed in this study offers a more rigorous estimate of CRF in patients with heart disease than the TUG completion time alone, as it applies greater statistical rigor and takes into account characteristics of each patient's biological individuality, thus representing a safe method. We found a moderate negative correlation between TUG performance and VO2 peak, similar to the findings of Pedrosa et al., 14 who, in a study of older women with hypertension, also found a moderate negative correlation between the TUG and the 6MWT, a functional test corresponding to CPET. Although the tests differ, both aim to measure CRF; and although the samples also differ, 60% of the participants in our sample of patients with heart disease also had systemic arterial hypertension. In the study by Lourenço et al., 15 a stronger moderate negative correlation was found between the TUG and the 6MWT in a sample of adult women with rheumatoid arthritis. In contrast, Boereboom et al., 13 in their study of older adults, found a weak, albeit significant, negative correlation between the TUG and CPET. These studies diverge in sociodemographic and clinical characteristics as well as in their test protocols; nevertheless, they indicate a relationship between the methods, which allows us to infer that the TUG may be a test with a suggested capacity to indicate CRF levels. After the analyses, the predictors of VO2 peak in this investigation were age, sex, and TUG completion time.

Regarding differentiation of CRF by sex, the study by Herdy et al. 4 showed that healthy women in the same age range as men had VO2 max values ranging from 76% to 83% of the mean values in men. In the study by Nunes et al., 16 an even larger sex difference in VO2 max was found, with females showing mean VO2 max values close to 70% of those attributed to males. The data from Herdy et al. 4 and Nunes et al. 16 resemble those found in this study of patients with heart disease, in which women had a mean VO2 peak of 83% of the mean value in men, which may be explained by physiological and morphological differences inherent to each sex. Another predictor of VO2 peak found in this study was age, which was directly proportional to TUG completion time and inversely proportional to the VO2 peak obtained in the CPET. In our sample, mean TUG performance was 7 ± 2.5 seconds, close to the normative values of 8 ± 1 seconds suggested by Bohannon 17 for people in the same age range, who also noted a gradual decline in TUG performance with each additional decade of age. Other studies, such as those by Khant et al. 18 and Bischoff et al., 19 identified age as a determinant of TUG performance, suggesting that clinical use of the TUG should not disregard biological characteristics such as age and sex when interpreting test performance. Thus, it is worth highlighting that when the TUG is used to predict CRF, especially in patients with heart disease, a predictive model using TUG performance together with characteristics such as the age and sex of those assessed, as well as the respective constants proposed by the statistical model, can ensure a more accurate estimate.
In the ROC curve analyses, the TUG showed a plausible level of accuracy for estimating CRF in patients with heart disease. The cutoff point found to predict a VO2 peak ≥ 20 mL·kg⁻¹·min⁻¹, that is, individuals with better CRF, was 5.47 seconds, suggesting, as in other studies, that the TUG can be a reliable test for estimating CRF in patients with heart disease. 4,13 In an analysis of individuals with diverse clinical characteristics undergoing preoperative evaluation and of similar age to our sample, a TUG cutoff of 6.5 seconds was identified for predicting postoperative complications, based on a VO2 peak < 18.6 mL·kg⁻¹·min⁻¹. 13 The studies use different methodologies, yet the results indicate similar TUG performance parameters in samples of equivalent age ranges for predicting CRF. Other analyses of healthy older adults have reported diverse TUG performance parameters. 17-23 Thus, it is important to recognize that determining TUG cutoff points should take into account the clinical characteristics of those assessed (age, sex, weight, as well as comorbidities, height, or lower-limb length), ensuring greater sample homogeneity and yielding more precise cutoff points. Poor TUG performance may be related to reduced FC in older adults with heart disease, a relationship already noted in the study by Bateman et al. 24 In addition, Boereboom et al. 13 noted that reduced test performance was linked to an increased incidence of cardiovascular disease and mortality, and these factors may be associated with an inflammatory process and cardiometabolic complications derived from sarcopenia. By building and validating a predictive equation to estimate VO2 peak, we propose a simplified tool capable of joining the set of instruments for FC assessment, contributing to more complete and comprehensive clinical practice for patients with CAD and HF.

The results of this study may chiefly benefit patients in the Unified Health System (Sistema Único de Saúde), given the significant limitations of equipment, appropriate spaces, financial resources, and professional staff available for other tests with the same purpose. This study had some limitations, such as being single-center; multicenter studies allow a more representative sample of a population as diverse as Brazil's. It should be noted that the predictive model developed and the TUG cutoff point identified were proposed for a sample of patients with CAD and/or HF. The analyses could be more specific if patients with CAD and those with HF were analyzed separately, and studies with larger samples are needed for that purpose. Our study did not address prognostic correlation, as it was cross-sectional, and it used a surrogate endpoint classically related to prognosis, namely VO2 peak.

Conclusion
TUG performance showed a negative, moderate, and significant association with CRF in a population of patients with heart disease. An equation to predict VO2 peak from TUG performance was developed and validated, showing good performance. A time ≤ 5.47 seconds was the cutoff point determined to predict a VO2 peak ≥ 20 mL·kg⁻¹·min⁻¹. These results may assist in formulating guidelines for FC assessment in this population.
CC BY
Arq Bras Cardiol. 2023 Dec 15; 120(12):e20230338
PMC10789371
38126445
Resultados No presente estudo, 155 pacientes (média de idade 60,7 ± DP 7,5 anos) foram classificados como hipertensos (casos) e 155 (média de idade 58,2 ± DP 11,9 anos) como normotensos (controle). As características sociodemográficas e clínicas, bem como as características laboratoriais da população estudada, de acordo com a análise univariada, são apresentadas nas Tabelas 1 e 2 , respectivamente. As características sociais, clínicas e laboratoriais da população estudada de acordo com a análise multivariada realizada no modelo final são apresentadas na Tabela 3 . No presente estudo, no modelo final, foram encontradas diferenças significativas entre idade (p = 0,03), escolaridade (p < 0,001), tabagismo (p < 0,001), índice de massa corporal (p < 0,001), triglicerídeos (p = 0,04), LDL-colesterol (p < 0,001), glicose (p < 0,001) e ácido úrico (p < 0,001) quando comparados os grupos hipertensos e normotensos. Características alélicas e genotípicas da população estudada A distribuição dos genótipos do gene GNB3 analisados estava em equilíbrio de Hardy-Weinberg (p > 0,05). Não foram encontradas diferenças significativas entre as frequências alélicas e genotípicas e as populações hipertensas e normotensas ( Tabela 4 ). Análise do alelo T e características clínicas da população Na população estudada, os pacientes hipertensos que apresentavam pelo menos um alelo T eram significativamente mais velhos que os normotensos (p < 0,001) ( Figura 1A ). Da mesma forma, os pacientes hipertensos que possuíam pelo menos um alelo T apresentaram IMC médio significativamente maior do que aqueles que possuíam pelo menos um alelo T no grupo normotenso (p < 0,004) ( Figure 1B ). Análise de genótipos e dosagens bioquímicas As Tabelas 5 , 6 e 7 descrevem as características bioquímicas da população estudada em relação à presença do alelo T. 
In hypertensive patients, triglyceride, glucose, and uric acid levels were higher in carriers of at least one T allele than in normotensive patients (p < 0.002, p < 0.004, and p = 0.002, respectively). Conversely, in normotensive patients, LDL cholesterol concentrations were higher in carriers of at least one T allele than in hypertensive patients (p = 0.003).
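Hardy-Weinberg equilibrium for a biallelic variant such as rs5443:C>T is usually checked with a chi-square goodness-of-fit test on the genotype counts. A minimal sketch in Python; the genotype counts below are hypothetical, not the study's data:

```python
def hwe_chi_square(n_cc, n_ct, n_tt):
    """Chi-square goodness-of-fit statistic for Hardy-Weinberg equilibrium,
    given observed genotype counts for a biallelic variant (CC, CT, TT)."""
    n = n_cc + n_ct + n_tt
    p = (2 * n_cc + n_ct) / (2 * n)   # estimated frequency of allele C
    q = 1 - p                         # estimated frequency of allele T
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_cc, n_ct, n_tt)
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return p, q, chi2

# Hypothetical counts close to perfect equilibrium (p_C = 0.6, n = 100):
p, q, chi2 = hwe_chi_square(36, 48, 16)
# chi2 is compared against 3.841 (chi-square, 1 df, alpha = 0.05);
# chi2 < 3.841 means equilibrium is not rejected (p > 0.05), as reported above.
```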
Editor responsible for the review: Carlos E. Rochitte Potential conflict of interest No conflict related to the present article. Abstract Background Genes and their variants, together with environmental factors, contribute to the development of the hypertensive phenotype. The G protein beta subunit 3 gene ( GNB3 ) is involved in intracellular signaling, and its variants have been linked to susceptibility to arterial hypertension. Objective To determine the association of the GNB3 variant (rs5443:C>T) with arterial hypertension, biochemical parameters, age, and obesity in hypertensive and normotensive individuals from Ouro Preto, Minas Gerais. Method Variant identification was performed by real-time PCR, using the TaqMan® system, in samples from 310 patients (155 hypertensive and 155 normotensive). Biochemical analyses (renal function, lipid profile, and glycemia) were performed on serum by UV/Vis spectrophotometry and ion-selective electrode. A multiple logistic regression model was used to identify factors associated with arterial hypertension. Continuous variables with normal distribution were analyzed using the unpaired Student's t test; non-normal data were analyzed using the Mann-Whitney test. P values < 0.05 were considered significant. Results The rs5443:C>T variant was not associated with arterial hypertension in the population evaluated (p = 0.88). Regarding biochemical measurements, the T allele was associated with elevated triglyceride, glucose, and uric acid levels in hypertensive individuals (p < 0.05). Conclusion The present results show the importance of genetic diagnosis in preventing the causes and consequences of disease and suggest that the GNB3 rs5443:C>T variant may be associated with changes in the biochemical profile of hypertensive individuals.
Introduction Arterial hypertension is a chronic disease responsible for several conditions frequently associated with metabolic disorders. It affects multiple target organs and can lead to sudden death, stroke, peripheral arterial disease, heart failure, acute myocardial infarction, and chronic kidney disease. 1 , 2 The physiological regulation of blood pressure and the pathophysiological changes leading to arterial hypertension have a genetic component. It is assumed that 30% to 50% of interindividual blood pressure variability may be genetically determined. 3 , 4 The GNB3 rs5443:C>T variant, located on chromosome 12p13 in the exon 10 region, has been reported to be associated with arterial hypertension. 3 , 5 G proteins belong to a protein superfamily and are initially in an inactive state, bound to intracellular receptors. When activated, they trigger amplifying enzymes and stimulate ion channels, carrying out signal transduction. 6 The GNB3 rs5443:C>T variant is responsible for the exchange of the C allele for T, generating alternative splicing of exon 9, eliminating 41 amino acids (498-620) from the protein and producing the truncated functional variant G3-s, which exacerbates G protein activity. 6 , 7 This triggers the intracellular signaling that regulates sodium and potassium availability. In the hyperactive state, the G protein increases sodium and water retention, contributing to the development of arterial hypertension. 7 Some studies have investigated the association between the GNB3 rs5443:C>T variant and blood pressure in other populations, but the results are controversial. 5 - 8 Nevertheless, Chen et al. 4 discussed how the GNB3 rs5443:C>T variant may serve as an early genetic marker of blood pressure salt sensitivity. The presence of the polymorphism generates a functional protein that triggers lipolysis through catecholamines, altering the lipid profile in the bloodstream.
9 Furthermore, it causes reduced insulin sensitivity in muscle tissue and intense sodium reabsorption, favoring arterial hypertension. 10 Owing to the endothelial/renal impairment caused by arterial hypertension, the excretion of some substances, such as urea, creatinine, and uric acid, is deficient, increasing their plasma concentrations. 1 , 9 Considering the global population, the frequency of the C allele is 67% and that of the T allele is 33%. Ethnic groups such as Europeans, Africans, African Americans, Asians, and Latin Americans have C and T allele frequencies of around 69% and 31%; 28% and 72%; 28% and 72%; 46% and 54%; and 54% and 46%, respectively. 11 Studies have shown different frequencies in different Brazilian populations. 12 , 13 Considering the importance of genetic variability in arterial hypertension, the present study aimed to determine whether the GNB3 rs5443:C>T variant was associated with arterial hypertension and whether it influenced renal function, lipid profile, and glycemia in a sample of hypertensive and normotensive Brazilian patients. Methods Ethics statement The present study was conducted in accordance with the criteria adopted by the University Ethics Committee (CAAE 22455119.0.0000.5150), pursuant to resolution 466/2012. Study design The present case-control study was conducted in 2021 in the city of Ouro Preto, Minas Gerais. Individuals attending the Clinical Analysis Laboratory of the School of Pharmacy of the Universidade Federal de Ouro Preto for biochemical tests were invited to participate in the study. Those who accepted completed a questionnaire in the KoBoToolbox smartphone application to collect sociodemographic, behavioral, and medical history data. Anthropometric measurements such as weight, height, and waist circumference were obtained using a bioimpedance scale, a stadiometer, and a measuring tape, respectively.
Subsequently, blood samples were collected for biochemical and molecular evaluation. After analysis of the questionnaires/medical records, individuals were separated into two groups. Those taking antihypertensive medication and with a prior diagnosis of the disease in their medical records were classified as hypertensive. Individuals not taking antihypertensive medication and with no diagnosis of hypertension in their records were classified as controls (normotensive). The sample number was defined to reach the 95% significance level, which is crucial for genetic studies. Accordingly, the sample size was determined using the OpenEpi program, version 3.01, with a two-sided confidence level (1-alpha) of 95, 80% power, a control-to-case ratio of 1, a hypothetical proportion of controls with exposure of 33%, 8 and an odds ratio of 2. The sample size was thus estimated at approximately 138 patients each for the control and case groups, totaling 276 patients, according to the Kelsey method. In the end, the hypertensive group included 155 patients (87 women and 68 men, mean age 60.7 years), and the control group also included 155 patients (85 women and 70 men, mean age 58.2 years). Biochemical measurements For the biochemical analyses, participants fasted for 8 hours. The lipid (triglycerides, total cholesterol, HDL cholesterol, LDL cholesterol, and VLDL cholesterol), renal (urea, creatinine, and uric acid), and glycemic profiles were measured in the serum of hypertensive and normotensive individuals by UV/Vis spectrophotometry with Cobas® Substrates reagents (Roche), following the manufacturer's recommendations, processed on COBAS INTEGRA® 400 Plus equipment (Roche). Sodium and potassium ions were measured in serum using an ion-selective electrode with LS Científica reagents according to the manufacturer's recommendations, processed on AVL 9180 equipment (Roche).
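The sample-size calculation above can be reproduced approximately with the Kelsey two-proportion formula for unmatched case-control studies. A minimal sketch; the z values and rounding are my assumptions, and OpenEpi's exact arithmetic may differ slightly, but the stated inputs recover the reported figure of roughly 138 per group:

```python
import math

def kelsey_sample_size(p0, odds_ratio, ratio=1.0):
    """Cases needed per group (Kelsey formula) for a case-control study,
    with two-sided alpha = 0.05 and power = 0.80.
    p0: exposure proportion among controls; ratio: controls per case."""
    z_alpha = 1.959964   # z for two-sided 95% confidence
    z_beta = 0.841621    # z for 80% power
    # exposure proportion among cases implied by the odds ratio
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    p_bar = (p1 + ratio * p0) / (1 + ratio)
    q_bar = 1 - p_bar
    n = ((z_alpha + z_beta) ** 2 * p_bar * q_bar * (ratio + 1)) / (ratio * (p1 - p0) ** 2)
    return math.ceil(n)

# Inputs reported in the study: 33% exposure among controls, OR = 2, ratio 1:1
print(kelsey_sample_size(0.33, 2))  # 138 cases per group, i.e. 276 in total
```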
LDL cholesterol values were determined based on the global risk attributed to both groups, and non-HDL cholesterol was calculated using the formula: non-HDL = total cholesterol − HDL. 14 Genotyping To determine the population's genotypic and allelic frequencies, whole blood samples with EDTA were collected and used for DNA extraction with the PureLink Genomic DNA Mini Kit (Thermo Fisher Scientific). The TaqMan® SNP Genotyping Assays system (Thermo Fisher Scientific) was used for real-time PCR to analyze the GNB3 (Gene ID: 2784) variant rs5443:C>T (C__2184734_10). The reaction mixture was prepared with 5 μL of TaqMan® Master Mix and 0.50 μL of working reagent (primer/probe), totaling 5.5 μL of reagent. DNA samples were diluted with nuclease-free water to 10 ng/μL, and 5.5 μL of the prepared reagent and 4.5 μL of diluted sample were loaded onto the MicroAmp™ 96-well optical reaction plate, for a total volume of 10 μL per well. Adhesive film was used to seal the plate, which was centrifuged at 1000 rpm and processed on the 7500 FAST real-time PCR instrument. The 7500 v2.3 software was used to analyze the allelic discrimination data (Applied Biosystems, Thermo Fisher Scientific). Data analysis An exploratory data analysis was performed, and absolute and relative frequency measures were obtained for categorical data. The Shapiro-Wilk test was used to assess the normality of continuous data. Parametric continuous variables were expressed as mean and standard deviation (SD), and nonparametric data were expressed as median and interquartile range. Initially, to identify the sociodemographic, clinical, laboratory, and genetic variables associated with hypertension in the population, univariate logistic regression was performed comparing the relative frequencies of the categorical variables.
After selecting the variables with p < 0.25 in the univariate logistic regression analysis, multiple logistic regression adjusted for ethnicity was performed; subsequently, using the backward technique, only variables with p < 0.05 were selected to compose the final model. For parametric continuous variables, group means were compared with the unpaired Student's t test, and medians of nonparametric continuous variables were compared with the Mann-Whitney test, since the groups were independent. Participants using lipid-lowering, hypoglycemic, or uricosuric drugs were excluded from the analyses. All analyses were performed in STATA V.13.0 software, with p < 0.05 considered significant. Hardy-Weinberg equilibrium was verified with Pearson's chi-square test using GenAlEx 6.5 software. Discussion Hypertension results from the interaction of genetic and environmental factors. 1 , 5 Considering that some genetic variants can potentially contribute to susceptibility to certain diseases, 13 the present study investigated the influence of the GNB3 rs5443:C>T variant in hypertensive and normotensive patients. The GNB3 rs5443:C>T variant has been implicated in an increased risk of developing hypertension, although results are inconsistent. 6 , 15 In the present study, analysis of allelic and genotypic frequencies showed no association with arterial hypertension, differing from studies that reported this association in Caucasian, East Asian, German, and Australian populations. 5 , 7 , 15 Conversely, a study in another Brazilian population likewise reported no association between GNB3 C825T polymorphism genotypes and arterial hypertension. 16 This may be due to the unequal frequency of the T allele across ethnic groups, 7 especially in the highly admixed Brazilian population.
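The backward-elimination step described in the Data analysis section (refit, drop the least significant predictor, repeat until every remaining p value is below 0.05) can be sketched generically. Here `fit_pvalues` is a placeholder for refitting the logistic model and returning per-variable p values; the stub used in the illustration keeps p values fixed, whereas a real refit would recompute them:

```python
def backward_eliminate(variables, fit_pvalues, threshold=0.05):
    """Backward elimination for model selection.
    fit_pvalues: callable mapping a list of variable names to a
    {name: p_value} dict for a model fit on exactly those variables."""
    current = list(variables)
    while current:
        pvals = fit_pvalues(current)
        worst = max(current, key=lambda v: pvals[v])
        if pvals[worst] < threshold:
            break                 # every remaining predictor is significant
        current.remove(worst)     # drop the least significant predictor and refit
    return current

# Illustration with fixed (stub) p values:
stub = {"age": 0.03, "schooling": 0.001, "bmi": 0.20, "alcohol": 0.50}
final = backward_eliminate(list(stub), lambda vs: {v: stub[v] for v in vs})
# final == ["age", "schooling"]
```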
In the present study, the T allele correlated with age, showing that hypertensive patients carrying at least one T allele are generally older than normotensive individuals, corroborating other studies conducted in China 12 and in the Southeast Region of Brazil. 13 Even so, normotensive individuals carrying at least one T allele may be more likely to develop hypertension than individuals without it. 6 It is therefore important that young normotensive carriers of at least one T allele be aware that they are a group susceptible to arterial hypertension, requiring closer care to prevent the disease from appearing as they age. Regarding body mass index, we also obtained significant results showing that it was higher in hypertensive patients carrying at least one T allele than in normotensive individuals. Similar results have been reported in German, Chinese, and South African populations. 17 The presence of at least one T allele suggests that the GNB3 C825T polymorphism, located in the gene's coding region, results in a functional protein that increases G protein expression, favoring catecholamine-induced lipolysis and inducing obesity. 18 Moreover, arterial hypertension in obese individuals may be due to increased extracellular fluid volume and increased blood flow to tissues and venous return, contributing to cardiac output. 19 In obese people, blood flow is higher because of excess adipose tissue, as well as blood flow to several other organs that hypertrophy in response to the excessive work of metabolic demands while consuming tissue oxygen. Excess fat favors the synthesis of cytokines and reactive oxygen species, generating inflammation. This process contributes to the development of endothelial dysfunction, stiffening the vasculature, which can trigger atherosclerosis and arterial hypertension.
20 Regarding the biochemical analyses, our results showed that carrying at least one T allele was associated with higher triglyceride, glucose, and uric acid levels in hypertensive patients, as well as higher LDL cholesterol levels in normotensive patients. Feng et al., 21 studying a South African population, also demonstrated higher triglyceride and glucose levels in hypertensive carriers of at least one T allele compared with normotensive individuals. The presence of at least one T allele suggests that the GNB3 C825T polymorphism results in a functional protein that influences catecholamine-induced lipolysis, raising the blood lipid profile. 18 In addition, increased G protein expression interferes with glucose levels through lipid metabolism, triggering reduced insulin sensitivity in muscle tissue. 22 Regarding uric acid, Bührmann et al. 23 found higher uric acid concentrations in carriers of at least one T allele in the German population. The presence of at least one T allele suggests that increased G protein expression triggers intense sodium reabsorption, favoring arterial hypertension. Owing to the endothelial/renal impairment caused by arterial hypertension, uric acid excretion is thought to be deficient, increasing its plasma concentration. Our finding of higher LDL cholesterol levels in normotensive carriers of at least one T allele may be because the biological samples were collected during the COVID-19 pandemic, which may have favored increased consumption of ultra-processed foods combined with a sedentary lifestyle driven by social isolation.
24 , 25 Similar results were found by Siffert et al., 7 who observed higher LDL cholesterol levels in a normotensive population without pre-existing disease carrying at least one T allele. The present study has some limitations, since the sample used may not be representative of the Brazilian population, in addition to the small sample size. Conclusion In the study population, the presence of at least one T allele of the GNB3 rs5443:C>T variant was related to hypertensive patients with a mean age of 60 years. It was also associated with higher body mass index in hypertensive individuals and may determine changes in biochemical parameters, such as lipid profile, glycemia, and uric acid, in hypertensive patients. These results show the importance of genetic diagnosis in preventing the causes and consequences of disease, even in a highly admixed population such as Brazil's, and suggest that the GNB3 rs5443:C>T variant could be used as an easy, low-cost alternative and as an early genetic marker of biochemical changes in the hypertensive process. To better understand the influence of the rs5443:C>T variant on biochemical profile changes in hypertensive patients, further epidemiological studies in other larger, genetically distinct populations are essential. Data availability statement The data supporting the results of this study are available upon request to the corresponding author. The data are not publicly available due to ethical or privacy restrictions.
Acknowledgments We thank all the patients who participated in the present study. We are grateful for the collaboration of the Clinical Analysis Laboratory (LAPAC), the Epidemiology Laboratory of the School of Medicine and Pharmacy, the Biochemistry Laboratory, and the Clinical Research Laboratory of the Universidade Federal de Ouro Preto. The authors have no financial or proprietary interest in any material discussed in the present article.
CC BY
no
2024-01-16 23:47:18
Arq Bras Cardiol. 2023 Dec 7; 120(12):e20230396
oa_package/3c/a3/PMC10789371.tar.gz
PMC10789372
38126570
Results During the study period, data from 323 volunteers were analyzed, of whom 13 were excluded for contraceptive use, 4 for protease inhibitor use, 5 for being dialysis patients, 2 for a diagnosis of familial hypercholesterolemia, and 1 for being pregnant. The final sample thus comprised 298 participants (mean age 63 ± 16.1 years), of whom 102 made up the clinical atherosclerosis group, while 196 participants without atherosclerosis or with subclinical atherosclerosis made up the control group. Among patients in the atherosclerosis group, the arterial beds most affected by clinical atherosclerosis were, respectively, the coronary (76), followed by the cerebral (26), carotid (9), and peripheral (4) beds; 12 patients were diagnosed with more than one atherosclerotic disease. The baseline clinical characteristics of the study groups are summarized in Table 1 . Laboratory parameters and atherogenic indices are presented in Table 2 . Among the lipid parameters, a difference was observed only in triglyceride and HDL levels: the atherosclerosis group showed higher triglyceride and lower HDL levels. The atherogenic indices CRI-I, CRI-II, AIP, and LCI were significantly higher in the atherosclerosis group, whereas a lower median ΔPPI 90-120 was observed. Table 3 presents the sensitivity, specificity, predictive values, and likelihood ratios of the indices analyzed in this study. Note that only the Castelli II index failed to reach an AUC > 0.6 (AUC = 0.589). The ROC curves of these indices can be seen in Figure 1 . Pairwise comparisons of the ROC analysis of the indices that reached AUC > 0.6 show that, although there was no significant difference between ΔPPI 90-120 and AIP, both were superior to CRI-I and LCI, between which no difference was observed either.
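The cutoff points behind sensitivities and specificities such as those in Table 3 are typically chosen by maximizing the Youden index (J = sensitivity + specificity - 1) along the ROC curve. A minimal sketch with hypothetical index scores, assuming higher scores indicate clinical atherosclerosis:

```python
def youden_cutoff(scores, labels):
    """Pick the ROC cutoff maximizing Youden's J = sensitivity + specificity - 1.
    labels: 1 for the condition, 0 otherwise; higher score = more suspicious."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        sens = sum(s >= cut for s, y in zip(scores, labels) if y == 1) / pos
        spec = sum(s < cut for s, y in zip(scores, labels) if y == 0) / neg
        j = sens + spec - 1
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j

# Hypothetical AIP-like scores for 3 controls and 3 cases:
cut, j = youden_cutoff([-0.10, 0.02, 0.08, 0.15, 0.24, 0.30], [0, 0, 0, 1, 1, 1])
# cut == 0.15 and j == 1.0 for this perfectly separable toy example
```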
After noting the greater accuracy of AIP and ΔPPI 90-120 , Pearson correlation analysis was performed to investigate their correlations with other continuous variables. AIP was positively correlated with age (r=0.173, p=0.003), BMI (r=0.116, p=0.046), TC (r=0.138, p=0.017), and TG (r=0.830, p<0.001), and was negatively correlated with HDL (r=-0.599, p<0.001) and ΔPPI 90-120 (r=-0.237, p<0.001). In turn, ΔPPI 90-120 was positively correlated with DBP (r=0.154, p=0.012) and HDL (r=0.321, p<0.001) and negatively correlated with age (r=-0.258, p<0.001) and TG (r=-0.120, p<0.040). To determine the degree of independent association of the atherogenic indices, multivariate logistic analysis was performed, adjusted for potential confounders ( Table 4 ). The atherogenic indices ΔPPI 90-120 and AIP were found to be independent predictors of clinical atherosclerosis.
Editor responsible for the review: Gláucia Maria Moraes de Oliveira Potential conflict of interest No conflict related to the present article Abstract Background The search for clinically useful methods of assessing atherosclerotic disease, with good accuracy, low cost, no invasiveness, and easy handling, has been encouraged for years. The atherogenic indices evaluated in this study may fit this growing demand. Objectives To evaluate the potential of atherogenic indices as methods for assessing patients with clinical atherosclerosis. Methods Single-center cross-sectional study evaluating the Castelli I and II indices, the atherogenic index of plasma (AIP), the lipoprotein combine index (LCI), and the variation of the peripheral perfusion index between 90 and 120 seconds after an endothelium-dependent vasodilator stimulus (ΔPPI 90-120 ) in the prediction of atherosclerosis. Statistical significance was set at p < 0.05. Results The sample comprised 298 individuals with a mean age of 63.0 ± 16.1 years, 57.4% of whom were women. Pairwise comparisons of the ROC curve analysis of the indices that reached an area under the curve (AUC) > 0.6 show that ΔPPI 90-120 and AIP were superior to the other indices, with no differences between each other (difference between AUCs = 0.056; 95% CI -0.003-0.115). Furthermore, both ΔPPI 90-120 [odds ratio (OR) 9.58; 95% CI 4.71-19.46] and AIP (OR 5.35; 95% CI 2.30-12.45) were independent predictors of clinical atherosclerosis. Conclusions AIP and ΔPPI 90-120 showed the best accuracy in discriminating clinical atherosclerosis. They were also independent predictors of clinical atherosclerosis, highlighting a promising possibility for developing preventive and control strategies for cardiovascular disease. They are therefore suitable markers for multicenter studies from the standpoint of practicality, cost, and external validity.
Introduction Atherosclerosis is the central pillar of the pathophysiology of several cardiovascular diseases. 1 Despite the widespread use of the classic lipid parameters, widely available for clinical analysis, other parameters have been discussed recently, proposing combinations of these lipid variables in order to evaluate the relationships among them and their correlation with clinical outcomes, especially coronary disease. 2 - 4 In 1983, Castelli suggested the Castelli indices I and II as a reflection of the clearance of total cholesterol (TC) and LDL, both mediated by HDL levels. 5 Recently, the atherogenic index of plasma (AIP) has gained scientific notoriety. The great predictive potential of AIP for atherosclerotic diseases is speculated to derive from this index's ability to indicate that the relationship between triglycerides (TG) and HDL may predetermine the preferential direction of intravascular cholesterol transport toward beneficial HDL or atherogenic LDL. 6 , 7 In the last 4 years, the lipoprotein combine index (LCI), represented by the relationship of the molar concentrations of TC, LDL, and TG to HDL, has been proposed as a possible independent predictor of coronary disease in postmenopausal women. 8 Recently, a study showed encouraging results for the peripheral perfusion index (PPI), a parameter derived from pulse oximetry, in assessing endothelial function in the presence of atherosclerosis. The same study reported that the PPI variation interval between 90 and 120 seconds (ΔPPI 90-120 ) after reactive hyperemia appears to show the strongest correlation between cardiovascular risk factors and endothelial dysfunction. 9 Given the importance of endothelial dysfunction and the lipid profile for the development and progression of atherosclerotic diseases, the search for clinically useful assessment methods, with good accuracy, low cost, no invasiveness, and easy handling, has been encouraged for years.
Given the previously described association of these indices with several cardiovascular clinical outcomes, 3 , 4 , 7 , 10 - 19 they may fit the growing demand for cost-effectiveness, making them attractive for future trials and for possible improvement in the detection, prevention, and treatment of such diseases. The present study therefore aimed to evaluate the potential of atherogenic indices as methods for predicting clinical atherosclerotic disease. Methods Study design This is an observational, cross-sectional study evaluating the values of the Castelli I and II indices, the atherogenic index of plasma, the lipoprotein combine index, and the variation of the peripheral perfusion index after an endothelium-dependent vasodilator stimulus. Patients with clinical atherosclerosis in several vascular sites were evaluated, based on the concomitance of the sites involved and on the systemic nature of the disease. 20 Study site and sample size calculation The present study was conducted in cardiology, endocrinology, and geriatrics outpatient clinics linked to a tertiary hospital in a city in northeastern Brazil. A retrospective sample size calculation was performed for the primary outcome with a 1:2 ratio for outcome occurrence, based on a previously conducted pilot study. With a power of 0.8, an α of 0.05, and an AUC of 0.6 per our a priori hypothesis (null hypothesis: AUC = 0.5), a sample of 294 participants was required. To compensate for possible losses, 10% was added to adjust the sample, totaling 323 participants. Inclusion and exclusion criteria All patients attending the outpatient clinics of the aforementioned specialties were invited to participate in the study, provided they were at least 18 years old and had a lipid panel result collected up to 3 months before enrollment.
Owing to particular changes in lipid parameters, patients with familial hypercholesterolemia and users of protease inhibitors, combined oral contraceptives, or isotretinoin were excluded from the study. Furthermore, since numerous factors can affect vascular reactivity, dialysis patients, pregnant women, and patients who had exercised within 1 hour before the interview, ingested energy drinks, or smoked within at least 4 to 6 hours before the start of data collection were also excluded. Definition of clinical atherosclerotic disease Patients had their diseases confirmed in the electronic medical record prepared by specialist physicians with the aid of complementary tests, including: coronary angiography and coronary CT angiography reports showing atherosclerotic plaques with stenosis ≥ 50%, physical or pharmacological stress echocardiography, pharmacological stress cardiac magnetic resonance, lower-limb arteriography or arterial Doppler ultrasound and carotid Doppler ultrasound showing atherosclerotic plaques with stenosis ≥ 50%, as well as cranial CT and CT angiography with signs of ischemia after exclusion of cardioembolic etiologies. Noninvasive tests were considered positive when ischemia was demonstrated. Positive reports of atherosclerosis diagnosed within 1 year of the most recent lipid panel were considered. Group allocation In this study, the clinical atherosclerosis group comprised: coronary artery disease, carotid or peripheral atherosclerotic disease, and atherothrombotic ischemic cerebrovascular disease. The control group thus comprised those without diagnosed clinical atherosclerotic disease, that is, individuals with subclinical atherosclerosis or no atherosclerotic process.
Data collection Data were collected from January 2022 to December 2022 through interviews and physical examination in individual rooms with closed doors, respecting participant privacy and the general data protection law. Interviewees were randomly selected by active search on random days, before their outpatient visits. Variables related to cardiovascular risk were collected: sex, age, ethnicity, regular physical activity, body mass index (BMI), dyslipidemias, type 2 diabetes mellitus, systemic arterial hypertension, and history of current or previous alcohol use and smoking. Those who did not regularly practice at least 150 minutes of moderate physical activity were classified as having inadequate physical activity. 21 The following atherogenic indices were calculated: the Castelli I (CRI-I) (TC/HDL) and Castelli II (CRI-II) (LDL/HDL) indices, the lipoprotein combine index (LCI) (TC×TG×LDL/HDL), the atherogenic index of plasma (AIP), calculated as log 10 (TG/HDL), and the variation of the peripheral perfusion index in the 90-120 second interval (ΔPPI 90-120 ) after cuff deflation. For the AIP and LCI calculations, the lipid parameters (TC, LDL, HDL, and TG) were expressed in mmol/L. PPI collection For PPI analysis, a portable pulse oximeter was used (model HC261, Multilaser, Brazil). In this assessment, performed by a single investigator, patients were seated and rested for approximately 5 minutes in a quiet room with temperature controlled at 20-22 °C. The PPI collection protocol followed the same one used by Menezes et al.
9 After cuff deflation, the PPI value was assessed and recorded at 90 and 120 seconds to evaluate its variation over this period (ΔPPI 90-120 ) by means of the following formula: Statistical analysis Variables with normal distribution were described as mean ± standard deviation, and variables without normal distribution were described as median and interquartile range. Continuous variables were assessed with the Shapiro-Wilk analytical method to determine normality of distribution. The unpaired Student's t test was used for normally distributed variables and the Mann-Whitney U test for those without normal distribution. Pearson's chi-square test was used for categorical variables. Cutoff points for the atherogenic indices were obtained from the receiver operating characteristic (ROC) curve, chosen by the Youden index. The areas under the curve (AUC) were calculated and compared by the DeLong method. In addition, sensitivity, specificity, positive (PPV) and negative (NPV) predictive values, and positive (LR+) and negative (LR-) likelihood ratios for outcome occurrence were recorded. Pearson correlation analysis was performed to investigate the correlation of the indices with the highest AUC with other continuous variables. To assess the degree of association of the variables with the outcome, odds ratios (OR) and their 95% confidence intervals (95% CI) for the presence of atherosclerotic disease were calculated by univariate logistic regression. Variables reaching p < 0.10 or considered clinically relevant were included in the multivariate model. P values < 0.05 were considered statistically significant. Data were analyzed using SPSS, version 26.0 (SPSS Inc., Chicago, IL, USA) and MedCalc®, version 19.5 (MedCalc Software Ltd, Ostend, Belgium).
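The four lipid-derived indices defined in the Data collection section can be computed directly from a standard panel. A minimal sketch in Python: inputs are taken in mg/dL and converted to mmol/L for AIP and LCI, as the article specifies; the conversion factors 38.67 (cholesterol) and 88.57 (triglycerides) are the standard ones, though treat the exact rounding as my assumption:

```python
import math

CHOL_MGDL_PER_MMOL = 38.67   # cholesterol: mg/dL per mmol/L
TG_MGDL_PER_MMOL = 88.57     # triglycerides: mg/dL per mmol/L

def atherogenic_indices(tc, ldl, hdl, tg):
    """Castelli I/II, LCI, and AIP from a lipid panel given in mg/dL."""
    tc_m = tc / CHOL_MGDL_PER_MMOL
    ldl_m = ldl / CHOL_MGDL_PER_MMOL
    hdl_m = hdl / CHOL_MGDL_PER_MMOL
    tg_m = tg / TG_MGDL_PER_MMOL
    return {
        "CRI-I": tc / hdl,                    # Castelli risk index I: TC/HDL
        "CRI-II": ldl / hdl,                  # Castelli risk index II: LDL/HDL
        "LCI": tc_m * tg_m * ldl_m / hdl_m,   # lipoprotein combine index (mmol/L)
        "AIP": math.log10(tg_m / hdl_m),      # atherogenic index of plasma (mmol/L)
    }

idx = atherogenic_indices(tc=200, ldl=130, hdl=50, tg=150)
# e.g. CRI-I = 4.0 and CRI-II = 2.6; an AIP above 0.24 would suggest high risk
```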
Ethical aspects
This project was approved by the Research Ethics Committee under opinion no. 5.106.513, in accordance with the guidelines and standards established in CNS Resolution no. 466/2012, which governs research involving human subjects.
Discussion
Our study is the first to compare new atherogenic indices in a Brazilian population. The present results show an important independent association of ΔPPI90-120 and the AIP with clinical atherosclerosis. Consequently, the main finding of this study concerns the potential clinical use of a pulse oximetry derivative and of ratios derived from routine lipid assessment. Other studies have also found an independent association of the AIP with clinical 7,12-17,22 and subclinical 4,10,19 atherosclerosis in several arterial beds. While some studies found an inverse correlation between the AIP and age, 14,23 a study in an African population concluded that the AIP was not associated with age. 24 This discrepancy may partly result from the different ethnic populations studied. In our study of a Brazilian population composed mostly of non-white individuals (88.9%), a positive correlation between age and the AIP was found (r=0.173; p=0.003), which may be explained by the classic association between age and the development of atherosclerotic disease. It has been suggested that AIP values of -0.3 to 0.1 are associated with low cardiovascular risk, 0.1 to 0.24 with intermediate risk, and above 0.24 with high risk. 25 Consistent with these suggested cutoffs, we observed an AIP of 0.17 in the atherosclerosis group and -0.06 in controls. It should be noted that the high use of statins in the atherosclerosis group may explain a lower-than-expected AIP, 25 yet the AIP remained significantly higher in this group. The AIP has been found to be negatively associated with LDL particle diameter.
25 Consequently, an increase in the AIP indicates a reduction in LDL particle diameter and an increase in the proportion of small, dense LDL particles (sdLDL). 25 In hypertriglyceridemia, the activity of cholesteryl ester transfer protein (CETP) is stimulated; CETP is implicated in the intravascular formation of sdLDL mainly through an indirect mechanism involving a high rate of transfer of cholesteryl esters from HDL to VLDL1 particles. 26-28 Because of its small particle size and its increased binding to endothelial proteoglycans, sdLDL is more likely to invade and deposit in the arterial wall and to be oxidized, leading to further atherosclerosis. 29-31 However, because the techniques for quantifying the sdLDL fraction are complex and not cost-effective, its application in clinical practice is usually limited, which gives the AIP a cost advantage. In this study, ΔPPI90-120 was the independent predictor most strongly associated with the outcome. For years, evidence has suggested that endothelial dysfunction occurs even before atherosclerotic plaque formation, contributing to plaque formation, progression, and possible complications. 32 Menezes et al. proposed a way to assess endothelial dysfunction in individuals with clinical atherosclerosis using ΔPPI90-120, and their results showed reduced levels of this index in the atherosclerosis group, 9 regardless of sex, quite similarly to the present study. At the stage of endothelial dysfunction, the vasodilator response is reduced or absent, and ΔPPI90-120 emerges as a possible tool for assessing this dysfunctional stage during the period in which NO contributes most to the effects of reactive hyperemia.
9,33 Although no correlation was found between the AIP and hemodynamic variables that could explain its observed correlation with ΔPPI90-120, some studies have reported an independent association of elevated plasma TG and reduced HDL levels with arterial stiffness. 34-36 The process leading to increased stiffness of the large arteries is complex, comprising influences mediated by pulsatile mechanical stress, growth factors, changes in endothelial function, inflammatory cells, elastin-degrading enzymes, a shift of smooth muscle cells from the contractile to the synthetic phenotype, and increased extracellular matrix production by fibroblasts. 37 TG and HDL are known to have opposite influences on inflammation, oxidative stress, extracellular matrix formation, and the shift of vascular smooth muscle from the contractile to the synthetic phenotype, and the AIP in some way summarizes these influences. 34,38 However, contradictory findings have been published, and controversy therefore remains about the associations of the AIP with arterial stiffness and, consequently, with ΔPPI90-120. Our study has some limitations. The first arises from its observational, cross-sectional, single-center design, which may involve selection bias and limits this research to hypothesis generation. Second, our data could not fully explain the pathophysiological relationship found between the AIP and ΔPPI90-120. Another potential limitation is that a single measurement of each index was taken per patient, which restricts conclusions about the intra-individual reproducibility of the methods. Despite these limitations, this study is the first to compare the relationship of the new atherogenic indices across several atherosclerotic conditions in a Brazilian outpatient population.
Conclusion
The results allow us to conclude that the AIP and ΔPPI90-120 showed the best accuracy for discriminating clinical atherosclerosis. Moreover, they were independent predictors of clinical atherosclerosis, pointing to a promising avenue for developing preventive and control strategies for cardiovascular disease. In terms of practicality, cost, and external validity, they are therefore suitable markers for multicenter studies.
Arq Bras Cardiol. 2023 Dec 15; 120(12):e20230418
2.3. General Recommendations on How to Report Strain Results and Normal Values
To simplify the description of strain in the echocardiography report, it is recommended to state the type of strain analyzed (which defines the shortening or lengthening motion) and its numerical value in absolute terms, especially in sequential comparative studies, so as to avoid misinterpretation of an apparent worsening of strain. Other crucial information to report includes the patient's vital signs (blood pressure and heart rate), because changes in preload and afterload influence the global strain value, the brand of ultrasound equipment used, and the version of the analysis software, given the variability of normal values between vendors. 33,34 Table 2.1 below describes the essential information that must appear in the report for a complete description of strain. 9 Normal reference values for the strain analyzed 9,35-39 should be included in the report. Table 2.2 presents, in simplified form, mean normal values for the various types of strain, as well as the level of evidence for their use in clinical practice. Unlike LVEF, normal strain values have not yet been consistently assimilated by clinical cardiologists and should therefore appear in the report as a reference.
DIC Presidents: Carlos Eduardo Rochitte (2020/2021 term) and André Luiz Cerqueira de Almeida (2022/2023 term). Coordinating Editors: André Luiz Cerqueira de Almeida, Marcelo Dantas Tavares de Melo, Alex dos Santos Felix, and David Costa de Souza Le Bihan. Co-editors: Marcelo Luiz Campos Vieira and José Luiz Barros Pena. Responsible Standards and Guidelines Council: Carisi Anne Polanczyk (Coordinator), Humberto Graner Moreira, Mário de Seixas Rocha, Jose Airton de Arruda, Pedro Gabriel Melo de Barros e Silva (2022-2024 term)
Contents
1. Basic Concepts in the Study of Left Ventricular Deformation
1.1. Brief Introduction to the Physical Principles of Speckle Formation in Cardiovascular Imaging
1.2. Definitions
1.2.1. Strain and Strain Rate
1.2.2. Longitudinal, Circumferential, and Radial Deformation
1.2.3. Timing of Mechanical Events
1.2.4. Peak Measurements Extracted from Deformation Curves
1.3. Factors Affecting Strain Estimation
1.3.1. Image Quality
1.3.2. Cardiovascular Imaging Modality
1.3.3. Vendor and Software Version
1.3.4. Hemodynamic Conditions
1.4. Global Longitudinal Strain
2. General Recommendations for the Use of Strain: Clinical Applicability, Comparison with Ejection Fraction, and Proper Reporting
2.1. Prognostic Value, Parametric Patterns, and Subclinical Detection of Heart Disease by Myocardial Deformation
2.2. Strain or Ejection Fraction: Which Is the Better Alternative?
2.3. General Recommendations on How to Report Strain Results and Normal Values
2.4. Conclusion
3. Strain in Cardio-oncology
4. Strain in Diastolic Dysfunction
4.1. Introduction
4.2. Left Ventricular Strain
4.3. Left Atrial Strain
4.4. Conclusion
5. Strain in Cardiomyopathies
5.1. Introduction
5.2. Dilated Cardiomyopathy
5.3. Arrhythmogenic Cardiomyopathy
5.4. Hypertrophic Cardiomyopathy
5.5. Endomyocardial Fibrosis
5.6. Noncompaction Cardiomyopathy
6. Strain in Valvular Heart Disease
7. Strain in Ischemic Heart Disease
7.1. Introduction
7.2. Strain in Acute Coronary Syndrome
7.3. Strain in Chronic Coronary Syndrome
7.4. Right Ventricular Strain in Ischemic Heart Disease
8. Strain in Systemic Diseases (Amyloidosis and Fabry Disease)
8.1. Strain in Cardiac Amyloidosis
8.1.1. Role of Myocardial Deformation Analysis in the Diagnosis of Cardiac Amyloidosis
8.2. Fabry Disease
9. Strain in Systemic Arterial Hypertension
9.1. Introduction
9.2. Systemic Arterial Hypertension without Criteria for Left Ventricular Hypertrophy
9.3. Systemic Arterial Hypertension with Criteria for Left Ventricular Hypertrophy
9.4. Clinical Treatment
9.5. Conclusion
10. Strain in Athletes
11. Strain in Stress Echocardiography
12. Strain in Congenital Heart Disease
13. Right Ventricular Strain
13.1. Introduction
13.2. Anatomical and Functional Characteristics of the Right Ventricle
13.3. The Right Ventricle and Echocardiographic Parameters in the Assessment of Systolic Function
13.4. Acquisition and Limitations
13.5. Indications/Normal Values
14. Left and Right Atrial Strain
14.1. Technique for Obtaining and Analyzing Left Atrial Strain
14.2. Normal Values
14.3. Clinical Applicability of Left Atrial Strain
14.3.1. Heart Failure and Assessment of Diastolic Function
14.3.2. Atrial Fibrillation
14.3.3. Valvular Heart Disease
14.3.4. Coronary Artery Disease
14.4. Right Atrial Strain
15. Assessment of Left Ventricular Torsion
15.1. Introduction
15.2. Definitions and Nomenclature
15.3. Step-by-Step Assessment of Ventricular Torsion by Speckle Tracking Echocardiography
15.4. Clinical Applications
16. Strain in the Analysis of Ventricular Dyssynchrony
16.1. Introduction
16.2. Dyssynchrony Assessment in Selecting Patients for Cardiac Resynchronization Therapy
16.3. Assessment of Myocardial Viability
16.4. Guidance of Lead Implantation Site
16.5. Prognostic Assessment after Cardiac Resynchronization Therapy
16.6. Adjustment of Resynchronization Parameters
17. Myocardial Work
17.1. Introduction
17.2. Myocardial Work Acquisition
17.3. Normal Values
17.4. Potential Clinical Use
18. 3D Strain: What It Can Add to the Examination
18.1. Introduction
18.2. Left Ventricular Strain
18.3. Right Ventricular Strain
18.3.1. Full-volume 3D Acquisition and Analysis
18.4. Left Atrial Strain
19. The Role of Cardiac Magnetic Resonance and Computed Tomography in Strain Assessment
19.1. Introduction
19.2. Strain Acquisition Methods by Cardiac Magnetic Resonance
19.3. Right Ventricular Strain by Cardiac Magnetic Resonance
19.4. Left Ventricular Strain by Cardiac Magnetic Resonance
19.5. Left Atrial Strain by Cardiac Magnetic Resonance
19.6. Strain by Cardiac Computed Tomography
References
1. Basic Concepts in the Study of Left Ventricular Deformation
1.1. Brief Introduction to the Physical Principles of Speckle Formation in Cardiovascular Imaging
The word "speckle" refers to the granular appearance of the image generated by a coherent imaging system, such as laser, optical coherence tomography, or ultrasound. 1,2 In echocardiography, an emitted ultrasound pulse propagates in a straight line, interacting with the different acoustic interfaces of the thoracic cavity until it reaches the heart. Among the various acoustic phenomena occurring along this path, part of the emitted ultrasound beam is reflected by the different cardiac structures, generating an echo that is partially captured back by the transducer and used by the software as input for building the echocardiographic images. In this case, the wavelength of the ultrasound beam is usually smaller than the size of the reflecting structures. However, when the wavelength is larger than the microstructure with which it interacts, the ultrasound beam is scattered in all directions (diffusive scattering). This phenomenon results from the interference pattern of all the wavefronts scattered by the different scatterers (local differences in tissue density and compressibility).
Part of the diffusive scattering is captured by the transducer, forming the granular-looking image we call speckle. Speckles make the B-mode image less sharp to the human operator, but they should not be regarded as noise, since they carry unique information that acts as a "fingerprint" of the medium studied by ultrasound. 1
1.2. Definitions
1.2.1. Strain and Strain Rate
Strain is the amount of deformation of an object relative to its original shape. 3 In cardiology, this concept is expressed as the percentage (%) of shortening/lengthening of the heart relative to its initial dimension. It can be applied to a myocardial segment (regional strain) or to an entire cardiac chamber such as the left ventricle (LV) (global strain). Strain rate is the rate of myocardial deformation (%) per second (s-1), in other words, the speed at which deformation occurs. 3,4
1.2.2. Longitudinal, Circumferential, and Radial Deformation
Applying the concept of deformation allows us to break down the study of LV myocardial shortening/lengthening according to its orientation along different axes. Indeed, owing to the helical arrangement of cardiac muscle fibers, systolic shortening of the LV is determined by the action of longitudinally and circumferentially oriented fibers, 5 which constitute the two active force vectors of deformation (Figure 1.1 A). Applying these longitudinal and circumferential forces to a material of low compressibility (myocardial tissue) results in radial thickening of the myocardium (the passive component of deformation). 6 Ultimately, this accounts for the radial narrowing of the ventricular cavity.
4 It should be borne in mind that the deformation process is far more complex than we can measure: each interaction between the force vectors gives rise to a new vector resulting from shear between the different deformations, the shear strain (Figure 1.1 B, C, and D). Systolic shortening of the fiber in the longitudinal and circumferential directions produces negative strain values, whereas radial systolic thickening gives strain a positive value. Many authors choose to express only the absolute (modulus) value, and we adopt that approach here.
1.2.3. Timing of Mechanical Events
Below are some definitions that are fundamental to clinical practice: 1,7
• End-systole: defined as the time point of aortic valve closure. Potential surrogates: the nadir of the global strain curve or of the volume curve. Software packages should state which criterion was used to define end-systole.
• End-diastole: defined as the time point of the peak of the QRS complex.
Event timing should preferably be performed using Doppler, with the electrocardiogram (ECG) as reference.
1.2.4. Peak Measurements Extracted from Deformation Curves (Figure 1.2)
• End-systolic strain: the point on the deformation curve at end-systole, as previously defined (aortic valve closure). This is the standard parameter for describing myocardial deformation.
• Peak systolic strain: the point where the curve peaks at any time during systole.
• Positive peak systolic strain: the most positive value recorded when the curve of a given segment shows this behavior at some point during systole.
• Peak strain: the point where the deformation curve peaks over the entire cardiac cycle.
This point is usually reached by the time of aortic valve closure. When it occurs afterward, it is described as post-systolic strain 8 or post-systolic shortening (PSS). Post-systolic strain reflects the deformation of segments that contract after aortic valve closure and do not contribute to ventricular ejection.
1.3. Factors Affecting Strain Estimation
1.3.1. Image Quality
Image quality is a critical factor affecting the performance of any software that estimates myocardial deformation. Several authors have reported that the sensitivity of strain and strain-rate estimation is proportional to image quality and to the tracking algorithm. 9-11
1.3.2. Cardiovascular Imaging Modality
Different cardiovascular imaging modalities yield different strain values. Tee et al. 12 reported such differences among transthoracic echocardiography, computed tomography, and cardiac magnetic resonance (CMR).
1.3.3. Vendor and Software Version
Studies organized by the European Association of Cardiovascular Imaging (EACVI) and the American Society of Echocardiography (ASE) tested the variability of global longitudinal strain (GLS) measurements across equipment and software vendors, showing significant divergences. 13,14 However, these differences are still smaller than the variability of ejection fraction (EF) reported in the literature. 9,10,15 Beyond inter-vendor variability, attention must also be paid to inter-software variability within the same vendor; significant changes in GLS have previously been reported.
11,15 Thus, serial echocardiographic studies should ideally be performed with the same machine/software and under similar hemodynamic conditions, especially in situations where GLS variation can have profound therapeutic implications, such as the assessment of chemotherapy-induced cardiotoxicity. 4
1.3.4. Hemodynamic Conditions
LV deformation varies considerably with the preload and afterload conditions to which the ventricle is subjected.
1.4. Global Longitudinal Strain
This is the cardiac deformation parameter with the most robust scientific evidence and the only one of major relevance in clinical practice. 9 It reflects the relative longitudinal deformation (%) of the LV myocardium occurring from the isovolumic contraction period to the end of ejection. 1,5,15 Mathematically, the algorithm computes the contraction at each instant as strain(t) = [L(t) − L(ED)] / L(ED) × 100%, where L(t) is the longitudinal length at time t and L(ED) is the length at end-diastole. 1 There are significant divergences among software packages as to the length L(ED) used: the entire line of the region of interest (ROI) vs. the average of a given number of ROI points vs. the average of the values of each segment in the same frame. The normal GLS value is approximately 20%. 9 There is evidence that normal values vary with sex and age. 7 For LV GLS (LVGLS) analysis by speckle tracking, a series of precautions related to image acquisition is required:
1) The patient must be under electrocardiographic monitoring.
2) If possible, expiratory apnea should be attempted, avoiding translational motion of the heart with respiratory excursions.
3) A balance must be struck between the spatial and temporal resolution of the echocardiographic method, weighing the machine's focus, depth, and width settings, so as to optimize the cardiac chamber of interest, against the frame rate (FR). The FR should be kept between 40 and 80 frames per second (in patients with a normal heart rate); it should be stressed that the higher the heart rate, the higher the FR required.
4) Foreshortened LV views should be avoided.
5) Clips of the apical 3-, 4-, and 2-chamber acoustic windows should be acquired, preferably with at least three beats, excluding extrasystoles.
Table 1.1 and Figure 1.3 summarize the steps to be followed for GLS measurement.
2. General Recommendations for the Use of Strain: Clinical Applicability, Comparison with Ejection Fraction, and Proper Reporting
2.1. Prognostic Value, Parametric Patterns, and Subclinical Detection of Heart Disease by Myocardial Deformation
Myocardial deformation (strain) analysis is a robust and versatile tool that, compared with the usual parameters, provides additional information with lower variability on prognosis, parametric patterns peculiar to cardiomyopathies (CMPs), and detection of subclinical injury. Recent studies have demonstrated the incremental value of LVGLS over left ventricular ejection fraction (LVEF). 10 Notably, strain analysis shows inter- and intraobserver variability of 4.9% to 8.6%, much lower than that of LVEF, probably because it is less influenced by ventricular preload and afterload. 13,16 Moreover, LVGLS is becoming a superior tool to LVEF in patients with heart failure with reduced (HFrEF) and preserved (HFpEF) ejection fraction. 17,18 Beyond LV analysis, worsening right ventricular (RV) strain adds prognostic value in patients with HFpEF.
19 CMPs often share similar morphological findings, posing a major diagnostic challenge in daily clinical practice. Increased ventricular mass and wall thickness are common, associated with diastolic dysfunction (DD) and preserved LVEF in the earliest stages. Parametric analysis of LVGLS using the polar map allows echocardiography to unmask diagnoses not perceived by the usual parameters, and has been described as a "fingerprint" of some of these conditions. The classic example is the apical sparing pattern of amyloidosis, described in more detail in a specific chapter. 20 This phenotypic characterization has generated great enthusiasm by facilitating the diagnosis of rare diseases. On the other hand, if strain is not combined with clinical history and morphological and hemodynamic data, overdiagnosis and misdiagnosis will be favored. See these examples of apical sparing (Figure 2.1). The use of strain as a diagnostic and prognostic tool was consolidated through its application in cardio-oncology, where therapeutic management can be adjusted during chemotherapy based on the variation of its value relative to the baseline examination. A relative reduction greater than 15% is considered cardiotoxicity with subclinical myocardial injury. 21 In 2019, a position statement from several societies established appropriate-use criteria for the various imaging modalities in assessing cardiac structures in nonvalvular disease. In that document, of the 81 indications described, only four considered the use of strain appropriate: three in cardio-oncology and one for the assessment of hypertrophic CMP.
22 Despite the lack of well-designed studies validating this tool in the other situations described in that position statement, strain is widely used in major cardio-oncology centers. Recently, the update of the Brazilian Cardio-Oncology Guideline reinforced its use. 23
2.2. Strain or Ejection Fraction: Which Is the Better Alternative?
LVEF is one of the main echocardiographic parameters used to assess ventricular function in daily practice; it is easily interpreted by clinicians, widely available, and obtainable on basic ultrasound equipment. Its use is extensively validated for managing patients with heart disease, serving as an inclusion criterion in large therapeutic intervention trials and often as a parameter for evaluating and following outcomes. 24 The prognostic value of LVEF is well established in chronic heart failure (HF), 25 and the current European Society of Cardiology recommendation therefore classifies HF by LVEF as: 1) HF with preserved EF (HFpEF: EF ≥ 50%); 2) HF with mid-range EF (HFmrEF: EF 40-49%); and 3) HF with reduced EF (HFrEF: EF < 40%). 26 LVEF plays an important role as a quantitative parameter in defining specific strategies in HF, for example, serving as a criterion for indicating cardiac resynchronization therapy in patients with refractory HF (LVEF ≤ 35%) or for detecting cardiotoxicity in cancer patients on anthracyclines (progressive LVEF drop ≥ 10 percentage points from baseline to a value below the lower limit of normal). 27 However, the accuracy of LVEF estimated by the biplane Simpson's method is limited by the large interobserver variability of this measurement, which can reach 13%.
28 Three-dimensional echocardiography (3DE), unlike the biplane Simpson's method, does not rely on geometric assumptions and therefore directly measures chamber volumes and LVEF, with results quite comparable to those obtained by CMR. Through automated algorithms and lower susceptibility to variations in acquisition windows (orientation of the apical views), 3DE has lower intra- and interobserver variability than the biplane method (0.4 ± 4.5%), 29 making it a good alternative for the follow-up and surveillance of patients with ventricular dysfunction or at risk of myocardial damage. Myocardial deformation techniques such as strain allow assessment of the three components of myocardial fiber contraction: longitudinal, radial, and circumferential. LVEF is determined mainly by the radial and circumferential components of myocardial contraction, which produce wall thickening and reduction of the ventricular cavity in systole. It is important to note, however, that LVEF is not the sole determinant of ventricular performance (ejective function), which also depends on an LV end-diastolic volume (EDV) adequate to generate a normal stroke volume. This explains why, in patients with CMPs phenotypically expressed as concentric wall hypertrophy, such as infiltrative or hypertrophic disease, LVEF can be normal alongside low cardiac output. These patients present clinically as HFpEF and, despite a normal LVEF, generally have a worse prognosis than patients with normal LVEF and preserved cardiac output, with contractile abnormalities detectable only by GLS.
30 Indeed, longitudinal deformation is the component of myocardial contractility that changes earliest in most CMPs and may signal an early, subclinical stage of disease (before LVEF falls), a phase in which therapeutic or cardioprotective measures may yield better results. GLS can be abnormal even in genetic diseases without phenotypic expression, as in carriers of Friedreich's ataxia with normal mass and LVEF, and may even predict the decline in LVEF and prognosis in these patients. 31 Studies demonstrate the additional prognostic value of GLS in patients with HF, incremental to the prognostic effect of LVEF, particularly in patients with EF > 35%. 32 Accordingly, Potter et al. proposed a new classification of ventricular function, incorporating LVGLS values into clinical practice as a complement to LVEF quantification, to aid clinical decision-making and prognostic assessment, especially in patients with LVEF > 53% (HFpEF). 10
2.4. Conclusion
The current evidence robustly supports incorporating strain into daily clinical practice. However, challenges remain in the Brazilian setting, such as unequal access to echocardiography services with machines equipped with analysis software and the lack of data on the Brazilian population. We use values extrapolated from populations with a sociodemographic profile quite different from ours, adapted for the Brazilian population. The Department of Cardiovascular Imaging is promoting a multicenter study (already under way) analyzing echocardiographic data from healthy Brazilians, so that normal values can be established for our population.
Strain thus complements the usual echocardiographic values, adding prognostic robustness, enabling the diagnosis of CMPs, particularly those presenting with increased myocardial thickness, and, finally, the diagnosis of subclinical myocardial injury.
3. Strain in Cardio-oncology
Cancer therapy-related cardiac dysfunction is a major cause of morbidity and mortality in oncology patients. 42,43 This complication can interrupt treatment and compromise cure or adequate cancer control. 44,45 Moreover, HF related to chemotherapy cardiotoxicity often carries a worse prognosis than many cancers, with mortality of up to 60% at 2 years. 42 Early identification of cardiotoxicity with institution of cardioprotective measures has potential prognostic impact in this setting. 46,47 However, the methods usually employed for this diagnosis, such as two-dimensional LVEF, have low sensitivity. 48,49 Thus, earlier markers for identifying this complication, such as strain analysis, are of great value in this context. Diagnostic imaging plays a fundamental role here, and echocardiography has been the most widely used tool because of its anatomical correspondence, noninvasive nature, easy access, low cost, and freedom from ionizing radiation. 27 LVEF is the parameter most used for diagnosing cardiotoxicity. With the two-dimensional technique, it should be calculated by the biplane Simpson's method. 21 3DE, when available, is the technique of choice for monitoring LVEF in cancer patients. Its main advantages include greater accuracy in recognizing LVEF below the lower limit of normal and greater reproducibility than the two-dimensional technique, with accuracy similar to cardiac magnetic resonance.
However, its limited availability, high cost, and dependence on operator experience are barriers to the three-dimensional technique. 27, 50 Cancer therapy-related ventricular dysfunction is defined as an absolute fall in LVEF of more than 10 percentage points, to a value below 50%, with or without symptoms of HF. It is recommended that the echocardiographic study be repeated within 2 to 3 weeks to assess the effects of preload and afterload on LVEF. Although an important and well-established prognostic factor, LVEF has low sensitivity for the diagnosis of cardiotoxicity, depending on factors such as cardiac preload, image quality, and examiner experience. Moreover, it may underestimate the true cardiac damage, since compensatory hemodynamic mechanisms preserve adequate LV systolic performance even in the presence of myocyte dysfunction. 48 The fall in LVEF therefore often occurs very late, at a point when, even with therapeutic intervention, most patients do not recover ventricular function. 46, 48, 49 When cardiotoxicity is detected early and cardioprotective treatment is instituted, patients have greater potential for recovery of ventricular function. 46, 51 In this scenario, the study of myocardial deformation, or strain, stands out. Strain calculated by two-dimensional speckle tracking (2D-STE) has emerged as a sensitive and reproducible marker of LV systolic function and contractility, validated in in vitro and in vivo models. 52, 53 A growing number of publications demonstrate the usefulness of 2D-STE myocardial deformation analysis for the early, subclinical detection of chemotherapy-induced cardiotoxicity, especially through the relative fall in GLS. 23, 54-57 GLS analysis is recommended in patients who will undergo potentially cardiotoxic chemotherapy.
Subclinical cardiotoxicity is suggested when GLS falls by 12% or more relative to its baseline value. 21, 27 In the absence of a baseline (pre-chemotherapy) echocardiographic study for comparison, expert opinion suggests an absolute strain value below 17% as a marker of subclinical cardiotoxicity, provided there are no clinical data pointing to another overlapping underlying myocardial disease. A fall in GLS of less than 8% from baseline is considered nonsignificant. Figure 3.1 shows an example of subclinical cardiotoxicity suggested by the relative fall in GLS. Figure 3.2 presents the algorithm for echocardiographic follow-up of the oncology patient, based on LVEF and GLS. Figures 3.3 and 3.4 show the echocardiographic monitoring of patients under treatment with anthracyclines and trastuzumab, respectively. The randomized clinical trial SUCCOUR was the first prospective, multicenter study with sufficient scientific power to demonstrate the prognostic impact of GLS-guided cardioprotection compared with cardioprotection guided by the fall in LVEF on 3DE. This study showed that, in patients receiving anthracycline chemotherapy and at high risk of cardiotoxicity, cardioprotection (including an angiotensin-converting enzyme [ACE] inhibitor and a beta-blocker) guided by a relative fall in GLS of 12% or more from baseline resulted in a smaller decline in LVEF and a lower incidence of cancer therapy-related cardiac dysfunction at 1 year of follow-up. 58 Strain calculated by three-dimensional speckle tracking (3D-STE) has shown technical advantages over 2D-STE, with accuracy, reproducibility, and applicability already demonstrated in different scenarios. 59-62 Recently, small studies have demonstrated the impact of 3D-STE in the early recognition of chemotherapy-related mechanical changes.
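The GLS-based cardiotoxicity criteria quoted above reduce to a simple decision rule. A minimal sketch in Python, treating strain values as magnitudes in % (note: the "borderline" label for relative falls between 8% and 12% is our wording for the gray zone, not a term from the source):

```python
def classify_gls_response(baseline_gls, followup_gls):
    """Compare a follow-up GLS magnitude (%) with baseline.

    Cutoffs as quoted in the text:
      - relative fall >= 12% -> suggests subclinical cardiotoxicity
      - relative fall <  8%  -> not significant
      - in between           -> borderline (our label; warrants confirmation)
    """
    relative_fall = (baseline_gls - followup_gls) / baseline_gls * 100
    if relative_fall >= 12:
        return "subclinical cardiotoxicity suggested"
    if relative_fall < 8:
        return "not significant"
    return "borderline"


def classify_without_baseline(gls):
    """Expert-opinion fallback when no pre-chemotherapy study exists:
    an absolute GLS magnitude below 17% flags subclinical cardiotoxicity."""
    if gls < 17:
        return "subclinical cardiotoxicity suggested"
    return "within expected range"
```

For example, a fall from a baseline GLS of 20% to 17% is a 15% relative reduction and would trigger the cardiotoxicity flag, whereas a fall to 19% (5% relative) would not.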
63-66 However, larger studies, particularly with longer follow-up, are needed to assess the prognostic value of this technique. Among the limitations of GLS analysis, we highlight the variability of measurements across equipment from different vendors, so that serial measurements should always be made on the same machines. Like LVEF, GLS is influenced by preload and afterload, ventricular geometry, tissue changes (e.g., infarction, myocarditis), and conduction disturbances. Finally, certain clinical and oncologic information is essential and should be included in the report for accurate echocardiographic interpretation, as presented in Table 3.1.

4. Strain in Diastolic Dysfunction
4.1. Introduction
Diastolic dysfunction (DD) is considered an early marker of myocardial damage and, even when asymptomatic, may be associated with higher mortality rates. As DD progresses, LV filling pressures rise and HFpEF develops; 67, 68 the latter accounts for more than 50% of HF admissions and has mortality rates comparable to those of HFrEF. 69 Unlike HFpEF, preclinical DD is potentially reversible. Its pathophysiology, however, is complex and, despite the integrated use of several parameters, the currently recommended algorithm has low sensitivity for detecting subclinical stages of DD. 70, 71 Indeterminate cases remain frequent because these parameters do not always change simultaneously or linearly. 70 Nondiastolic factors can also contribute to HFpEF, leading to varied phenotypic expressions depending on the predominant pathophysiologic mechanism. 72 Tools that assess left ventricular and left atrial mechanics by strain measurement may overcome these diagnostic challenges. 73, 74 The role of right ventricular mechanics in this context is still under investigation. 75
4.2.
Left Ventricular Strain
Several studies have shown that myocardial deformation parameters assessed by speckle tracking, especially LV GLS, correlate better with LV relaxation and predict filling pressures and exercise intolerance more accurately than tissue Doppler-derived indices. 76-78 A fall in LV GLS helps detect DD at earlier stages and also predicts cardiovascular (CV) events in HFpEF. 17, 79-82 Given this evidence, reduced LV GLS (< 16%) has already been included as a diagnostic criterion in the new algorithm proposed by the recent HFpEF guidelines. 83
4.3. Left Atrial Strain
Left atrial strain (LAS) allows a more detailed analysis of left atrial (LA) function and its various components (reservoir, conduit, and pump). 70 Changes in LAS express ventriculoatrial coupling and result from chronic exposure to elevated LV pressures and reduced LA compliance and relaxation, 41, 84, 85 and may precede morphologic LA remodeling. 86-88 Although reduction of all LAS components has been described, 86, 89, 90 LA reservoir strain (LASr) has proved to be the most robust parameter, changing linearly with the progression of DD. 91-93 Morris et al., among others, showed that reduced LASr (< 23%) increased the detection of DD and correlated with filling pressures and clinical outcomes. 93-99 Given the growing body of evidence, LV GLS and LASr could be integrated into the current DD algorithm, as proposed in the Central Figure. This strategy may help reclassify indeterminate cases and increase accuracy in identifying earlier stages of DD, especially in individuals with cardiovascular risk factors or unexplained dyspnea.
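The two strain cutoffs above suggest a simple way to reclassify cases left indeterminate by the standard diastolic algorithm. A minimal sketch, assuming (our illustrative choice, not a validated rule) that either abnormal marker argues for diastolic dysfunction; values are strain magnitudes in %:

```python
def reclassify_indeterminate_dd(lv_gls, lasr):
    """For a case the standard algorithm calls indeterminate, check the
    two strain markers quoted in the text:
      - LV GLS < 16%  (reduced global longitudinal strain)
      - LASr  < 23%   (reduced LA reservoir strain)
    Either one abnormal -> lean toward diastolic dysfunction."""
    abnormal = []
    if lv_gls < 16:
        abnormal.append("LV GLS")
    if lasr < 23:
        abnormal.append("LASr")
    if abnormal:
        return "diastolic dysfunction likely (" + ", ".join(abnormal) + " reduced)"
    return "remains indeterminate"
```

For instance, an indeterminate case with LV GLS of 17% but LASr of 20% would be reclassified on the basis of the reduced reservoir strain alone.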
97 Consensus documents standardizing strain methodology have been published to minimize intervendor variability, which remains a limitation. 7, 92, 100, 101 New prospective, multicenter studies are awaited to assess whether treatment-induced changes in these indices alter the prognosis of DD and HFpEF.
4.4. Conclusion
LV GLS and LASr are markers of subclinical disease that can be incorporated into current recommendations to refine the diagnosis, staging, and prognosis of DD. Given the complex nature of this assessment, the implementation and validation of algorithms developed with artificial intelligence would be welcome.

5. Strain in the Cardiomyopathies
5.1. Introduction
In the broadest sense, cardiomyopathies are diseases of the heart muscle. In their purest, primary sense, they are not associated with the major causes known to injure the myocardium, such as coronary artery disease (CAD), systemic hypertension, valvular heart disease, and congenital heart disease. They can be divided into the main groups: dilated, hypertrophic, restrictive, arrhythmogenic cardiomyopathy, and "unclassified miscellaneous". 102
5.2. Dilated Cardiomyopathy
By definition, dilated cardiomyopathy is a disease of the myocardial tissue that leads to progressive reduction in systolic function and dilatation of the LV cavity. Clinically, individuals may present with signs and symptoms of HF, requiring treatment, hospitalization, and, ultimately, heart transplantation. 102-107 Echocardiography is part of the first-line diagnostic arsenal, with an extremely important role in diagnosis and prognosis. Its main goals are the assessment of chamber volumes and of LV systolic performance, classically performed by estimating the EF, preferably with the Simpson method.
Strain is an additional echocardiographic tool that adds information to this assessment and can also detect subtle, subclinical abnormalities at early stages of disease. Abduch et al. demonstrated an excellent correlation between volumetric parameters obtained by 3DE and strain in patients with dilated cardiomyopathy. 108 As dilated cardiomyopathy evolves toward more advanced stages of LV systolic impairment, strain and strain rate fall more markedly in the three principal directions (longitudinal, radial, and circumferential) (Figure 5.1). 109 LV twist follows the same downward trend as the disease progresses. In very advanced stages, the rotations may even reverse, with basal segments rotating counterclockwise and apical segments clockwise. 110-112 GLS is an independent predictor of all-cause mortality in patients with HFrEF, especially in male patients without AF. 113 In patients with recovered LVEF, an abnormal GLS predicts the likelihood of a fall in LVEF during follow-up, whereas a normal GLS predicts a stable LVEF during recovery. 114
5.3. Arrhythmogenic Cardiomyopathy
Arrhythmogenic cardiomyopathy is histologically characterized by fibrofatty infiltration of the myocardium. This infiltration occurs preferentially in the inflow tract, outflow tract, and apex of the RV, the so-called "triangle of dysplasia", although the LV can also be affected concomitantly or even exclusively. 115-117 Macroscopically, the ventricular wall tends to thin, with formation of microaneurysms, and the tendency is to progress to systolic impairment and cavity dilatation. The gold-standard diagnostic method is CMR, but echocardiography is the initial examination, and RV free wall strain can help determine the systolic impairment of this chamber. Prakasa et al.
were the first to analyze strain in arrhythmogenic cardiomyopathy with RV involvement, in 2007. They showed a consistent difference between strain values in affected individuals (10 ± 6%) and normal subjects (28 ± 11%, P = 0.001). 118 RV free wall longitudinal strain (RV-FWLS) is associated with the rate of structural progression in patients with arrhythmogenic cardiomyopathy. It may be a useful marker for identifying which patients require closer follow-up and treatment. Patients with RV strain below 20% had a higher risk of structural progression (odds ratio: 18.4; 95% CI 2.7-125.8; P = 0.003). 119 Patients with arrhythmogenic cardiomyopathy show reduced right atrial (RA) strain in all phases of diastole, even when RA volume is normal. RA strain, obtained in the reservoir and pump phases, is associated with an increased risk of CV events. 120
5.4. Hypertrophic Cardiomyopathy
Hypertrophic cardiomyopathy (HCM) is an autosomal dominant disorder and the most common cardiac disease of genetic etiology. It is characterized by increased ventricular myocardial thickness of varying morphology (concentric, apical, septal, LV free wall, or RV hypertrophy) and is related to increased morbidity and mortality in affected patients. 121-123 Echocardiography is the imaging method most used for morphologic and hemodynamic diagnosis in HCM. About 25% of these patients have a resting LV outflow tract gradient above 30 mmHg, which can be quantified by continuous-wave Doppler. 124 A dynamic gradient may also be present and is best assessed with Valsalva maneuvers during echocardiography, or by exercise or dobutamine stress echocardiography. 125 High intraventricular gradients in HCM may be one of the determinants of the fall in myocardial deformation and torsion mechanics, as well as in LA strain.
126 Myocardial strain, measured by speckle tracking, assists in the analysis of regional and global cardiac mechanics in HCM and can detect early changes in systolic function, fibrosis, and even a higher risk of arrhythmia, including in patients with normal systolic function. 126-130 The polar map pattern helps differentiate the phenocopies that present with increased wall thickness, since longitudinal myocardial deformation is reduced at the site of hypertrophy 20, 131, 132 (Figure 5.2). Hiemstra et al. identified indexed left atrial volume and LV GLS as independent prognostic factors for adverse outcomes such as sudden death and heart transplantation in patients with HCM. 133 Although RV GLS may be abnormal in patients with HCM, given that it is a structural heart disease, its prognostic significance is unknown. 133, 134
5.5. Endomyocardial Fibrosis
Endomyocardial fibrosis (EMF) is the most common restrictive cardiomyopathy in Brazil, equatorial Africa, and India, affecting about 10 million people worldwide. It is characterized by deposition of fibrous tissue in the endomyocardium of the apex and inflow tract of one or both ventricles. The etiology remains unknown; it may be related to hypereosinophilia, parasitic infestations, and protein malnutrition, especially in socioeconomically deprived populations. Echocardiography shows ventricles of normal or reduced size, with a "mushroom" or "V"-shaped ventricular morphology due to fibrous deposition, possibly associated with apical endocardial thrombosis, hypermobility of the basal ventricular region (Merlon sign), usually markedly enlarged atria, commonly preserved systolic ventricular function, and DD. 135-137 Few articles have evaluated EMF using 3DE with speckle tracking.
These studies show a reduction in GLS, with more pronounced involvement of the apical region. 137, 138
5.6. Noncompaction Cardiomyopathy
Left ventricular noncompaction (LVNC) is characterized by prominent trabeculae and deep intertrabecular recesses in the LV cavity resulting from incomplete compaction of the myocardium during embryonic life. It can lead to HF, arrhythmias, and thromboembolic events. There are two forms, sporadic and familial, the latter related to mutations in sarcomeric proteins. Myocardial deformation indices allow adequate regional analysis of ventricular function in patients with LVNC and help differentiate it from other cardiomyopathies. An Indian study compared myocardial deformation in 12 patients with LVNC, 18 patients with HCM, and 18 healthy individuals. Both patient groups showed reduced longitudinal strain; however, patients with LVNC showed a greater reduction in apical longitudinal strain than the HCM group (12.18 ± 6.25 vs. 18.37 ± 3.67; p < 0.05), suggesting greater involvement of this region in myocardial noncompaction. In addition, an apex-to-base longitudinal strain gradient was observed in LVNC patients but not in those with HCM. 139 Both groups showed DD compared with the control group. Another study showed that longitudinal strain is higher in LVNC than in idiopathic dilated cardiomyopathy and that the base-to-apex strain gradient is a useful index for differentiating these diseases, with a sensitivity of 88.4% and a specificity of 66.7%. 140 In the normal heart, the LV base rotates clockwise during systole while the apex rotates counterclockwise, and twist is the difference between apical and basal rotation.
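The twist definition above, together with the rigid-body-rotation pattern described in LVNC, can be expressed in a few lines. A sketch assuming the usual sign convention (counterclockwise rotation viewed from the apex is positive, so a normal heart has positive apical and negative basal rotation):

```python
def lv_twist(apical_rotation_deg, basal_rotation_deg):
    """LV twist as defined in the text: apical minus basal rotation.
    With the assumed sign convention, a normal heart (apex counterclockwise,
    base clockwise) yields a positive twist."""
    return apical_rotation_deg - basal_rotation_deg


def is_rigid_body_rotation(apical_rotation_deg, basal_rotation_deg):
    """Rigid body rotation: apex and base rotate in the same direction
    (nonzero rotations with the same sign), the pattern reported in a
    subset of LVNC patients."""
    return apical_rotation_deg * basal_rotation_deg > 0
```

With apical rotation of +10° and basal rotation of -5°, twist is 15°; if both chambers rotate clockwise (e.g., -6° and -4°), the rigid-body-rotation check is positive and twist nearly vanishes.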
A previous study showed that 50% of patients with LVNC exhibit rigid body rotation (RBR), with clockwise rotation of both apex and base; earlier studies, however, reported prevalences of 53.3% and 83%. 141, 142 A study of 28 children with LVNC showed that 39% had RBR and that they also had lower longitudinal strain, but not lower LVEF, than the group without RBR, a finding of possible prognostic value. 143 In addition, the authors suggest that RBR is probably related mainly to dysfunction of the compacted subepicardial apical layer, unrelated to the distribution of trabeculae. Another study of 101 children with LVNC showed that the group with adverse outcomes had reduced longitudinal, radial, and circumferential strain, suggesting that the disease affects the heart globally and not only the noncompacted region. 144

6. Strain in Valvular Heart Disease
Transthoracic Doppler echocardiography is the first-line method for the diagnosis and grading of valvular heart disease, through a combined analysis of the changes in valve anatomy and function. 145 This diagnostic method plays an active part in defining the right timing and type of intervention for the treatment of valve disease. Classically, the indication for treatment is based on the presence of symptoms or of complicating factors. 145 Among the complicating factors, left ventricular dysfunction is considered the most important. 145 Left ventricular function is usually assessed by echocardiography through measurement of the LVEF. 146 However, a growing body of scientific evidence has shown that LV strain measurement can identify ventricular dysfunction before the EF falls.
Mitral regurgitation (MR) is perhaps the valve lesion that best illustrates this paradox, since in this disease the state of high preload and low afterload means that the EF does not adequately represent LV systolic function. For this reason, clinical guidelines are quite conservative in defining left ventricular dysfunction in this condition. 145, 147-149 However, some studies indicate that, even using these parameters, the clinical outcome after surgical correction of MR may be unsatisfactory, particularly regarding the fall in EF and the occurrence of HF. 150, 151 Accordingly, studies have shown that even in patients with an EF above 60% and an LV end-systolic diameter below 40 mm, a reduced GLS (≤ 19%) was associated with a postoperative fall in EF to below 50%. 152-154 A reduced GLS, below 18.1%, was also associated with higher mortality and more CV events in prospectively followed MR patients undergoing corrective surgery. 155 In aortic regurgitation (AR), the severity of the valve lesion has been shown to correlate with the fall in LV strain. 156 Furthermore, in asymptomatic patients with chronic severe AR and preserved EF, a GLS below 19% was associated with higher mortality over time, which was corrected by valve replacement. 157 The same group showed that a postoperative GLS below 19%, as well as a fall of more than 5 percentage points in GLS, implied higher mortality. 158 In severe aortic stenosis (AS), an EF below 50% and/or symptoms have been the pillars of the indication for treatment. 145, 147-149 However, a strategy of waiting for the LVEF to fall below 50% before indicating aortic valve intervention may lead to unsatisfactory clinical outcomes.
159 Thus, a robust parameter for detecting subclinical myocardial dysfunction, such as GLS, appears to be a valuable tool for risk stratification (Figure 6.1). While LVEF does not differ across AS grades, GLS decreases linearly as the disease progresses, 160 carrying a higher risk of adverse clinical outcomes even in asymptomatic patients. 161 Several studies have examined the prognostic value of GLS for predicting mortality and CV events in asymptomatic individuals with AS and preserved LVEF, with the aim of selecting those who should be referred early for valve intervention. 162-164 The results of these studies were pooled in a meta-analysis that defined GLS < 14.7% as the cutoff associated with a higher risk of death (sensitivity 60%, specificity 70%; area under the curve [AUC] = 0.68). 165 A GLS < 14.7% was found in approximately one third of individuals with moderate-to-severe AS and preserved LVEF, carrying a 2.6-fold higher risk of death. Importantly, the relation between GLS and mortality was significant both in those with LVEF 50-59% and in those with LVEF ≥ 60%. In contrast, a GLS > 18% was associated with an excellent clinical course (97 ± 1% survival at 2 years). Therefore, reduced GLS despite preserved LVEF is a powerful prognostic predictor to be weighed in the clinical decision to intervene in severe asymptomatic AS, together with other clinical and echocardiographic data.

7. Strain in Ischemic Heart Disease
7.1. Introduction
Echocardiography is an excellent tool in emergency units for the diagnosis of acute coronary syndrome and its complications. It provides short- and long-term prognostic information, and its role is well established in the risk stratification of stable CAD and in the assessment of myocardial viability.
Among the available techniques, two-dimensional speckle tracking strain echocardiography (2D-STE) confirms and adds information without unduly prolonging the examination. Through longitudinal strain, it assesses subendocardial ischemia with good accuracy in acute and chronic events. This section reviews the indications for longitudinal, circumferential, and radial strain in ischemic heart disease, as well as other data obtained when strain is calculated, such as mechanical dispersion. Table 7.1 lists the main indications for strain in ischemic heart disease.
7.2. Strain in Acute Coronary Syndrome
Two-dimensional strain is a marker with good sensitivity for detecting myocardial ischemia, considered more reproducible than LVEF, with accuracy confirmed against CMR. 52, 166 Subendocardial fibers are the most sensitive to ischemia in its initial stage, and the longitudinal component predominates in this type of ischemia. 167 GLS is reduced in acute myocardial infarction (AMI) and correlates with infarct extent, EF, adverse events, and the response to reperfusion strategies. 168-172 Patients with small infarcts show reduced longitudinal and radial strain, while circumferential strain and twist remain preserved. Circumferential strain, however, is also compromised in transmural infarction. 173 Identifying the extent of transmural infarction has important prognostic implications, since it is associated with a guarded prognosis and a higher number of adverse events. Subendocardial and nontransmural infarcts are associated with recovery after revascularization (Figure 7.1). 174 A longitudinal strain value of 15% correlates with segmental wall motion abnormalities (sensitivity 76%, specificity 95%). 168 Radial strain with a cutoff of 16.5% differentiates transmural from nontransmural infarcts (sensitivity 70%, specificity 71.2%).
A circumferential strain value < 11% differentiates transmural from nontransmural infarction (sensitivity 70%, specificity 71.2%). 175 In addition, a regional longitudinal strain of 4.5% distinguishes transmural from nontransmural infarction (sensitivity 81.2%, specificity 81.6%). 176, 177 Another important point regarding GLS is its diagnostic value in non-ST-elevation acute coronary syndrome for discriminating the culprit artery. In a cohort of 58 patients, 33 of whom had significant CAD (stenosis above 50%) defined by coronary angiography, with strain analyzed before the procedure, a cutoff of 19.7% detected CAD with a sensitivity of 81% and a specificity of 88% (AUC = 0.92). A cutoff of 21% excluded significant coronary stenosis in 100% of patients. Territorial longitudinal strain was calculated as the mean of the peak systolic strains of the segments belonging to the territory of the studied vessel. In that study, had the 21% cutoff been applied, 16 patients would have been spared coronary angiography. 178, 179 Strain can be a tool to aid in the detection of acute coronary occlusion in patients without ST elevation, who may benefit from early reperfusion therapy. One study evaluated 150 patients who underwent echocardiography before being referred for coronary angiography. Of these, 33 had acute coronary occlusion. A strain below 14% identified acute coronary occlusion (sensitivity 85%, specificity 70%), but more robust studies are needed to validate the technique. 180 Strain has emerged as a new technique for detecting subclinical segmental and global abnormalities, outperforming enzymatic tests, the ECG, and risk scores, in addition to its role in the prognostic assessment of these patients with acute coronary disease.
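The territorial longitudinal strain used in the cited cohort is simply a mean of peak systolic strain over the segments assigned to a vessel. A minimal sketch (the segment names and the LAD territory mapping below are illustrative, not the exact assignment from the study; strain values are magnitudes in %):

```python
def territorial_strain(peak_systolic_strain, territory_segments):
    """Mean peak systolic strain magnitude (%) over the segments
    belonging to one coronary territory."""
    values = [peak_systolic_strain[s] for s in territory_segments]
    return sum(values) / len(values)

# Illustrative per-segment strain values; a territorial mean below the
# ~19.7% cutoff from the cited cohort raises suspicion of significant
# disease in that vessel, and a mean above 21% argued against it.
strain = {"basal_anterior": 21.0, "mid_anterior": 18.5,
          "apical_anterior": 15.0, "apical_septal": 14.0,
          "mid_anteroseptal": 17.5, "basal_anteroseptal": 20.0}
lad_segments = ["mid_anterior", "apical_anterior",
                "apical_septal", "mid_anteroseptal"]
print(territorial_strain(strain, lad_segments))  # 16.25
```

Here the hypothetical LAD territory averages 16.25%, below the 19.7% detection cutoff.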
It is a quick bedside examination that can be performed before coronary angiography, especially by trained echocardiographers. According to the studies cited, this technique is indicated in non-ST-elevation acute coronary syndrome for the assessment of segmental abnormalities and global ventricular function, to differentiate small from transmural infarcts, to discriminate the probable culprit artery, and for assessment after percutaneous revascularization. It can also be used to assess myocardial viability after AMI. 181, 182
7.3. Strain in Chronic Coronary Syndrome
The area most susceptible to ischemia lies in the subendocardial region, where the fibers are oriented longitudinally; assessment of longitudinal deformation by 2D-STE is therefore an excellent marker for the presence of ischemia compared with conventional echocardiography alone. 183 The interaction between normal and abnormal myocardium generates typical regional patterns of myocardial deformation, indicating that myocardial contraction and myocardial deformation are not interchangeable parameters. 16, 184 GLS can be far more sensitive than LVEF in detecting early changes in myocardial ischemia, since it assesses LV longitudinal function, but its specificity is not superior to that of wall motion analysis. 185, 186 The variability of regional speckle tracking strain measurements is relatively high, making these assessments less suitable for routine use. GLS measurements, however, have proved reproducible and robust, probably because of the largely automated nature of the method. 187 The other change is the regional heterogeneity of myocardial activation, which alters the temporal sequence of myocardial shortening and lengthening.
In ischemia, not only is the amplitude of shortening reduced, but the onset and duration of fiber contraction are also altered, generating characteristic myocardial shortening or thickening after aortic valve closure. 187 This change, called post-systolic shortening (PSS), is characteristic of developing ischemia, although it can also occur in regional dysfunction of any cause (scar, dyssynchrony, etc.). 187, 188 PSS can be understood as a sign of delayed relaxation that allows the ischemic region to shorten while LV pressure falls and the surrounding tissue relaxes. 16 Mild PSS with normal systolic function is a frequent finding in 30% to 40% of myocardial segments of healthy hearts and is mainly found at the apex and at the base of the inferior, septal, and anteroseptal walls. 16, 189 An important point in ischemic cardiomyopathy is the temporal assessment of the GLS curve pattern, since ischemic segments may have preserved peak systolic values yet show a temporal delay relative to the nonischemic segments. It should be emphasized that regional longitudinal strain measurements do not necessarily reflect the visual impression of wall motion abnormalities, which is determined by radial thickening and inward endocardial motion. 16 GLS contributes to the detection of CAD in patients with stable angina (stenosis of 70% or more), with reduced values in the presence of CAD (17.1 ± 2.5% vs. 18.8 ± 2.6%; p < 0.001), especially when combined with exercise testing, and it also identifies the probable culprit artery. 183 The use of longitudinal strain and, especially, of strain rate improved sensitivity and accuracy in detecting segmental abnormalities late after myocardial infarction.
189 In post-AMI risk stratification, a GLS below 15% before hospital discharge was an independent predictor of LV dilatation over 3 to 6 months of follow-up, in addition to serving as a marker of infarct size. 190 In the same post-AMI setting, a GLS below 14% was an independent predictor of cardiovascular death and HF admissions. 191 In patients with stable chronic disease, a GLS below 11.5% was also shown to be a predictor of all-cause death and of the combined outcome (all-cause death and HF admission). 192 Regional heterogeneity of myocardial contraction can also be assessed by mechanical dispersion, defined as the standard deviation of the time to peak negative strain across all LV segments. This index has predictive value for ventricular tachyarrhythmia in post-infarction patients. Higher dispersion values were demonstrated in patients who developed arrhythmias after AMI (85 ± 29 ms vs. 56 ± 13 ms, p < 0.001). 193
7.4. Right Ventricular Strain in Ischemic Heart Disease
RV function is compromised in approximately one third of inferior wall infarctions, and its involvement has been described as an important predictor of in-hospital mortality and major complications. Assessment of RV function is challenging because of its structural complexity. RV free wall strain proved to be a predictor of proximal right coronary artery occlusion in patients with inferior wall AMI (RV free wall strain < 14.5%, AUC = 0.81; p < 0.001). 194 In stable chronic ischemic disease, RV free wall strain is abnormal in patients with right coronary artery stenosis (lesion greater than 50%) and can be used to detect subclinical dysfunction in this setting. 195

8. Strain in Systemic Diseases (Amyloidosis and Fabry Disease)
8.1.
Strain na Amiloidose Cardíaca A amiloidose é uma doença sistêmica causada pela deposição extracelular de fibrilas amiloides insolúveis nos tecidos. O acometimento cardíaco é um importante fator prognóstico e causa grande impacto na qualidade de vida dos pacientes, ocorrendo mais comumente nas formas causadas por cadeias leves (AL) e na amiloidose por transtirretina (ATTR). 196 A ecocardiografia é um método de primeira linha para o diagnóstico e a avaliação prognóstica da amiloidose cardíaca (AC) e de outras doenças cardíacas infiltrativas. A maioria dos achados clássicos e sinais mais específicos da AC ao ecocardiograma ocorre somente em estágios muito avançados da doença. 30 Situações clínicas como a IC com FE preservada e a presença de hipertrofia ventricular podem servir como sinais de alerta para a suspeição diagnóstica de AC. 197 8.1.1. Papel da Análise da Deformação Miocárdica no Diagnóstico da Amiloidose Cardíaca O SLGVE encontra-se consistentemente alterado em pacientes com AC e está diretamente relacionado ao grau de infiltração amiloide, quantificado na ressonância magnética (RM) pelo grau de realce tardio por gadolíneo (LGE) e pelo volume extracelular (VEC) calculado em imagens de sequência T1. 30 Um padrão de alteração regional dos valores de strain longitudinal (SL), com preservação relativa dos valores de deformação longitudinal dos segmentos apicais (RELAPS) foi descrita na literatura, definindo um gradiente basal-apical característico, conhecido como apical sparing ou cherry on top ( Figura 8.1 ). Na publicação original de Phelan et al., o RELAPS foi calculado a partir da seguinte equação: média do SL apical/(média do SL dos segmentos médios + média do SL dos segmentos basais), com valores > 1,0 apresentando boa acurácia para o diagnóstico de AC, com boa diferenciação de hipertrofias ventriculares causadas por EA e pela CMPH (ASC: 0,94). 
20 Esse padrão regional do SL, com gradiente basal-apical, é encontrado indistintamente nos tipos de amiloidose AL e ATTR. É importante enfatizar que o padrão clássico de apical sparing , embora descrito como característico para AC, pode estar ausente, conforme exemplificado no estudo de Ternacle et al., em que 52% dos pacientes com diagnóstico de AC tinham RELAPS “não diagnóstico” (< 1,0). 198 Isso pode ser explicado, em alguns casos, pelo baixo grau de infiltração amiloide do miocárdio, em estágios bastante precoces da doença. Valores de SL regional septal apical/septal basal > 2,1 (SAB), quanto associados a tempo de desaceleração do influxo mitral < 200 ms, também demonstraram boa acurácia para a diferenciação de AC de outras doenças com fenótipo de hipertrofia parietal do VE, como doença de Fabry, ataxia de Friedreich, e hipertrofia do VE relacionada à hipertensão arterial sistêmica (HAS). 199 A relação da FE do VE/SLG > 4,1 também foi demonstrada como um bom parâmetro para diferenciar AC de CMPH, com performance superior ao RELAPS ou SAB, independentemente do tipo de AC. 200 A deformação miocárdica do VD está geralmente reduzida em pacientes com AC, e seu achado pode ajudar a diferenciar de outras causas de hipertrofias parietais ( Figura 8.2 ), tendo sido descrito também um padrão de preservação relativa apical, similar ao que é descrito no VE. 201 Bellavia et al. demonstraram que alterações do VD podem ocorrer precocemente em pacientes com AC do tipo AL, mesmo em casos em que as espessuras parietais do VE ainda são normais. 202 Na AC, assim como observado em outras CMPs infiltrativas, pode haver acometimento significativo de outros componentes da deformação miocárdica, como o strain circunferencial, 203 strain radial, 204 twist e torção ( Figura 8.3 ). 
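The three diagnostic ratios discussed above (RELAPS, SAB, and LVEF/GLS) are simple arithmetic on segmental strain values. A minimal illustrative sketch follows; the function names and segmental values are ours, inputs are absolute strain magnitudes in %, so sign conventions must be handled by the caller, and none of this substitutes for clinical interpretation:

```python
from statistics import mean

def relaps(apical, mid, basal):
    """RELAPS (Phelan et al.): mean apical LS divided by the sum of the
    mean mid-segment and mean basal-segment LS. > 1.0 favours CA
    (apical sparing pattern)."""
    return mean(apical) / (mean(mid) + mean(basal))

def sab(apical_septal_ls, basal_septal_ls):
    """Septal apical-to-basal LS ratio; > 2.1 (together with a mitral
    deceleration time < 200 ms) favours CA over other hypertrophic
    phenotypes."""
    return apical_septal_ls / basal_septal_ls

def ef_to_gls(lvef, gls):
    """LVEF/GLS ratio; > 4.1 favours CA over HCM."""
    return lvef / abs(gls)

# Hypothetical segmental magnitudes (%), four walls per level:
apical, mid, basal = [20, 19, 18, 21], [10, 11, 9, 10], [8, 9, 7, 8]
print(round(relaps(apical, mid, basal), 2))  # → 1.08 (> 1.0, apical sparing)
print(sab(20, 8))                            # → 2.5 (> 2.1)
print(ef_to_gls(60, -12.0))                  # → 5.0 (> 4.1)
```

Note that RELAPS is deliberately a ratio of averages, so a globally reduced GLS with proportionally preserved apical values still yields a value above 1.0.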
In patients with systemic amyloidosis at early disease stages, without evidence of CA, twist and untwist may be increased in a compensatory fashion, 205 with progressive deterioration of these parameters as the disease evolves, 206 potentially leading, in advanced cases, to rotation of the cardiac base and apex in the same direction, creating a pattern called rigid body rotation, with complete loss of the important contribution of cardiac torsion to ventricular mechanics. LA strain is also frequently and markedly abnormal in patients with CA, partly as a result of LV diastolic dysfunction itself, but also, to an important degree, because of direct infiltration of the atrial wall by amyloid fibrils (Figure 8.4). In a recent study by Aimo et al., only peak atrial LS (LA-PALS) showed an independent association with the diagnosis of CA, beyond the classic echocardiographic variables and cardiac biomarkers. 207 It has also been recognized that advanced infiltrative atrial myopathy can cause severe dysfunction and loss of the chamber's mechanical efficiency, leading to a state of atrial "electromechanical dissociation" (AEMD). 208 In a large patient cohort, Bandera et al. demonstrated the presence of AEMD (determined by LS analysis) in 22.1% of patients in sinus rhythm, which was a determinant of poor prognosis when compared with the outcome of patients in sinus rhythm with preserved atrial mechanical function. 209 In a series of 156 patients with CA from the Mayo Clinic, intracardiac thrombi were detected by transesophageal echocardiography in 27%, 210 a finding reproduced in other studies, with thrombi occurring even in patients in sinus rhythm 211, 212 (Figure 8.5). 3D strain can be useful for demonstrating abnormality of all components of myocardial deformation in patients with CA. Vitarelli et al. demonstrated that LV peak basal rotation, RV basal LS, and LV basal LS could distinguish patients with CA from patients with other forms of ventricular hypertrophy with high accuracy. 213 In a study by Baccouche et al. 214 using LS derived from 3D echocardiography, the same apical sparing pattern, with its characteristic base-to-apex gradient, could be demonstrated. Myocardial work (MW) has also been evaluated in patients with CA. Clemmensen et al. demonstrated that patients with CA had a lower LV myocardial work index (LVMWI) than controls, with more pronounced abnormalities in the basal segments; on stress echocardiography, the increase in LVMWI from rest to peak exercise was 1,974 mmHg% in controls (95% CI 1,699-2,250 mmHg%; p < 0.0001) compared with only 496 mmHg% in patients with CA (95% CI 156-835 mmHg%; p < 0.01). 215 The use of strain for serial assessment and for monitoring the response of patients on specific CA therapies is very promising. Giblin et al. retrospectively evaluated 45 patients with ATTR CA over 1 year of follow-up, comparing LS and MW values between patients treated and not treated with tafamidis. 216 In the untreated group, greater deterioration of GLS (p = 0.02), LVMWI, and MW efficiency (p = 0.04) was found, with no significant between-group differences in circumferential strain, radial strain, twist, or torsion. Myocardial deformation parameters have also been extensively studied as prognostic indices in CA because of their ability to provide quantitative data and their high sensitivity and reproducibility.

In a study by Ternacle et al., the following were independent predictors of major cardiovascular events over a mean follow-up of 11 months: mean apical LS (cut-off: -14.5%), elevated NT-proBNP, and NYHA functional class III or IV. 198 In another study, the RELAPS index was independently associated with the composite endpoint of death or heart transplantation at 5 years (hazard ratio 2.45; p = 0.003), retaining its predictive value for this primary endpoint in multivariate analysis (p = 0.018). 217 In a large study by Buss et al. including 206 patients with AL CA, Doppler-based LS and GLS were strongly associated with NT-proBNP levels and with survival (best cut-off: -11.78%); in multivariate analysis, only diastolic dysfunction and GLS remained independent predictors of survival. 218 In recent work, Liu et al. enrolled 40 patients with multiple myeloma and preserved EF before the start of bortezomib therapy, measuring GLS and MW parameters at baseline. 219 The authors observed that global MW efficiency (GMWE) was significantly associated with adverse cardiac events after 6 months of chemotherapy, with AUC = 0.896 (95% CI 0.758-0.970; p < 0.05). RV LS has also been associated with prognosis in patients with CA. Huntjens et al., studying 136 patients with CA, demonstrated that strain values from all chambers were significantly associated with survival over a mean follow-up of 5 years. 220 Peak LA LS and mean RV free wall strain remained independently associated with prognosis in multivariate analysis. As an independent variable, peak LA strain had the most robust association with survival (p < 0.001), and combining LA strain with GLS and mean RV free wall strain yielded the greatest prognostic predictive capacity (p < 0.001).

8.2. Fabry Disease

Anderson-Fabry disease is one of the most common lysosomal storage diseases, affecting 1 in every 50,000 individuals. 221 It is an X-linked disease and therefore more commonly affects males, with women carrying the mutation; it is characterized by absent alpha-galactosidase A activity, leading to progressive accumulation of globotriaosylceramide in the kidneys, heart, and nerves. Clinically, patients present with skin lesions (angiokeratomas), peripheral neuropathy, renal failure, and HF resulting from a restrictive cardiomyopathy with increased myocardial thickness. These manifestations can occur in childhood but more commonly emerge after the third decade of life. 222 Morphologically, ventricular involvement is characterized by increased LV wall thickness, which may progress to reduced compliance and HF with preserved EF due to a restrictive cardiomyopathy. Other noteworthy findings that may serve as additional red flags are papillary muscle hypertrophy, the endocardial double-contour (binary) sign, and dynamic LV outflow tract obstruction. 223 This HCM-like phenotype is described in 6% of men 224 and 12% of women, with diagnosis at an older age in the latter. 225 The parametric bull's-eye display of LV longitudinal deformation, in turn, plays a relevant role in differentiating the cardiomyopathies that present with increased myocardial thickness, particularly when there is an asymmetric left ventricular hypertrophy (LVH) phenotype, where the etiological possibilities include HCM, amyloidosis (mainly transthyretin), Fabry disease, and hypertensive heart disease in the elderly.
Characteristically, HCM shows the most reduced segmental values in the regions of greatest wall thickness; amyloid cardiomyopathy shows an apical sparing pattern with greater involvement of the mid-basal LV regions; and hypertensive heart disease may present with only a slight reduction in GLS. Fabry disease, however, curiously shows a singular pattern: despite the asymmetric increase in ventricular thickness, the region with the most impaired longitudinal deformation is the basal portion of the inferolateral wall (Figure 8.6), with a progressively decremental pattern as the disease evolves without treatment. There is good correlation between the involvement described by LV longitudinal deformation analysis and late enhancement on MRI across the different phases of the disease. 222 Beyond this discriminatory role, one study compared Fabry patients without morphological abnormalities against a healthy control group. In that study, longitudinal strain of the LV, RV, and LA differed between groups (18.1 ± 4.0, 21.4 ± 4.9, and 29.7 ± 9.9 vs. 21.6 ± 2.2, 25.2 ± 4.0, and 44.8 ± 11.1%, respectively; p < 0.001). Interestingly, beyond this between-group difference, the deformation abnormalities correlated well with symptom severity. 226 Treatment is now available, and morphological cardiac changes, such as reduced wall thickness, can appear after one year of therapy. 227, 228 It is therefore expected that LV longitudinal deformation improves even earlier, before the reduction in ventricular mass; the literature, however, still lacks publications on therapeutic response and the behavior of GLS over the course of treatment.

Myocardial deformation analysis is an important tool in the work-up of ventricular hypertrophy of undefined etiology, particularly within a coherent clinical context and with a good echocardiographic window, in the follow-up of family members without access to genetic testing, and in the assessment of therapeutic response (Figure 8.6).

9. Strain in Systemic Arterial Hypertension

9.1. Introduction

This section discusses the main advantages and disadvantages of using strain in systemic arterial hypertension (SAH), with and without criteria for hypertensive cardiomyopathy (i.e., with or without LVH), and its current clinical value.

9.2. Systemic Arterial Hypertension without Criteria for Left Ventricular Hypertrophy

Over its clinical course, SAH causes abnormalities of myocardial contractility, evidenced by a reduction in longitudinal strain in response to elevated afterload and systolic wall stress, with proven prognostic significance. The fall in GLS reflects subclinical myocardial dysfunction even before the appearance of LVH and before any drop in conventionally measured EF, strain being the only measurement to become abnormal at stage A of HF development in SAH. 229-239 The reduction in GLS begins in the basal interventricular septum (IVS) and extends to the basal and mid regions of the other walls, probably reflecting the greater burden of systolic wall stress on the IVS in the early stages of the hypertensive syndrome 240, 241 (Figure 9.1). The longitudinal fibers of the subendocardial layer are affected early in this initial phase, together with the mid-myocardium, in contrast to the epicardium, as demonstrated in several studies. 242 In another publication, however, abnormal longitudinal strain of the epicardial layer was the only predictor of cardiovascular events, suggesting that its involvement may correspond to more severe and chronic injury. 243 That said, layer-specific myocardial analysis is not available on the large majority of current systems. Radial and circumferential strain, on the other hand, which incorporate the full myocardial thickness in their analysis, tend to remain unchanged or even increased, probably as an attempt at mechanical compensation for the reduced GLS, 60, 236 and when the circumferential component becomes abnormal it may indicate more severe myocardial dysfunction. 244 The main explanations for the GLS reduction involve increased collagen synthesis culminating in fibrosis, a robust marker of myocardial dysfunction. Reduced GLS correlates not only with plasma markers of fibrosis, such as elevated tissue inhibitor of metalloproteinase, but also with fibrosis detected by late gadolinium enhancement on MRI in hypertensive patients. 231, 234, 238, 239 GLS reductions have also been observed in masked and white-coat hypertension, 245, 246 with these decreases correlating with conventional echocardiographic markers of diastolic dysfunction, 240 as well as greater long-term deterioration in individuals who discontinued antihypertensive treatment. 247

9.3. Systemic Arterial Hypertension with Criteria for Left Ventricular Hypertrophy

The myocardial consequences of chronic hypertensive disease include myocyte hypertrophy, myocardial fibrosis, and medial thickening of the intramyocardial coronary arteries. 248 Consequently, SAH and the changes of myocardial remodeling are risk factors for major cardiac events, such as the development of HF and premature death. The main purpose of strain in these cases is therefore to detect subtle systolic abnormalities, even before conventionally measured EF is impaired, selecting cases of HFpEF for appropriate treatment.

The different patterns of LV remodeling can show abnormalities in the various strain components. In concentric hypertrophy, GLS values may be reduced, with progressive decline across the geometric patterns, from concentric remodeling to eccentric hypertrophy with LV dilatation. 249-251 Global circumferential and global radial strain are preserved in most studies 252 or even reduced in some series 249 and tend to remain normal in the epicardial layers of individuals with SAH and LVH. 250 The behavior of torsion and twisting is likewise variable, with normal or reduced values depending on the type of ventricular geometry. 249, 253 With the three-dimensional technique, 3D GLS tends to deteriorate with the degree of hypertrophy and the LV cavity diameter in SAH. 62, 250 Beyond the correlation of reduced GLS with the different LVH patterns, strain can help clarify the cause of hypertrophy and is usually lower in HCM than in hypertensive LVH. 254

9.4. Clinical Treatment

GLS declines in parallel with worsening functional class 241 and improves with long-term treatment, as demonstrated in a 3-year follow-up analysis after antihypertensive therapy 255 and with antihypertensive treatment in the emergency care setting. 256 Reduced GLS also correlated with abnormal ambulatory blood pressure monitoring in treated patients, even after adjustment for other clinical variables such as age, diabetes mellitus, and LV mass index. 257

9.5. Conclusion

There is sufficient evidence to recommend the use of strain in patients with SAH, regardless of the presence of LVH, both for the early identification of subclinical structural abnormalities and in HFpEF with a view to optimized treatment.
On the other hand, more robust studies are needed to guide the systematic use of strain in this population.

10. Strain in Athletes

Regular, intense physical activity is responsible for a series of profound adaptive electrical, structural, and functional changes, usually referred to as the "athlete's heart". 258 Analyzing this condition is important for a better understanding of the mechanisms of cardiac adaptation and performance improvement, guiding optimized training. It also allows differentiation from pathologies that may share morphological features with training-induced changes. High-performance athletes with large LV volumes appear to belong to the spectrum of healthy physiology typical of the athlete's heart. Some studies show that resting GLS is slightly reduced in athletes compared with sedentary individuals, while in others it is higher than in controls. 259, 260 In most studies, however, no significant difference was identified. 261 This variability may reflect the impact of factors such as preload and afterload, myocardial mass, and sinus bradycardia. For this reason, finding reduced GLS values in athletes with normal or supranormal LV diastolic function can be decisive in distinguishing exercise-related adaptation from cardiac pathology. Absolute values of 18% or higher are still considered within normal limits, and the reduction of these indices is much more pronounced in patients with HCM or SAH. 262 Global circumferential strain and radial strain, by contrast, showed no significant differences from controls. 261 The parametric bull's-eye display can help differentiate the athlete's heart from other diseases that present with hypertrophy. 263
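As a toy illustration only, the resting cut-offs discussed in this section (an absolute GLS of 18% or more is within general normal limits, while in trained athletes values down to 16% may still be acceptable) can be expressed as a simple triage; the function name and tiering are ours, not a validated algorithm:

```python
def triage_athlete_gls(gls_percent):
    """Crude triage of a resting GLS value (either sign convention)
    using the cut-offs discussed in the text; illustrative only."""
    magnitude = abs(gls_percent)
    if magnitude >= 18.0:
        return "normal"
    if magnitude >= 16.0:
        return "acceptable in trained athletes"
    return "reduced - consider pathology if other suggestive signs present"

print(triage_athlete_gls(-19.2))  # → normal
print(triage_athlete_gls(-16.8))  # → acceptable in trained athletes
```

In practice, of course, such a value would always be read alongside diastolic function, wall thickness, and cavity dimensions, as the text emphasizes.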
When athletes are categorized by the type and intensity of exercise practiced (static vs. dynamic), differences emerge mainly in aspects of LV mechanics. A recent study showed that cardiac torsion was higher in athletes practicing low-dynamic, high-static sports (weightlifting, martial arts) and low-static, high-dynamic sports (marathon, soccer) than in controls. Conversely, torsion was lower in athletes practicing high-dynamic, moderate-static sports (swimming, water polo), which may be explained by changes in apical, but not basal, rotation. Peak untwisting was higher in athletes with predominantly low-dynamic, high-static exercise, whereas lower peaks were found in athletes practicing sports with high-dynamic, high-static components. 261 Studies using speckle tracking to quantify myocardial deformation have shown that competitive endurance athletes present normal or increased strain values. 264-268 Regarding the RV, myocardial deformation indices obtained both by tissue Doppler and by speckle tracking may be slightly reduced in the basal and mid segments of the RV free wall, notably in endurance athletes compared with controls. 269 It remains controversial whether this reduction in RV deformation is merely an adaptive response to exercise or a subclinical abnormality due to myocardial injury. 270 Some authors hypothesize that the finding can be explained by changes in curvature between the RV apex and base, producing this segmental strain difference. Studies of atrial function in athletes using speckle tracking are still preliminary and conflicting: one study showed that atrial contraction assessed by LA GLS decreased significantly after training, 271 while another showed no differences in atrial strain between athletes and sedentary individuals. 272

Strain measurement is also important in the assessment of diastolic function. Dynamic exercise leads to more effective ventricular relaxation, in addition to biventricular dilatation, whereas static exercise may be related to increased myocardial thickness and concentric LV hypertrophy, potentially causing some degree of diastolic impairment. 273 Furthermore, the use of illicit performance-enhancing drugs can lead to deterioration of systolic or diastolic ventricular function, and speckle tracking echocardiography can detect these changes early. 274 It is therefore essential to use all available echocardiographic tools when assessing ventricular function in professional and/or amateur athletes. Strain can detect incipient systolic abnormalities long before any contraction abnormality appears on the two-dimensional study or any drop in EF occurs. Speckle tracking echocardiography has proved very promising as a complement to routine two-dimensional echocardiography in the evaluation of athletes. In this population (unlike in sedentary individuals), GLS can be considered normal at absolute values above 16%; lower values should raise suspicion of pathology, particularly in the presence of other suggestive signs such as significant ventricular hypertrophy or dilatation. 275

11. Strain in Stress Echocardiography

Table 11.1 shows the main applications of strain in stress echocardiography. A more complete review article on stress echocardiography will soon be published in this journal.

12. Strain in Congenital Heart Disease

Several studies have already demonstrated the high prognostic value of speckle tracking strain, reinforcing its usefulness in both congenital and acquired disease. 9 However, myocardial strain is subject to physiological variation caused by age, sex, heart rate, preload, blood pressure, and body surface area, in addition to the analysis software used. 296 A continuous effort has been made to establish normal strain values that can serve as a universal pediatric reference, so that myocardial deformation assessment can be adopted in clinical routine. 297-299 The tables that follow present the myocardial strain values already reported in the literature for normal children (Tables 12.1 to 12.3) and for selected congenital heart diseases, with recommended cut-off values (Table 12.4). A more complete review article on this topic will soon be published in this journal.

13. Right Ventricular Strain

13.1. Introduction

The RV plays an important role in the pathophysiology of cardiopulmonary disease. A large body of evidence has shown that RV dysfunction is an important independent marker of morbidity and mortality in several clinical settings, such as HF, valvular heart disease, pulmonary hypertension (PH), pulmonary embolism, ischemic heart disease, and congenital heart disease in adults. 308-313 Cardiac MRI (CMR) is considered the noninvasive gold standard for obtaining RV volumes and EF and for structural assessment; its main limitations, however, are high cost, longer acquisition time, and limited availability in most centers. 314 Two-dimensional (2D) echocardiography is the most widely used initial examination for structural and functional assessment of the RV, being more available, less expensive, noninvasive, and faster to acquire.
This 2D assessment of the RV is nevertheless challenging because of the complex structure of the chamber, its unfavorable position within the chest, the heavy myocardial trabeculation that hinders endocardial visualization, its thinner walls, and the high load dependence of the most commonly used systolic function indices. 315 Several echocardiographic indicators of RV systolic function are used in clinical practice. More recently, 2D speckle tracking strain echocardiography entered the clinical arena as an objective indicator of regional and global myocardial contractility, initially for the LV and subsequently for the RV. With the application of this methodology, a growing number of studies and publications have drawn attention to its advantages over conventional echocardiographic parameters. 316

13.2. Anatomical and Functional Characteristics of the Right Ventricle

Table 13.1 summarizes the main characteristics distinguishing the two ventricles. 317-319 LV and RV function are closely related, a phenomenon called systolic ventricular interaction, because the ventricles share obliquely oriented muscle fibers in the IVS, which have a mechanical advantage over the transverse fibers of the RV free wall. 36 The continuity of these fibers allows the RV free wall to be pulled along during LV contraction, and it is estimated that 20% to 40% of the RV stroke volume and systolic pressure result from LV contraction. 318, 319

13.3. The Right Ventricle and Echocardiographic Parameters of Systolic Function

Several indices are routinely used to assess RV systolic function, such as fractional area change (FAC), tricuspid annular plane systolic excursion (TAPSE), peak systolic velocity of the tricuspid annulus, and the myocardial performance index. Each has advantages and limitations, variable feasibility and reproducibility, and debatable diagnostic and prognostic efficacy. 35, 36 It is currently believed that none of them, in isolation, is a good indicator of RV systolic function. Because the longitudinal contraction vector is the most important one, owing to the predominance of longitudinally oriented muscle fibers running from the tricuspid annulus to the apex, preference is given to indices that explore motion along the longitudinal axis when assessing regional or global RV longitudinal function. 92 2D speckle tracking echocardiography (2DST) is an imaging modality that assesses myocardial deformation, an intrinsic property of the myocardium, in all three directions (longitudinal, circumferential, and radial); the longitudinal component is the most used because of its good reproducibility, relevant prognostic information, and validation in an experimental study 75 and in clinical studies against CMR in several cardiovascular diseases. 320-323 Accordingly, RV 2DST has proved to be a good marker of systolic function, with prognostic value in several cardiovascular diseases. 75, 324-330

13.4. Acquisition and Limitations

RV GLS by 2DST should be obtained from a modified, RV-focused apical four-chamber window, with the transducer moved more laterally and directed toward the right shoulder, which allows better visualization of the free wall and better measurement reproducibility (Figure 13.1). It is important to optimize orientation, depth, and gain so as to maximize RV size and keep its apex visible throughout the cardiac cycle. 36 Another precaution during acquisition is to avoid tilting the transducer anteriorly or posteriorly, thereby preventing, respectively, the appearance of the aortic valve or of the coronary sinus, and displaying only the interatrial septum. 331 Once adequate visualization is obtained, the system should be set to record three cardiac cycles and to acquire images at a temporal resolution of 50-80 frames per second. This frame rate can be achieved through indirect adjustments, such as image depth, ultrasound beam width, and resolution, or through direct settings available on the echocardiography system used. In some software packages, the beginning and end of RV ejection must still be defined from the pulsed-wave Doppler tracing recorded in the RV outflow tract. The region of interest (ROI) is defined along the endocardial border, including the RV free wall and the IVS, taking care not to include the pericardium and to keep the ROI from being too narrow, which can produce erroneous results. Attention should be paid to the position of the basal reference points: if they are placed below the ideal position, i.e., on the atrial side of the tricuspid annulus, longitudinal deformation values may be underestimated. 332 The ROI can be traced manually by the user or generated automatically; if generated automatically, the user must be able to verify it and, if necessary, edit it manually. 92 After the tracking quality has been checked and given final approval by the operator, the regional deformation values are displayed. According to current recommendations, the value reported should be the largest absolute value reached during systole (peak systolic strain), with the pulmonary valve Doppler tracing used to define end-diastole and end-systole. 92 Whenever possible, appropriate software should be used, since automatic detection of the RV segments reduces the need for operator intervention and thus contributes to better reproducibility. Segmentation of the RV free wall between apex and base comprises three segments (basal, mid, and apical), and the IVS is segmented in a similar fashion.
RV free wall longitudinal strain (RV-FWLS) is the average of the deformation values of its three segments, whereas RV GLS is the average of the strain values of the free-wall and IVS segments. RV-FWLS is the most used in clinical practice and research, since RV GLS is influenced by LV function through the IVS and therefore yields relatively lower absolute values. 333 For standardization purposes, RV-FWLS should be reported as the standard parameter, with RV GLS calculation optional. 92 Beyond inadequate acoustic windows, a further limitation is that experimental studies and mathematical models have shown the magnitude of myocardial deformation to be influenced by heart rate as well as by preload and afterload: with preserved systolic function, studies have confirmed that strain may increase with higher preload and heart rate and decrease with the opposite changes in these variables. 16, 334

13.5. Indications/Normal Values

RV systolic dysfunction is well established as a marker of poor prognosis in several cardiovascular diseases, and RV GLS is an independent prognostic marker in patients with PH, HF, ischemic heart disease, and other cardiomyopathies, as well as correlating better with CMR-derived RV EF than the traditional parameters. 35, 320-323, 334-336 In patients with PH, RV GLS is reduced and correlates well with invasive hemodynamic parameters of RV performance. 337 Moreover, RV GLS has been shown to be an independent predictor of all-cause mortality and PH-related events. A recent meta-analysis assessing the prognostic value of RV GLS in these patients showed that a relative reduction of 19% is associated with a higher risk of PH-related events, whereas a relative reduction of 22% is associated with a higher risk of all-cause death. 324 Figure 13.2 shows an example of RV strain in a patient with long-standing primary PH.
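The segment-averaging conventions described in section 13.4 (RV-FWLS as the mean of the three free-wall segments; RV GLS additionally averaging the septal segments, which is why its absolute values run lower) reduce to simple means. A minimal sketch with hypothetical segment values:

```python
from statistics import mean

def rv_fwls(free_wall_segments):
    """RV free wall longitudinal strain: mean of the basal, mid, and
    apical free-wall segment values (the recommended standard parameter)."""
    return mean(free_wall_segments)

def rv_gls(free_wall_segments, septal_segments):
    """RV global longitudinal strain: mean over free-wall AND septal
    segments; optional, and lower in magnitude because the septum is
    shared with the LV."""
    return mean(free_wall_segments + septal_segments)

free_wall = [-28.0, -26.0, -24.0]   # basal, mid, apical (hypothetical %)
septum = [-18.0, -17.0, -16.0]
print(rv_fwls(free_wall))           # → -26.0
print(rv_gls(free_wall, septum))    # → -21.5
```

Note how the septal contribution pulls the global value toward zero, illustrating why RV-FWLS and RV GLS cannot be compared against the same reference range.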
In patients with HF, RV GLS has high sensitivity and accuracy for diagnosing systolic dysfunction of this chamber.338 A recent publication showed that absolute values < 14.8% are associated with adverse events such as death, heart transplantation, and hospitalization, independently of LVEF and LV diastolic dysfunction.19 In addition, RV GLS and RVFWSL were able to detect subtle abnormalities of RV systolic function in patients with HF and reduced LVEF and, to a lesser degree, in those with HF and preserved LVEF.329 In patients eligible for left ventricular assist device implantation, RV GLS is a useful tool for stratifying the risk of RV failure. With a sensitivity of 68% and a specificity of 76%, an absolute RV GLS value < 9.6% identified patients who developed post-procedure RV failure, defined as the need for a right ventricular assist device or inotrope use for more than 14 days.339 In heart transplant recipients, combining LV GLS and RVFWSL measurements may be useful to rule out acute cellular rejection and reduce the number of routine biopsies.340 Figures 13.3 and 13.4 show examples of RV strain in a patient with a ventricular assist device and in a heart transplant recipient, respectively. In acute myocardial infarction (AMI), RV GLS is the echocardiographic parameter that best correlates with right ventricular ejection fraction (RVEF) obtained by CMR.341 Furthermore, this parameter has proved to be an independent predictor of death, reinfarction, and hospitalization for HF, confirming its key role in the evaluation of this population.342 The assessment of patients with arrhythmogenic RV dysplasia is discussed in another section. Recently, the role of RV systolic dysfunction has been investigated in other cardiomyopathies.
In hypertrophic cardiomyopathy (HCM), reduced RV GLS values have been described relative to healthy controls,343 and RV GLS also differentiated patients with HCM from those with hypertension-related hypertrophy, with high sensitivity and specificity.344 Mitral stenosis is the valvular heart disease that most often affects the RV, frequently altering the conventional parameters of RV assessment. RV GLS shows a segmental pattern of abnormality, with significantly lower values in the IVS and basal free wall and normal values in the mid and apical free wall.345,346 In patients with severe functional tricuspid regurgitation, RVFWSL identified a greater proportion of patients with RV dysfunction (84.9%) than FAC (48.5%) or TAPSE (71.7%). Moreover, RVFWSL was independently associated with all-cause mortality and had incremental prognostic value when combined with the traditional parameters of RV assessment.328 There is currently no consensus on normal RV strain values, owing to the scarcity of studies in this area. The latest recommendations for cardiac chamber quantification by echocardiography in adults from the American Society of Echocardiography and the European Association of Cardiovascular Imaging suggest that absolute values of both RV GLS and RVFWSL below 20% be considered abnormal.35 Caution is needed, however, because different vendors supply different software packages, with their own reference values and differences in tracking level (endocardial, epicardial, or full-thickness myocardium). Table 13.2 summarizes the main recommendations for using GLS in RV assessment.

14. Left Atrial and Right Atrial Strain
14.1.
Acquisition Technique and Analysis of Left Atrial Strain
Strain-based analysis of LA function allows all three components of LA function to be assessed: LASr, which evaluates the reservoir function; LAScd, which evaluates the conduit function; and LASct, which evaluates the contractile function. Although less used, strain rate can also be measured, described as pLASRr (peak strain rate during the reservoir phase), pLASRcd (peak strain rate during the conduit phase), and pLASRct (peak strain rate during atrial contraction).92,347 For LA strain analysis, apical four- and two-chamber views optimized for the LA are used, with a high frame rate, usually between 40 and 80 fps. A specific cardiac cycle is selected and the endocardial border is traced manually, point by point, from one side of the mitral annulus to the opposite side, extrapolating across the pulmonary vein ostia and the LA appendage. The software creates the ROI, which is adjusted to 3 mm in width and should extend from the endocardial to the epicardial border. If tracking quality is rejected in two or more segments even after manual adjustment, that view should be excluded from the analysis. Finally, the software calculates GLS for each of the apical views mentioned above. In LA strain analysis, there are two different zero-reference points: the onset of the P wave on the ECG2 or the peak of the R wave of the QRS.348 The first method makes it easier to recognize the components of LA strain, and the absolute values of LAScd and LASct must be summed to obtain LASr. The second method yields LASr directly, which is the measurement with the greatest prognostic value, with the remaining components derived from the curve. The R-wave reference method is the most recommended, because this is the point of minimum LA volume and LASr is more easily obtained; most studies use this method.92

14.2.
Normal Values
LA strain shows great heterogeneity in the literature regarding normal values. The meta-analysis by Pathan et al. is currently the best available evidence: the mean values of LASr, LAScd, and LASct were, respectively, 39.4% (95% CI 38%-40.8%), 23% (95% CI 20.7%-25.2%), and 17.4% (95% CI 16.0%-19.0%).37

14.3. Clinical Applicability of Left Atrial Strain
LA strain assessment has shown incremental prognostic value in several clinical settings when compared with volumetric measurement alone349 (Figure 14.1).

14.3.1. Heart Failure and Assessment of Diastolic Function
LA strain is depressed in HFrEF and has prognostic value for predicting all-cause death or rehospitalization for HF,350 correlates well with functional capacity351 and LV filling pressures,352 and is a good predictor of response to cardiac resynchronization therapy.353 In HFpEF, LA strain plays an important role in diagnosis354 and prognosis,73,355 and can predict the risk of progression to atrial fibrillation (AF).98 About 20% of HFpEF cases may show an indeterminate pattern on LV diastolic function assessment,93 and LA strain can recategorize these patients,90 with all three components of atrial function showing good accuracy in identifying elevated left atrial pressure.89

14.3.2. Atrial Fibrillation
In AF, the LA reservoir and conduit functions are depressed and the contractile function is absent. LA strain can predict new-onset AF in several conditions, such as HFrEF,356 mitral stenosis,357 Chagas disease,358 and after pacemaker implantation,359 and can also predict the risk of AF recurrence after cardioversion360 or ablation.361-363 LA functional assessment by strain may eventually be incorporated into the decision-making process for AF ablation.
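As a back-of-the-envelope check of the reference-point arithmetic from section 14.1: with the P-wave zero reference, the reservoir strain is assembled from the absolute conduit and contractile components. The sketch below uses the pooled means above purely as illustrative inputs:

```python
# Sketch: with the P-wave onset as the zero reference, LASr is the
# sum of the absolute conduit (LAScd) and contractile (LASct)
# components. Inputs are the pooled means quoted above.
def reservoir_from_components(las_cd, las_ct):
    """LASr = |LAScd| + |LASct| (P-wave zero reference)."""
    return abs(las_cd) + abs(las_ct)

lasr = reservoir_from_components(las_cd=-23.0, las_ct=-17.4)  # 40.4
# Close to the pooled LASr mean of 39.4%. With the R-wave reference
# (the recommended method), LASr is instead read directly at the
# curve peak, since the R wave marks the point of minimum LA volume.
```
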
LASr is also associated with the occurrence of ischemic stroke, independently of the CHA2DS2-VASc score, age, and anticoagulant use.364

14.3.3. Valvular Heart Disease
LA strain can signal greater severity and unfavorable evolution in mitral and aortic valve disease.365,366 In severe primary mitral regurgitation, LASr has been shown to predict hospitalization for HF or all-cause death, independently of the indications for surgical intervention.367,368

14.3.4. Coronary Artery Disease
Coronary artery disease is associated with atrial dysfunction through two main mechanisms: LV diastolic dysfunction and direct LA ischemia.369 LA strain may have important prognostic value in acute coronary syndrome, correlating with greater severity370 and unfavorable outcomes.371

14.4. Right Atrial Strain
Data on RA strain are scarce, but a recent study of 101 healthy volunteers reported the following values using the QRS complex as the reference: reservoir 37.6% ± 6.9, conduit 26.0% ± 7.1, and contraction 11.6% ± 4.4.372 RA functional assessment is of interest in congenital heart disease,301,373 tricuspid valve disease, and PH.374

15. Assessment of Left Ventricular Torsion
15.1. Introduction
LV function is determined by complex interactions among tissue anatomy, myocardial contractility, and hemodynamics. In the LV myocardium, the muscle fibers run in different directions. In the subendocardial region, the fibers are nearly parallel to the wall and produce a right-handed helical rotation, gradually changing in the subepicardium to fibers angled at 60-70 degrees, which promote a left-handed helical rotation.375,376 Contraction of the subepicardial fibers rotates the LV apex counterclockwise and its base clockwise, whereas contraction of the subendocardial fibers rotates the apex and base in exactly the opposite directions.
Given the larger rotation radius of the epicardial layer, the subepicardial fiber direction prevails in the overall direction of rotation when both layers contract simultaneously. This results in global counterclockwise LV rotation near the apex and clockwise rotation near the base during ventricular ejection,377 as illustrated in Figure 15.1. This twisting motion of the LV helps maintain a uniform distribution of fiber shortening and stress across all walls, thereby producing a relatively high EF (~60%) despite limited fiber shortening (~20%).378 Torsion and shear of the subendocardial fibers during ventricular ejection store potential energy, which is subsequently used for diastolic fiber recoil and untwisting of the helices, together producing diastolic suction.379,380 Preload, afterload, and contractility alter the extent of ventricular torsion.381 Increases in preload or contractility increase LV torsion, whereas increased afterload has the opposite effect. Several imaging modalities and techniques can be used to quantify ventricular torsional mechanics: echocardiography (tissue Doppler, ST2D and ST3D, velocity vector imaging [VVI]), CMR (tissue tagging), and sonomicrometry. There is currently no gold standard for assessing LV torsional mechanics, and the imaging modalities listed above show good agreement.382 Because of its safety, availability, and better cost-effectiveness, echocardiography has been the most widely used imaging modality.

15.2. Definitions and Nomenclature
Torsion, twist, twist rate, untwist, and untwist rate are the terms commonly used to describe the systolic rotation and reverse diastolic rotation of the LV base and apex as seen from the apex. Definitions of these terms can be found in Tables 15.1 and 15.2.

15.3.
Step-by-Step Assessment of Ventricular Torsion by Speckle Tracking Echocardiography
To assess rotational mechanics, parasternal short-axis images of the LV are obtained at the basal level (mitral valve) and the apical level (below the papillary muscles) (Figure 15.2). It is important to obtain an apical LV image in which the RV is absent or only partly visible, generally one to two intercostal spaces below the usual position. Most assessment errors arise from inappropriate selection of the basal and apical planes and from ROI adjustment. By convention, clockwise rotation is displayed below the baseline and counterclockwise rotation above it (Figure 15.3). The normal value of global twist is 9.7° ± 4.1°. For torsion, few reference values exist in the literature; it is estimated at 1.35°/cm ± 0.54°/cm.383

15.4. Clinical Applications
LV torsion parameters have been used mainly to assess changes in ventricular mechanics in conditions with reduced LVEF (ischemic and dilated cardiomyopathy) or preserved LVEF (HFpEF, hypertension, HCM, aortic stenosis, aortic regurgitation, and mitral regurgitation), as well as to evaluate subclinical myocardial dysfunction caused by chemotherapy. Although twist and torsion are useful parameters for analyzing global systolic function, they have limited reproducibility, particularly because of the lack of anatomical landmarks for the apical view. Abnormalities in ventricular twist and torsion are not specific, but they can contribute to understanding the pathophysiology of different cardiomyopathies and help differentiate among them (Table 15.3).

16. Strain in the Analysis of Ventricular Dyssynchrony
16.1. Introduction
Cardiac resynchronization therapy (CRT) is a therapeutic option with indications already established in national and international guidelines and a proven, substantial reduction in morbidity and mortality.
It is recommended as class I for symptomatic patients with dilated cardiomyopathy on optimized medical therapy, an ECG with left bundle branch block (LBBB) pattern, QRS duration ≥ 150 ms, and LVEF below 35% (level of evidence: A).385 These guidelines use the presence of LBBB with QRS duration ≥ 150 ms as the marker of dyssynchrony because, to date, there is no evidence supporting the usefulness of echocardiographic synchrony assessment for this purpose. The ECG, however, also has limitations as a dyssynchrony marker. Thus, echocardiographic dyssynchrony assessment for CRT selection should currently be performed in an individualized and judicious manner by an adequately trained examiner and interpreted together with the patient's clinical data. It is also worth remembering that the role of echocardiography encompasses not only the assessment of cardiac synchrony for patient selection but also guidance on the best site for LV lead implantation, evaluation of response and reverse remodeling, and, more recently, identification of the risk of ventricular arrhythmias.385

16.2. Dyssynchrony Assessment in Patient Selection for Cardiac Resynchronization Therapy
Strain-based dyssynchrony assessment, by itself, neither establishes the indication for CRT nor gauges its efficacy. It is recognized, however, that even with a precise indication, the chance of success, defined as clinical or functional improvement and/or improvement in imaging variables, is around 60-70% of cases. The CRT response rate can be estimated, and even improved, with echocardiography, and myocardial deformation measurements stand out in this setting. The initial strain-based dyssynchrony analysis used radial strain and described the time difference between peak radial deformation of the mid anteroseptal and mid inferolateral segments: a value above 130 ms characterizes patients with a higher response rate,386 as shown in Figure 16.1.
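A minimal sketch of that radial-strain criterion (the 130 ms cutoff is from the study cited above; the timing values are hypothetical):

```python
# Sketch: delay between time-to-peak radial strain of the mid
# anteroseptal and mid inferolateral segments; a delay > 130 ms
# flags a higher likelihood of CRT response.
def radial_delay_ms(t_peak_anteroseptal_ms, t_peak_inferolateral_ms):
    """Absolute time difference between the two mid-segment peaks."""
    return abs(t_peak_inferolateral_ms - t_peak_anteroseptal_ms)

def suggests_crt_response(delay_ms, cutoff_ms=130.0):
    """Apply the published >130 ms cutoff to the measured delay."""
    return delay_ms > cutoff_ms

delay = radial_delay_ms(180.0, 340.0)          # 160.0 ms
responder_likely = suggests_crt_response(delay)  # True
```
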
Beyond radial dyssynchrony, identification of a septal rebound stretch (SRS) pattern by speckle tracking has proved to be an independent predictor of long-term prognosis and of LV reverse remodeling, with incremental value over the presence of LBBB and of apical rocking detected by visual analysis. This pattern reflects uncoordinated cardiac contraction with a resulting reduction in myocardial performance. Recent studies therefore indicate that the presence of SRS may improve patient selection for CRT, especially in the subgroup without definite LBBB.387 This classic pattern is identified by three elements obtained from the longitudinal deformation pattern of the (usually basal) inferoseptal and anterolateral segments: 1) initially opposing peaks of the septal (negative) and lateral (positive) curves; 2) peak negative septal deformation within 70% of the ejection time; and 3) peak negative lateral-wall deformation after aortic valve closure,388 as shown in Figure 16.2. Recently, analysis of global left ventricular myocardial work efficiency (GLVMWE) has shown promise in the CRT setting. GLVMWE can be quantified noninvasively from myocardial strain curves and blood pressure measurements. Reduced GLVMWE values were independently associated with better long-term prognosis.389 Figure 16.3 illustrates the changes in strain, myocardial work, and myocardial work efficiency in a patient successfully treated with CRT.

16.3. Assessment of Myocardial Viability
Another application of myocardial deformation in the CRT setting concerns the correlation between myocardial fibrosis and reduced strain values.
Reduced global radial strain values correlate with a greater degree of fibrosis (detected by CMR) and thus identify patients with a reduced chance of recovery of ventricular function. Reduced longitudinal deformation in patients with ischemic heart disease can also be used for this purpose.

16.4. Guidance of Lead Implantation Site
As important as characterizing total LV fibrosis, impaired strain values in the segments at the LV lead implantation site have been related to a lower rate of CRT responders. Studies show that positioning the LV lead at the segment with the greatest mechanical contraction delay yields higher CRT success rates, and the target segment can be identified by speckle tracking. In the TARGET trial, ST2D-guided LV lead positioning resulted in better clinical response and lower rates of death and HF hospitalization.390,391

16.5. Prognostic Assessment after Cardiac Resynchronization Therapy
Longitudinal strain dyssynchrony analysis after CRT was a strong predictor of ventricular arrhythmias. Persistence or worsening of mechanical dispersion 6 months after CRT, assessed by speckle tracking, is associated with worse prognosis. In addition, CRT response evidenced by reverse remodeling depended on improvement of both longitudinal and circumferential function after CRT.392

16.6. Adjustment of Resynchronization Parameters
About 30% of patients undergoing CRT are considered nonresponders because of the absence of clinical and/or functional improvement and the absence of reverse remodeling (i.e., of a reduction in ventricular dimensions and an increase in EF).393 In this group of patients, adjusting the atrioventricular, interventricular, and left intraventricular intervals may improve the individual response to CRT.
Some studies have shown that speckle tracking can be used to guide the adjustment of CRT parameters, with significant improvement in functional class and EF in nonresponders.394

17. Myocardial Work
17.1. Introduction
A new echocardiographic tool called myocardial work (MW) has emerged recently to add information on ventricular function by incorporating the effect of LV afterload into the longitudinal strain measurement. The experimental work of Suga et al. in 1979,395 showing that the area under the pressure-volume loop acquired invasively with an intraventricular conductance catheter reflected regional myocardial work and oxygen consumption per beat, spurred interest among imaging methods in making this analysis feasible noninvasively.396,397 In 2012, Russell et al.398 validated the LV pressure-strain (PS) loop obtained entirely noninvasively by integrating the systolic blood pressure (SBP) at the time of the examination with longitudinal strain by speckle tracking; when interpreted by dedicated software, this yields global and segmental PS loops (Figure 17.1). The area under the curve represents MW and showed excellent correlation with direct intraventricular measurements. Furthermore, MW was able to reflect regional myocardial oxygen metabolism when compared with measurement by 18F-fluorodeoxyglucose positron emission tomography.399-401 Modest increases in blood pressure can produce up to a 9% reduction in GLS, which may be misread as reduced contractility when MW in fact remains preserved, reflecting only increased afterload. In this sense, MW is considered an advance in the understanding of ventricular mechanics.398,402 The main differences between MW and LV strain are shown in Table 17.1.

17.2.
Acquisition of Myocardial Work
To obtain reproducible and accurate results, proper technique is important for MW calculation, not only during image acquisition but also during post-processing and parameter analysis. This technology is currently available only on workstations or on systems with software developed by a single vendor (GE Healthcare, Horten, Norway). Analyses can be performed directly on the scanner or post-processed on workstations from previously acquired images. The image acquisition protocol for MW calculation follows the same technical prerequisites required for GLS analysis, covered in a specific chapter. After two-dimensional strain analysis of images acquired in the three apical views using the AFI (automated functional imaging) technique, the software allows MW analysis to be selected (Figure 17.2). As an initial step, the noninvasive blood pressure (NIBP) values measured at the time of the examination must be entered manually; this can be done at any point during the examination, on the patient identification screen, or later on the MW calculation screen itself (Figure 17.3). These NIBP measurements are automatically incorporated into the calculation of the pressure-versus-strain curve. To time-index the measurements, the cardiac cycle events must be marked, identifying mitral and aortic valve opening and closure; this can be done with spectral Doppler of the mitral and aortic flows, or on the two-dimensional apical three-chamber view, in which the opening and closure of both valves can be seen (Figure 17.4).
These markings can also be made on the MW calculation screen itself, stepping frame by frame through the apical three-chamber image and selecting the exact moment of each event (Figure 17.5). After the images and markings are approved ("approve"), the software performs the calculations and displays, side by side, the polar map (bull's eye) with GLS and peak segmental strain values and, on the right, the polar map with the segmental MW index, showing at the bottom the values of the global work index (GWI) and GMWE. Selecting the "work efficiency" key on the right displays the segmental GMWE values on the polar map (Figure 17.6). Selecting the "advanced" key generates curve and graph analyses, in which the loops of LV pressure (left ventricular pressure, LVP) versus strain can be viewed throughout the cardiac cycle, along with a bar graph showing the contributions of constructive and wasted work (Figure 17.7). The software provides the following parameters:
1. Global work index (GWI): the total work within the area of the PS loop, calculated from mitral valve closure to mitral valve opening. A bull's eye with segmental and global MW values is provided (Figure 17.6).
2. Global constructive work (GCW): the work that contributes to LV ejection during systole, obtained from myocyte shortening during systole plus myocyte lengthening during isovolumic relaxation (Figure 17.7).
3. Global wasted work (GWW): the work that does not contribute to LV ejection, obtained from myocyte lengthening (rather than shortening) during systole plus shortening during the isovolumic relaxation phase (post-systolic shortening) (Figure 17.7).
4. Global myocardial work efficiency (GMWE): obtained from the formula GMWE = GCW / (GCW + GWW) × 100.
Its value is expressed as a percentage, from 0 to 100% efficiency (Figure 17.6).

17.3. Normal Values
Because MW and its variables were only recently validated for clinical use, there are no multicenter trials with enough patients to establish definitive normal values. Manganaro et al. recently analyzed data from the NORRE study to establish normal reference limits for MW. This prospective, multicenter European study of 226 patients from 22 echocardiography laboratories provided reference values for most 2D and 3D echocardiographic data.403 The mean or median values, with standard deviation and confidence interval, for the MW variables were, respectively, 1,896 ± 308 mmHg% (1,292-2,505) for GWI, 2,232 ± 331 mmHg% (1,582-2,881) for GCW, 79 mmHg% (53-122) for GWW, and 96% (94-97) for GMWE.403 GWI and GCW were higher in women over 40 years of age, and there is a strong correlation between increases in GWI and GCW and increasing systolic blood pressure.404

17.4. Potential Clinical Use
The main barrier to widespread use of MW is that only one company holds the software (GE Healthcare). In addition, the MW calculation essentially uses the manually measured systolic pressure as the afterload measure in its validation. Clinical situations in which afterload has components beyond SBP, such as aortic stenosis, obstructive HCM, and some congenital heart diseases, must also be taken into account. Although this is a very promising new tool, in light of current scientific knowledge it remains largely confined to the research setting. Some publications on the clinical application of MW have been gaining attention, one of them in the selection of patients for cardiac resynchronization. At times, the analysis of left bundle branch block can raise doubts in interpretation (Figure 17.8).
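Tying together the efficiency formula from section 17.2 and the NORRE reference values, a minimal sketch (the inputs below are the NORRE means, used only as illustration):

```python
# Sketch: global myocardial work efficiency from constructive and
# wasted work. Inputs are the NORRE mean values quoted above (mmHg%).
def work_efficiency_pct(gcw, gww):
    """GMWE (%) = GCW / (GCW + GWW) x 100."""
    return 100.0 * gcw / (gcw + gww)

gmwe = work_efficiency_pct(gcw=2232.0, gww=79.0)
# gmwe is about 96.6%, consistent with the reported normal GMWE
# of 96% (94-97).
```
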
On the other hand, visual and quantitative MW analysis has made it easier to recognize cases with a widened QRS on ECG but perhaps no associated mechanical dyssynchrony (Figure 17.9). Beyond visual analysis, the constructive work value has been used to identify responders to resynchronization therapy, serving as a surrogate for contractile reserve.405 Another interesting measure for identifying patients who benefit from resynchronization is the analysis of septal wasted work.406 Thus, MW analysis in patients with LBBB can improve stratification through visual analysis as well as through quantification of constructive and wasted work. Another interesting field for MW is obstructive coronary artery disease: it has demonstrated the ability to detect obstructive coronary disease at rest, outperforming LV GLS, even in patients with preserved EF and no regional wall motion abnormality.407 Its application has also been shown to identify patients with acute infarction who will have more in-hospital complications, to predict those who will recover myocardial function,408 and to determine long-term complications in patients with ST-elevation AMI.409 Beyond these two clinical situations, its application in dilated cardiomyopathy, HCM, and amyloidosis is promising. When the software becomes more practical and more evidence accumulates in the literature, MW analysis will probably be incorporated into clinical practice.

18. 3D Strain: What It Can Add to the Examination
18.1. Introduction
Three-dimensional (3D) assessment of myocardial deformation of the cardiac chambers by speckle tracking has numerous advantages over two-dimensional (2D) assessment.
Because the LV myocardium is composed of three fiber layers arranged in different directions (longitudinal, circumferential, and transverse), the speckles follow a nonlinear trajectory, leaving the two-dimensional imaging plane during parts of the cardiac cycle. However many longitudinal and transverse acquisitions of the LV myocardium are performed, 2D speckle assessment through the cardiac cycle relies on interpolation and is not as precise as 3D, which allows the speckles to be followed throughout the entire cycle in multiple dimensions. Thus, 3D strain measurements are not affected by out-of-plane motion, myocardial torsion, or apical shortening.410 For the RV, the 3D method is the only one that allows global analysis of the entire chamber myocardium, whereas in 2D only the septum and/or free wall is assessed. Moreover, 3D strain assessment is more reliable and physiological, since the different components of myocardial deformation are analyzed simultaneously in a single dataset or cardiac cycle. 3D strain is therefore a promising application for quantitative, objective, comprehensive, and reproducible assessment of myocardial mechanical function. However, the method is strongly dependent on a good acoustic window and a regular cardiac rhythm, which are the main factors limiting the routine, systematic incorporation of 3D strain.411 Clinical application is also limited by differences in the speckle-tracking algorithms and deformation cutoffs, which are not standardized across software vendors.
412,413 Since 3D strain measurements obtained with different vendors and software packages are not interchangeable, clinical use in serial evaluations requires that baseline and follow-up acquisitions, as well as the analyses, be performed with the same equipment, and the results must be interpreted against the normal values specific to that equipment.410,412 Normal reference values also differ between the two-dimensional and three-dimensional methods, with only a modest correlation between the longitudinal strain values. Finally, clinical research is still needed to evaluate the accuracy and prognostic value of 3D strain.

18.2. Left Ventricular Strain
ST3D (three-dimensional speckle tracking) is conceptually superior to ST2D because it is not limited to a single imaging plane and provides vectorized data in three orthogonal planes for analysis. From an implementation standpoint, 2D strain requires a very high temporal resolution (34-50 vps) because the speckles remain in the imaging plane for only a few milliseconds, which does not occur in the three-dimensional mode. For ST3D, the ideal is a six-beat acquisition with the highest line density and 44 vps at a frequency of 2-4 MHz, as this showed the greatest accuracy when compared with MRI. Single-volume acquisitions are not recommended, and increasing the temporal resolution on the system degrades image quality and tracking by reducing line density.
414 Overall feasibility is around 85%, and the limitations to implementing the technique are: an unfavorable acoustic window, cardiac arrhythmias (which prevent multi-beat acquisitions), incomplete visualization of the apical segments of the LV and RV, speckle-tracking problems in the basal segments (far from the transducer), and the need to establish normal values and clinical prognostic value.

18.3. Right Ventricular Strain
Analysis of RV contraction is especially important for understanding the mechanics of this chamber in congenital and acquired diseases. Unlike the LV, however, RV assessment is more difficult because of the chamber's complex shape and thin wall. Even so, MRI and echocardiographic speckle tracking have shown promise in RV analysis. The values obtained by ST3D for the RV, however, are not yet well established.415 Technical problems persist for 3D strain analysis of the right chambers, since the software was designed for LV analysis and, on most systems, is still merely adapted for the RV. Although 3D strain allows global analysis of the entire RV myocardium, the three-dimensional technology for this chamber is still under development, and no well-defined cutoff values for RV 3D strain exist to date.

18.3.1. Full-volume 3D Acquisition and Analysis
Using harmonic imaging, ideally four triggered beats are obtained per capture. The depth should be adjusted so that only the RV, its walls, and the tricuspid annulus fill the volume, and the peak systolic strain is generally used for analysis.

18.4. Left Atrial Strain
For 3D strain assessment of the LA (as for the ventricles), the ultrasound system can acquire volumetric atrial data in real time and can measure all components of strain.
Three-dimensional tracking, however, remains a major challenge: the temporal and spatial resolution of 3D is lower than that of 2D, which makes three-dimensional analysis complex and more time-consuming, since high image quality is required for 3D strain acquisition. Another point under discussion is the inter- and intraobserver variability in the assessment of cardiac mechanics. 414 The reservoir, conduit, and atrial pump phases can all be well analyzed with 2D strain, and mean cutoff values for 2D strain are relatively well defined for each of these phases; reference values for 3D strain, however, are still lacking. The main applications of the method in this chamber, in which strain analysis becomes relevant, are: HF with preserved EF, 7 assessment of intracavitary filling pressures, 95 atrial function in elite athletes, 416 and cardiomyopathies, 417 although the available studies support 2D strain data more than 3D strain data. 19. The Role of Cardiac Magnetic Resonance and Computed Tomography in Strain Assessment 19.1. Introduction CMR, with its high spatial and temporal resolution and noninvasive nature, has become an important modality for assessing global and segmental ventricular function. Strain assessment is an established and reliable measure for quantifying regional and global contractile dysfunction and can detect subclinical cardiac dysfunction; it is therefore a useful tool for evaluating myocardial function. Echocardiography is currently the most widely available and least expensive method for strain assessment, but the analysis may be impaired in patients with a limited acoustic window. 19.2. Strain Acquisition Methods in Cardiac Magnetic Resonance Myocardial tagging is the most extensively validated technique; it consists of a preparation phase in which magnetic tags (black lines) are orthogonally superimposed on the myocardium at the beginning of a cine sequence.
12, 418 Other alternatives to tagging that provide direct analysis of myocardial strain by CMR are the SENC (strain-encoded) and DENSE (displacement encoding with stimulated echoes) techniques. 419 More recently, the feature tracking (FT) method was developed, which quantifies myocardial deformation on conventional cine CMR images without the need for additional acquisitions or lengthy analysis. 420, 421 In all strain analysis techniques, global circumferential and longitudinal strain parameters have been more reproducible and consistent than regional ones. 422 Further details on CMR strain acquisition methods can be found in the references. 12, 418-422 19.3. Right Ventricular Strain by Cardiac Magnetic Resonance Myocardial strain measurement is an accurate and practical method for assessing RV function, as it is a more sensitive and earlier marker of contractile dysfunction than other available measures such as EF. Studies have shown the potential of CMR-derived RV strain to provide additive and independent prognostic information. 423-428 Some published studies have analyzed RV strain in healthy individuals and in control groups without heart disease. 423, 426, 429, 430 Congenital heart disease, pulmonary hypertension (PH), and arrhythmogenic right ventricular dysplasia (ARVD), the diseases that most often affect the RV, are those in which RV strain analysis has had the greatest applicability. CMR feature tracking (FT-CMR) has been used in patients with repaired tetralogy of Fallot, in whom strain values were reduced and correlated with systolic function parameters (biventricular EF) and with functional capacity on cardiopulmonary exercise testing. 425 Assessment of global and segmental RV function is fundamental to the multiparametric diagnosis of ARVD, and RV strain has proved to be an extremely useful tool.
428, 431 Global and segmental RV strain are significantly reduced in patients with ARVD, independently of RV dimensions and function, so that impaired RV deformation may represent an early marker of the disease. 428 Figure 19.1 shows examples of RV strain in a normal individual and in a patient with PH. 19.4. Left Ventricular Strain by Cardiac Magnetic Resonance Mean values of the LV strain components (GCS, GRS, GLS) in healthy individuals have been investigated over the past decade with FT-CMR, 419-423 including an important meta-analysis. 424 The largest and most recent FT-CMR studies of GCS and GRS averaged three short-axis slices. Most LV GLS values were calculated from a single 4-chamber view, although some more recent publications report averages of three longitudinal views. GLS and GCS values fluctuate within a narrow range, whereas GRS values show wider confidence intervals. Through-plane motion and large interindividual variability may partly explain this phenomenon, but the true cause remains uncertain. 424 There is a strong relationship between the degree of myocardial deformation and the presence of late gadolinium enhancement (LGE), especially for FT-CMR-derived GCS but also for GLS. Moreover, there is good correlation between the echocardiography- and CMR-derived techniques. 425 In dilated cardiomyopathy, a markedly reduced GLS was strongly associated with worse survival, even in patients with severely reduced EF, independently of functional class and other CMR findings. 426 FT-CMR can identify the subgroup of patients with HF with preserved EF and diastolic dysfunction through abnormal GLS compared with healthy individuals.
427 In differentiating constrictive pericarditis from restrictive cardiomyopathy, CMR-derived GLS showed diagnostic value similar to that of echocardiography, with high discriminatory power between these conditions: GLS values are significantly lower in restrictive cardiomyopathy, whereas in pericarditis they are close to those of normal controls. 428 The longitudinal and circumferential strain components are also altered in myocarditis. 429 Publications exploring resonance-derived strain using myocardial tagging demonstrated a high ability to identify amyloidosis patients with LGE, potentially being more sensitive than the post-contrast sequence itself. 430 Loss of the base-to-apex circumferential strain gradient appears to be an early finding of the changes seen in Fabry disease, whereas longitudinal and circumferential strain did not differ significantly from healthy controls. 431 Using FT-CMR, patients with HCM were shown to have reduced GLS, GRS, and GCS compared with healthy controls, with GLS and GRS predicting adverse events; 432 it has also been shown that GLS is significantly higher in patients with systemic hypertension than in those with HCM. 433 CMR diagnosis of ischemic coronary disease can be refined by adding FT-CMR analysis, which can detect small changes in circumferential strain after dobutamine stress, and GLS may be useful for infarct detection and viability assessment. 434 All three strain components are reduced in patients who have suffered ST-segment-elevation myocardial infarction and are independent predictors of adverse cardiovascular events. 435 Patients with severe aortic stenosis have reduced GLS and GCS compared with healthy controls, regardless of symptoms.
436 In patients with bicuspid aortic valve and preserved EF, signs of diastolic dysfunction were observed through changes in circumferential strain. 437 Chemotherapy-induced cardiotoxicity shows abnormalities in GLS and GCS well before the decline in LVEF. 438 Recently, FT-CMR-derived GLS was reported to have a stronger association with mortality than the combination of LVEF and myocardial LGE, the largest experience to date in assessing the prognostic value of FT-CMR GLS. Adjusted for classic risk factors, including LVEF and LGE, each 1% worsening in GLS was associated with an 89% increase in the risk of death in ischemic and non-ischemic patients. 439 19.5. Left Atrial Strain by Cardiac Magnetic Resonance Assessment of LA function has been increasingly recognized as crucial in a range of cardiac diseases; its impairment is usually associated with worse prognosis and precedes the onset of HF. The LA acts as a reservoir for pulmonary venous return, as a conduit for flow into the LV driven by the pressure difference created by opening of the mitral leaflets, and finally as a contractile pump, with atrial systole occurring at LV end-diastole. 440 FT-CMR-based atrial strain analysis reliably quantifies LA longitudinal strain and strain rate. Using standard cine-MR images, it discriminates between patients with impaired left ventricular relaxation and healthy individuals, as shown in Table 19.1. 441 In a MESA substudy, reduced atrial GLS and an abnormal minimum indexed LA volume were independent predictors of incident HF, even after adjustment for LV mass and pro-BNP.
442 Likewise, assessment of phasic LA function was an independent risk predictor for HF admission or death, even after adjusting for LA volume and ventricular remodeling. 443 19.6. Strain by Cardiac Computed Tomography Strain assessment by cardiac computed tomography (CT) can be performed with the feature tracking method (FT-CT) on contrast-enhanced, ECG-gated acquisitions with functional reconstructions of the cardiac cycle. Data are still scarce, but its applicability has been tested in some recent publications in patients with severe aortic stenosis undergoing transcatheter aortic valve implantation. The results show similar GLS values between FT-CT and echocardiography-derived strain, 444, 445 as well as high intraobserver and intraclass reproducibility for LV GLS by FT-CT, although the technique appears to underestimate the values. 444 Another publication explored the relationship of FT-CT strain with ischemic heart disease in patients with significant left anterior descending artery lesions: longitudinal strain was reduced in the segments of the left anterior descending territory despite normal diastolic and systolic volumes and EF. 446 At present, the use of CMR- and CT-derived strain is limited by the scarce availability and high cost of post-processing software.
Arq Bras Cardiol. 2023 Dec 12; 120(12):e20230646
Results Mean age and body mass index (BMI) of the breast cancer patients before chemotherapy and of the controls are presented in Table 1; no difference was observed between groups. Regarding hypertension and diabetes mellitus, the breast cancer group had higher frequencies than the control group (all p<0.05). After treatment, all breast cancer patients had normal ventricular function (LVEF ≥50%). Eleven (32.3%) patients had NT-proBNP levels above the reference value (<125.0 pg/mL if <75 years, or <450.0 pg/mL if ≥75 years), but no abnormality in cTnI levels (normal range <0.120 ng/mL) was found in the breast cancer group. No patient presented clinical cardiotoxicity. Other characteristics of the breast cancer patients are summarized in Table 1. The comparison of cardiovascular markers between the breast cancer and control groups is shown in Figure 1. Breast cancer patients had higher plasma levels of GDF-15 (p<0.001) and lower levels of FABP3 (p=0.038), FABP4 (p=0.003), sCD14, and ucMGP (p<0.001) compared with the control group. For GDF-15, the area under the ROC curve was 0.825 (p<0.001; CI 0.722-0.927) (Figure 2). FABP3 (r=0.344; p=0.046) and FABP4 (r=0.479; p=0.004) correlated positively with BMI in the breast cancer group. Within the breast cancer group, it is interesting to note that GDF-15 levels were higher in the triple-negative subgroup than in the other molecular subgroups (p=0.030), but this difference was no longer significant after Bonferroni correction (Table 2). FABP3 levels were also higher in the group at high Framingham cardiovascular risk (p=0.022), but not significantly so after Bonferroni correction (Table 3).
No other cardiovascular marker showed differences in plasma levels according to tumor molecular type or cardiovascular risk.
Editor responsible for the review: Gláucia Maria Moraes de Oliveira Potential conflict of interest: There is no conflict related to the present article. Abstract Background Cardiovascular diseases (CVD) are relevant to the management of breast cancer treatment, since a significant number of patients develop these complications after chemotherapy. Objective This study aimed to evaluate novel cardiovascular biomarkers, namely CXCL-16 (C-X-C motif ligand 16), FABP3 (fatty acid-binding protein 3), FABP4 (fatty acid-binding protein 4), LIGHT (tumor necrosis factor superfamily member 14/TNFSF14), GDF-15 (growth/differentiation factor 15), sCD14 (soluble form of CD14), and ucMGP (uncarboxylated matrix Gla protein) in breast cancer patients treated with doxorubicin (DOXO). Methods This case-control study was conducted at an oncology clinic and included 34 women diagnosed with breast cancer treated with DOXO-based chemotherapy and 34 control women without cancer or CVD. The markers were determined immediately after the last chemotherapy cycle. The statistical significance level was set at 5%. Results The breast cancer group had higher levels of GDF-15 (p<0.001), whereas control subjects had higher levels of FABP3 (p=0.038), FABP4 (p=0.003), sCD14, and ucMGP (p<0.001 for both). Positive correlations were observed between FABPs and BMI in the cancer group. Conclusion GDF-15 is an emerging biomarker with potential clinical applicability in this setting. FABPs are adiposity-related proteins potentially involved in breast cancer biology. sCD14 and ucMGP are involved in inflammation and vascular calcification. Above all, the assessment of these novel cardiovascular biomarkers may be useful in the management of DOXO-based breast cancer chemotherapy.
Introduction Cardiovascular diseases (CVD) and cancer are the leading causes of death worldwide. 1 Continuous improvements in prevention strategies and anticancer treatments have significantly reduced the number of deaths from disease-related causes in breast cancer patients; however, the risk of death from CVD has increased in this group. 2 The reasons for this synergy between cancer and CVD are the risk factors they share (including diabetes mellitus, hypertension, hypercholesterolemia, and obesity), as well as the pathophysiological mechanisms underlying CVD, which are associated with an increased risk of cancer. 3, 4 Anthracycline-based treatment regimens, such as doxorubicin (DOXO), are among the most effective options for breast cancer and account for improved disease-free and overall survival in this group. However, anthracyclines can cause severe short- and long-term toxicity, including cardiotoxicity and secondary hematologic malignancy. 5 Several studies have proposed the use of plasma biomarkers, especially troponins and B-type natriuretic peptides, to monitor anthracycline cardiotoxicity with a view to early detection of these cardiovascular complications. 6-8 More recently, these biomarkers have been included among the diagnostic criteria for cardiotoxicity, alongside cardiac imaging modalities and clinical features. 9, 10 Other biomarkers reflect the pathophysiological changes that occur during heart failure. Heart failure manifests as a decrease in left ventricular ejection fraction (LVEF) or as symptomatic heart failure in up to 5% of patients. 11 In a prospective study, Cardinale et al. 12 observed an overall cardiotoxicity incidence of 9% using the decrease in LVEF as the sole criterion for defining cardiotoxicity.
However, in another prospective study, López-Sendón et al. 10 broadened the criteria for defining cardiotoxicity beyond LVEF changes to include plasma biomarkers, and cardiotoxicity was identified in 37.5% of patients during follow-up. Chemokines are pro-inflammatory chemoattractant cytokines that act mainly in leukocyte trafficking, regulate cell migration, proliferation, and survival, and are key components in cancer biology. 13 CXCL-16 (C-X-C motif ligand 16) is a chemokine expressed in lymphoid organs, liver, lungs, small intestine, and kidney. CXCL-16 expression is increased by pro-inflammatory cytokines, which are important for the accumulation of immune cells at sites of inflammatory reaction. 14 LIGHT (tumor necrosis factor superfamily member 14/TNFSF14) belongs to the tumor necrosis factor superfamily and is expressed by different immune cell types. LIGHT signals through two receptors and has distinct, cell-type-dependent functions, but interactions with these receptor types have immunological implications in tumor biology. 15, 16 Growth/differentiation factor 15 (GDF-15) is a divergent member of the transforming growth factor-β (TGF-β) superfamily, also known as macrophage inhibitory cytokine (MIC)-1. 17 GDF-15 is also related to cancer evolution, both positively and negatively: it inhibits early tumor promotion, but its abnormal expression in advanced cancers drives cancer stem cell formation, proliferation, invasion, metastasis, immune escape, and a reduced response to therapy. 18 Matrix γ-carboxyglutamate (Gla) protein (MGP) is a vitamin K-dependent protein and a strong inhibitor of vascular calcification. Vitamin K deficiency leads to inactive uncarboxylated MGP (ucMGP), which accumulates at sites of arterial calcification.
19 Biologically inactive dephospho-ucMGP is a marker of vascular vitamin K status and has been described as a predictor of mortality in patients with heart failure and aortic stenosis. 20 The human monocyte differentiation antigen CD14 is a pattern recognition receptor (PRR) that enhances innate immune responses. CD14 was first identified as a monocyte marker signaling intracellular responses after bacterial encounters. 21 Soluble forms of CD14 (sCD14) can be secreted by activated cells, which release CD14 by proteinase-dependent or -independent shedding. 22 FABPs (fatty acid-binding proteins) are expressed in almost all tissues. These proteins control the transport, metabolism, and storage of fatty acids and are proposed as central regulators of lipid metabolism, inflammation, and energy homeostasis. 23 FABP3 is a cytosolic protein found mainly in the heart, but also in muscle, brain, and kidney. 24 Some studies suggest that FABP3 has greater sensitivity than troponin for detecting ischemic injury and cardiac injury associated with congestive heart failure. 25, 26 FABP4 is expressed mainly in adipocytes and macrophages and plays an important role in the development of insulin resistance and atherosclerosis. Circulating FABP4 levels are associated with several aspects of the metabolic syndrome and cardiovascular disease. 27 Implementing novel laboratory biomarkers has been a priority in cardio-oncology, especially for the early detection of chemotherapy-related cardiotoxicity. In this context, this study aimed to evaluate novel cardiovascular biomarkers, namely CXCL-16, FABP3, FABP4, LIGHT, GDF-15, sCD14, and ucMGP, in breast cancer patients undergoing DOXO-based chemotherapy.
Participants and Methods Human samples This was a case-control study of outpatients at the Oncology Service of Hospital Alberto Cavalcanti/FHEMIG (Belo Horizonte, Brazil), which included 34 women aged 18 years or older with a diagnosis of breast cancer receiving neoadjuvant DOXO therapy, seen between June 2015 and June 2018. Exclusion criteria for the case group were: previous heart disease with impaired left ventricular function; moderate to severe hepatic or renal dysfunction; degenerative brain disease requiring caregivers; and pregnancy or a life expectancy of less than three months. In addition, women who had previously undergone chemotherapy, hormone therapy, immunotherapy, or radiotherapy were excluded. The control group consisted of 34 healthy women aged 18 years or older, without any malignancy, previous heart disease, moderate to severe hepatic or renal dysfunction, or degenerative disease, and not pregnant, as attested by a clinician. Clinical characteristics of the breast cancer patients were obtained from hospital medical records. Before chemotherapy, the breast cancer patients underwent evaluation by a cardiologist, including electrocardiography and two-dimensional echocardiography with tissue Doppler imaging on a Vivid S6 system (GE Medical Systems Healthcare®, Tirat Carmel, Israel) with LVEF assessment; no abnormalities were observed in these examinations. The cardiovascular risk of the breast cancer patients was calculated according to the global risk score (Framingham Heart Study) 28 before cancer treatment. The study was approved by the Research Ethics Committee of UFMG (no. 38538714.20000.5149) and the Ethics Committee of FHEMIG (no. 54376216.0.0000.5119), in accordance with the World Medical Association Declaration of Helsinki.
All participants signed an informed consent form beforehand. Experimental and laboratory protocols Fasting blood was collected after DOXO-based chemotherapy (up to seven days after the last DOXO cycle). For plasma preparation, the EDTA tube was centrifuged for 10 minutes at 1000 g within 30 minutes of blood collection; for serum preparation, the additive-free tube was centrifuged at 3000 g for 15 minutes. Plasma and serum samples were aliquoted and immediately stored at -80°C until analysis. Cardiovascular marker levels were determined by multiplexed immunoassays on the Luminex® xMAP® platform. EDTA plasma was used for determinations of CXCL-16, FABP3, FABP4, and LIGHT (kit HCVD1MAG-67K; Merck®, Darmstadt, Germany), GDF-15 (kit HCVD2MAG-67K; Merck®, Darmstadt, Germany), and sCD14 and ucMGP (kit HCVD6MAG-67K; Merck®, Darmstadt, Germany), according to the manufacturer's instructions, on a MAGPIX® Multiplexing System Analyzer (Luminex Corporation®, Austin, USA). Levels of cTnI (troponin I) and NT-proBNP (N-terminal fraction of B-type natriuretic peptide), as well as LVEF, used to monitor cardiac dysfunction, were determined according to protocols described in a previous study. 29 Total and HDL cholesterol were measured by colorimetric assay on a VITROS 5600 analyzer (Ortho Clinical Diagnostics®, Rochester, USA). LDL cholesterol was calculated with the Friedewald formula. Statistical analysis Data were analyzed with IBM® SPSS® Statistics (for Windows®; Chicago, Illinois, USA, version 21). The Shapiro-Wilk test was used to check the normality of quantitative variables, which are presented as mean ± standard deviation (SD) or median (25th-75th percentiles).
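The Friedewald estimate mentioned in the laboratory protocol can be written out in a few lines. This is an illustrative sketch only; the function and variable names are ours, not from the study:

```python
def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    """Estimate LDL cholesterol (mg/dL) by the Friedewald formula:
    LDL = TC - HDL - TG/5.
    The formula is considered unreliable when triglycerides >= 400 mg/dL."""
    if triglycerides >= 400:
        raise ValueError("Friedewald formula is unreliable for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

# Example: TC 200, HDL 50, TG 150 -> LDL = 200 - 50 - 30 = 120 mg/dL
print(friedewald_ldl(200, 50, 150))  # 120.0
```

Note that the TG/5 term assumes mg/dL units; with mmol/L, the conventional divisor is 2.2.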
Unpaired Student's t-test and one-way ANOVA (followed by Tukey's test), or the Mann-Whitney and Kruskal-Wallis tests (followed by Bonferroni correction), were used to determine differences between two or three groups, as appropriate. Categorical variables are presented as n (%) and were compared with Fisher's exact test. Correlations were assessed with Spearman's correlation test. Receiver operating characteristic (ROC) curves were used to represent sensitivity and specificity. The significance level was set at 5%. Discussion The investigation, monitoring, and assessment of cardiovascular injury in breast cancer patients on chemotherapy regimens have been widely studied. However, studies of emerging biomarkers capable of early detection of cardiovascular impairment in breast cancer patients under DOXO-based chemotherapy, irrespective of clinical cardiotoxicity, are rare. The main findings of the present study are therefore: (i) breast cancer patients had higher levels of GDF-15, which showed good accuracy in distinguishing this group from controls according to the area under the ROC curve; (ii) breast cancer patients had lower levels of FABP3, FABP4, sCD14, and ucMGP; and (iii) there was a positive correlation between FABPs and BMI. GDF-15 is a strong and independent predictor of CVD and of cancer morbidity and mortality in community-dwelling individuals. 30 The breast cancer group had GDF-15 levels 8.12 times higher than healthy individuals. In breast cancer, GDF-15 has been associated with metastasis and trastuzumab resistance. 31 GDF-15 rises in a variety of pathophysiological conditions; elevated GDF-15 levels should therefore be interpreted with caution.
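The two-group comparison and ROC steps described in the statistical methods can be illustrated on simulated data. This is a sketch only: the values, group means, and seed below are ours (not the study's data), and it assumes numpy, scipy, and scikit-learn are available:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# Simulated biomarker levels for two groups of n=34 (arbitrary units)
controls = rng.normal(100, 15, 34)
patients = rng.normal(130, 20, 34)

# Shapiro-Wilk normality check, then the appropriate two-group test
if stats.shapiro(controls).pvalue > 0.05 and stats.shapiro(patients).pvalue > 0.05:
    stat, p = stats.ttest_ind(patients, controls)   # parametric branch
else:
    stat, p = stats.mannwhitneyu(patients, controls)  # non-parametric branch

# ROC: how well the marker separates patients (label 1) from controls (label 0)
labels = np.r_[np.zeros(34), np.ones(34)]
auc = roc_auc_score(labels, np.r_[controls, patients])
print(f"p = {p:.3g}, AUC = {auc:.3f}")
```

An AUC near 0.5 indicates no discrimination, while values approaching 1.0 indicate increasingly good separation, as reported for GDF-15 in the Results.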
In this study, the reasons for the increased GDF-15 levels in breast cancer patients remain unclear, since breast cancer biology and DOXO-based chemotherapy are both conditions that can alter GDF-15. GDF-15 is an emerging biomarker that is elevated in early subclinical disease and has prognostic utility for cardiovascular events and mortality. 32 More robust case-control studies, including at least one group of breast cancer patients treated with another class of chemotherapy drugs, could therefore help clarify this study's hypothesis. A prospective cohort by Demissei et al. 6 included 323 breast cancer patients treated with anthracycline- and/or trastuzumab-based regimens. In that study, no association was found between levels of GDF-15, troponin, myeloperoxidase, or placental growth factor and changes in LVEF, and no changes in GDF-15 levels were observed over the two years of the study. At baseline, GDF-15 levels were 704 [532-908] pg/mL in breast cancer patients receiving DOXO and 599 [523-722] pg/mL in those receiving DOXO plus trastuzumab. In the present study, GDF-15 was higher in triple-negative patients, but the difference was not significant, requiring further studies with a larger population. In a multicenter cohort study, GDF-15 levels remained elevated even after 15 months of follow-up in patients with HER2-positive breast cancer on adjuvant therapy with an anthracycline-containing regimen followed by taxanes and trastuzumab. 33 FABP4 levels were lower in breast cancer patients than in the control group in the present study, an unexpected finding in light of Tsakogiannis et al.
34 FABP4 is highly expressed in adipocytes, but the breast cancer patients did not differ in BMI from healthy controls. BMI, however, is not the best method for assessing adiposity; other markers of body fat composition should be applied for correlation with FABP4 levels. Indeed, FABP4 levels correlated with BMI in the breast cancer group of the present study. Contrary to the observations reported here, another case-control study found higher FABP4 levels in breast cancer patients than in healthy women, with higher levels in luminal-type breast cancer than in HER2-positive/triple-negative disease. The authors also suggest, however, that BMI in breast cancer may affect FABP4 expression, since breast cancer patients with BMI ≥25 kg/m² had higher FABP4 levels. 34 These data underscore that FABPs are expressed by adipose tissue. Moreover, circulating FABP4 enhances the tumor stem-cell-like phenotype through IL-6/STAT3/ALDH1-mediated activity, suggesting that circulating FABP4 released by host adipose tissue can trigger exit from tumor dormancy, 35 and FABP4 is upregulated in certain macrophage subsets in breast tumors, enhancing their capacity to promote tumor growth and metastasis through IL-6-dependent pathways. 36 FABP3 levels were also lower in the breast cancer group than in the control group. FABP3 is known to play a key role in cardiomyocyte metabolism. It can nonetheless be hypothesized that DOXO chemotherapy reduces FABP3 synthesis in cardiac tissue, since DOXO induces cardiomyocyte apoptosis.
37 This was demonstrated by Sayed-Ahmed et al., 38 whose experiments showed that chronic DOXO use resulted in a significant, dose-dependent decrease in FABP3 mRNA expression in cardiac tissue. In addition, loss of cellular lipid metabolism homeostasis due to reduced intracellular FABP3 content and impaired fatty acid supply seems a plausible hypothesis for the progression of heart failure and other CVD. 39 Conway et al. demonstrated that the FABP3 promoter was hypermethylated and gene expression reduced in breast cancer, indicating that FABP3 expression has an inhibitory effect on the disease. 40 On the other hand, FABP3 is a marker of cardiac injury, since elevated levels are useful for the early diagnosis of acute myocardial infarction. 41 Plasma FABP3 levels have previously been investigated in the context of breast cancer chemotherapy; however, no differences were observed between individuals who developed anthracycline-related cardiotoxicity and those who did not. 42 Substantial experimental and clinical studies are needed to clarify the behavior of FABP3 in this context. The positive correlation of FABP3 and FABP4 with BMI was expected, since these markers are directly associated with adipose tissue and lipid metabolism. Although lower sCD14 levels were observed in breast cancer patients in the present study, this finding is controversial, since some studies have shown higher sCD14 levels in cancer patients than in patients with benign disease or in healthy individuals. 43, 44 The outcome of CD14 signaling in inflammation is multifactorial, depending on the site of inflammation, the level of CD14 expression, the characteristics of CD14 ligands, and competition among different CD14-dependent pathways.
21 CD14 is also expressed on cardiomyocyte cell membranes, 45 and the apoptotic effect of DOXO on cardiomyocytes may reduce soluble CD14. In addition, sCD14 levels have been determined in other studies as an acute-phase reactant, 46 but the patients evaluated here were not in an acute inflammatory phase, as assessed by C-reactive protein (CRP) measurements (data not shown). These results should therefore be interpreted with caution, and future studies should be conducted to determine the role of sCD14 in this scenario. Breast cancer patients also had lower plasma ucMGP levels than control individuals. According to Yoshimura et al., 47 the MGP gene is upregulated in cases with poor prognosis, indicating that MGP mRNA levels are a potential prognostic indicator in breast cancer; however, there was no difference in tumor protein expression by immunohistochemistry. Conversely, lower ucMGP levels have been causally related to a reduced risk of coronary artery disease. 48 The inactive forms of MGP (such as ucMGP) are useful biomarkers of vitamin K deficiency, vascular calcification, and cardiovascular disease, and may predict future risk of death or cardiovascular events. MGP is a vitamin K-dependent protein (VKDP) released from cells into the bloodstream. Vitamin K attenuates inflammatory responses by blocking nuclear factor κB (NF-κB) signal transduction. Higher ucMGP levels reflect vascular calcification, which is a major risk factor for cardiovascular morbidity and mortality. 49, 50 Accordingly, the data from the present study suggest that DOXO administration does not induce cardiovascular calcification in the short term, making this an unlikely mechanism of DOXO cardiovascular toxicity.
The determination of vitamin K levels, of pro-inflammatory cytokines (such as IL-6 and TNF-α, which accelerate the formation of VKDPs), and the quantification of other VKDPs, such as osteocalcin, growth arrest-specific 6 (Gas6), and Gla-rich protein (GRP), is strongly encouraged in future prospective clinical studies including breast cancer patients under DOXO treatment. Limitations This study has limitations: it was a single-center, cross-sectional study conducted with patients receiving chemotherapy with DOXO only. The small sample size is also an important limitation. Furthermore, since the biomarkers were not assessed before treatment, the cancer itself could have influenced some results. Consequently, new longitudinal studies with larger populations should be performed to validate these markers and assess their performance in monitoring the cardiovascular changes caused by DOXO-based chemotherapy. Conclusion The results of this study are preliminary but may contribute to a better understanding of the mechanisms underlying cardiotoxicity secondary to DOXO-based chemotherapy. In addition, the findings suggest that GDF-15, FABP3, FABP4, sCD14, and ucMGP levels may be related to cardiovascular changes in breast cancer patients treated with DOXO. Further studies should be conducted in other populations to validate the present results.
Acknowledgments KBG thanks the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for a research grant. RMCP thanks the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for a research grant. The authors also thank Luciana Maria Silva and Heloisa Helena M. Oliveira for technical support.
CC BY
2024-01-16 23:47:18
Arq Bras Cardiol. 2023 Dec 14; 120(12):e20230167
PMC10789375
Results The mean age of the study population was 63.7 ± 13.1 years, and 524 patients (70.1%) were male. The median ACEF-MDRD score in the overall population was 2.43 (1.73-3.74), and the median scores were 1.51 (1.29-1.73), 2.41 (2.13-2.80), and 4.60 (3.74-5.77) for the low, middle, and high ACEF-MDRD tertiles, respectively. One hundred and forty-two patients (19.0%) had died by the end of the one-year follow-up. The demographic, anthropometric, clinical, and laboratory characteristics of the patients are summarized in Tables 1 and 2. As expected, there were significant differences in characteristics across tertiles. Patients in the high ACEF-MDRD tertile were more likely to be older and male than those in the low tertile. Individual signs and symptoms of congestion and HF were more frequent in patients in the high ACEF-MDRD tertile, and more patients in this tertile had New York Heart Association (NYHA) class 3 or 4 symptoms compared with the other tertiles. In addition to higher creatinine and lower glomerular filtration rate at baseline, hemoglobin and albumin were significantly lower, and NT-proBNP significantly higher, in the high ACEF-MDRD tertile. Finally, both the proportion of patients with at least one hospitalization and the total number of repeat hospitalizations were higher in the high ACEF-MDRD tertile, and mortality was significantly higher in this tertile than in the middle and low tertiles (Figure 1). Kaplan-Meier curves for one-year survival and cumulative hazards by tertile are provided in Figure 2. There were significant differences among ACEF-MDRD tertiles in one-year survival. In pairwise comparisons, patients in the high ACEF-MDRD tertile had significantly lower one-year survival than those in the low and middle tertiles (p<0.001).
There was also a trend toward lower survival in the middle versus the low ACEF-MDRD tertile, but this was not statistically significant (p=0.08). Univariate and multivariate predictors of mortality are provided in Table 3. After adjustment, there was a linear relationship between each one-point increase in the ACEF-MDRD score and one-year mortality. Besides ACEF-MDRD, other parameters associated with mortality were self-reported congestion on admission, lower sodium, and higher NYHA class. ACEF-MDRD had an overall c-statistic of 0.66 ± 0.03 for predicting one-year mortality and, at a cutoff of 2.71, had a sensitivity of 71.1%, specificity of 61.9%, positive predictive value of 30.1%, and negative predictive value of 90.1%. All component variables of the ACEF-MDRD had a lower c-statistic for predicting one-year mortality than the full score (age: 0.62 ± 0.03; left ventricular ejection fraction: 0.64 ± 0.03; glomerular filtration rate: 0.56 ± 0.03; p=0.001). In a multivariate regression model including both the ACEF-MDRD and GWTG-HF scores, both were independent predictors of one-year mortality (OR: 1.08, 95% CI: 1.05-1.11, p<0.001 for the GWTG-HF score; OR: 1.12, 95% CI: 1.02-1.23, p=0.02 for ACEF-MDRD). For predicting one-year mortality, the GWTG-HF score had a c-statistic of 0.70 ± 0.02, and the difference between the GWTG-HF score and ACEF-MDRD was not statistically significant (Figure 3). Overall, the NRI was 0.107, indicating an improvement in mortality prediction with the ACEF-MDRD score relative to the GWTG-HF score. The individual components of the NRI analysis showed that correct prediction of one-year mortality was slightly worse with ACEF-MDRD (NRIe -0.023), but prediction of one-year survival was considerably better when ACEF-MDRD was used (NRIne 0.130).
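The sensitivity, specificity, and predictive values quoted for the 2.71 cutoff all follow from a single 2×2 table of score-positive/score-negative against died/survived. A minimal sketch of that arithmetic (the counts in the example below are hypothetical, not the study's data):

```python
def diagnostic_indices(tp: int, fp: int, fn: int, tn: int):
    """Standard 2x2-table indices for a binary score cutoff."""
    sensitivity = tp / (tp + fn)  # detected deaths / all deaths
    specificity = tn / (tn + fp)  # correctly cleared / all survivors
    ppv = tp / (tp + fp)          # deaths among score-positive patients
    npv = tn / (tn + fn)          # survivors among score-negative patients
    return sensitivity, specificity, ppv, npv
```

Note how a modest PPV can coexist with a high NPV when the event rate is low, as in this cohort (19.0% one-year mortality).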
In the subgroup of patients in whom NT-proBNP was available (n=211, 28.2% of the study sample), NT-proBNP was significantly higher in patients who had died by the end of one year than in survivors (2510 (390-4994) pg/mL vs. 1399 (547-4113) pg/mL, p<0.001). Compared with NT-proBNP, the predictive ability of the ACEF-MDRD score was significantly higher (Supplemental Figure 1). The ACEF-MDRD score remained a significant predictor of one-year mortality after adjustment for NT-proBNP in this subgroup (OR: 1.45, 95% CI: 1.22-1.73, p<0.001).
Editor responsible for the review: Gláucia Maria Moraes de Oliveira. Potential conflict of interest: There is no conflict related to this article. Abstract Background Although many risk models have been developed to predict prognosis in heart failure (HF), these models are rarely useful to the clinician, as they include multiple variables that can be time-consuming to obtain, are usually difficult to calculate, and may suffer from statistical overfitting. Objectives To investigate whether a simpler model, namely the ACEF-MDRD score, could be used to predict one-year mortality in patients with HF. Methods 748 cases from the SELFIE-HF registry had complete data to calculate the ACEF-MDRD score. Patients were grouped into tertiles for analysis. For all tests, a p value <0.05 was accepted as significant. Results Significantly more patients in the high ACEF-MDRD tertile (30.0%) died within one year than in the other tertiles (10.8% and 16.1% for the low and middle tertiles, respectively; p<0.001 for both comparisons). There was a stepwise decrease in one-year survival as the ACEF-MDRD score increased (log-rank p<0.001). ACEF-MDRD was an independent predictor of survival after adjustment for other variables (OR: 1.14, 95% CI: 1.04-1.24, p=0.006). The ACEF-MDRD score offered accuracy similar to the GWTG-HF score for predicting one-year mortality (p=0.14). Conclusions ACEF-MDRD is a predictor of mortality in patients with HF, and its usefulness is comparable to that of similar but more complicated models.
Introduction It is estimated that at least 23 million people have heart failure (HF), making it one of the most common cardiovascular diseases of the contemporary era. 1 Despite advances in the screening, diagnosis, and treatment of HF, mortality rates remain high, at 121 per 1,000 patient-years for patients with preserved ejection fraction (HFpEF) and 141 per 1,000 patient-years for patients with reduced ejection fraction (HFrEF). 2 Although clinical judgment and individual parameters are commonly employed for prognostication, multiple risk models are also available to estimate mortality and guide management decisions. 3-7 A common problem with these risk models is that they often suffer from "overfitting" of multiple redundant variables, which makes them unhelpful for estimating prognosis in other HF cohorts whose mortality rate differs from that of the original derivation cohort. 8 Furthermore, the need to enter numerous (and sometimes laborious to obtain) variables to calculate a single risk score for each HF patient often makes these scores impractical for use in a busy clinic. The age, creatinine, and ejection fraction (ACEF) score was initially developed to predict postoperative mortality after cardiovascular surgery, keeping the "law of parsimony" in mind. 8 Subsequent studies found that the ACEF score, or its simple modification calculated by replacing creatinine with glomerular filtration rate (GFR) estimated with the Modification of Diet in Renal Disease (MDRD) equation - the ACEF-MDRD score - was useful for predicting mortality or complications after percutaneous coronary or structural interventions, as well as in patients presenting with acute coronary syndromes.
9-12 The individual variables used to calculate the ACEF score have already been shown to predict hospitalizations and mortality in patients with HF, and it is reasonable to consider that a score calculated from these variables would be useful for predicting mortality in HF. 13-16 In the present analysis, we sought to investigate whether the ACEF-MDRD score could predict one-year mortality in patients with HF and to understand how the ACEF-MDRD score compares with other established but more complex models, such as the Get With The Guidelines - Heart Failure (GWTG-HF) score. Methods The design and conduct of the SELFIE-TR registry have been published previously. 17 Briefly, 23 study centers representing all geographic regions of Turkey were included in the SELFIE-TR study. The diagnosis of HF was established through a combination of clinical, echocardiographic, and laboratory assessment, and the diagnosis was independently confirmed by at least two cardiologists working at each study center. All patients aged 18 years or older who consented to enrollment were included; no exclusion criteria were applied. One thousand and fifty-four patients were included, and one-year survival data recently became available for 1,022 of these 1,054 patients. 18 Of these, 748 had complete data to calculate the ACEF-MDRD score, and all analyses were performed using these records. All patients in the SELFIE-TR registry gave informed consent before inclusion, and the present study was conducted in accordance with the principles outlined in the 1975 Declaration of Helsinki and its revisions. The study was approved by an ethics committee (approval no. 288-AU/003), and regulatory approval was obtained at each study center in accordance with applicable laws and regulations.
All laboratory measurements were performed at the individual centers, and the samples used for analyses were drawn shortly after patient enrollment. Not all measurements were available for all patients because of differences in local resources among centers. Glomerular filtration rate was calculated using the MDRD equation. Ejection fraction was measured by two-dimensional echocardiography at each study center using the modified Simpson method by two cardiologists blinded to each other's measurements, and the mean of these two measurements was taken as the final result. The ACEF-MDRD score was calculated as follows: age/ejection fraction + 1 point for every 10 mL/min reduction in GFR when GFR was below 60 mL/min/1.73 m2. Statistical analysis The sample size was determined by the number of cases eligible for inclusion, and no power analysis was performed owing to the observational nature of the study. The study population was divided into tertiles for data analysis. Continuous variables are presented as mean ± standard deviation (SD) or median and interquartile range (IQR), as appropriate. Categorical variables are presented as absolute and relative frequencies. Distribution patterns of continuous variables and equality of variances across tertiles were tested with the Shapiro-Wilk and Levene tests, respectively. For continuous variables, one-way ANOVA with Welch correction or Kruskal-Wallis tests were used, depending on whether a normal distribution was present. Post hoc analyses for one-way ANOVA were performed with the Tukey HSD or Games-Howell tests, while the Dwass-Steel-Critchlow-Fligner test was used for analyses performed with the Kruskal-Wallis test. For categorical variables, the chi-square test was used for comparisons.
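The score definition given above can be written out in a few lines. The sketch below is a literal reading of the formula as stated (age divided by ejection fraction, plus one renal point per started 10 mL/min band of eGFR below 60); note that some published ACEF-MDRD variants cap the renal points at 3, which this text does not state, so no cap is applied here. The 4-variable MDRD eGFR equation is included for context in its commonly published IDMS-traceable form (constant 175); it is not reproduced from this article.

```python
import math

def mdrd_egfr(creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool) -> float:
    """4-variable MDRD eGFR (mL/min/1.73 m^2), IDMS-traceable form."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def acef_mdrd(age_years: float, ef_percent: float, egfr: float) -> float:
    """Age/EF + 1 point per started 10 mL/min band of eGFR below 60."""
    score = age_years / ef_percent
    if egfr < 60.0:
        score += math.ceil((60.0 - egfr) / 10.0)
    return score
```

For example, a 70-year-old with an ejection fraction of 35% and an eGFR of 45 mL/min/1.73 m2 scores 70/35 + 2 = 4.0.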
Kaplan-Meier curves were plotted for survival analysis, and individual tertiles were compared with the log-rank test. The Cox proportional hazards model was used to determine individual predictors of one-year mortality. All parameters with a p value <0.10 on univariate Cox regression were included in the initial model, and a backward selection criterion was used to build the final model. Receiver operating characteristic curves were plotted to analyze the predictive accuracy of ACEF-MDRD for one-year mortality. In addition, the DeLong test was used to determine whether ACEF-MDRD was noninferior to the GWTG-HF score in terms of accuracy. The net reclassification improvement (NRI) index was calculated as previously described. 19 A p value <0.05 was accepted as statistically significant for all analyses. All statistical analyses were performed using Jamovi (The Jamovi Project (2020), Jamovi version 1.2 for Microsoft Windows), a graphical user interface for the R language (R Core Team (2019), R: A language and environment for statistical computing, version 3.6 for Microsoft Windows), and SPSS 25.0 (IBM Inc., Armonk, USA). To avoid data loss in the Cox regression and DeLong test, a multiple imputation procedure was used to predict missing values. A total of 5 imputations were performed, and pooled estimates across these 5 imputations are presented whenever possible. For all other statistical tests, original data were used, and the number of cases with available data is indicated in parentheses.
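Assuming the NRI here follows the usual Pencina-style decomposition into an event component (NRIe, among patients who died) and a non-event component (NRIne, among survivors) - an assumption, since the exact variant is only given by citation - the calculation reduces to counting patients reclassified upward or downward in risk by the new score:

```python
def nri_components(up_events: int, down_events: int, n_events: int,
                   up_nonevents: int, down_nonevents: int, n_nonevents: int):
    """Net reclassification improvement, split into event and non-event
    components. 'up' = reclassified to higher risk by the new model,
    'down' = reclassified to lower risk."""
    nri_e = (up_events - down_events) / n_events            # among deaths
    nri_ne = (down_nonevents - up_nonevents) / n_nonevents  # among survivors
    return nri_e, nri_ne, nri_e + nri_ne
```

Upward reclassification is credited for patients who died and penalized for survivors, which is how the study can report a slightly negative NRIe alongside a larger positive NRIne.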
Because data on natriuretic peptides were too sparse to be imputed (>50% missing), a separate subgroup analysis was performed to understand how the prognostic accuracy of the ACEF-MDRD score compared with that of N-terminal pro-B-type natriuretic peptide (NT-proBNP) in patients for whom these data were available. Discussion Like many other medical conditions, the prognosis of a given patient with HF is stochastic rather than deterministic in nature. As a direct result, no risk model could ever have perfect discriminatory ability for mortality, regardless of its complexity. Using many variables in a risk model makes it less useful for clinical practice and increases the risk of "overfitting," which threatens a model's accuracy when applied to populations other than the original derivation sample. 20 Preferably, a model should follow the "law of parsimony" and contain the fewest variables with the greatest value, rather than including every variable that provides only a marginal increase in accuracy. The present study showed that a simple risk score composed of only three variables has good predictive accuracy for one-year mortality and performs quite comparably to more complex risk scores, such as the GWTG-HF model. The main findings of the present work are summarized in the Central Illustration. Risk models have important drawbacks that limit their usefulness. An HF risk model may yield inaccurate results when applied to populations beyond its initial derivation; such models are rarely accurate in predicting prognosis for individual HF patients and may become obsolete over time.
21, 22 Nevertheless, they remain convenient, as risk models allow a more objective assessment of average life expectancy and can be useful for selecting the optimal management strategy for a given patient with HF. 21, 22 Even externally validated risk models are underused in daily clinical practice, perhaps because of their limitations and the inconvenience of finding and entering multiple data points to calculate the final score. 23 The MAGGIC risk score, which has a good evidence base for validity and a formidable c-statistic of 0.74 for mortality when applied to other HF cohorts, requires 13 different variables to be entered. 24 The GWTG-HF score had acceptable predictive ability for one-year mortality (c-statistic ranging between 0.64 and 0.67 for HFrEF and HFpEF, respectively), while requiring only 7 variables, which makes it somewhat easier to calculate and more compatible with the law of parsimony. 25 The current results indicate that the ACEF-MDRD score could predict one-year mortality with accuracy comparable to the GWTG-HF score and, like the GWTG-HF score, could be applied to HF populations regardless of the presenting phenotype. The ACEF-MDRD score had the additional advantage of using three simple, universally available parameters that make its calculation convenient, leaving it somewhat better suited than other risk models to move beyond the "research domain" into the real world. Not only are the components of the ACEF score used as independent predictors of prognosis in HF, but one or more of these variables are also found in almost every HF risk score. 3, 4, 16, 26 The combination of these variables allows a global estimate of life expectancy, comorbidities, end-organ function, and left ventricular performance.
Despite the availability of multiple studies demonstrating the predictive ability of the ACEF score in various cardiovascular conditions, including patients with recent myocardial infarction and those undergoing cardiovascular surgery or percutaneous interventions, data on the prognostic utility of the ACEF score in patients with HF are extremely limited. 8-12 Chen et al. studied ACEF and ACEF-MDRD in 862 patients with ischemic cardiomyopathy and found that both scores had good discriminative ability (c-statistics of 0.73 for ACEF and 0.72 for ACEF-MDRD, respectively). However, it was unclear whether those patients had concomitant HF, since the study was presented only as an abstract. 27 The current results suggest that the ACEF-MDRD score is an independent predictor of mortality in all patients with HF, regardless of underlying etiology, presentation, or phenotype, making it a potentially useful tool for a broad range of patients. Of note, the ACEF-MDRD score was not developed from the present sample but applied to it; as such, the present analysis should itself be considered a validation study. Although many studies have reported more impressive predictive accuracy for their models than the figures provided here, those models lack external validation, or their predictive accuracy proves substantially lower when tested in samples other than their derivation cohorts. 28 Given that the reported c-statistics rarely exceed 0.8 for almost any model, the use of an index with rather modest predictive accuracy could be justified by its simplicity of calculation (which could be done even with pen and paper), making it practical for daily use, and by its lack of "overfitting," making it suitable for use in different HF populations.
22 Treatments available for HF are numerous in the contemporary era, and the algorithms provided to guide treatment strategies are not evidence-based. Although the main expectation of a risk model is an estimate of overall mortality, such a model is nevertheless most useful when it can guide treatment decisions. Several studies have shown that risk models can be used for this purpose. For example, the Seattle Heart Failure Model (SHFM) has been shown to predict mortality after left ventricular assist device implantation. 29 Whether the ACEF-MDRD score could be used in a similar fashion would be an interesting prospect for future research. The present results indicate that the ACEF-MDRD score had rather modest discriminative ability for mortality. Adding new variables to the equation would be one way to improve accuracy, since our findings indicate that the ACEF score alone does not explain all the variability in mortality. However, this approach would violate the founding principle of the ACEF score, which used a limited number of predictors rather than all variables with statistical significance on multivariate analysis. Another approach would be to find similar but more powerful predictors of mortality with which to redesign the ACEF-MDRD score. Although the individual components of the ACEF score are independent predictors of mortality, it is unclear whether they are the best predictors, since the ACEF score was not developed to predict mortality in HF. As such, better predictors could be used to replace the core components of the ACEF score, but the law of parsimony should still be applied to keep the number of predictors to a minimum. Study limitations Despite the multicenter design of the study, the number of enrolled patients was rather limited, thereby affecting the power of the analysis.
Some variables were missing and had to be imputed for multivariate analyses. Missing data exceeded 50% for some variables, and these parameters - mainly the natriuretic peptides - could not be included in the multivariate analyses. Although ACEF-MDRD appeared to have independent prognostic significance in the subgroup of 211 patients in whom NT-proBNP concentrations were available, this analysis was invariably biased by the small sample size and by data being available from only some centers. A larger sample is therefore needed to determine whether the ACEF-MDRD score has additional utility over natriuretic peptides. Similarly, predictive scores such as the MAGGIC score and the Seattle Heart Failure Model could not be calculated because of missing data, so the utility of ACEF-MDRD relative to these tools remains uncertain. Finally, although the present results provide external verification of the ACEF-MDRD score, further data from additional studies would increase confidence in its future clinical use in patients with HF. Conclusions The ACEF-MDRD score is an independent predictor of one-year mortality in patients with heart failure, and its predictive accuracy is comparable to that of the GWTG-HF score. In contrast to other "complex" models that require multiple variables and specialized tools for calculation, ACEF-MDRD requires three simple variables for mortality estimation, making it a considerably more convenient alternative for daily clinical practice.
Acknowledgments The authors wish to thank all investigators of the SELFIE-HF study for their contributions to the SELFIE-HF database.
CC BY
Arq Bras Cardiol. 2023 Dec 14; 120(12):e20230158
PMC10789376
Arq Bras Cardiol. 2023;120(6):e20220576 In the Original Article "Construction and Validation of the EmpoderACO Protocol Aimed at Patients on Oral Anticoagulation with Warfarin," DOI: https://doi.org/10.36660/abc.20220576, published in Arquivos Brasileiros de Cardiologia, Arq Bras Cardiol. 2023;120(6):e20220576, on page 1, the author name "Rebeca Priscilla de Melo Santos" should be corrected to "Rebeca Priscila de Melo Santos." Arq Bras Cardiol. 2023;120(7):e20230342 In the Review Article "The Best Articles of 2022 in the Arquivos Brasileiros de Cardiologia and the Revista Portuguesa de Cardiologia," DOI: https://doi.org/10.36660/abc.20230342, published in Arquivos Brasileiros de Cardiologia, Arq Bras Cardiol. 2023;120(7):e20230342, please note the following: This article was developed jointly by the Arquivos Brasileiros de Cardiologia and the Revista Portuguesa de Cardiologia, and published jointly by the Sociedade Brasileira de Cardiologia and Elsevier España S.L.U. The articles are identical except for minor stylistic and spelling differences in keeping with each journal's style. Either citation may be used when citing this article.
CC BY
Arq Bras Cardiol. 2023 Dec 14; 120(12):e20230806
PMC10789377
METHODS We sequentially recruited children 2 months to 14 years of age admitted to Patan Hospital, Kathmandu with a clinical diagnosis of pneumonia. All children had chest radiographs, full blood count and C-reactive protein (CRP) measurement, culture of blood (Bactec PedsPlus culture bottles, BD, Franklin Lakes, NJ; aerobic culture in 5% CO 2 at 35–37°C) and NP sampling with flocked swabs (ThermoFisher Scientific, Waltham, MA) for pneumococcal culture and polymerase chain reaction detection of respiratory viruses (NxTAG Luminex Respiratory Pathogen Panel, Luminex Corp Austin, TX) within 48 hours of admission. Serotyping of pneumococci used the Quellung method (Statens Serum Institut, Denmark). 6 Convalescent serum sampling of recruited children was done 6–8 weeks after admission. Samples were included for serologic testing if paired acute and convalescent samples were available. We defined a series of comparator groups by a priori probability of having true pneumococcal pneumonia. Of note, NP carriage of serotype 1 or 5 pneumococci (but not other serotypes) has a high positive predictive value for invasive pneumococcal disease in this setting. 5 Participants were grouped as definite pneumococcal pneumonia (pneumococci cultured from blood or pleural fluid), probable pneumococcal pneumonia (CRP concentration ≥60 mg/L and NP carriage of serotype 1 or 5), probable bacterial pneumonia (CRP concentration ≥60 mg/L and no NP carriage of serotype 1 or 5), unknown pneumonia etiology, respiratory syncytial virus (RSV) pneumonia only (CRP concentration <60 mg/L and NP carriage of RSV) and definite other bacterial pneumonia (other bacterial pathogen cultured from blood). Samples from all participants with definite or probable pneumococcal pneumonia, RSV pneumonia and definite other bacterial pneumonia were included, together with a randomized selection of those with probable bacterial pneumonia and pneumonia of unknown etiology.
Serum concentration of IgG to pneumococcal PS contained in the 13-valent PCV was measured using a fluorescence-based multiplex immunoassay (FMIA, RIVM, Netherlands). 7 We investigated whether change in absolute PS-specific IgG concentration (delta concentration) or fold change in PS-specific IgG concentration between acute and convalescent samples, or maximum PS-specific IgG convalescent concentration, was associated with pneumococcal pneumonia. As a sensitivity analysis, we also evaluated serotype 1 and serotype 5 specific values in children with pneumococcal pneumonia caused by serotypes 1 and 5, with children with RSV pneumonia only as a comparator group.
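The three candidate summary measures named above (delta concentration, fold change and maximum convalescent concentration) are simple per-serotype arithmetic on the paired samples. A sketch, with hypothetical IgG concentrations keyed by serotype (the values and keys below are illustrative only):

```python
def paired_summaries(acute: dict, convalescent: dict) -> dict:
    """Per-serotype delta, fold change, and convalescent concentration
    from paired acute/convalescent IgG measurements."""
    out = {}
    for serotype, acute_igg in acute.items():
        conv_igg = convalescent[serotype]
        out[serotype] = {
            "delta": conv_igg - acute_igg,   # absolute change
            "fold": conv_igg / acute_igg,    # ratio, e.g. >=3 thresholds
            "convalescent": conv_igg,        # maximum-concentration analysis
        }
    return out
```

The fold-change form is what underlies thresholds such as the ≥3-fold "positive" criterion discussed later.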
RESULTS Between 2015 and 2017, 897 children were sequentially recruited to the overall study. Of all children recruited, median age was 1.5 years (interquartile range, IQR, 0.7–3.1 years) and 528 (59%) children were male. Of these children, 454 (51%) returned for convalescent sampling (median 47, IQR 37–62, days after acute sampling) and 221 (49%) children with paired serum samples entered further analysis. Of these 221 children, median age was 1.7 (IQR 0.7–3.7) years, 133 (60%) children were male and 58 (26%) had received ≥2 doses of 10-valent PCV according to caregiver information (Table, Supplemental Digital Content 1, http://links.lww.com/INF/F309 ). On admission chest radiographs, 80 (36%) children had alveolar consolidation or effusion. Median CRP concentration at admission was 58 (IQR 6.8–109) mg/L. Eight children were classified as definite pneumococcal pneumonia (median age 5.0, IQR 3.8–6.6, years), 11 children as probable pneumococcal pneumonia (median age 4.7, IQR 3.3–7.7, years), 90 children as probable bacterial (median age 2.7, IQR 1.4–5.5, years), 68 children as RSV pneumonia (median age 0.7, IQR 0.4–1.5, years), 5 children as other bacterial pneumonia (median age 0.7, IQR 0.7–0.9, years) and 39 children as unknown (median age 1.3, IQR 0.8–2.9, years). Of children with definite pneumococcal pneumonia, 5 children had pneumococci isolated from blood (2 each of serotypes 1 and 5 and 1 serotype 6C) and 3 children had pneumococci isolated from pleural fluid (1 serotype 19A, 1 serotype 6B and 1 not serotyped). There were no significant differences in the acute to convalescent change in concentration (delta concentration) of IgG to pneumococcal PS by classification of pneumonia etiology (Kruskal–Wallis test, P = 0.44, Fig. 
1 A; multiple pairwise comparisons with Wilcoxon tests and Benjamini-Hochberg adjustment for multiple comparisons, P > 0.15 for all comparisons), and there were no significant differences in acute to convalescent fold change of IgG to pneumococcal PS by classification of pneumonia etiology ( P = 0.45, Fig. 1 B; P > 0.10 for pairwise comparisons). When analysis was limited to patients with serotype 1 (8 children) and serotype 5 (7 children) definite or probable pneumococcal pneumonia, there were no differences in delta concentration to the relevant PS in comparison with children with RSV pneumonia ( P = 0.27 and P = 0.36, respectively, Fig. 1 C). In children with serotype 5 definite or probable pneumococcal pneumonia, there was no difference in fold change of IgG to pneumococcal PS5 ( P = 0.79), but children with serotype 1 definite or probable pneumococcal pneumonia had significantly higher fold change of IgG to pneumococcal PS1 in comparison with children with RSV pneumonia ( P = 0.01, Fig. 1 D). No children <2 years of age had definite or probable pneumococcal pneumonia. In children ≥2 years of age, there were no significant differences in delta concentration of IgG to pneumococcal PS by classification of pneumonia etiology (Fig. 1 E, P > 0.15 for pairwise comparisons). Similar results were obtained by maximum convalescent IgG concentration (Figure, Supplemental Digital Content 2, http://links.lww.com/INF/F309 ).
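The omnibus and pairwise comparisons used above can be sketched in Python. The group values here are illustrative, not study data; scipy's `mannwhitneyu` stands in for the pairwise Wilcoxon rank-sum tests, and the Benjamini-Hochberg step is written out for transparency:

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Illustrative IgG delta concentrations for three hypothetical etiology groups.
groups = {
    "pneumococcal": [1.2, 0.8, 2.5, 1.9, 0.4],
    "bacterial":    [0.9, 1.1, 0.3, 1.4, 0.7],
    "rsv":          [0.5, 0.2, 1.0, 0.6, 0.8],
}

# Omnibus Kruskal-Wallis test across all groups.
h, p_overall = kruskal(*groups.values())

# Pairwise Wilcoxon rank-sum tests, then Benjamini-Hochberg adjustment.
pairs = list(combinations(groups, 2))
raw_p = [mannwhitneyu(groups[a], groups[b]).pvalue for a, b in pairs]

def benjamini_hochberg(pvals):
    """BH step-up adjustment; returns adjusted p-values in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    for rank, i in zip(range(m, 0, -1), reversed(order)):
        prev = min(prev, pvals[i] * m / rank)
        adj[i] = prev
    return adj

adj_p = benjamini_hochberg(raw_p)
for (a, b), p in zip(pairs, adj_p):
    print(f"{a} vs {b}: adjusted p = {p:.3f}")
```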
DISCUSSION Analyses of data from Belgium, 3 Finland 4 and Brazil 4 have suggested that paired serologic testing may accurately diagnose pneumococcal pneumonia in children. Tuerlinckx et al 3 quantified PS-specific IgG and IgA concentrations in acute and convalescent serum from Belgian children with pneumonia. Among children with culture-proven pneumococcal pneumonia, 83% of paired samples met the predefined “positive” threshold of ≥3-fold increase in PS-specific IgG concentration. Among children with nonproven pneumococcal pneumonia, 55% of paired samples met this threshold; no other control samples were analyzed. As with data from Nepal, PS-specific IgG concentration in these Belgian data was associated with increasing age, with only 50% of definite pneumococcal pneumonia cases and 13% of suspected/possible pneumococcal cases <2 years of age meeting the positive threshold. Different age distributions between the Nepal (median 1.7 years) and Belgian studies (median 4.0 years), or different distributions of first colonization with pneumococcal serotype, may have contributed to differences in apparent prevalence of “positive” pneumococcal serology. Given the diversity of pneumococcal PS, assay of IgG to pneumococcal proteins may improve the sensitivity of serologic testing. Andrade et al 4 evaluated the use of paired serology to 8 pneumococcal proteins to discriminate between pneumococcal pneumonia in Brazil (13 children, median age 14 months, non-PCV vaccinated) and a viral pharyngitis control group in Finland (23 children, median age 37 months, PCV vaccinated). Receiver operating characteristic curves yielded areas under the curve of 0.67–0.93. However, the use of controls from a different population and disease may have confounded these results.
We previously extended this work by examining the production of IgG to 5 conserved pneumococcal proteins in the antibody in lymphocyte supernatant assay in Nepali children, 8 finding that lymphocyte production of IgG to pneumococcal proteins discriminated between pneumococcal and nonpneumococcal pneumonia with areas under the curve of 0.60–0.85. However, when analysis was stratified to children ≥2 years of age, there were no significant differences in protein-specific IgG production between pneumococcal and nonpneumococcal pneumonia. As with PS-specific IgG concentration, production of protein-specific IgG was associated with increasing age. In the absence of a gold standard, we used comparator groups from the same population of children with other pneumonia etiologies to assess the diagnostic test. We have previously shown that NP carriage of serotype 1 or 5 pneumococci may enrich this cohort for pneumococcal pneumonia. 6 Despite this, the small number of children with definite or probable pneumococcal pneumonia limited our study power, particularly in children <2 years of age. In addition, we sampled convalescent serum at a median of 47 days after admission, while other studies sampled convalescent serum at 3–4 weeks 3 or 2–5 weeks. 4 Our data may therefore represent antibody concentrations that are already declining. Measurement of IgG to specific pneumococcal PS has been used to assess population immunity 9 and combined with functional antibody studies to assess correlates of PCV-mediated protection. 10 However, in this cohort, it was not useful for diagnosing pneumococcal pneumonia in individual patients.
We evaluated whether the quantification of IgG to pneumococcal capsular polysaccharides is an accurate diagnostic test for pneumococcal infection in children with pneumonia in Nepal. Children with pneumococcal pneumonia did not have higher convalescent, or higher fold change, IgG to pneumococcal polysaccharides than children with other causes of pneumonia. Caution is needed in interpreting antibody responses in pneumococcal infections.
Modeling of data from randomized controlled trials has suggested that approximately one-third of children with pneumonia and radiographic consolidation have pneumococcal infection in settings without routine infant pneumococcal conjugate vaccination (PCV). 1 However, microbiologic techniques have limited accuracy for diagnosing pneumococcal pneumonia in individual patients because of the inaccessibility of the lung for sampling, pretreatment with antibiotics and prevalent nasopharyngeal (NP) carriage of pneumococci in healthy children. 2 Paired acute and convalescent serology to pneumococcal capsular polysaccharides (PS) 3 or proteins 4 from children with pneumococcal pneumonia may have diagnostic utility for pneumococcal pneumonia, but previous studies either did not use controls from the same disease population and/or used arbitrary thresholds for defining a positive result. We evaluated the accuracy of serology to pneumococcal PS for the diagnosis of pneumococcal infection in children hospitalized with pneumonia in Kathmandu, Nepal during 2015–2017. Ten-valent PCV ( Synflorix , GSK) was introduced in the Nepal infant immunization schedule in 2015, with no catch-up campaign. In children hospitalized with pneumonia in this setting, 73% and 78% of invasive pneumococcal disease isolates were of serotypes covered by 10-valent PCV during 2005–2013 before 10-valent PCV introduction, 5 and NP carriage of any pneumococci was 36% and of pneumococcal serotypes within 10-valent PCV was 14% during 2014–2015. 6
Pediatr Infect Dis J. 2024 Feb 11; 43(2):e67-e70
Introduction Multiple myeloma (MM) is a malignant neoplasm that originates from B cells and gives rise to destructive osteolytic bone lesions. Among the complications frequently observed, a notable one is the pathological fracture of the vertebral body, which can lead to compression of the spinal cord. This particular complication affects around 5% of individuals diagnosed with MM [ 1 ]. Plasma cell neoplasia ranks as the second most prevalent hematologic malignancy, following non-Hodgkin lymphoma. It represents 1% of all cancers and approximately 10% of all hematologic malignancies. The two subtypes of plasmacytoma are solitary bone plasmacytoma and extramedullary plasmacytoma [ 2 ]. Approximately 5% of all plasma cell disorder cases are solitary bone plasmacytoma, with a male-to-female ratio of 2:1. Solitary bone plasmacytomas comprise 70% of all solitary plasmacytoma cases and mainly occur in the bones of the axial skeleton containing red marrow. The optimal treatment for solitary bone plasmacytoma of the spine remains controversial. Solitary bone plasmacytoma is highly sensitive to radiation therapy, and clinical trials have confirmed high response rates (60-80%) to radiation therapy. Decompressive surgery is indicated in the case of neurologic compromise due to spinal cord compression [ 3 ]. However, progression to MM has been reported in some patients with solitary plasmacytoma after initial radiation therapy [ 1 , 4 ]. Decision-making regarding augmentation, decompression, and stabilization in patients with spinal plasmacytomas is controversial. The Spinal Instability Neoplastic Score (SINS) may play a role during the decision-making process. Vertebral augmentation surgery can be performed in patients with painful spinal plasmacytomas with osteolytic changes with or without a fracture (SINS <13). Decompression and stabilization surgery are the treatments of choice in patients with SINS >12 [ 5 ]. 
This report aims to describe the anterolateral thoracic approach as a therapeutic measure for this type of lesion through an analysis of this type of tumoral pathology. Simultaneously, it aims to analyze and demonstrate the significance of the SINS as a scale for guiding therapeutic decisions in spinal tumors.
Discussion Solitary bone plasmacytomas are common primary malignant tumors of the vertebrae; they are defined by the presence of a single osteolytic lesion due to monoclonal plasma cell infiltration, with or without soft tissue extension [ 7 ]. Solitary bone plasmacytoma mostly affects vertebral bodies, and the most common location is the thoracic region. Back pain is a common clinical feature, as is a variable neurological deficit due to compression by the lesion [ 8 , 9 ]. In the case presented, our patient had no pain; he had a motor deficit due to weakness in both pelvic limbs. Diagnostic criteria for solitary bone plasmacytoma include a pathologically proven solitary lesion, normal bone marrow with no evidence of clonal plasma cells, a normal skeletal survey and MRI (or CT) of the spine and pelvis (except for the primary solitary lesion), and the absence of end-organ damage [ 3 ]. This case meets three of the four criteria for the diagnosis of solitary bone plasmacytoma; there was no evidence of normal bone marrow. Maintaining or restoring spinal stability and achieving local disease control are the main goals of treatment for spinal solitary plasmacytoma. Currently available forms of treatment are radiotherapy, surgery, vertebroplasty or kyphoplasty, and a combination of surgery and adjuvant radiotherapy [ 10 ]. It has been shown that, even when spinal cord compression is present, tumor removal can achieve local disease control. In some cases, decompression has been demonstrated to maintain neurological function. In situations where discomfort is ascribed to fractures of the vertebral bodies, surgical decompression may also be advantageous. It is recommended to classify the patient's spinal instability using the SINS in order to choose the best surgical course of action [ 11 ]. The Spine Oncology Study Group (SOSG) created the SINS in 2010 as a tool for preoperative evaluation of spinal instability in patients with neoplastic spine disease.
The purpose of the SINS is to assist the surgeon in determining treatment decisions for patients who suffer from spinal instability [ 12 ]. The SOSG has defined spinal instability as "the loss of spinal integrity as a result of a neoplastic process that is associated with movement-related pain, symptomatic or progressive deformity, and/or neural compromise under physiological loads” [ 11 ]. Six criteria are used by the SINS to assess vertebral mechanical instability: the location of the lesion, the nature of the pain, the type of bone lesion, the degree of vertebral destruction, the radiographic spinal alignment, and the involvement of the posterolateral spinal elements. A final score is obtained by adding the ratings assigned to each parameter [ 13 ]. Scores range from a minimum of 0 to a maximum of 18. Three categories of stability are derived from the overall score: stable (0-6 points), potentially unstable (7-12 points), and unstable (13-18 points). Furthermore, it is possible to treat the SINS as a binary indicator of the need for surgical referral: stable (0-6 points) or "current or possible instability" (7-18 points). Referral to a surgeon is advised for patients scoring 7 or higher [ 11 ]. The SINS has proven a reliable evaluation instrument. Because treating metastatic spine disease requires interdisciplinary expertise, proper SINS use is crucial [ 11 ]. Ramazanoğlu et al., using the SINS to assess spinal instability associated with vertebral plasmacytoma, reported three patients with a SINS greater than 13 points in whom decompression and stabilization were performed with a good clinical outcome [ 5 ]. We present a case in which a successful outcome was achieved in our patient with a total SINS of 14 (Table 1 ). This score was considered unstable, and we decided on initial surgical treatment to achieve local control of the tumor lesion, achieve spinal stability, and obtain tissue for histopathological diagnosis.
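As a rough illustration of how the six SINS component scores combine into a total and a stability category, a minimal sketch follows; the per-component point breakdown for any given patient is hypothetical, and the component point values themselves come from the published SINS tables, which this sketch does not reproduce:

```python
# Sketch: sum six SINS component scores (total 0-18) and map the total
# to the stability categories described in the text.

def sins_category(component_scores: list) -> tuple:
    """Return (total, category) for six SINS component scores."""
    assert len(component_scores) == 6, "SINS has six components"
    total = sum(component_scores)
    if total <= 6:
        category = "stable"
    elif total <= 12:
        category = "potentially unstable"
    else:
        category = "unstable"
    return total, category

# Example resembling the reported case (total 14 -> unstable);
# the split across components here is hypothetical.
total, cat = sins_category([3, 3, 2, 2, 2, 2])
print(total, cat)  # 14 unstable
```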
Although the posterior approach is the one most spine surgeons consider for spine tumors, due to the significant tumoral component of the lesion, the chosen procedure was an anterolateral approach corpectomy of T9, T10, and T11, accompanied by the placement of an expandable obeliscPRO™ cage (ulrich GmbH & Co. KG, Ulm, Germany). Subsequently, a posterior approach laminectomy of T9, T10, and T11 was performed along with posterior instrumentation involving the placement of percutaneous transpedicular screws at T7, T8, T12, L1, and L2 (Figure 2 ). The SINS is a useful tool in making decisions to treat spinal tumors. Decompression of the spinal tumor and stabilization have been shown to be of great benefit in the treatment of spinal instability, pain, and/or neurological deficits [ 14 ]. For local control of the tumor lesion, an R0 resection, which entails complete surgical removal of the tumor, would be optimal; it requires that no cancer cells be visible at either the macroscopic or microscopic level. Given the anatomy of the area, this is a challenging task to accomplish in large spinal plasmacytomas. The goal should always be to remove as much of the tumor as feasible while maintaining the integrity of the surrounding tissues and achieving spinal decompression without causing harm to the spinal cord. Neoadjuvant radiation therapy is an option in certain large plasmacytomas to reduce the size of the tumor [ 15 ]. Within five years, at least 50% of solitary plasmacytomas will develop into MM if left untreated [ 16 ]. When used as the first line of treatment for solitary plasmacytoma, local irradiation offers good local control (85-90%), which may result in a long-lasting remission or even a cure [ 17 ]. Although radiation therapy achieved significant rates of local control even at modest doses, it was not able to halt the progression to MM or stabilize the spinal column.
It is standard practice to use a dose of 40-45 Gy in radiation therapy; however, it has been demonstrated that above 30-35 Gy there is no dose-response association [ 17 ]. Baumgart et al. reported that overall survival was longer with postoperative radiotherapy; however, there was no statistical significance for postoperative chemotherapy [ 2 ]. Our patient received postoperative radiotherapy and has shown no recurrence for 24 months, with important clinical improvement.
Conclusions The surgical management of this type of spinal tumor continues to be controversial. Although there are lesions that have a good response to radiotherapy, there are reports in which progression to MM is reported. In some cases, such as those with instability, severe pain, or acute compression, surgical intervention might be required before radiotherapy. Regarding surgical management, there are several available options; however, the importance of applying the SINS relies on its role as a determinant of the relationship between spinal tumor instability and the need for stabilization surgery.
This report details the case of a 57-year-old male who initially manifested low back pain radiating from the lumbar region to the left leg. Progressive symptoms included paresthesia on the plantar surfaces of both feet and gait instability attributed to weakness in the pelvic limbs. Computed tomography imaging revealed osteolytic lesions in the T9, T10, and T11 vertebral bodies, resulting in compression of the spinal cord. Subsequent contrast-enhanced magnetic resonance imaging validated these findings, confirming the presence of an extradural tumor. According to the Spinal Instability Neoplastic Score (SINS), the case was categorized as unstable. Consequently, a surgical intervention was performed to excise the lesion. Thus, the SINS played a pivotal role in guiding the decision-making process for the chosen treatment modality.
Case presentation A 57-year-old male with no previous personal medical history had paresthesia in the soles of his feet. One month later, this progressed to the involvement of both pelvic limb territories. Shortly thereafter, he reported gait instability due to weakness in both pelvic limbs. Two months later, he developed hypoesthesia at the level of the umbilical scar and a progressive loss of strength, making walking impossible. Ultimately, one month later, he reported difficulty urinating and constipation. Subsequently, he was referred to our hospital. On presentation at our service, a physical examination revealed 5/5 strength in the thoracic limbs and 3/5 proximal and distal strength in both pelvic limbs. Deep tendon reflexes were increased in the lower limbs (+++), while the rest were normal. Sensation showed exteroceptive and proprioceptive hypoesthesia starting from T10. Gait assessment was not possible due to weakness. Tone and trophism were preserved, and a bilateral positive Babinski sign was present. A CT scan was performed, which reported osteolytic lesions in the T9, T10, and T11 vertebral bodies with soft tissue formation causing pathological fracture of the T10 vertebral body and spinal cord compression of more than 50% (Figure 1A , 1B ). A contrast-enhanced MRI was conducted to confirm the findings from the CT scan. Additionally, the MRI revealed diffuse enhancement in the posterior elements and confirmed the presence of an extradural tumor in T9, T10, and T11 (Figure 1C , 1D ). The SINS was applied with a total score of 14 points (Table 1 ), signifying spinal instability. Consequently, based on the recommendations of the SINS, decisions were made, and a surgical procedure was performed. Regarding the surgical intervention, the procedure was performed in two surgical stages. 
First surgical stage/transthoracic approach With the patient placed in the left lateral decubitus position, a thoracotomy was performed via the sixth intercostal space to access the thoracic cavity. Once in the cavity, the lung was mobilized, displacing and releasing the pulmonary ligament. After identifying the esophagus and the descending aorta, at the level of T10-T12, the radiculomedullary arteries and the venous drainage of this area were released. Once the vertebral bodies were identified, the territory was ready to continue with the spinal portion of the surgical procedure. The affected vertebral bodies were identified, and discectomy was performed at T8-T9 and T11-T12, along with corpectomy of the T9, T10, and T11 vertebral bodies. Hemostasis was ensured, followed by the insertion of a 40-Fr endopleural tube and the placement of an expandable obelisk-type cage from T9 to T11. Subsequently, rib fracture stabilization, lung inflation, and closure of the thoracic cavity were completed. Second surgical stage With the patient in the prone position and guided by fluoroscopy, paramedian incisions were made to facilitate the placement of Kirschner-type guides in the pedicles of the T9, T10, T11, T12, and L1 vertebrae, followed by the insertion of transpedicular screws and two bars (Figure 2 ). The ports were then removed, and closure of the paramedian incisions was achieved through fascial and skin suturing. An additional incision was made along the T9, T10, and T11 lines, and dissection was performed through the paraspinal muscle planes until the spinous processes were identified. A laminectomy was conducted using a Leksell rongeur until the dural sac was exposed, revealing a vascularized, grayish, friable lesion, which was completely excised. Follow-up The pathological report stated epidural plasmacytoma, bone without neoplastic infiltration, positive kappa antigens, and negative lambda.
One month after surgery and the beginning of physical rehabilitation, the patient arrived in a wheelchair. Upon physical examination, the patient exhibited muscular strength of 4-/5 proximal and distal in the lower limbs. The patient achieved standing with support but was unable to walk. Additionally, the patient received radiotherapy and chemotherapy in another external hospital; the chemotherapy regimen and dose of radiotherapy used are unknown. Six months after the patient arrived, he was walking independently. During the physical examination, there was a muscular strength of 4/5 in the lower limbs. The patient achieved standing and walking. Currently, at 20 months of follow-up, the patient presents with a muscular strength of 5/5, a preserved gait, and no evidence of tumor recurrence.
Cureus. 15(12):e50627
Introduction Vitamins are a group of substances that are essential for normal physiological function [ 1 ]. They are not synthesized by the body but can be obtained from the diet [ 2 ]. There are 13 vitamins: some are fat-soluble (vitamins A, D, E, and K) while others are water-soluble (vitamin C and the eight B vitamins: thiamine (B1), riboflavin (B2), niacin (B3), pantothenic acid (B5), pyridoxine (B6), biotin (B7), folate (B9), and cobalamin (B12)). The B vitamins are grouped by water solubility and their intracellular functions [ 1 - 3 ]. B vitamins are generally synthesized by plants; however, vitamin B12 is produced by bacteria [ 4 ]. A vitamin supplement that contains nearly all eight B vitamins is referred to as vitamin B complex [ 3 , 4 ]. Although B vitamins are generally absorbed by the small intestine, bacterial B vitamins are produced and absorbed in the colon [ 5 ]. The body absorbs vitamin B12 from food in a two-step process. Hydrochloric acid in the stomach frees vitamin B12, and the vitamin then combines with intrinsic factor, a protein produced in the stomach. Both are absorbed together. Vitamin B12 in supplements is not bound to food protein, but people who cannot produce intrinsic factor, such as those with pernicious anemia, have trouble absorbing vitamin B12 from both foods and supplements. They should therefore receive vitamin B12 by injection to prevent deficiency [ 1 , 5 , 6 ]. B vitamins are involved in every aspect of generating energy within cells. Deficiency in any of the B vitamins will thus have negative consequences [ 1 , 3 , 5 - 8 ]. For example, vitamins B1, B2, B3, and B5 are essential co-enzymes in mitochondrial aerobic respiration and cellular energy production via their direct role in the citric acid cycle to produce adenosine triphosphate (ATP).
Furthermore, B1, B7, and B12 play an essential part in the mitochondrial metabolism of glucose, fatty acids, and amino acids, thus contributing substrates to the citric acid cycle [ 1 , 5 ]. Substantial evidence suggests that a large portion of the populations of developed countries suffer from deficiency or borderline deficiency in one or more B vitamins. Such deficiency may have several negative health consequences, including lower-than-optimal brain function [ 1 , 9 , 10 ]. In addition, vitamin B deficiency has been associated with peripheral neuropathy (PN) [ 11 , 12 ]. Other studies have claimed that vitamin B may treat depression, but further analysis of these studies has indicated that this effect is merely protective and only present in older adults [ 1 , 4 , 13 - 16 ]. Other studies have demonstrated that taking vitamin B alongside either vitamin D or antidepressants may help prevent or lessen the effects of depression in elderly patients; otherwise, vitamin B alone does not affect depression [ 1 , 4 , 13 - 17 ]. Vitamin B levels may be inversely related to sleep duration. Increasing levels of vitamin B improve sleep quality in insomnia patients but cause a decrease in sleep duration. Although naturally occurring levels of B12 are not associated with sleep disturbance, vitamin B supplements may be [ 13 , 18 - 25 ]. Furthermore, research dating to the 1960s has found that vitamin B causes an increase in body fat and, consequently, an increase in body mass index (BMI) and insulin resistance. These results were associated with both supplemental vitamin B and foods fortified with vitamin B [ 2 , 6 , 8 , 26 - 35 ]. Most studies on vitamin B have concentrated on one or two effects of using the supplements, but none has examined these effects comprehensively. This study assesses the association between vitamin B supplementation and BMI changes and explores the effects of vitamin B supplementation on sleep and mood changes.
Side effects or complications related to the use of vitamin B supplementation are also examined.
Materials and methods This cross-sectional study was conducted using Twitter and WhatsApp in Saudi Arabia. Participants were recruited through a self-administered survey. Inclusion criteria included a minimum age of 18 years, use of vitamin B, and willingness to participate. Children, pregnant women, people who had never used vitamin B, and those unwilling to participate were excluded. All participants received a simple questionnaire about their demographics, intake of vitamin B, and the effects of the supplement on their health. Height data were collected and rounded to the nearest 0.1 cm. Weight data were also collected and rounded to the nearest 0.1 kg. Following ethics requirements, participants’ privacy and confidentiality were protected by anonymizing data. The data were stored on a password-protected computer that only the principal researcher could access. Questionnaire data included the following: demographic information (age, nationality, education, occupation, gender, and presence of any chronic diseases); duration of vitamin B use; type of vitamin B used; method of intake; and changes in weight, appetite, mood, and sleep (Appendices). Ethical approval was obtained from the ethics committee of Fakeeh College for Medical Sciences (FCMS; ethical approval number 499-IRB-2023). The required sample size was calculated to be 148 if based on vitamin B9 [ 30 ], 584 if based on vitamin B3 [ 33 ], and 796 if based on vitamin B12 [ 33 ]. The sample size was calculated using the OpenEpi program ( https://www.openepi.com/SampleSize ). Because it required the largest sample, vitamin B12 was used as the basis for the sample size calculation. Additional participants were recruited to account for non-responders (dropouts). Therefore, this study planned to recruit a minimum of 1000 adults.
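For illustration, the standard infinite-population formula for estimating a proportion, which underlies calculators such as OpenEpi, can be sketched as follows; the prevalence and precision inputs here are assumptions for demonstration, not the values used in the study:

```python
import math

def sample_size_proportion(p: float, d: float = 0.05, z: float = 1.96) -> int:
    """Minimum n to estimate a proportion p with absolute precision d
    at ~95% confidence (z = 1.96). Simplified infinite-population form:
    n = z^2 * p * (1 - p) / d^2, rounded up."""
    return math.ceil(z * z * p * (1 - p) / (d * d))

# e.g. an expected prevalence of 50% with +/-5% absolute precision
# (values illustrative):
print(sample_size_proportion(0.5))  # 385
```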
Data were managed and analyzed using Statistical Package for Social Sciences version 28 (IBM Corp., Armonk, NY, USA) [ 36 ], and the threshold for statistical significance was set to p ≤ 0.05. A paired-sample Student's t-test was used to evaluate differences in means between groups for continuous variables. For categorical data, a chi-square test was used to assess differences in proportions across categories. Descriptive statistics were used wherever possible; qualitative variables were summarized using frequencies and percentages.
Results The questionnaire used in this study was promoted on Twitter and WhatsApp in Saudi Arabia for one month. In total, 1,521 adults responded to all questions and were included in the data analysis. Most of the participants were Saudi (1339, 88%) and female (1131, 74.4%). Participants were mainly young adults 18-25 years old (1040, 68.3%) and 26-35 years old (260, 17.1%); only 221 (14.6%) were older than 35 years. Most of the participants had a bachelor’s degree (1066, 70.1%) and were employed (871, 57.3%). Since most of the participants were young, the presence of chronic diseases was minimal (266, 17.1%), and the rest of the participants had no chronic diseases (1261, 82.9%). Characteristics of the participants are displayed in Table 1 . Characteristics of vitamin B use are included in Table 2 . Use of vitamin B complex was the highest, followed by vitamin B12 (897, 59% and 533, 35%, respectively). Additionally, oral tablets were the most common method of consuming vitamin B (1378, 90.6%). Furthermore, using the supplement for 6-12 months was most common (793, 52.1%), followed by use for over 12 months (351, 23.1%). Turning to the side effects of vitamin B supplements, there were minor complaints of mild gastric upset (312, 20.5%), and this finding was significant (p = 0.03). Similarly, many participants exhibited an increase in appetite (1326, 87.2%) and a change in BMI before and after the use of supplements (1378, 90.6%). Both were significant (p = 0.03). Additionally, there was an increase in energy in participants (975, 64.1%), as well as changes in sleep duration and sleep patterns (1014, 66.7%) and mood (1040, 68.4%). All these changes were significant (p = 0.03; Table 3 ). To evaluate changes in BMI, World Health Organization (WHO) BMI categories were used. BMI was compared before and after the use of vitamin B supplements. The difference between categories was not significant (p = 0.3; Table 4 ).
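The WHO adult BMI classification used for the before/after comparison above can be sketched minimally; the weights and height below are illustrative, not participant data:

```python
def who_bmi_category(weight_kg: float, height_cm: float) -> str:
    """Classify BMI (kg/m^2) into the standard WHO adult categories."""
    bmi = weight_kg / (height_cm / 100) ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

# Comparing a participant's category before and after supplementation
# (numbers illustrative):
print(who_bmi_category(62.0, 170.0), "->", who_bmi_category(74.0, 170.0))
# normal -> overweight
```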
This study also explored sleeping duration before and after the use of vitamin B supplements; changes were not significant (p = 0.5; Table 5 ). Notably, men in the study complained of erectile dysfunction (ED). This symptom was present in 298 (76.4%) men, and this result was significant (p = 0.03; Table 6 ).
Discussion This study recruited 1,521 participants. Most of the participants were young Saudi females with a bachelor’s degree who were employed and had no chronic disease. Most participants had been using oral tablet vitamin B complex for at least 6-12 months (Table 2 ). A minority of the participants complained of mild gastrointestinal upset, but a significant proportion experienced no side effects. This finding is consistent with existing research: side effects of vitamin B intake are mainly mild nausea or vomiting and rarely reach hepatic toxicity [ 37 ]. In this study, a significant proportion of participants experienced a change in appetite, and this was accompanied by a significant change in participant BMI after taking vitamin B supplements. These findings are similar to two previous large randomized controlled trials, in which a significant proportion of normal-weight and underweight participants shifted to the overweight and obese BMI categories [ 8 , 31 ]. These increases in weight and the shift of participants toward obesity are mainly due to rapid fat deposition throughout the body and in muscle [ 29 , 30 , 33 - 35 ]. In the present study, changes in BMI before and after the use of vitamin B were present but did not reach significance (Table 4 ). This non-significant result may be due to an underpowered study; moreover, changes in each BMI category were significant (p = 0.025), but overall changes in BMI were not. This finding is important and motivates future studies to explore changes in BMI related to vitamin B supplement use. This study also explored increases in energy, which were significant and associated with a significant increase in sleeping time. This finding has been present in many previous studies [ 7 , 21 , 23 ]. However, upon a detailed exploration of existing sleep data, it becomes apparent that actual rest time decreased [ 17 - 20 , 22 , 35 ].
These findings were explored in the present study: although participants' sleep time increased, the change was not statistically significant at the group level, whereas pre- and post-supplement comparisons within specific sleep duration categories were significant, a pattern mirroring the BMI data (Table 5 ). In previous studies, the increase in sleeping time is clear, but sleeping patterns were affected, resulting in decreased resting time [ 17 - 19 , 35 ]. This finding requires further exploration; future studies should assess changes in sleeping time and patterns related to the intake of vitamin B supplements. On reviewing the literature, a pattern emerges: participants who take vitamin B notice an increase in appetite and weight gain, as evidenced by increased BMI, together with more sleeping time but less rest, which affects mood and is accompanied by an unusual increase in energy. These outcomes may relieve depression and promote food intake, leading to an increase in BMI with faster fat deposition [ 1 , 2 , 9 , 14 , 20 , 28 , 30 , 33 ]. Male participants in the present study noticed a significant increase in ED. This finding has been studied extensively in previous research, with similar results in some studies [ 38 , 39 ]. On the contrary, some researchers found no relation between vitamin B and ED [ 40 , 41 ]. Notably, in the studies that found no effect of vitamin B on ED, vitamin B was used only for short periods (less than 6 months [ 41 ]). The main limitation of this study is its cross-sectional design, although recruiting this many participants over a short time is most feasible through social media in a cross-sectional design. Another limitation is that data on the nutritional habits of the participants were unavailable.
These data were not collected in order to keep the survey easy for social media users to complete, but they could have partially explained the mechanisms underlying some of the significant findings. In addition, as data on supplement consumption and outcomes were self-reported, a reporting bias might exist: individuals may not have accurately reported their real vitamin intake, most likely in the form of underreporting. Lastly, the reason for vitamin B supplement use was not explored.
Conclusions This study presented important findings, including that vitamin B supplements may increase weight by increasing fat deposition. Vitamin B may also increase sleeping time and energy, but the quality of sleep and rest time should be explored further. Vitamin B supplements are associated with mild gastric upset but, more importantly, may cause ED in men. More research (randomized controlled trials and systematic reviews with meta-analysis) should be done on vitamin B and its effects on the body, and future studies should be large enough to explore the potential mechanisms underlying each difference.
Introduction B vitamins help generate energy within cells. A significant portion of populations in developed countries suffer a deficiency in one or more B vitamins. This study assesses the use of vitamin B supplements and their effects. Methodology This cross-sectional study was conducted among the general public in Saudi Arabia. Participants from all over Saudi Arabia were recruited and completed a self-administered survey designed to study the effects of vitamin B supplement use on appetite, BMI, energy, and sleep, and to identify any side effects. Inclusion criteria were age 18 years or older and use of vitamin B supplements. Children, pregnant women, adults who had never used vitamin B, and those not willing to participate in the study were excluded. Results In total, 1,521 adults were recruited. Most of the participants were young Saudi females. While taking vitamin B supplements, a minority of participants complained of mild gastrointestinal upset, but a significant proportion experienced no side effects. A significant proportion of participants experienced an increase in appetite, which was associated with a significant increase in BMI after taking vitamin B supplements. This study also explored increases in energy, which were significant and associated with significant increases in sleeping time. Male participants noticed a significant increase in erectile dysfunction (ED). Conclusions This study found significant effects of vitamin B supplements on BMI, appetite, energy, and sleep, as well as an increase in ED in male participants. More studies are needed to further explore these findings.
Appendices
CC BY
no
2024-01-16 23:47:18
Cureus.; 15(12):e50626
oa_package/7a/9f/PMC10789389.tar.gz
PMC10789390
38226118
Introduction Systemic lupus erythematosus (SLE) is a multifactorial autoimmune disease with multisystem involvement, resulting from autoantibody production due to immune system dysfunction and activation of the complement system. It is diagnosed using the American College of Rheumatology criteria [ 1 ]. The incidence of SLE ranges from 0.6% to 2% with a female-to-male ratio of 2:1. The peak age of incidence in females is during the third to seventh decade, whereas in males it is during the fifth to seventh decade [ 2 ]. A serious and atypical initial presentation of SLE is lupus enteritis, with a high mortality rate (53%) if complicated, or if treatment is delayed [ 3 ]. Lupus enteritis was defined by the British Isles Lupus Assessment Group in 2004 as vasculitis or inflammation of the small bowel, diagnosed based on clinical features supported by suggestive imaging, with a CT scan of the abdomen as the gold standard, and/or biopsy findings [ 3 , 4 ]. Zhang et al. reported that the prevalence of lupus intestinal pseudo-obstruction (IPO) was 1.96%, with an in-hospital fatality rate of 7.1%. IPO presents as an initial manifestation in 57.6% of lupus patients, and the rate of misdiagnosis has been reported to be 78% [ 5 ]. It is defined as the dilation of the bowel without the presence of an anatomical obstruction, with presenting signs and symptoms of nausea, vomiting, abdominal distension, and obstipation along with bowel dilation on X-ray or CT imaging [ 6 ]. SLE-IPO is strongly associated with genitourinary complications, including hydronephrosis, hydroureter, and cystitis. Approximately 60% of SLE-IPO cases have coinciding ureterohydronephrosis, which is defined as the dilation of the entire upper urinary tract, including the renal pelvicalyceal system and the ureter [ 7 ].
In this report, we describe a rare initial clinical presentation of SLE as lupus enteritis accompanied by IPO with bilateral hydronephroureter, as it presents a challenge in diagnosis and treatment.
Discussion Lupus enteritis with IPO presents a diagnostic and therapeutic dilemma, as the gastrointestinal manifestations in this case mimicked infectious enterocolitis, particularly intestinal tuberculosis, Crohn’s disease, and drug-induced colitis. Non-steroidal anti-inflammatory drugs, corticosteroids, and antibiotics have been employed for the management of this condition [ 8 - 10 ]. This patient had previously been treated for suspected infectious enterocolitis/intestinal tuberculosis before the lupus rash developed. Although the pathogenesis of lupus enteritis is poorly understood, immune complex deposition in the bowel wall with complement activation might be the driving force, with the jejunum and ileum most commonly involved, followed by the colon, duodenum, and rectum [ 11 ]. Patients with large intestine-dominant lupus enteritis have hydroureter and bladder wall thickening, which are complications of IPO [ 12 ]. Among patients with SLE-IPO, approximately two-thirds have positive anti-SS-A/Ro antibodies, hypocomplementemia, hypoalbuminemia, elevated CRP, and polyureterectasis [ 5 , 13 ]. Abdominal X-rays may show multiple air-fluid levels and enlarged small intestine segments [ 14 ]. CT abdomen findings of lupus enteritis include engorgement or an increased number of mesenteric vessels (“comb sign”) [ 11 ], bowel wall thickening and enhancement (“target sign”), and increased attenuation of mesenteric fat. These imaging findings, along with gut wall ischemia and enhancement, can also be seen in SLE-IPO [ 15 ]. Our patient had similar laboratory and radiological findings, with raised CRP, hypocomplementemia, hypoalbuminemia, high-titer anti-Ro antibodies, air-fluid levels on X-ray with bilateral hydronephroureter, gut and bladder wall thickening, ischemic colitis, and ileitis.
Previous case reports support the treatment strategy of intravenous methylprednisolone at a dose of 250 mg to 1 g per day, followed by prednisolone 0.5 mg/kg/day, together with bowel rest, hydration, electrolyte repletion, and parenteral nutrition. Hydroxychloroquine, cyclophosphamide, cyclosporin, mycophenolate mofetil, and intravenous immunoglobulins were administered as steroid-sparing agents. Patients showed a good immediate response to these treatment regimens and maintained remission with a low incidence of relapse [ 16 , 17 ]. If IPO is not treated, the smooth muscle layer can become atrophic and fibrotic and is no longer reversible with immunosuppression [ 18 ]. In our patient, improvement of gastrointestinal symptoms, hydronephroureter, and cognitive function was observed. We planned to start mycophenolate mofetil after completing six fortnightly pulses of 500 mg cyclophosphamide over three months (the Euro-Lupus protocol of cyclophosphamide) [ 19 ]. Our patient was managed successfully with the help of a multidisciplinary approach. All relevant investigations and treatment were performed without delay. Because of financial constraints, a few laboratory tests could not be repeated to assess the clinical response.
Conclusions Lupus enteritis with coexisting IPO and bilateral hydronephroureter poses a diagnostic and therapeutic challenge because of the atypical and uncommon manifestations of lupus and their overlapping features with intestinal tuberculosis and other inflammatory bowel conditions. In particular, when gastrointestinal symptoms occur before the typical features of SLE appear, the index of suspicion for lupus enteritis and SLE-IPO should be high in patients with combined gastrointestinal and genitourinary symptoms. Prompt diagnosis and treatment can prevent morbidity and mortality and avoid invasive procedures for hydronephroureter and IPO.
Systemic lupus erythematosus (SLE) is a systemic, autoimmune, multisystem disease. Lupus enteritis accompanied by intestinal pseudo-obstruction (IPO) is a serious and rare initial manifestation that can lead to high mortality and morbidity if diagnosis and treatment are delayed. Here, we present a very complicated case of a 36-year-old female Pakistani patient with lupus enteritis accompanied by IPO and bilateral hydronephroureter. The patient had a three-month history of fever, weight loss, recurrent diarrhea, vomiting, alopecia, and photosensitivity. She had a malar and discoid rash, with signs and symptoms of IPO and neuropsychiatric lupus. Her labs revealed positive anti-nucleosome antibodies (8 U/mL), anti-Ro antibodies (100 U/mL), and anti-La antibodies (53 U/mL); equivocal anti-dsDNA antibodies (7 U/mL) and anti-Sm antibodies (7 U/mL); direct Coombs-positive hemolytic anemia; raised C-reactive protein and erythrocyte sedimentation rate levels; low complement (C3 and C4) levels; and pyuria. IPO was evident on abdominal X-ray and CT scan. Her Systemic Lupus Erythematosus Disease Activity Index was 24, indicating a severe disease flare. She was treated with intravenous methylprednisolone, hydroxychloroquine, and intravenous 500 mg cyclophosphamide. Her laboratory parameters improved, and her mini-mental score improved from 0/30 to 18/30. She was discharged on oral prednisolone 0.5 mg/kg/day, hydroxychloroquine, trimethoprim-sulfamethoxazole (prophylaxis for Pneumocystis jirovecii pneumonia), and mineral and vitamin supplements. She was followed up on the 15th day after discharge for the next dose of cyclophosphamide, and her clinical and laboratory parameters were normal at that time, with gradual improvement in cognition.
Lupus enteritis with coexisting IPO and bilateral hydronephroureter poses a diagnostic and therapeutic challenge because of atypical and uncommon manifestations of lupus and overlapping features with intestinal tuberculosis and other inflammatory bowel conditions.
Case presentation A 36-year-old Pakistani female with no comorbid factors, with five children and a history of two neonatal deaths, reported recurrent episodes of diarrhea, vomiting, generalized abdominal pain, low-grade fever, and undocumented weight loss for three months. She had developed a rash over her face and body with hair fall, oral ulcers, and photosensitivity two months before her current hospitalization. This time she presented with aggravation of gastrointestinal symptoms for the last five days: loose stools, five episodes a day, soft in consistency, not containing blood, mucus, or pus, and associated with urgency but not tenesmus. The patient also had generalized abdominal pain and vomiting for the last five days, with vomitus containing food particles. There was no complaint of hematemesis or melena. Later, during her current hospital stay, she developed IPO with vomiting, aggravation of abdominal pain, abdominal distension, and absolute constipation. A few days later, after worsening of the gastrointestinal symptoms, she developed psychosis and cognitive impairment. She had initially been evaluated at a hospital for her gastrointestinal complaints, where intestinal tuberculosis was suspected and a workup was done, as the patient had a history of exposure to pulmonary tuberculosis from close relatives; however, there was no history of recent travel, no personal or family history of psychological disorders, and no family history of lupus or other autoimmune disorders. Previously, she was treated for suspected infective enterocolitis and urinary tract infection with intravenous antibiotics, metronidazole and meropenem. No record was found of administration of oral or intravenous steroids, hydroxychloroquine, or any disease-modifying anti-rheumatic drugs. Examination on current admission Her vitals were stable except for tachycardia with normal rhythm.
She had a malar rash, discoid rash, Shuster sign, alopecia, oral ulcers, and oral thrush (Figures 1 , 2 ). The abdomen was soft with generalized mild tenderness, but no visceromegaly or ascites was seen, and gut sounds were diminished. Later, when she developed IPO, her abdomen was tense, distended, and extremely tender, with absent gut sounds. Her mini-mental score was assessed when she developed neuropsychiatric symptoms, and it was 0/30 (severe). The rest of the systemic and musculoskeletal examinations were unremarkable. Laboratory investigations Her recent labs showed direct Coombs-positive hemolytic anemia, hypokalemia, and raised C-reactive protein (CRP). Her extractable nuclear antigen profile (reference value U/mL: negative <6, equivocal 6-12, positive >12) showed raised anti-nucleosome antibodies (8 U/mL), equivocal anti-dsDNA antibodies (7 U/mL) and anti-Sm antibodies (7 U/mL), and positive anti-Ro antibodies (100 U/mL) and anti-La antibodies (53 U/mL). Her previous laboratory testing revealed positive anti-nuclear antibodies by enzyme-linked immunosorbent assay (ELISA), low complement (C3 and C4) levels, anemia, proteinuria, 24-hour urinary protein of 322 mg/dL (reference <10 mg/dL), and urine culture and sensitivity showed Candida and Escherichia coli sensitive to meropenem. However, on the current admission, she had no proteinuria. Her hepatitis B surface antigen and anti-hepatitis C virus antibodies by ELISA were negative. MRI of the brain and echocardiography were normal. The rest of the labs are shown in Table 1 . Radiological investigation Radiological investigations included an X-ray of the abdomen in an erect position, which showed multiple air-fluid levels and strikingly large and prominent ureters bilaterally (Figure 3 ). Ultrasound of the abdomen and pelvis showed multiple fluid-filled gut loops, with circumferential diffuse wall thickening, bilateral hydronephroureter, and thick-walled urinary bladder.
CT of the abdomen further confirmed these findings and revealed diffuse wall thickening of the large bowel, mainly along the sigmoid colon, with proximal dilation of the small bowel and mild reactive ascites (Figure 4 ), as well as bilateral hydronephroureter with urinary bladder wall thickening (Figure 5 ). CT angiography of the abdomen with contrast suggested vasculitis secondary to SLE with resultant ischemic colitis and ileitis (Figure 6 ). Sigmoidoscopy and biopsy were also done, and the findings were normal. Management Treatment was started along the lines of active lupus; the Systemic Lupus Erythematosus Disease Activity Index (SLEDAI) was 24 (severe flare). The urinary tract infection and suspected lupus/infective enterocolitis were treated concurrently. The patient was kept nil per oral (NPO), and a nasogastric tube was passed for gastric decompression. The patient was catheterized to monitor urine output, and a central venous catheter was also placed, through which partial parenteral nutrition was started. Intravenous methylprednisolone 125 mg was administered twice a day, for a total dose of 3,500 mg during her current admission. Hydroxychloroquine 5 mg/kg/day was administered. Continuous intravenous fluids were administered along with intravenous potassium and intravenous albumin replacement. Her central venous pressure was checked regularly to prevent fluid overload. Moreover, injection meropenem 1 g intravenously eight hourly, tablet fluconazole 100 mg once a day (with monitoring of liver function tests and creatinine), and topical treatment for the rash were also given. The urologist advised conservative management with bladder catheterization for the bilateral hydronephroureter. For the management of psychosis, haloperidol 5 mg was administered intravenously as needed. After the urinary tract infection and fever had resolved, injection cyclophosphamide 500 mg was administered intravenously.
Tablet trimethoprim-sulfamethoxazole double strength was given on alternate days as prophylaxis against Pneumocystis jirovecii pneumonia. A renal biopsy could not be done as she was too sick at the time of admission and later developed nosocomial sepsis. Outcome and follow-up A few days after cyclophosphamide was administered, she developed a spiking fever of 102-103°F with rigors and chills, raising the suspicion of nosocomial infection. All her indwelling catheters were removed and sent for culture. Blood culture and sensitivity showed Pseudomonas aeruginosa , which was sensitive to amikacin. Her central venous catheter tip and Foley catheter tip cultures showed Candida infection, for which fluconazole was already being administered. After intravenous amikacin was administered, she became afebrile, her condition improved clinically, and her laboratory parameters also improved. At the time of discharge, after 20 days of hospitalization, her gastrointestinal symptoms had improved, with regular bowel movements, and improvement was seen in all clinical and radiological parameters. A repeat abdominal X-ray showed resolution of the bilateral hydronephroureter (Figure 7 ). The mini-mental score improved from 0/30 (severe cognitive impairment) to 18/30 (moderate cognitive impairment); the SLEDAI was not recalculated because complement levels and anti-dsDNA antibodies could not be retested owing to financial constraints. She was discharged on oral prednisolone 0.5 mg/kg/day, along with hydroxychloroquine, trimethoprim-sulfamethoxazole, and mineral and vitamin supplements, and was called for follow-up on the 15th day after discharge for the next dose of cyclophosphamide. She is now on regular follow-up, currently has no gastrointestinal symptoms, and shows progressive improvement in cognition. We plan to start mycophenolate mofetil after completing six fortnightly pulses of 500 mg cyclophosphamide over three months (Euro-Lupus protocol).
The patient remains tolerant and adherent to current therapy and no adverse event has been observed on her follow-up visits.
Faiza Naeem and Mishkawt U. Noor contributed equally to the work and should be considered co-first authors.
CC BY
no
2024-01-16 23:47:18
Cureus.; 15(12):e50628
oa_package/6b/e7/PMC10789390.tar.gz
PMC10789448
38225960
1. Introduction Attachment theory proposes a model that explains how early interpersonal relationships form cognitive patterns shaping an individual's perception of being worthy of care (care-seeking) and the perception of others as being reliable in providing care (caregiving). 2 , 3 Attachment style tends to be stable through life and affects how people think, feel, and behave in close relationships across the life span, “from the cradle to the grave” 3 (p. 129). Therefore, attachment security/insecurity is perceived as a diathesis that determines how individuals relate to each other and manage threatening situations such as an illness. 22 Previous studies have shown that attachment style has an indirect effect on pain management. Patients with chronic pain and insecure attachment report higher levels of pain-related stress, anxiety, depression, and catastrophizing, 4 , 18 – 20 , 28 and lower pain self-efficacy. 19 They are more likely to use emotion-focused than problem-focused coping, 21 report greater pain intensity and disability, 17 , 28 , 30 describe themselves and their pain in more threatening terms, and feel less capable of coping with pain. 18 , 21 Individuals with insecure attachment also report greater use of health care. 4 Patients who met the criteria for “chronic widespread pain” were 70% more likely to report insecure attachment than patients with no pain. 6 On the other hand, self-compassion is gradually taking its place as a resilience factor in patients with chronic pain, being associated with greater pain acceptance, lower levels of anxiety and depression, 5 and adaptive coping strategies (active coping, acceptance, and positive reframing) in patients with chronic pain. The concept of self-compassion refers to an individual's capacity to contain their feelings of suffering with a sense of warmth, connection, and care.
24 This involves an ability to be kind to oneself, to confront one's difficulties with understanding and as part of the human experience, and to keep one's emotions and thoughts in balanced awareness without overidentifying with them (ie, mindfulness). Early experiences either support or hinder the development of the soothing and threat systems and influence the formation of emotional self-regulation and the ability to be compassionate. 11 – 13 Attachment patterns and self-compassion seem likely to influence not only the emotional experience but also the means through which a person copes with the situation. Thus, the purpose of our study is mainly to explore the interrelationships between these related variables in patients with chronic pain, to improve intervention and help patients adopt more functional strategies for coping with chronic pain. The study's novelty is rooted in the fact that no research has articulated these 3 variables together, and very few studies have examined them separately, in patients with chronic pain. Attachment quality and self-compassion are relatively stable variables of an individual's psychological functioning and are not exclusive to the chronic pain context, so results similar to those of studies conducted in the general population are expected. 16 , 25 , 26 , 31 Specifically, our principal hypothesis is that secure attachment will be positively correlated with global self-compassion. Regarding the coping variable, we will explore how attachment quality and self-compassion are related to coping quality.
2. Methods 2.1. Participants In this study, 134 participants were eligible at the chronic pain centre of the hospital “Adolphe de Rothschild Foundation” in Paris: 97 women (72.4%) and 37 men (27.6%). The average age was 53.2 years (SD = 14.5). The majority were married (45.9%) or cohabiting (8.27%) with their partner. Forty-seven percent of them had a higher education diploma. Table 1 describes the sociodemographic characteristics of all eligible participants. 2.2. Measures This quantitative study used a sociodemographic questionnaire and 3 self-report questionnaires: The Relationship Scale Questionnaire—Reviewed Coding (RSQ-CR) was developed by Bartholomew and Horowitz, 2 and the coding of its French version was reviewed by Tereno et al. 29 The RSQ-CR is a self-administered adult attachment scale whose output variables are a global security scale and 4 subscales that define 4 different attachment styles. Each of the 30 items is scored on a 5-point Likert scale (RSQ-CR scoring details in supplementary materials, available at http://links.lww.com/PR9/A199 ). The RSQ-CR has satisfactory internal consistency across its scales, with Cronbach's α of 0.69 to 0.82. For the factors “detached” (α = 0.69), “secure” (α = 0.73), “preoccupied” (α = 0.76), and “disorganised” (α = 0.79), the internal consistency was globally good, and for the factor “global security index” (α = 0.82), the internal consistency was very good. 29 The Self-Compassion Scale (SCS) 15 , 24 is a self-compassion questionnaire for adults that includes 26 items, coded on a 5-point Likert scale and grouped into 6 subscales that measure 3 main components: self-kindness vs self-judgment, common humanity vs isolation, and mindfulness vs overidentification (SCS scoring in supplementary materials, available at http://links.lww.com/PR9/A199 ). The French version of the SCS has satisfactory internal consistency across its scales, with Cronbach's α of 0.74 to 0.88.
For the factors “common humanity” (α = 0.74), “overidentification” (α = 0.77), and “isolation” (α = 0.79), the internal consistency was good. For the factors “self-judgment” (α = 0.85), “mindfulness” (α = 0.81), and “self-kindness” (α = 0.89), the internal consistency was very good. Internal reliability for the total score of the French version of the SCS was excellent (α = 0.94). 15 The Brief COPE (state version) 23 is a 28-item self-administered questionnaire of the coping state, which takes into account the specific way in which people cope with a given stressful situation. It is composed of 14 subscales (2 items each) assessing the following distinct coping dimensions: (1) active coping, (2) planning, (3) seeking instrumental social support, (4) seeking emotional social support, (5) expressing feelings, (6) behavioural disengagement, (7) distraction, (8) blaming, (9) positive reinterpretation, (10) humour, (11) denial, (12) acceptance, (13) religion, and (14) substance use. Participants rate each item on a 4-point Likert scale (“I have not been doing this at all,” “a little bit,” “a medium amount,” and “I have been doing this a lot”), scored 1 to 4 per item, with no reverse scoring. The French version of the Brief COPE state version has good psychometric qualities: the confirmatory factor analysis showed satisfactory results, with χ 2 equal to 391, P < 0.05, a GFI of 0.87, an AGFI of 0.80, and an RMR of less than 0.06. 23 Muller and Spitz 23 distinguish between functional and dysfunctional coping strategies: functional strategies aim to adjust the person to the situation and to preserve a certain quality of life (planning, active coping, positive reframing, and acceptance); some strategies are functionally variable (instrumental support, emotional support, venting, religion, humour, and distraction), their functionality depending on the circumstances and each person's particular use of them.
Dysfunctional strategies promote neither a person's adjustment to a given situation nor their well-being in the face of it (self-blame, denial, substance use, and behavioural disengagement). 2.3. Procedures 2.3.1. Recruitment procedures Each patient with chronic pain (defined as pain persisting longer than 3 months) who made a new request for treatment at the chronic pain centre of the hospital “Adolphe de Rothschild Foundation” in Paris during the study's inclusion period (December 2018–December 2020) was consecutively offered participation in the research. Inclusion criteria were being an adult (older than 18 years), French-speaking or bilingual, and affiliated with or a beneficiary of a Social Security plan. Patients already treated in another pain centre in the past, patients being followed for cancer, breastfeeding women, patients being followed for Parkinson disease, patients with a previous psychiatric diagnosis, and patients benefiting from a legal protection measure were excluded from the research, according to the guidelines of the French Ethics Committee. All patients who agreed to participate provided written informed consent. The study adhered to the tenets of the Declaration of Helsinki. A French Ethics Committee (Comité de Protection des Personnes Est IV) approved this study on October 9, 2018 (IDRCB: 2018-A01167-48). This clinical study was registered at clinicaltrials.gov under the number NCT03845816. 2.3.2. Administrative procedures For the comfort of the participants, an appointment with the psychologist–researcher of the chronic pain centre was proposed on the same day as one of their first 2 appointments. At the end of this first assessment, an additional, optional appointment was proposed for the restitution of the questionnaire results and for a possible therapeutic orientation. 2.3.3. Statistical procedures Data were statistically analysed using R (version 4.0.3).
Descriptive statistics are reported as mean and SD for continuous variables and as frequency and percentage for categorical variables. The t test, or the Wilcoxon test when appropriate, was used to compare continuous variables (self-compassion scores, coping strategies scores, and attachment quality scores) between groups (secure vs insecure attachment; women vs men); nonparametric tests were used when assumptions were not met. The χ 2 test, or the Fisher exact test when appropriate, was used to compare qualitative parameters (sex and attachment type). Correlations among self-compassion scores, coping strategies scores, and attachment quality scores were computed using the Pearson method, or the Spearman method when appropriate. As an exploratory analysis, multivariate linear regression was conducted to assess the association of attachment type with the global self-compassion score, adjusted for sex. Because this is an exploratory study, no correction for multiple testing was applied. A mediation analysis was performed to determine whether total self-compassion mediated the relationship between attachment type and coping strategies. A P -value <0.05 was considered statistically significant.
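The test-selection logic described above (parametric test when assumptions hold, nonparametric otherwise) can be sketched as follows. The study itself used R 4.0.3; this Python/SciPy version, with a Shapiro-Wilk check standing in for the unspecified assumption checks, is illustrative only and not the authors' code:

```python
from scipy import stats

def compare_groups(group_a, group_b, alpha=0.05):
    """t test if both groups pass a Shapiro-Wilk normality check,
    otherwise the nonparametric Mann-Whitney (rank-sum) test."""
    normal = (stats.shapiro(group_a).pvalue > alpha
              and stats.shapiro(group_b).pvalue > alpha)
    if normal:
        return stats.ttest_ind(group_a, group_b)
    return stats.mannwhitneyu(group_a, group_b)

def correlate(x, y, alpha=0.05):
    """Pearson correlation when both variables look normal, Spearman otherwise."""
    normal = (stats.shapiro(x).pvalue > alpha
              and stats.shapiro(y).pvalue > alpha)
    return stats.pearsonr(x, y) if normal else stats.spearmanr(x, y)
```

Each call returns a (statistic, p-value) pair, so the two-sided p-value can be compared directly against the 0.05 threshold used in the study.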
3. Results 3.1. Sample description at inclusion Regarding their pain status, described in Table S1, available at http://links.lww.com/PR9/A199 , participants most frequently presented with peripheral neuropathic pain (15%), low back pain (15%), headaches (9.8%), and central neuropathic pain (8.3%). 3.2. Attachment quality In our sample, the mean secure score was 4.30 (SD = 0.61), which is higher than the secure threshold point of 3.67. Regarding attachment quality, described in Table 2 , 45.5% (n = 60) of participants had a secure attachment style, and 54.5% (n = 72) had an insecure attachment style. 3.3. Self-compassion As seen in Table 3 , subjects in our sample had a mean self-compassion score of 2.92 (SD = 0.64), which corresponds to a moderate level of self-compassion (moderate level from 2.5 to 3.5 on the SCS). All other subscale scores were moderate for our population. The overidentification, self-judgement, and isolation subscales indicate less self-compassion, and they are reversed for the total score calculation. Men reported a significantly ( P = 0.02) higher mean global self-compassion score (mean = 3.13; SD = 0.61) than women (mean = 2.84; SD = 0.63). Self-kindness scores were also significantly ( P = 0.008) higher in men (mean = 3.17; SD = 0.97) than in women (mean = 2.66; SD = 0.94). 3.4. Coping strategies Table 4 shows mean values and SDs of the use of coping strategies. Two Wilcoxon tests were performed with 2 different variables: sex and attachment quality (secure or insecure). Women in our sample (mean = 4.49; SD = 1.74) reported significantly lower ( P = 0.05) acceptance coping than men (mean = 5.59; SD = 1.94). Securely attached individuals reported a significantly higher ( P = 0.02) mean active coping score (mean = 5.22; SD = 1.44) than insecurely attached individuals (mean = 4.60; SD = 1.59). 3.5.
Attachment quality and self-compassion The t test showed that securely attached participants (mean = 3.14; SD = 0.61) reported a significantly higher ( P < 0.001) global self-compassion score than insecurely attached individuals (mean = 2.73; SD = 0.61). Securely attached subjects also reported significantly higher ( P = 0.01) levels of self-kindness (mean = 3.01; SD = 1.01) than insecurely attached ones (mean = 2.62; SD = 0.90). Scores on the isolation items were also significantly higher ( P = 0.002) for securely attached patients (mean = 3.29; SD = 1.13) than for insecurely attached ones (mean = 2.69; SD = 0.96) (Fig. 1 , Table S2, available at http://links.lww.com/PR9/A199 ). At an alpha risk set at 0.05, in multivariate analysis, being a woman decreased the total self-compassion score (−0.30 [−0.53 to −0.07], P = 0.01) compared with being a man, and having a secure attachment increased the total self-compassion score (0.41 [0.20–0.61], P < 0.001) compared with having an insecure attachment. Correlation analysis (Fig. 2 , Table S3, available at http://links.lww.com/PR9/A199 ) showed that the global security scale was significantly and positively correlated with the global self-compassion ( r = 0.41, P < 0.05), self-kindness ( r = 0.26, P < 0.05), self-judgment ( r = 0.29, P < 0.05), isolation ( r = 0.35, P < 0.05), mindfulness ( r = 0.23, P < 0.05), and overidentification ( r = 0.38, P < 0.05) scores. The insecure detached attachment scale was significantly and negatively correlated with the self-judgment ( r = −0.27, P < 0.05), isolation ( r = −0.31, P < 0.05), and total self-compassion ( r = −0.30, P < 0.05) scores. The insecure disorganised attachment score was significantly and negatively correlated with the self-judgement ( r = −0.24, P < 0.05) and isolation ( r = −0.05, P < 0.05) scores. 3.6. 
Attachment quality and coping strategies Table 5 presents the correlations between attachment styles and the coping strategies implemented by patients with chronic pain. The global security score was significantly and positively correlated with instrumental support coping ( r = 0.19, P < 0.05) but negatively with behavioural disengagement coping ( r = −0.35, P < 0.05). Preoccupied attachment was significantly and positively correlated with instrumental support coping ( r = 0.19, P < 0.05) and negatively with emotional support coping ( r = −0.2, P < 0.05). The secure attachment score was significantly and positively correlated with active coping ( r = 0.29, P < 0.05), planning coping ( r = 0.28, P < 0.05), and instrumental support coping ( r = 0.30, P < 0.05), and significantly and negatively correlated with behavioural disengagement coping ( r = −0.46, P < 0.05). Preoccupied attachment was significantly and negatively correlated with emotional support coping ( r = −0.29, P < 0.05). 3.7. Self-compassion and coping strategies Table 6 presents the correlations between coping strategies and self-compassion. Active coping was positively and significantly correlated with the overall self-compassion score ( r = 0.48, P < 0.05), as well as with self-kindness ( r = 0.29, P < 0.05), common humanity ( r = 0.29, P < 0.05), isolation ( r = 0.39, P < 0.05), mindfulness ( r = 0.45, P < 0.05), and overidentification ( r = 0.39, P < 0.05). The planning score was positively and significantly correlated with mindfulness ( r = 0.33, P < 0.05). Venting was negatively and significantly correlated with self-judgment ( r = −0.18, P < 0.05). Positive reframing was positively and significantly correlated with overall self-compassion ( r = 0.40, P < 0.05), as well as with common humanity ( r = 0.31, P < 0.05), isolation ( r = 0.24, P < 0.05), and mindfulness ( r = 0.52, P < 0.05). 
Acceptance coping was positively and significantly correlated with overall self-compassion ( r = 0.44, P < 0.05), as well as with self-kindness ( r = 0.24, P < 0.05), common humanity ( r = 0.33, P < 0.05), isolation ( r = 0.36, P < 0.05), mindfulness ( r = 0.35, P < 0.05), and overidentification ( r = 0.31, P < 0.05). Self-blame was negatively and significantly correlated with overall self-compassion ( r = −0.27, P < 0.05), as well as with self-kindness ( r = −0.18, P < 0.05), self-judgement ( r = −0.39, P < 0.05), isolation ( r = −0.25, P < 0.05), and overidentification ( r = −0.20, P < 0.05). Humour was positively and significantly correlated with overall self-compassion ( r = 0.27, P < 0.05), self-kindness ( r = 0.20, P < 0.05), isolation ( r = 0.29, P < 0.05), and overidentification ( r = 0.23, P < 0.05). Religion coping was positively and significantly correlated with common humanity ( r = 0.2, P < 0.05), and substance use coping was significantly and negatively correlated with common humanity ( r = −0.21, P < 0.05). Finally, behavioural disengagement was negatively and significantly correlated with overall self-compassion ( r = −0.36, P < 0.05), self-kindness ( r = −0.22, P < 0.05), self-judgement ( r = −0.20, P < 0.05), isolation ( r = −0.22, P < 0.05), mindfulness ( r = −0.30, P < 0.05), and overidentification ( r = −0.37, P < 0.05). According to our results, active coping, self-blame, and behavioral disengagement were each significantly associated with attachment type, and attachment type was significantly associated with total self-compassion ( P ≤ 0.05). For these variables, a mediation analysis was conducted to determine whether self-compassion acts as a mediator. When coping strategies were modeled on attachment type and total self-compassion, attachment type was no longer significantly associated with these coping strategies (Table 7 ). Therefore, full mediation by total self-compassion was observed between attachment type and the 3 coping strategies. 
The average causal mediation effects were statistically significant at an alpha risk set at 0.05.
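The mediation logic above (attachment type → total self-compassion → coping strategy) can be illustrated with a minimal product-of-coefficients (a × b) sketch with a bootstrap interval. This is not the authors' analysis code; the data and effect sizes below are simulated purely for illustration.

```python
import numpy as np

def ab_indirect(x, m, y):
    """Indirect (a*b) effect: a = slope of mediator on predictor,
    b = slope of outcome on mediator, controlling for the predictor."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(0)
n = 132  # same sample size as the study, but simulated data
attach = rng.integers(0, 2, n).astype(float)      # 0 = insecure, 1 = secure
selfcomp = 0.4 * attach + rng.normal(0, 0.6, n)   # attachment -> self-compassion
coping = 0.5 * selfcomp + rng.normal(0, 1.0, n)   # self-compassion -> active coping

point = ab_indirect(attach, selfcomp, coping)

# percentile bootstrap for the indirect effect
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(ab_indirect(attach[idx], selfcomp[idx], coping[idx]))
ci = np.percentile(boots, [2.5, 97.5])
print(round(point, 3), ci)
```

Full mediation, as reported in the study, corresponds to the direct path from predictor to outcome becoming nonsignificant once the mediator is in the model, while the indirect a × b effect remains distinguishable from zero.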
4. Discussion The main purpose of our study was to explore the interrelationships between attachment patterns, self-compassion, and coping strategies in patients with chronic pain, to improve therapeutic interventions and help patients adopt more functional strategies to cope with chronic pain. In our study, most eligible participants (54.5% of the 134) reported an insecure attachment. The results of our study support our principal hypothesis: securely attached individuals have significantly higher global self-compassion scores than insecurely attached individuals. Our findings are partially consistent with previous findings in both clinical and general population samples. In a clinical population of breast cancer survivors, a recent study revealed significant indirect effects of attachment anxiety and attachment avoidance (on both stress and the perceived negative impact of cancer) through lower self-compassion. 1 In a study of university students and community adults, Wei et al. 31 found that self-compassion mediated the relationship between attachment-related avoidance, emotional distress, and anxiety. In a general adult population sample, attachment security predicted higher levels of self-compassion, and self-compassion partially mediated the relationships between perceived maternal support, family functioning, and attachment security as predictors of well-being. 25 Pepping et al. 26 experimentally confirmed that enhancing state attachment security leads to an increase in state self-compassion. More recently, Mackintosh et al. 16 found that low levels of self-compassion and high levels of interpersonal problems were predicted by attachment-related avoidance in patients with clinical levels of depression and anxiety, and that self-compassion mediated the relationship between attachment avoidance, emotional distress, and anxiety. 
For coping, our results showed that secure attachment is positively correlated with functional strategies, such as active coping and planning, and with strategies of variable functionality, such as the use of instrumental and emotional support. Secure attachment was significantly and negatively correlated with behavioural disengagement, which is considered a dysfunctional strategy. The only empirical evidence linking attachment theory to pain coping was reported by Mikulincer and Florian, 21 who cited unpublished data indicating that patients with insecure attachment use more emotion-focused (acceptance, emotional social support, humour, positive reframing, and religion) and fewer problem-focused coping strategies (active coping, instrumental support, and planning) to deal with their pain than patients with a secure attachment style. In our research, self-compassion was also significantly and positively correlated with functional strategies (active coping, positive reframing, and acceptance) and negatively correlated with dysfunctional strategies (self-blame and behavioural disengagement). There are very few previous studies linking the concepts of self-compassion and coping. In a study by Sirois et al. 27 with a sample of women with chronic pain, the authors attempted to build a model describing which coping strategies are correlated with self-compassion and coping self-efficacy, as explanatory variables of stress. In support of our finding, the strategies positively correlated with both variables were active coping, acceptance, and positive reframing (adaptive strategies). 27 By contrast, the strategies negatively correlated with both variables were behavioural disengagement and self-blame (nonadaptive strategies). 
27 In a more recent study in a clinical population, 8 self-compassion accounted for more variance in the use of flexible pain coping strategies (ie, acceptance, mindfulness, values, and cognitive defusion) and less variance in the use of traditional pain coping strategies (ie, pacing, relaxation, and positive self-statements). Our results clearly highlight the relationship between attachment style, self-compassion, and coping strategies in patients with chronic pain. There are positive correlations between secure attachment, higher levels of self-compassion, and functional coping, and negative correlations between insecure attachment, lack of self-compassion, and dysfunctional coping. Full mediation by total self-compassion was observed between attachment type and the 3 coping strategies (active coping, self-blame, and behavioral disengagement). It seems that secure attachment and self-compassion can be considered protective factors in chronic pain. The results also show that these 2 variables are rather interdependent, partly explaining the same part of the variance in the use of coping strategies. 4.1. Clinical implications In view of these results, the management of patients with chronic pain using programs targeted at the development of self-compassion could be beneficial. There are 2 programs that focus primarily on the development of self-compassion: the first is mindful self-compassion (MSC) training, 10 and the second is compassionate mind training (CMT). 12 These 2 programs are based on different theoretical assumptions: MSC is based on the third wave of cognitive behavioural therapy and mindfulness, whereas CMT was developed from notions of developmental psychology. Yet, they share many exercises and meditation practices that allow patients to grow in self-compassion. Our findings underline that the attachment pattern may be at the basis of someone's ability to be self-compassionate and to cope adequately with a difficult situation. 
Although interventions based on the development of self-compassion can be very useful for developing better coping, an attachment-based therapy could be even more beneficial in the long term for patients with chronic pain. In schema therapy (ST), eg, an integrative and attachment-based model of psychotherapy developed by Jeffrey Young, 32 "limited reparenting" proposes a corrective emotional experience as a partial antidote to needs that were not adequately met in childhood. Early maladaptive schemas, 32 like internal working models, 3 are primarily founded on early interactions with the primary caregiver. Although ST is based on specific (cognitive, behavioural, interpersonal, experiential, etc) techniques, like all the other integrative and attachment-based therapies it focuses on validating feelings and understanding schema origins, while also aiming to provide a corrective experience. The therapeutic relationship becomes a safe transitional space, and the ultimate aim is to help the patient become emotionally autonomous. From this point of view, we can reverse the hypothesis and propose that if pain complaining is indeed an attachment behaviour, a "cry for security," as Kolb hypothesised, 14 then the patient who becomes able, through therapy, to recognise and "repair" their schemas will no longer be in need of this specific attachment behaviour. This does not mean that it will necessarily change the pain treatment, but it may change the perception of oneself as worthy of care and of care providers as more reliable. Securely attached individuals report less health care usage. 
4 In clinical practice with patients with chronic pain, as therapy progresses, the more patients become conscious of their insecure attachment patterns, the less they use pain complaints to express psychological distress, and the more easily they describe a physical complaint about pain without catastrophizing on an emotional level. Dissociating early insecure attachment experiences from pain-related needs, and from the responses to those needs by personal relationships or health care professionals, may be a key to better pain management and treatment. In a recent literature review, 17 articles were included examining the association between attachment and different pain conditions from childhood to adolescence. 9 The findings showed "at-risk" attachment patterns and information processing, and higher rates of attachment insecurity and unresolved trauma or loss, in clinical groups (children experiencing acute, recurrent, or chronic pain) compared with normative samples. It seems that, among other relevant factors, attachment insecurity plays a predominant role in the maintenance of the chronic pain condition, intensifying the pain experience or obstructing effective recovery. 7 The awareness that insecure attachment patterns may predispose to the development and maintenance of a chronic pain condition also concerns all health professionals. As the attachment system is triggered by a painful stimulus, perceived as a threatening situation, the approach of health professionals can be crucial for a patient with insecure attachment or unresolved trauma. An attachment-informed approach could offer a better understanding of the complexity of pain clinical practice, as well as appropriate support beneficial to both patients and health professionals, which could also increase the effectiveness of interventions. 4.2. 
Study strengths, limits, and future directions of research The main strength is the study's novelty and the fact that all participants were included at a single time point at the very beginning of their treatment, which limits selection bias. The main limitation of the study is that we assessed attachment, self-compassion, and coping using brief self-report measures. All 3 assessments reflect individuals' subjective perceptions, which may be vulnerable to reporting bias. Qualitative methods or observer-rated (hetero-assessment) instruments, combined with self-report questionnaires, could provide more solid results in future research. In addition, there was a very wide range of diagnoses, including patients with a diagnosis not yet defined. Further research could group together the most frequently encountered diagnoses, making it possible to explore whether the results need to be qualified according to the disease, the degree of disability, the region of the body where the pain is most prominent, etc. With respect to the exclusion criteria, there was no age limit, to maximise the recruitment of new patients. Although attachment patterns are considered stable over time, research in a wider sample would allow us to group patients according to age, to further study self-compassion and coping scores.
5. Conclusion In conclusion, this study supports the finding that insecurely attached individuals have significantly lower levels of self-compassion and use less adaptive coping strategies than securely attached individuals. Pain therapeutic approaches should thus increase their focus on attachment as a possible way of improving the efficacy of management. Further research is, however, required to explore, with longitudinal designs, how attachment patterns and self-compassion are linked to unresolved trauma, other domains of psychopathology, pain intensity, and early maladaptive schemas in patients with chronic pain.
Supplemental Digital Content is Available in the Text. Secure attachment is associated with higher self-compassion and functional coping; negative correlations are found between insecure attachment, lack of self-compassion, and dysfunctional coping in patients with chronic pain. Abstract Introduction: In recent years' literature, attachment insecurity is described as a vulnerability factor among patients with chronic pain, associated with poor pain coping, anxiety, depression, catastrophizing, greater pain intensity, and disability. Self-compassion, on the other hand, is described as a protective factor, associated with lower levels of negative affect, catastrophizing, depression, and anxiety in patients with chronic pain. Methods: In this study, we aimed to explore the association between attachment, self-compassion, and coping strategies in patients with chronic pain. Thus, 134 eligible patients with chronic pain were recruited at the certified Evaluation and Treatment Pain Center of the A. de Rothschild Foundation in Paris. We used a sociodemographic questionnaire, the Relationship Scale Questionnaire (RSQ-RC), the Self-Compassion Scale, and the Brief COPE. Results: The results supported our principal hypothesis; securely attached participants reported a significantly higher global self-compassion score than insecurely attached ones. Secure attachment and higher self-compassion levels were positively correlated with functional coping strategies and negatively correlated with dysfunctional ones. Discussion: Attachment patterns may be the basis of someone's ability to be compassionate toward themselves and to cope adequately with a difficult situation, such as a chronic pain condition. An attachment-informed approach to pain management could offer a better understanding of the complexity of this clinical condition and potentially provide appropriate support for both patients and health professionals, aiming to improve the effectiveness of interventions. 
Disclosures The authors have no conflict of interest to declare. Appendix A. Supplemental digital content Supplementary Material
Acknowledgements The authors thank the staff and the patients at the diagnostic and treatment centre of chronic pain of the hospital A. de Rothschild Foundation in Paris and specifically Dr Anne-Margot Duclot, Dr. Jean Bruxelles, Dr. Jean-Baptiste Thiebaut, and Ms Meriem Hachem Elaib for their valuable help. This study was promoted and financed by the hospital A. de Rothschild Foundation, and the hospital was given the MERRI (SIGREC-type) endowment to set it up. The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available because of application of the European Data Protection Regulation.
CC BY-ND
Pain Rep. 2023 Aug 18; 8(5):e1087
1. Introduction Low-dose naltrexone (LDN), in daily doses of 1 to 5 mg, 45 is, due to its potential analgesic and anti-inflammatory effects, increasingly used as an off-label treatment of fibromyalgia (FM) and some autoimmune pain conditions. 10 , 36 , 37 , 45 , 57 Naltrexone is well known for its use in the treatment of opioid and alcohol addiction in daily doses of at least 50 mg. Naltrexone is a μ-opioid-receptor antagonist with, to a lesser extent, δ-receptor antagonistic properties, bearing a close structural similarity to naloxone. Increased oral bioavailability and a longer half-life (T 1/2β ) of the active metabolite 6β-naltrexol make naltrexone pharmacologically preferable to naloxone. 57 Recent experimental studies have demonstrated that LDN acts as an immune modulator in the CNS. 19 , 31 Low-dose naltrexone has been used in disorders with a putatively significant neuroinflammatory component, eg, chronic pelvic pain, complex regional pain syndrome (CRPS), interstitial cystitis, epilepsy, FM, inflammatory bowel disease, and multiple sclerosis. 24 , 32 , 57 Fibromyalgia is a chronic, nociplastic, 2 , 4 musculoskeletal disease of unknown etiology characterized by widespread fluctuating pain, fatigue, low quality of sleep, and high incidences of depression and anxiety disorders. 23 , 43 In Europe and the United States, 2% to 8% of the population experience FM, 12 with women affected more frequently than men. 7 , 55 The main drug classes recommended in FM are antidepressants, anticonvulsants, and opioids. 2 , 30 Although demonstrating clear evidence for analgesic efficacy, these drugs are associated with a risk of initiating severe arrhythmias, cardiac dysfunction, neuropsychiatric disorders, serotonergic syndrome, and enhanced postoperative morbidity. 18 Anticonvulsants have been recommended as alternatives but are associated with the development of dependence, substance abuse, and suicidality. 
27 , 33 The European League Against Rheumatism (EULAR) recommends that tramadol can be used in severe pain when nonpharmacological multimodal therapies fail. 25 Paradoxically, in clinical FM studies, opioids generally demonstrate limited analgesic efficacy, 15 but because tramadol also has serotonin-norepinephrine reuptake inhibitor (SNRI) properties, this may explain its observed weak analgesic effect. All opioids, including tramadol, 40 carry a risk for the development of tolerance, dependence, and substance abuse, highlighted by the "opioid epidemic" in the United States. 30 , 47 Small-scale studies 10 , 55 , 56 indicate analgesic efficacy of LDN with few adverse effects, and thus, our rationale for this study was to corroborate these findings in a larger sample. Only 6 studies address LDN as a treatment of FM. For example, a pilot study by Younger et al. described reduced symptoms in all 10 included patients with FM when receiving LDN. 55 A randomized, double-blind, placebo-controlled study, also by Younger et al., 56 showed a significant decrease in pain intensity in 31 included patients with FM. A study investigating the dose-response relationship among 25 patients with FM 10 found that 4.5 mg/day was effective in 95%. A study including 8 patients with FM showed a decrease in proinflammatory cytokines and less pain and fewer symptoms after 8 weeks of LDN treatment. 31 An explorative study using the cold pressor test (CPT) to measure pain included 15 patients with FM and showed improved scores after receiving LDN. 29 Finally, an explorative study using the CPT in patients on chronic opioid treatment showed improved CPT scores among 21 patients with FM after LDN treatment. 22 The main objectives of this study were as follows: First , to examine whether LDN was associated with higher analgesic efficacy and improvement in physical function score compared with placebo. 
Second , to ascertain the analgesic efficacy of LDN in experimental pain procedures using quantitative somatosensory testing. 9 , 28 Third , to examine the pharmacokinetics of LDN and its main metabolite, 6β-naltrexol.
2. Methods 2.1. Study management The Committee of Health Research Ethics of The Region of Southern Denmark (S-20150159), the Danish Medicines Agency (2015102044), and the Data Inspection Authority of The Region of Southern Denmark (2008-58-0035) approved the study protocol. The study was registered in EUDRACT (2015-002972-26) and ClinicalTrials.gov (NCT02806440) and conducted in accordance with Good Clinical Practice (GCP) and Good Manufacturing Practice (GMP). 2.2. Investigational centers The study was planned as a 2-center collaboration between The Multidisciplinary Pain Center at Rigshospitalet, Copenhagen (MPC-C), a tertiary university facility, and The Multidisciplinary Pain Clinic at Friklinikken, Grindsted (MPC-G), a secondary health care facility. In total, an enrollment of 140 patients with FM was planned, with 70 patients with FM allocated at each investigational center. Due to organizational changes at MPC-C, only 1 patient was randomized at MPC-C. This patient withdrew consent after intake of 1 tablet. The study continued with 1 investigational center, MPC-G, planning to include 70 patients with FM. 2.3. Study design 2.3.1. Study setting The study was an investigator-initiated and investigator-driven study using a block-randomized, double-blind, placebo-controlled, crossover design (see Fig. 1 showing overview of study). The patients with FM were randomized to treatment with LDN or placebo in the first or second treatment period. The patients with FM were randomized and allocated equally according to a computer random table method. The pharmacy managed the randomization sequence, which was generated by a web-based randomization site. 4 The sequence was generated using the second generator function, applying blocks of 10 and balanced permutations (see text, Supplemental Digital Content 1, text describing study setting, available at http://links.lww.com/PR9/A195 ). 
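The permuted-block scheme described above (blocks of 10 with balanced permutations) can be sketched as follows. This is an illustrative sketch of the general technique, not the pharmacy's actual web-based generator.

```python
import random

def block_randomize(n_patients, block_size=10, seed=1):
    # Balanced permuted blocks: each block of 10 contains 5 LDN and
    # 5 placebo assignments in a randomly shuffled order.
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_patients:
        block = ["LDN"] * (block_size // 2) + ["placebo"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_patients]

seq = block_randomize(70)
print(seq[:10].count("LDN"))  # each full block is balanced -> 5
```

Blocking guarantees that allocation stays balanced throughout recruitment, which matters when patients are included consecutively over time.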
The randomization lists were stored in secure and locked confines, accessible only by the principal investigator. In case of a medical emergency, the code could be individually unblinded. Patients with FM were included consecutively and evenly over time. New patients with new randomization numbers were allocated to replace dropouts. Patients, study staff, and staff in the pain clinic were blinded throughout the study period. The study was executed transparently and is presented in accordance with CONSORT guidelines. Information and data collection from patients were done by 5 trained staff members. Study data were manually entered into an electronic case report form (e-CRF). After the last patient visit, data were stored in OPEN, a dedicated research registry in The Region of Southern Denmark (see text, Supplemental Digital Content 1, text describing study setting and source data, available at http://links.lww.com/PR9/A195 ). 2.4. Outcomes 2.4.1. Primary outcomes The primary outcomes were patients with FM reporting on function, total impact, and symptoms in the Fibromyalgia Impact Questionnaire revised (FIQR) 5 (cf. 2.8.2.1) and reporting on pain intensity using the summed pain intensity ratings (SPIR) 8 , 26 (cf. 2.8.2.2). 2.4.2. Secondary outcomes The secondary outcomes were as follows: (1) diary-based questionnaires: Brief Pain Inventory-Short Form (BPI-SF), 51 Daily Sleep Interference Scale (DSIS), 48 Hospital Anxiety and Depression Scale (HADS), 58 PainDETECT Questionnaire (PD-Q), 16 and Pain Catastrophizing Scale (PCS) 35 , 42 (cf. 2.8.2.3); (2) quantitative somatosensory testing (QST) 28 paradigms (cf. 2.9); (3) pharmacokinetics of naltrexone and the main metabolite 6β-naltrexol (cf. 2.10.2). 2.4.3. Timeline The study had a 3-phase setup (see Fig. 2 showing study phases). The first phase included baseline assessment (BA1) (day −3 to day 1) and a treatment period (day 1 to day 21) including an outcome assessment (OA1). 
The second phase was a washout period (day 22 to day 32). The third phase included a baseline assessment (BA2) (day 33 to day 36) followed by a treatment period (day 36 to day 56) including an outcome assessment (OA2). The patients with FM attended 6 separate examination days (see Table 1 , showing overview of the study). 2.5. Drugs Naltrexone (4.5 mg) and identical placebo tablets were manufactured and packed in a blinded and randomized fashion by the pharmacy (magistral production, Glostrup Apotek, Copenhagen, Denmark). Naltrexone (4.5 mg) is not marketed in Denmark but was manufactured by permission of the Danish Medicines Agency (DMA), which controls the authorization and licensing of the manufacturing process according to GMP. Naltrexone 4.5 mg was chosen for this study because it is the dosage typically applied in existing studies. 39 , 56 2.6. Patients All patients with FM were screened by a medical specialist in rheumatology and fulfilled the American College of Rheumatology (ACR) 2011 criteria of FM 3 , 52 – 54 before enrollment in the study. The patients with FM were recruited from the patient registries at MPC-G. 2.6.1. Inclusion and exclusion criteria Inclusion and exclusion criteria are indicated in Table 2 (see Table 2 , showing inclusion and exclusion criteria). 2.7. Concomitant treatment Concomitant medications were registered in the e-CRF, including generic names and doses. Paracetamol was used as rescue medication (1 g, maximum 3 times a day). 2.8. Study chronology 2.8.1. Diary The patients with FM received a diary that allowed entries concerning study medication, adverse events, pain assessment forms, and questionnaires. 2.8.2. Questionnaires The questionnaires were self-reported. Patients were contacted by phone the day before answering the first questionnaire. 2.8.2.1. Fibromyalgia impact questionnaire revised The FIQR 5 is a tool developed to assess FM-related problems and response to a given treatment. 
The FIQR was translated from the original English version to Danish by 4 health care professionals specialized in the management of chronic pain. The back-translation was then performed by a native English speaker fluent in Danish. After revision and back-translation, a final revised Danish version of the FIQR was generated. The FIQR explores 3 domains: function, total impact, and symptoms. The patient was asked to answer based on the experience during the last 7 days before filling in the questionnaire. The questionnaire includes 21 questions regarding everyday activities. The patient was asked to mark the degree of difficulty, spanning from "no difficulty" to "very difficult to perform the activity." Furthermore, the questionnaire evaluated whether the patient was restricted or incapacitated in doing the weekly chores by the FM symptoms. The FIQR also assessed current pain intensity, energy level, sleep quality, anxiety symptoms, feeling depressed, body stiffness, sensitivity to touch, difficulties with balance and memory, and difficulties with the perception of loud, shrill noises, smells, or cold. The FIQR scoring was made by dividing the function domain sum (0–90 points) by 3. The overall impact domain was left unchanged (0–20 points). The symptom domain sum (0–100 points) was divided by 2. The 3 domain scores were then summed (0–100 points). Differences in mean FIQR score between BA1 (day −3 and day 1) and BA2 (day 33 and day 36) and between OA1 (day 18 and day 21) and OA2 (day 53 and day 56), respectively, were calculated. 2.8.2.2. Numeric rating scale, summed pain intensity rating The 11-point numeric rating scale (NRS) 8 , 26 was applied to evaluate pain intensity (during rest, personal hygiene measures, and activity of daily living). The patient with FM indicated the pain intensity (0–10; 0 = "no pain"; 10 = "worst possible pain") based on the experience during the last 24 hours before filling in the questionnaire. 
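The FIQR scoring rule described above (function sum divided by 3, impact kept as-is, symptom sum divided by 2, then summed) can be written as a small function. This is a sketch; the function name is chosen for illustration.

```python
def fiqr_total(function_sum, impact_sum, symptom_sum):
    # FIQR scoring as described: function domain (0-90) / 3,
    # overall impact domain (0-20) unchanged, symptom domain (0-100) / 2,
    # then summed to a 0-100 total.
    assert 0 <= function_sum <= 90 and 0 <= impact_sum <= 20 and 0 <= symptom_sum <= 100
    return function_sum / 3 + impact_sum + symptom_sum / 2

print(fiqr_total(90, 20, 100))  # maximum possible score -> 100.0
```

The three divisors simply rescale the domains so that each contributes its intended weight to the 0-100 total.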
The NRS ratings across each activity were summed as SPIR (0–30 points). Differences in mean score in SPIR between BA1 (days −2, −1, 1) and BA2 (days 34, 35, 36) and between OA1 (days 19, 20, 21) and OA2 (days 54, 55, 56), respectively, were calculated. 2.8.2.3. Miscellaneous questionnaires The BPI-SF, 51 DSIS, 48 HADS, 58 PD-Q, 16 and PCS 42 are described in detail in Supplemental Digital Content (see text, Supplemental Digital Content 2, text describing miscellaneous questionnaires, available at http://links.lww.com/PR9/A195 ). 2.9. Quantitative somatosensory testing Quantitative somatosensory testing (QST) is a standardized activation of the sensory system by the application of graded chemical, electrical, mechanical, or thermal test stimuli, with an assessment of the evoked psychophysical responses, examining sensory detection and pain thresholds. 28 2.9.1. Heat–capsaicin sensitization The heat–capsaicin sensitization test is a validated experimental pain model investigating aspects of central sensitization, eg, secondary hyperalgesia and allodynia 13 , 34 , 38 (see text, Supplemental Digital Content 3, text describing quantitative somatosensory testing, available at http://links.lww.com/PR9/A195 ). 2.9.2. Pressure pain thresholds Assessments of pressure pain thresholds (PPTs) were performed with a calibrated pressure algometer 34 (see text, Supplemental Digital Content 3, text describing quantitative somatosensory testing, available at http://links.lww.com/PR9/A195 ). 2.9.3. Conditioned pain modulation test The conditioned pain modulation (CPM) test evaluates the efficiency of the descending inhibitory pathways and has been used as a quantitative measure of pain disinhibition in patients with FM. 
9 The CPM test was performed as a cold pressor test; PPT1 was the baseline assessment, and PPT2 the assessment measured after the patient had submerged their left hand in cold water for 60 seconds (see text, Supplemental Digital Content 3, text describing quantitative somatosensory testing, available at http://links.lww.com/PR9/A195 ). The CPM efficiency was calculated as the relative change between PPT1 and PPT2. 2.10. Blood sampling 2.10.1. Routine blood chemistry Kidney and liver function were screened before inclusion in the study according to safety criteria. 2.10.2. Naltrexone and 6β-naltrexol plasma concentration measurements To examine the pharmacokinetics (PK) of naltrexone and its main metabolite, 6β-naltrexol, venous blood samples were collected in lithium-heparin–containing tubes on days 1, 14, 21, 36, 49, and 56 (see text, Supplemental Digital Content 4, text describing blood sampling and analysis, available at http://links.lww.com/PR9/A195 ). On days 1 and 36 (the first day of each treatment period), samples were collected in the morning just before the intake of the first tablet and then subsequently at 15, 30, 45, and 60 minutes after tablet intake. On days 14, 21, 49, and 56, medication was taken in the morning, and samples were collected during the clinical visit 1 to 3 hours later. 2.11. Adverse events Definitions, monitoring, and reporting procedures are described in Supplemental Digital Content 5 (see text, Supplemental Digital Content 5, text describing adverse events, available at http://links.lww.com/PR9/A195 ). 2.11.1. Safety Low-dose naltrexone is considered safe to administer. Four studies 19 , 55 – 57 reported only mild adverse events. The following adverse events were specifically asked for and reported: sleep disturbances, vivid dreams, nausea, diarrhea, headache, and tiredness. 2.12. Statistics 2.12.1. Statistical significance The authors are aware of the discussions concerning the indiscriminate use of P values as an absolute means of null hypothesis testing. 
1 , 49 , 50 In this article, the term statistical significance was avoided. The advice “correct and careful interpretation of statistical tests demands examining the sizes of effect estimates and confidence limits, as well as precise P values (not just whether P values are above or below 0.05 or some other threshold)” was generally followed. 6 , 17 , 20 2.12.2. Sample size estimates The calculation is based on FIQ data from Younger et al., 56 where mean (SD) values for pain reduction of 28.8 (12.5)% in the LDN group and 18.0 (14.6)% in the placebo group are given, which gives an effect size (ES) of 0.61 (G*Power 3.1.9.2, Kiel University, Germany). The sample size estimates were based on a 1% chance of type I errors (α = 0.01), a 10% chance of type II errors (β = 0.10), a nonparametric distribution (ARE correction; paired analysis with Wilcoxon signed-rank test), and an estimated correlation coefficient ( r ) between the treatments of 0.3. The estimated sample size per center was 51, allowing complete analyses to be performed at each center (see text, Supplemental Digital Content 6, text describing statistics, available at http://links.lww.com/PR9/A195 ). 2.12.3. Statistical data processing Our analyses focused on measuring the pharmacodynamic effects of LDN compared with placebo on a number of primary and secondary outcomes. We exploited our access to both baseline and outcome measures for all individuals under both active treatment and placebo. This allowed us to perform paired tests. First, baseline (BA) and outcome (OA) measures were transformed into measures of change, Δ v,i = OA v,i − BA v,i , for all variables ( v ) and all individuals ( i ) under treatment with LDN or placebo. To assess the pharmacodynamic effects of LDN, the differences between LDN treatment and placebo were examined (see text, Supplemental Digital Content 6, text describing statistics, available at http://links.lww.com/PR9/A195 ). 
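The change-score transformation and paired contrast described in the statistical data processing section can be sketched as follows. The patient values are invented for illustration; in the study, a Wilcoxon signed-rank test (eg, scipy.stats.wilcoxon) was applied to the paired differences.

```python
import statistics

def deltas(baseline, outcome):
    """Per-patient change scores: delta_i = OA_i - BA_i."""
    return [o - b for b, o in zip(baseline, outcome)]

# Hypothetical FIQR scores for 5 patients under both treatments.
ba_ldn, oa_ldn = [62, 55, 70, 48, 66], [58, 50, 69, 45, 60]
ba_plac, oa_plac = [60, 57, 68, 50, 64], [57, 53, 66, 48, 61]

# The paired contrast analysed in the study: delta(LDN) - delta(placebo).
d_ldn, d_plac = deltas(ba_ldn, oa_ldn), deltas(ba_plac, oa_plac)
paired = [l - p for l, p in zip(d_ldn, d_plac)]
print(statistics.median(paired))  # -> -1
```

A negative median here would mean a larger FIQR drop under LDN than under placebo for these invented patients; the study reports this contrast as median (IQR).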
To formally test the null hypotheses of no effects, paired Wilcoxon signed-rank tests were used, reporting the associated P values and appropriate ESs. For completeness, the results of the corresponding parametric tests were reported in tabular form (mean [SD], P , 95% confidence interval [CI], and effect sizes [ESs]). Data are reported as median (IQR) unless otherwise stated.
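The CPM efficiency formula itself did not survive extraction from the article (§2.9.3 ends at "calculated as follows:"). A commonly used definition, expressing the conditioned change in PPT as a percentage of baseline, is sketched below purely as an assumption; the authors' exact formula may differ.

```python
def cpm_efficiency(ppt1, ppt2):
    """Percentage change in pressure pain threshold after conditioning.

    ppt1: baseline PPT; ppt2: PPT after the cold pressor conditioning.
    NOTE: an assumed, commonly used definition, not necessarily the
    article's own; positive values indicate descending inhibition.
    """
    return (ppt2 - ppt1) / ppt1 * 100

print(cpm_efficiency(200.0, 250.0))  # a rise in PPT after conditioning -> 25.0
```

With this sign convention, an efficient descending inhibitory system raises the pain threshold during conditioning and yields a positive percentage.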
3. Results 3.1. Patients A total of 151 patients with FM were assessed for eligibility, and 58 patients with FM were included and randomized. Sixty patients declined to participate, mostly owing to the time commitment, and 33 did not meet inclusion criteria. Two patients with FM dropped out due to adverse events (nausea, vomiting) immediately after treatment started. One patient dropped out due to concomitant acute illness, and 3 patients with FM did not state a withdrawal reason. Fifty-two patients with FM completed the study per protocol. The first patient was included in May 2016, and the last patient visit was in December 2019. Inclusion was evenly distributed over time. 3.1.1. Concomitant medication and demographics Demographic data and use of concomitant medication are described in Table 3 (see Table 3 describing demographics) and Table 4 (see Table 4 describing concomitant medication), respectively. The patients with FM continued their medication in stable doses during the study. 3.1.2. Adverse events Adverse events (see text, Supplemental Digital Content 5, text describing adverse events, available at http://links.lww.com/PR9/A195 ) were registered on visit days in both treatment periods. Headache, fatigue, nausea, and dizziness were registered in both treatment periods by a small number of patients with FM, and all adverse events were classified as minor (see Table 5 describing adverse events). The 2 patients with FM who dropped out immediately after the start of treatment experienced minor adverse events. 3.2. Primary outcome 3.2.1. Fibromyalgia impact questionnaire revised Baseline and outcome scores for FIQR were obtained under both LDN and placebo treatment (n = 50). The median difference was −1.65 (IQR 18.55) (see Fig. 3 , dot-line diagram showing scores). The Wilcoxon signed-rank test did not indicate any difference between LDN and placebo (ES = 0.15, CI = −6.72 to 2.15; P = 0.30; see Table 6 describing FIQR results). 
The random effects model did not indicate any difference between LDN and placebo (conditional mean difference −2.50, P = 0.34) (see text, Supplemental Digital Content 6, text describing statistical data processing, available at http://links.lww.com/PR9/A195 ; see Table 7 describing absolute measures). 3.2.2. Summed pain intensity ratings The median difference in SPIR scores between LDN and placebo was −0.33 (IQR 6.33) (see Fig. 3 , dot-line diagram showing scores). The Wilcoxon signed-rank test revealed an ES of 0.13 and CI of −2.17 to 0.92 ( P = 0.4; see Table 6 describing SPIR results). The random effects model did not indicate any difference between LDN and placebo (conditional mean difference −0.40, P = 0.68) (see text, Supplemental Digital Content 6, text describing statistical data processing, available at http://links.lww.com/PR9/A195 ; see Table 7 describing absolute measures). 3.3. Secondary outcomes See Table 8 describing secondary outcome results. 3.3.1. Quantitative somatosensory testing 3.3.1.1. Secondary hyperalgesia areas The difference in secondary hyperalgesia areas between LDN and placebo treatments (n = 33) was −0.88 (48.13) cm 2 . The Wilcoxon signed-rank test revealed an ES of 0.04 and CI of −14.63 to 12.10 ( P = 0.83). 3.3.1.2. Allodynia The difference in allodynia areas between LDN and placebo treatments (n = 33) was −14.46 (42.04) cm 2 . The Wilcoxon signed-rank test revealed an ES of 0.24 and CI of −20.82 to 4.96 ( P = 0.65). 3.3.1.3. Pressure pain threshold The difference in PPT at tender points between LDN and placebo treatments (n = 32) was 0.08 (0.52) kPa. The Wilcoxon signed-rank test revealed an ES of 0.03 and CI of −0.21 to 0.21 ( P = 0.88). The difference in PPT at control points between LDN and placebo treatments (n = 34) was 0.08 (0.76) kPa. The Wilcoxon signed-rank test revealed an ES of 0.06 and CI of −0.26 to 0.29 ( P = 0.75). 3.3.1.4. Conditioned pain modulation The difference in CPM (%) between LDN and placebo (n = 34) was −9.25 (61.00). 
The Wilcoxon signed-rank test revealed an ES of 0.21 and CI of −25.42 to 6.93 ( P = 0.23). 3.3.2. PainDETECT The PD-Q questionnaire evaluates the likelihood of the presence of a neuropathic pain component. At day 1, before treatment, 6 patients with FM scored a neuropathic component as very unlikely, 22 scored that the component could not be rejected, and 24 that it was very likely. 3.3.3. Miscellaneous questionnaires Differences in scores between LDN and placebo treatment from the questionnaires for HADS, PCS, and DSIS are presented in Table 8 (see Table 8 describing scores in miscellaneous questionnaires). 3.4. Blood samples 3.4.1. Pharmacokinetics and pharmacodynamics Plasma concentrations (Cp) of naltrexone and 6β-naltrexol on the first treatment day showed a fast absorption rate of naltrexone and a rapid conversion to 6β-naltrexol for all patients with FM. The more than 10-fold higher Cp of 6β-naltrexol, compared with naltrexone, is due to first-pass metabolism 11 , 41 with a high hepatic extraction ratio for the parent compound. Peak Cp was likely reached at 30 to 45 minutes after ingestion (see Fig. 4 showing plasma concentrations). After 21 days of treatment, samples were taken 1 to 3 hours after tablet intake. The median Cp of naltrexone and 6β-naltrexol were 0.33 μg/L and 5.29 μg/L, respectively. Detailed pharmacokinetic analyses were not performed due to the short sampling period.
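The reported median concentrations allow a quick check of the first-pass metabolism claim: the metabolite-to-parent ratio works out at roughly 16-fold, consistent with the stated more-than-10-fold difference.

```python
# Median plasma concentrations 1-3 hours after dosing (ug/L), as reported.
naltrexone_cp = 0.33
naltrexol_cp = 5.29

ratio = naltrexol_cp / naltrexone_cp
print(f"6beta-naltrexol/naltrexone ratio: {ratio:.1f}-fold")  # roughly 16-fold
```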
4. Discussion 4.1. Outcome In this randomized controlled crossover study, the efficacy of LDN treatment was examined in 52 patients with FM fulfilling the ACR's 2011 criteria. The outcome data did not indicate any analgesic efficacy or improvement in physical function score related to the treatment. Using experimental tests of neuroplasticity perturbations, no treatment-related differences were found. The pharmacokinetic analyses showed rapid and reliable absorption of naltrexone. 4.2. Current management strategies It is generally agreed that management of FM requires a biopsychosocial approach, eg, cognitive behavioral therapy, education, mindfulness-based stress reduction, physical therapies, and physical exercise. 7 , 46 In addition, for patients with FM with severe pain, pharmacotherapy is needed as a component of multimodal rehabilitation. 2 , 25 , 36 4.3. The rationale of multimodal pharmacotherapy Multimodal analgesic pharmacotherapy includes the use of a combination of drugs, often with different pharmacological mechanisms of action. The multimodal concept may demonstrate infra- or supraadditivity, obtaining identical or better analgesic effects at lower doses of each drug compared with monotherapy. In acute pain management, the combination of paracetamol and an NSAID has improved analgesic efficacy and reduced opioid requirement. 44 The evidence for the efficacy of multimodal pharmacotherapy in chronic pain is scarce. However, this study did not examine any potential additive analgesic effect of combining LDN with an antidepressant or anticonvulsant. 4.4. Strengths of the study First, compared with the early FM studies 55 , 56 on LDN, methodological aspects have been improved in this study. In the 2009 study 55 (n = 10), a single-blind, nonrandomized, crossover design was used with a fixed treatment sequence but with a dissimilar number of treatment weeks. 
In the 2013 study 56 (n = 31), a randomized, double-blind, placebo-controlled, counterbalanced, crossover design was used, however, also with dissimilar treatment periods, ie, 12 weeks with LDN vis-à-vis 4 weeks with placebo, and no washout period between treatments. Interestingly, neither an a priori nor a post hoc sample size estimate was presented in either study. Second, this study used the validated FIQR as a primary outcome parameter, including summed resting and dynamic ADL pain ratings. These measures are considered improvements compared with the previously mentioned studies, which used only unimodal, nondynamic pain ratings as a primary outcome. Generally, the outcomes in this study are in agreement with the recommended patient phenotyping measures in chronic pain from IMMPACT (Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials) 14 regarding psychometrics, sleep function, and fatigue. This study tested both the ascending excitatory pain pathways, by the heat/capsaicin sensitization test, and the descending inhibitory pathways, by the CPM test. Third, the pharmacokinetics of LDN were analyzed. Fourth, a priori sample size estimates were based on the completion of 51 per-protocol treated patients with FM at 1 center, meaning that failure to complete inclusions in one of the centers would not jeopardize meaningful statistical analysis from the companion center. 4.5. Weaknesses of the study First, the length of the study period, 60 days, may have impeded patient compliance, affecting the attrition rate and the number of dropouts. However, the original study 56 had a duration of 22 weeks, with 28 of 31 patients with FM completing the study, so this was probably not an issue. Second, a placebo effect was anticipated, particularly in the first treatment period. The design of this study, however, does not allow an estimate of the magnitude of the placebo effect. 
Third, although patients with FM fulfilled the ACR's 2011 criteria, analysis of the neuroinflammatory component could have characterized the patients further. Fourth, analysis of the neuroinflammatory response to LDN could likely identify subgroups of responders but was beyond the scope of this study. Fifth, in this study, patients with FM were diagnosed by a specialist in rheumatology at least some months before enrollment. To our knowledge, no literature has described the relevant start window for LDN treatment relative to the time of diagnosis. 4.6. In summary The agents recommended for pharmacological management of FM (antidepressants, anticonvulsants, and opioids 2 , 25 ) are associated with a substantial risk of serious adverse effects. In preliminary studies, low-dose naltrexone has indicated analgesic efficacy in FM with a low incidence of adverse effects. However, in this study, the analgesic efficacy of LDN was not corroborated.
Supplemental Digital Content is Available in the Text. Abstract Introduction: Fibromyalgia (FM) is a chronic fluctuating, nociplastic pain condition. Naltrexone is a μ-opioid-receptor antagonist; preliminary studies have indicated a pain-relieving effect of low-dose naltrexone (LDN) in patients with FM. The impetus for studying LDN is the assumption of analgesic efficacy and thus reduction of adverse effects seen from conventional pharmacotherapy. Objectives: First , to examine if LDN is associated with analgesic efficacy compared with control in the treatment of patients with FM. Second , to ascertain the analgesic efficacy of LDN in an experimental pain model in patients with FM evaluating the competence of the descending inhibitory pathways compared with controls. Third, to examine the pharmacokinetics of LDN. Methods: The study used a randomized, double-blind, placebo-controlled, crossover design and had a 3-phase setup. The first phase included baseline assessment and a treatment period (days −3 to 21), the second phase a washout period (days 22–32), and the third phase a baseline assessment followed by a treatment period (days 33–56). Treatment was with either LDN 4.5 mg or an inactive placebo given orally once daily. The primary outcomes were Fibromyalgia Impact Questionnaire revised (FIQR) scores and summed pain intensity ratings (SPIR). Results: Fifty-eight patients with FM were randomized. The median difference (IQR) for FIQR scores between LDN and placebo treatment was −1.65 (18.55; effect size = 0.15; P = 0.3). The median difference for SPIR scores was −0.33 (6.33; effect size = 0.13; P = 0.4). Conclusion: Outcome data did not indicate any clinically relevant analgesic efficacy of the LDN treatment in patients with FM. Keywords:
Disclosures The authors declare that the article is a transparent and accurate report of the research undertaken and that there are no conflicts of interest to disclose. Appendix A. Supplemental digital content Supplementary Material
Acknowledgements This study was supported by grants from the AP Moeller Foundation, the Danish Society of Anesthesiology and Intensive Care Medicine, and the Director Emil C. Hertz and Wife Inger Hertz' Foundation. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors express their gratitude to the participating patients and thanks to biochemist Dorte Aalund Olsen and medical laboratory technologist Christoffer Kleve-Roendbjerg for support in conducting this study.
CC BY
Pain Rep. 2023 Jun 15; 8(4):e1080
PMC10789453
38225956
1. Introduction Chronic pain (CP), defined as pain lasting more than 3 months, represents a major global burden in terms of years lived with disability and the associated economic impact of health resources used and work absenteeism. 128 As an example, nonspecific low back pain (LBP) is the largest single cause of years lived with disability globally, 49 accounting for 11% of the entire disability burden from all diseases. In 2017, LBP was estimated to cost the UK up to 116 million lost workdays and approximately £12.3 billion through direct health care costs, production losses, and informal care ( https://www.gov.uk/government/publications/chronic-pain-in-adults-2017 ). Pain has long been considered solely a symptom, and it is only recently that CP has been recognised by the World Health Organization International Classification of Diseases (ICD)-11 121 as a long-term condition in its own right. Chronic pain prevalence increases with age as predisposing conditions such as obesity, arthritis, diabetes mellitus, and malignancy become more common. 41 A recent meta-analysis of population-based epidemiological studies worldwide reported a pooled CP prevalence estimate of 31%, 111 with an equivalent figure from UK studies of 43.5%, 41 similar to data arising from UK Biobank (UKB, 42.9%). 75 Of those reporting CP, a subset conservatively estimated at 20% 20 has disabling CP that substantially interferes with activities of daily living. Similarly, there is an overlapping population who express a substantial need for health care. Such individuals are often characterised by comorbid depression, fear, cognitive dysfunction, avoidance of movement, and poor coping skills. 15 From a public health perspective, the challenge is to prevent the progression of mild or transient pain to CP which becomes severe. 14 The development and severity of CP involve a complex interaction between genetic, environmental, and clinical factors in vulnerable individuals. 
126 Chronic pain is heritable: twin 72 , 78 , 110 and extended family 61 studies have provided estimates of 30% to 76%. Mutations in specific genes (most of which encode ion channels) cause rare (Mendelian) pain conditions in humans. 12 These include inherited erythromelalgia (characterized by pain and erythema of the extremities exacerbated by warming), which is caused by autosomal dominant and highly penetrant gain-of-function mutations in the gene SCN9A encoding the voltage-gated sodium channel Na V 1.7. 13 Human pain complex trait genetics (especially when combined with rich phenotypic data, biosamples, and large-scale brain imaging cohorts 34 ) has the potential to revolutionise our understanding of CP pathogenesis, risk factors, and the determinants of treatment responses. Current significant challenges and limitations to this approach relate to (1) a lack of precision in CP phenotyping with respect to the duration, location, intensity, and quality of pain as well as the temporal relationship to predisposing factors and comorbidities such as anxiety and depression and (2) the limited size of existing CP cohorts, which leaves studies relatively underpowered. Key to overcoming these challenges are large consortia with a harmonised approach to pain phenotyping and nationwide biorepositories with a wide range of genetic and nongenetic data. There is now an increasing international effort to harmonise data collection, which is likely to further inform clinical practice. One example is “INTEGRATE-Pain,” a joint initiative between the US National Institutes of Health (NIH) and the Innovative Medicines Initiative-PainCare which is developing consensus on overarching core outcome domain sets for clinical pain trials and clinical pain research ( https://www.comet-initiative.org/Studies/Details/2083 ). 
Newer cohorts now available to study CP include DOLORisk, 93 and this has informed the approach taken by current studies such as the rephenotyped UKB Chronic Pain (UKB CP) Cohort and PAINSTORM. This review introduces these cohorts, provides an overview of their main outputs, and outlines the key lessons learned.
7. Conclusions Large national level biobanks such as UKB have already provided important insights into the pathophysiology of CP; these are likely to become more robust with the greater precision of recently augmented pain phenotyping and longitudinal outcomes. There are exciting prospects ahead with the greater integration of GP records, new genetic technologies (such as whole genome sequencing), and the brain imaging of 100,000 participants in UKB. Making sense of such complex, multimodal data sets will require advanced analytics, including machine learning approaches. Although the scale of UKB has undoubted advantages, pain remains a subjective phenomenon which is difficult to capture using a limited number of questionnaires. This is particularly important in conditions such as NeuP where clinical assessment (including examination) is required to reach a robust case definition. This means that there is still an important place for smaller deeply phenotyped cohorts in which the link between a predisposing aetiology, CP, and biomarkers can be studied in detail. Such cohorts can also be used (while working with patient partners) to find new ways to assess pain and the functional impact of pain which can then be iteratively fed back to national level cohorts. The pharmaceutical industry is increasingly orienting analgesic drug discovery to targets in which there is human data validating the molecular target. Our hope is that integration of “big pain data” in humans with the rapid advances in cellular transcriptomics and neural circuit level approaches in animal models will facilitate the desperately needed development of novel analgesics, among other important advances.
Current challenges in understanding chronic pain pathogenesis are being overcome by the creation of large consortia and biorepositories with a harmonised approach to phenotyping. Abstract Chronic pain (CP) is a common and often debilitating disorder that has major social and economic impacts. A subset of patients develop CP that significantly interferes with their activities of daily living and requires a high level of healthcare support. The challenge for treating physicians is in preventing the onset of refractory CP or effectively managing existing pain. To be able to do this, it is necessary to understand the risk factors, both genetic and environmental, for the onset of CP and response to treatment, as well as the pathogenesis of the disorder, which is highly heterogeneous. However, studies of CP, particularly pain with neuropathic characteristics, have been hindered by a lack of consensus on phenotyping and data collection, making comparisons difficult. Furthermore, existing cohorts have suffered from small sample sizes, meaning that analyses, especially genome-wide association studies, are insufficiently powered. The key to overcoming these issues is the creation of large consortia, such as DOLORisk and PAINSTORM, and biorepositories, such as UK Biobank, where a common approach can be taken to CP phenotyping, which allows harmonisation across different cohorts and in turn increased study power. This review describes the approach that was used for studying neuropathic pain in DOLORisk and how this has informed current projects such as PAINSTORM, the rephenotyping of UK Biobank, and other endeavours. Moreover, an overview is provided of the outputs from these studies and the lessons learnt for future projects. Keywords:
2. UK Biobank (chronic pain cohort) The scope of UKB, which comprises 500,000 volunteers enrolled between 2006 and 2010 at ages 40 to 69 from across the UK, provides a unique opportunity to examine the epidemiology and genetics of CP in a prospective population cohort. A brief assessment of CP was completed by all participants at recruitment ( https://biobank.ndph.ox.ac.uk/showcase/field.cgi?id=6159 ), although it did not include validated questionnaires enabling categorisation of CP of diverse aetiologies. Using those initial data, we have previously demonstrated that the prevalence of CP and most regional (ie, site-specific) musculoskeletal pains in UKB is similar to that found in other pain epidemiological studies. Our findings also reproduce known relationships with a range of socioeconomic and psychological factors. 75 , 90 To address the lack of specificity when categorising CP, a UK academic consortium of clinicians and pain researchers, many of whom have experience of working with UKB 75 , 86 , 130 , 133 and with a range of synergistic expertise in epidemiology, genomics, psychology, neuroimaging, and pain management, developed a UKB CP phenotyping survey (2017–2018). The pain phenotyping survey ( https://biobank.ctsu.ox.ac.uk/crystal/ukb/docs/pain_questionnaire.pdf ) was designed by a group of experts (including the authors B.H.S., D.W., and D.L.H.B.) and based on a series of validated questionnaires in routine use (Table 1 ), which were fully aligned with the CP cohorts described in this paper. These focused on the most prevalent causes of CP and associated comorbidities and risk factors. The CP phenotyping survey was then sent to ∼335,000 UKB participants who consented to recontact, had an email address, and were still actively participating as of May 2019 (ie, not deceased or withdrawn). The survey was (partially or fully) completed by ∼167,000 individuals (a response rate of 49.8%). 
Approximately 148,000 individuals either reported no CP (∼72,000) or reported CP (pain or discomfort that had been present for more than 3 months) and fully completed the Douleur Neuropathique en 4 (DN4) questionnaire (∼76,000; 51.1%). The data were released in early 2021 and are available to bona fide researchers worldwide. The UKB CP cohort has the added value of: (1) being by far the largest phenotyped CP cohort generated to date worldwide; (2) linkage to longitudinal GP records for >95% of respondents by 2024; (3) access to all the other rich datasets obtained by previous (and subsequent) UKB questionnaires completed by these individuals for imaging, depression, anxiety, cognition, multimorbidity, deprivation, etc; (4) a planned repeat and extension of the CP survey in summer 2024, providing detailed outcome and treatment data on those with CP and allowing the identification of those with newly reported CP over the intervening 5 years; and (5) being of sufficient size and detail to allow data-derived categorisation of CP symptoms and risk factors. The CP data demonstrate that 75% of subjects with CP reported pain having lasted for more than a year and about a third for more than 5 years. Using the Brief Pain Inventory (BPI) questionnaire, approximately 25% of subjects with CP reported severe or moderate pain, whereas 20% reported severe or moderate interference in their activities of daily living. Over half of subjects with CP reported back or neck pain, over 40% had pain in one or more joints (the commonest being knee, then hip, hands, and feet), whereas 10% reported pain all over the body. The commonest self-reported CP diagnoses were osteoarthritis affecting one or more joints, followed by migraine, nerve damage or neuropathy, carpal tunnel syndrome, pelvic pain, and rheumatoid arthritis. Using the DN4 questionnaire, the prevalence of “possible” neuropathic pain (NeuP) was 9.2%, making up 18.1% of those with CP. 
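The DN4 figures can be cross-checked: dividing the overall "possible" NeuP prevalence by the NeuP share among those with CP recovers the CP prevalence among survey completers. A quick check, using the percentages stated in the text:

```python
neup_overall = 0.092    # "possible" NeuP among analysable respondents
neup_among_cp = 0.181   # NeuP share among those with CP

implied_cp = neup_overall / neup_among_cp
print(f"implied CP prevalence: {implied_cp:.1%}")  # close to 76,000/148,000
```

The implied figure of roughly 51% matches the ~76,000 of ~148,000 analysable respondents who reported CP, so the reported percentages are internally consistent.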
Our recent analysis 7 of those with NeuP demonstrated that it was significantly associated with worse health-related quality of life, a manual or personal-service occupation, and younger age compared with those without CP. As expected, NeuP was associated with diabetes and neuropathy but also with other pains (pelvic, postsurgical, and migraine) and musculoskeletal disorders (rheumatoid arthritis, osteoarthritis, and fibromyalgia). In addition, NeuP was associated with pain in the limbs, greater pain intensity, and higher body mass index (BMI) compared with those with nonneuropathic pain. 2.1. Caveats/limitations (1) Non-White ethnic backgrounds were rare in UKB (2.2%): 1.6% were from Black, Asian, and Minority ethnicities; 0.6% were of mixed ethnicity; and the remaining 97.8% were White. This compares with 18.3% non-White in England and Wales in the 2021 Census. 35 In the 2011 Census, which is closer to when the UKB cohort was recruited, 14% were non-White in England and Wales 36 and 4% were non-White in Scotland 28 (2022 census not yet available). (2) There was an overrepresentation of participants who were female, of younger age, who had lower BMI, and who were less socially deprived in the group that completed the 2019 pain phenotyping questionnaire compared with the rest of the UKB cohort who did not. The overrepresentation of younger and less socially deprived participants may be because the questionnaire was only available online. (3) The definition of NeuP relies on a self-completed screening tool, which does not meet the grading-system requirements for “probable” or “definite” NeuP; these necessitate clinical examination, which is clearly not feasible in a large population survey. 2.2. Outputs Because of the large sample size and range of data available compared with other cohorts, UKB allows associations to be quantified with greater precision and across different levels of demographics. 
As the data from the CP phenotyping survey were only released in early 2021, the results of studies using these data are only just beginning to emerge. Up to this point, studies have used the CP data that were collected at baseline recruitment. This has limited studies to specific single or multiple pain sites, without consideration of underlying aetiology. An extensive but nonexhaustive list of pain studies conducted either wholly or partially using UKB is provided in Table 2 . These are intended to provide an overview of the kind of analyses that are possible using the cohort. Most pain studies conducted in UKB were cross-sectional because of the data being collected at a single time point. These have identified a wide range of associations with pain phenotypes, including ethnicity, 90 alcohol consumption, 8 , 74 smoking, 74 physical activity, 91 low socioeconomic status, 1 , 74 cardiovascular disease, 5 type 2 diabetes (T2D), 5 number of long-term comorbidities, 83 adverse childhood experiences, 55 depression, 89 , 90 and bipolar disorder. 89 They have also revealed that certain anatomical features are influential in pain. These include cam morphology (deformity of the femoral head–neck junction 37 ), osteophytes 38 , 39 and joint space narrowing 38 with hip pain, bone fracture with chronic widespread pain (CWP), 96 , 129 and differences in brain structure between acute back pain, chronic back pain, and chronic back pain occurring with other pain sites. 115 UK Biobank has revealed that pain at specific sites, particularly facial pain, often overlaps with pain at other sites. 106 The availability of self-reported medication (before UKB was linked to primary care records) has enabled the study of pain pharmacoepidemiology. Approximately 5.5% of people in UKB reported regular use of opioids (1.4% strong opioids and 4.2% weak opioids), which was found to be associated with low socioeconomic status and excess mortality. 
77 Strong opioid users (9.1%) were also more likely to die during follow-up than weak opioid users (6.9%) or nonusers (3.3%). The use of both opiate and NeuP medications alongside cardiometabolic medications was associated with obesity, increased waist circumference, and hypertension compared with taking cardiometabolic medications alone, and the antidiabetic drug metformin seemed to be protective against musculoskeletal pain. 27 Common opioid (codeine and tramadol), nonsteroidal anti-inflammatory drug (NSAID) (ibuprofen), and NeuP medications were associated with far-sightedness, 94 and people taking NSAIDs to treat back pain were more likely to report pain persistence than were those not treated with NSAIDs. 92 Studies have also explored pain as a potential exposure for other clinical traits, particularly in longitudinal studies where the outcome has been measured at multiple time points. Using this approach, it has been demonstrated that CWP was associated with greater incidence of COVID-19 admission and mortality, 56 as well as mortality relating to all causes, cancer, respiratory disease, and cardiovascular disease. 76 Chronic widespread pain and CP were also associated with cardiovascular disorders such as myocardial infarction, heart failure, and stroke, 99 whereas a higher number of pain sites was associated with death at a younger age in men 88 and all-cause mortality in both genders. 29 The extensive data available in UKB allow researchers to adjust for a wide variety of potentially confounding factors, and this has been performed to a greater or lesser extent in the studies cited. The availability of genome-wide genotyping data has advanced our understanding of genetic risk factors for pain and provided insights into the biological pathways involved. 
Novel genome-wide significant (GWS) genetic loci have been identified for back, 16 , 45 , 46 , 114 knee, 84 neck/shoulder, 86 , 132 frozen shoulder, 53 and stomach or abdominal pain 132 as well as pain relating to oral inflammatory diseases, 62 multisite CP, 63 , 64 , 66 , 132 and CWP. 96 These findings suggest a key role for genes involved in the central nervous system, 16 , 45 , 63 , 66 dorsal root ganglion, 64 and immune regulation. 62 These studies have also highlighted some key differences in the genetics underpinning certain subsets of pain. For example, chronic back pain seems to be much more heritable than acute back pain (4.6% vs 0.8%). 16 The same study identified 13 GWS loci associated with chronic back pain but none for acute back pain. Similarly, another study identified 23 GWS loci associated with multisite CP but none for single-site CP. 66 Sex-specific genetic risk factors have been identified in chronic back and multisite pain; 46 , 64 for example, one study of chronic back pain identified 2 GWS loci in men but 7 in women. 46 In addition to the identification of genetic risk factors, UKB genome-wide association study (GWAS) data have also been used to identify genetic correlation between different phenotypes. This is achieved by constructing polygenic risk scores (PRS) to summarise each participant's genetic predisposition for a given pain phenotype or by conducting linkage disequilibrium score regression (LDSR). These techniques have revealed, perhaps unsurprisingly, that there is strong genetic correlation between pain at different sites. 40 Pain phenotypes also seem to have a shared genetic signature with a wide range of psychiatric and mood disorders such as depression, neuroticism, and sleep disorders, 45 , 86 whereas shared genetic architecture seems to underpin the relationship between opioid cessation and CP, being a former drinker, or being a former smoker. 
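As a minimal illustration of the polygenic risk score construction mentioned above, a PRS is simply the weighted sum of each participant's risk-allele dosages, using per-variant effect sizes estimated in an independent GWAS. All genotypes and effect sizes below are invented for illustration; real analyses use genome-wide variant panels after quality control and clumping or thresholding.

```python
import numpy as np

def polygenic_risk_score(dosages, betas):
    """Weighted sum of risk-allele dosages (0, 1, or 2 copies of the
    risk allele per variant) using per-variant GWAS effect sizes."""
    return dosages @ betas

# Hypothetical data: 2 participants genotyped at 3 variants.
dosages = np.array([[0, 1, 2],
                    [2, 2, 0]], dtype=float)
betas = np.array([0.10, -0.05, 0.20])  # illustrative GWAS effect sizes

print(polygenic_risk_score(dosages, betas))  # participant scores: 0.35 and 0.10
```

The resulting per-participant scores can then be correlated with another trait (or with a second PRS) to probe shared genetic architecture, which is the logic behind the genetic-correlation analyses described in the text.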
32 Finally, UKB GWAS data have been used to establish causal inference of nongenetic factors on pain phenotypes through Mendelian Randomisation. This technique uses known variation in a genetic marker and its influence on a particular trait (usually established through GWAS) to interrogate the causal effect of an exposure on a particular outcome. As the genetic variants inherited by an individual are randomly assigned at conception and not subject to modification, it allows genetic markers to be used as a “proxy” for the exposure of interest, thus eliminating the risk of confounding or reverse causation. These studies have been able to establish a causal effect of C-reactive protein, 65 insomnia, 103 and serum iron status 117 on back pain, and of diabetes on frozen shoulder. 53 Furthermore, a bidirectional relationship was found to exist between insomnia and CP 21 and between prescription opioid use and depression and anxiety disorders, 100 whereas multisite CP was found to be causative of major depressive disorder 63 and far-sightedness. 94 By contrast, Mendelian Randomisation found no evidence for a causal effect of alcohol on CWP, 8 obesity on frozen shoulder, 53 or daytime sleepiness on lower back pain. 103 3. DOLORisk The approach to rephenotyping UKB participants for CP and NeuP was based on the experience of the DOLORisk consortium with phenotyping methods, in particular for population cohorts 93 (Table 1 ). DOLORisk, funded by EU Horizon 2020, was the starting point of an effort to expand and improve the development of NeuP cohorts across Europe (including 11 participating centres). The aim was to develop clinical cohorts of sufficient scale to study the multiple risk factors and determinants for NeuP (genetic, clinical, and psychosocial), to understand how such factors interact, and to identify those individuals most at risk of NeuP (Fig. 1 ). 
Observational in design, it consisted of cross-sectional and longitudinal cohorts and included both community participants, in whom outcome measures were captured using questionnaires, and more specialised cohorts from secondary care who had detailed phenotyping. 93 Participant recruitment occurred between 2015 and 2019. The main longitudinal branch of the study consisted of 2 existing population cohorts: Generation Scotland (a family-based study 107 ) and GoDARTS (Genetics of Diabetes Audit and Research in Tayside Study—focused on diabetes 57 ), whose participants were contacted by the University of Dundee to be rephenotyped for NeuP (approximately 9,000 respondents at baseline 58 ). Aarhus University and INSERM recruited smaller cohorts of participants scheduled to undergo chemotherapy, thoracic surgery, or breast cancer surgery. These also had a longitudinal design, assessing patients before and after surgery or chemotherapy. The other branch of DOLORisk was cross-sectional and consisted of cohorts of participants with a neuropathy, assessed (with clinical history, examination, and specialised tests such as quantitative sensory testing [QST]) in research centres. Aetiologies of neuropathy included diabetic neuropathy, other polyneuropathies, postsurgical neuropathy, small fibre neuropathy, chemotherapy-induced neuropathy, rare pain disorders, and traumatic nerve injury. The total number of participants included in this deeply phenotyped cohort was in the region of 1500. An effort was made to include participants with the relevant predisposition (such as diabetic neuropathy) but without pain as a control group. One of the ambitions of DOLORisk was to set standards for data collection and deep phenotyping of NeuP. All centres followed a common protocol as a means of harmonisation. 
This was developed around a core set of self-report questionnaires, to be administered to the population cohorts by mail, and based on recent international consensus on NeuP phenotyping which had been developed through systematic review, Delphi survey, and expert consensus meetings. 123 Detailed aspects of the protocol, including further questionnaires, clinical measures, and specialised tests, were developed by consensus and finalised at a dedicated consensus meeting between all participating centres. Proposals were made with members of the consortium leading on their area of expertise, reviewing the literature and unpublished data (for instance, exploring the performance of a shorter version of the QST protocol), and then working to achieve a group consensus. In addition to the core questionnaires, the deeply phenotyped cohorts had an extended set of questionnaires, neurological examination, and physiological tests. The questionnaires captured information on demographics; medication; the presence, characterisation, and intensity of pain; pain interference; psychological and lifestyle factors; and quality of life. The choice of questionnaires was based on validation in CP/NeuP and the availability of relevant translations (details of the questionnaires and specialised tests can be found in Tables 1 and 2 of Ref. 93 ). Every participant had (or had previously provided) blood samples taken to perform genetic analyses. A subset of participants also provided a serum sample and a skin biopsy sample. Additional investigations included QST using a slightly shortened version of the German NeuP Consortium protocol, 98 electrophysiology (including nerve conduction studies and nerve excitability testing 119 ), conditioned pain modulation (CPM), and electroencephalography (EEG). 
The diagnosis of NeuP was graded based on the NeuPSIG algorithm, 44 which sorts participants into 4 groups (unlikely, possible, probable, and definite NeuP); in participants with neuropathy, diagnostic criteria were based on the Tesfaye criteria. 118 The same core questionnaires were used across cohorts to enable comparison between large population cohorts, which have the advantage of scale but lack in-depth phenotyping, and the smaller cohorts recruited in secondary care. For clinical examination and the specialised tests, standard protocols were used, including training (both in person and using videos) as well as regular review and feedback of data quality. 3.1. Caveats/limitations (1) Generation Scotland has an overrepresentation of females, affluence, older age, and lower BMI and an underrepresentation of comorbidities compared with the Scottish population. For example, 32% reported CP (2.7% severe) compared with 46% (5.7%) nationally. (2) GoDARTS has an underrepresentation of non–Anglo-American ethnicities (0.3% vs 9.2%) and people who have never smoked (41.5% vs 48.1%) in the diabetic part of the cohort, compared with the Scottish T2D population in 2020 ( https://www.diabetesinscotland.org.uk/wp-content/uploads/2022/01/Diabetes-Scottish-Diabetes-Survey-2020.pdf ). (3) Self-reported ethnicity was not recorded in DOLORisk, limiting exploration of the impact of ethnicity on CP (this was due to national laws in one participating country preventing the recording of ethnicity data). (4) Most cohorts which underwent deep phenotyping in DOLORisk were cross-sectional rather than longitudinal, limiting our ability to establish causality in the relationship between risk factor(s) and CP. This will be partly addressed by PAINSTORM (see below), which will collect follow-up data on these cohorts within the United Kingdom. 3.2. Outputs DOLORisk ran from 2015 to 2020, and consequently studies are now beginning to be published (Table 3 ). 
The first study to emerge was a GWAS meta-analysis of Generation Scotland and GoDARTS (using a questionnaire-based phenotype identifying NeuP of any aetiology) together with UKB (using a phenotype based on self-reported medication 124 ). This revealed a novel genome-wide significant locus at the mitochondrial phosphate carrier gene SLC25A3 and a suggestive locus at the calcium-binding gene CAB39L . In parallel with this study, the questionnaire-based data in Generation Scotland were used to construct longitudinal environmental risk models for onset and resolution of NeuP, which were then validated in GoDARTS. 59 These models demonstrated the importance of psychological, social, lifestyle, and personality factors in predicting NeuP outcomes. The GoDARTS cohort was also used as a validation cohort in a study which trained machine learning models that can classify people with diabetic peripheral neuropathy into those with and without pain. 6 These models were developed in deeply phenotyped cohorts recruited from the University of Oxford, Technion-Israel Institute of Technology, and Imperial College London and again highlighted the importance of personality, psychological, and quality of life factors in predicting pain. Other important predictors identified were levels of glycosylated haemoglobin (HbA1c), age, and BMI. It is hoped that eventually, these models can be used in a clinical setting to help improve prevention, diagnosis, and treatment for patients. Further studies have been conducted on patients with diabetic polyneuropathy in more deeply phenotyped cohorts. For example, in a cohort recruited by Technion-Israel, both traditional and machine learning predictive techniques were used to analyse brain activity through EEG data. 
120 This analysis revealed that people with painful diabetic polyneuropathy had significantly greater resting-state cortical functional connectivity than people with painless diabetic polyneuropathy and that EEG-based brain activity could be a powerful biomarker that can accurately discriminate between the 2 groups. Another study using the cohort from Technion and a cohort recruited by Imperial College London found that people with painful diabetic polyneuropathy had more efficient CPM to heat stimuli applied to the forearm than those with painless diabetic polyneuropathy. 52 This was the first comparison of CPM in painful vs painless diabetic neuropathy. Previous studies had compared groups of patients with CP to healthy controls and so would not take into account neuropathy-induced damage to sensory afferents. More efficient CPM to heat stimuli was also correlated with greater pain intensity in the previous 24 hours and greater loss of mechanical sensation. One possible explanation for the more efficient CPM to heat stimuli in those with painful diabetic polyneuropathy may relate to neuropathy at the site of stimulation used in the protocol (and not only as a consequence of descending pain modulation). In light of this, new protocols in which stimuli are given to sites unaffected by neuropathy are needed. Finally, a study exploring the use of nerve excitability (using threshold tracking) as a biomarker in patients with diabetic and chemotherapy-induced peripheral neuropathy found that there was no difference in axonal excitability relating to large, myelinated fibres between those with pain and those without pain. 119 However, because nociceptors are generally unmyelinated and, therefore, not assessed using this technique, these findings suggest that alternative techniques such as microneurography (which specifically examines small fibres) should be used to explore the relationship between neuron excitability and NeuP. 
Separately, a longitudinal investigation of patients with breast cancer referred for surgery found that, in the group who had received chemotherapy, pain at the surgical site was more prevalent than pain in both feet (59% vs 30%). 11 However, the pain in the feet was rated as more intense and as interfering more with daily life than pain in the surgical area. Furthermore, the prevalence of pain in both feet was greater in those who had pain in the surgical area, compared with those who did not have pain in the surgical area (40% vs 17%). Analysis of the DOLORisk cohort is ongoing, especially in relation to the deeply phenotyped cohorts, including sensory profiles (determined using QST) and genomics. 4. PAINSTORM Following on from DOLORisk, the PAINSTORM project (Partnership for Assessment and Investigation of Neuropathic Pain: Studies Tracking Outcomes, Risks and Mechanisms) was funded by the UK's Advanced Pain Discovery Platform (APDP), 4 beginning in 2021. PAINSTORM will follow up the DOLORisk diabetic population cohort in Dundee (GoDARTS), the diabetic and idiopathic neuropathy cohorts at Oxford and Imperial, and further expand the Oxford rare phenotypes cohort. It will also include new cohorts of people receiving chemotherapy and people with HIV and HTLV-1, and will use the newly available CP data in UKB. The aim of PAINSTORM is to collect more longitudinal data, especially in the deeply phenotyped cohorts, to define the risk factors and pathophysiological drivers of NeuP. Patient partners contributed directly to the consensus meeting and to the development of the PAINSTORM protocol, both of which are very similar to those in DOLORisk (Table 1 ). 
Based on feedback from patient partners and past experience, a few questionnaires were added (ethnicity, PROMIS Emotional Support, PROMIS Instrumental Support, bespoke items related to pain management, and the lived experience of having NeuP), substituted (PROMIS Pain Interference was replaced with PROMIS Ability to Participate in Social Roles and Activities; the IPIP items for Emotional Stability were replaced with the 7-item State Optimism Measure), or removed (PainDETECT). The inclusion of patient partners in PAINSTORM (from the application stage) has shaped our understanding of the issues that matter to people living with NeuP, the lived experience of NeuP, and the acceptability of measures to assess NeuP. The specialised investigation techniques in PAINSTORM differ slightly from those in DOLORisk: nerve excitability testing (which we found did not discriminate painful from painless neuropathy in the DOLORisk study 119 ) makes way for microneurography 102 , 122 ; CPM was omitted because our data from the DOLORisk project suggest that an improved CPM protocol, to be applied and validated in the context of neuropathy, is required 52 ; EEG was not included because the technology for undertaking this at scale is not yet available; and some participants will take part in imaging studies of the brain, the spinal cord, and the peripheral nervous system. Genetic analysis is likely to exploit technological advances in both sequencing and analysis, allowing much more comprehensive genomic assessment such as whole genome sequencing. 5. Caveats and lessons learned Other important and relevant pain cohorts exist (such as OPPERA, examining painful temporomandibular disorder 43 , 104 , 105 ), and we have described only 3 in order that they can be discussed in detail. The UK Biobank, DOLORisk, and PAINSTORM cohorts include a specific focus on (neuropathic) pain phenotypes. 
Some other cohorts include a few pain questions, but pain is not the focus, and these questions are almost incidental, although they can have value. For example, the English Longitudinal Study of Ageing (ELSA) included a single question about pain (“Are you often troubled by pain?”—yes/no), which allowed relatively detailed analysis of associations between pain and mortality. 109 In one sense, this caveat could even be true of UKB at baseline, which used an untried, unvalidated, nonstandard set of relatively superficial questions. This has allowed a good number of studies to be published (Table 2 ), and their success may rely on sample size and consequent power, rather than on the precision or validity of the definitions. Not until the rephenotyping exercise described above (UKB CP) were validated, standard pain questionnaires and relevant associated questionnaires included. The 3 cohorts we have described deliberately used similar, harmonised approaches to phenotyping pain. Generally, when looking at different research cohorts, we find that a lack of agreed approaches to phenotyping CP means that we cannot compare outputs from different cohorts. For example, in a systematic review of studies examining genetic factors associated with NeuP, we found 29 studies, identifying 28 genes, but none used the same approach to phenotyping, and few single genes were identified by more than one study. This means that we cannot understand whether differences between studies are the result of actual differences between study populations or artefacts of differential phenotyping. One study, for example, found that associations between CP and mortality depended on how the pain was phenotyped. 108 This lack of harmonisation also prevents meta-analysis. 
To surmount these phenotyping/case definition differences, we need the following: (1) A series of studies exploring the effects that differences in case definition have in identifying subsamples with and without CP; eg, demographic and clinical differences/similarities between “cases” in different cohorts. Macfarlane et al. 75 did explore this in relation to the original UKB pain phenotype, comparing prevalence and associations with those identified in other pain cohorts, and we have performed a similar analysis in relation to UKB CP (paper in press). Although the results were reassuring, more such analysis is called for. (2) Harmonised case definitions/phenotyping moving forward, such as that agreed for genetic studies of NeuP (NeuroPPIC 123 ). In addition to allowing comparison and meta-analysis (retrospective studies), this approach will also allow prospectively assembled collaborative cohorts, with enhanced power. This has been our philosophy with UKB CP, DOLORisk, and PAINSTORM. A major issue with the cohorts we have described, and with every other cohort, is their representativeness. Even a very large cohort such as UKB can only reflect the population from which it was drawn. As noted above, in UKB's case, this population comprised adults aged 40 to 69 at recruitment 112 and underrepresents people from more socioeconomically deprived areas, people who are obese, who smoke, who drink alcohol, or who self-report certain health conditions, and people from ethnic minorities. 48 Although this allows assessment of relationships between exposures and outcomes, it limits findings relating to incidence and prevalence and to exposures/outcomes that are rare in the cohort. Similar constraints apply to the studies contributing to DOLORisk (including GS 107 and GoDARTS 57 ) and will also apply to PAINSTORM. A key strategy is to focus on the strengths and unique selling points of the cohort; eg, a family study allows efficient measurement of heritability. 
It is also important to measure, understand, and, if possible, account or adjust for relevant differences between the cohort and target populations. Strategies also need to be developed to improve representation of groups that are traditionally underrepresented in research. Potential approaches for improving representativeness include expanding recruitment strategies to include, for example, face-to-face, email, and postal invitations through primary and specialist care. In addition, dedicating research resources to advertising through local/national business organisations, radio, newspapers, and social media can increase uptake and help to oversample these “hard-to-reach” groups. The involvement of people living with pain, for instance through coworking with charities and community organisations, can help with dissemination strategies and generate general awareness of studies. An example of an initiative that has successfully used these approaches is the Scottish Health Research Register (SHARE). 82 However, there are certain situations in which a representative sample may not be necessary, for example, if an investigator wants to study a particular population subgroup. 97 For practical reasons, large population studies generally only allow brief questionnaire measures, often purporting to represent complex multidimensional phenomena (such as pain) in summary numerical terms, often with continuous scales or categorical coding. Although these questionnaires are (1) usually validated in their development stages by comparison with more sophisticated and detailed assessments and (2) often supplemented in related studies by more detailed measurement/interviews with subsamples, they cannot tell the full story of what is being measured. This issue has long been recognised (eg, by Macnaughton 79 ), and recent discussions with our patient partners on PAINSTORM confirm the issue, and the frustration it causes to people completing the questionnaires. 
Although we should continue to measure as accurately as possible, using questionnaires with maximum validity and reliability, we should also work with people living with pain to develop more realistic and satisfactory ways of assessing complex health and psychosocial issues at scale. 6. Future prospects Alternative approaches to cohort recruitment and assessment include the use of routine clinical data, without the direct involvement of individuals (although with appropriate ethical and governance approvals in place). This approach has also been recommended for clinical trials in CP. 101 For example, in the United Kingdom, GP-held primary care records offer the opportunity to identify relevant individuals through clinical diagnostic codes or prescribing. These have been used to recruit study participants with CP 22 , 80 but require detailed validation and assessment of sensitivity, specificity, and positive/negative predictive values. The authors (BHS and DW) are currently developing this in 2 funded studies at the population level. Routine clinical data can also augment research-derived data arising from new and existing cohorts, through data linkage, noting the need for data security and confidentiality. As noted, UKB data are now linked to primary healthcare records for most participants. Advantages of linkage to routine data include comprehensiveness, representativeness, and low cost. Disadvantages include relatively poor-quality data and the complexity of data available, as well as the need to secure different approvals before data can be accessed and linked. Development, harmonisation, meta-analysis, and data linkage of CP cohorts require, among other factors, high-quality data storage, management, and access. Also funded through the APDP, Alleviate is the Research Data Hub that will provide a platform for pain data for researchers around the world. 
2 Initially focusing on APDP consortia (including PAINSTORM) and projects, Alleviate will also store or allow access to other relevant datasets (including UKB CP and DOLORisk). These data sets will be findable, accessible, interoperable, and reusable (FAIR), and the Hub will be comparable with those already in existence for respiratory datasets (BREATHE 19 ) and COVID-19 (CO-CONNECT 31 ). Investment in data hubs such as Alleviate from researchers and funding bodies is key to expanding these cohorts on a global scale and integrating efforts around data harmonisation. Such investment can be encouraged and promoted through organisations such as the International Association for the Study of Pain. Meanwhile, we will continue our research with the above cohorts, both through further analysis of DOLORisk and UKB CP, and through development of PAINSTORM. Importantly, this will include collaboration with colleagues and cohorts elsewhere, including other APDP consortia and projects, 4 with whom PAINSTORM is harmonising as much as possible. The scheduled follow-up of UKB CP phenotyping promises exciting approaches to longitudinal research on CP at large scale. Disclosures The authors have no conflict of interest to declare.
Acknowledgements B.H.S. and D.L.H.B. have received grants from the MRC, Versus Arthritis, Eli Lilly, and Astra Zeneca (as part of the Advanced Pain Discovery Platform) for PAINSTORM (MR/W002388/1) and from the European Union's Horizon 2020 research and innovation programme under grant agreement No 633491 (DOLORisk). H.L.H. and M.M.V.P. are supported by PAINSTORM and have been supported by DOLORisk. D.L.H.B. acknowledges grants from the Wellcome Trust, Diabetes UK, MRC, and the BBSRC and has acted as a consultant in the past 2 years for AditumBio, Amgen, Biointervene, Bristows, LatigoBio, GSK, Ionis, Lexicon therapeutics, Lilly, Neuvati, Olipass, Orion, Regeneron, Replay, and Theranexus on behalf of Oxford University Innovation (all paid to institution). Data availability statement: Data sharing is not applicable as no new data were created or analysed in this article. All results presented in this article have been published previously and fully cited in the text.
Pain Rep. 2023 Aug 10; 8(5):e1086
1. Introduction Fibromyalgia, characterized by widespread pain, stiffness, mood disorders, fatigue, and cognitive difficulties, 26 affects 2% to 4% of the population. 10 Impairment in attention and executive functioning is commonly observed in patients with fibromyalgia compared with healthy controls. 2 , 7 , 19 Previously, we showed that chronic pain (fibromyalgia) and experimentally induced acute pain differentially affected cognitive performance, suggesting that the factors underlying the effects of pain on cognitive difficulties in acute and chronic states may differ. 18 Importantly, fibromyalgia patients report that cognitive difficulties have a large impact on quality of life, 1 making it important to identify factors that contribute to these difficulties. Psychosocial factors may be particularly relevant for understanding cognitive difficulties. Sleep disturbance, anxiety, and depression are highly prevalent among patients with fibromyalgia, 25 and each is related to impaired cognitive performance. 9 , 23 , 24 Interestingly, one study demonstrated that poor sleep accounted for the association between pain severity and impaired attention performance among patients with fibromyalgia. 8 Yet, limited work has explored whether psychosocial factors contribute to differences in cognitive performance that are often observed between patients with fibromyalgia and healthy controls. The present study was a secondary data analysis investigating differences in cognitive performance between patients with fibromyalgia and healthy controls and whether psychosocial factors accounted for these differences.
2. Methods 2.1. Participants and procedure Participants were 24 adults with fibromyalgia and 26 healthy, pain-free controls (HC). For details about the full procedure and inclusion criteria, see our previously published manuscript ( https://www.researchgate.net/publication/332976441_The_Effect_of_Induced_and_Chronic_Pain_on_Attention ). 18 Participants completed questionnaires, and then instructions were provided for the cognitive performance tasks. Participants completed a practice trial of each task before the experimental versions. Study procedures were approved by Brigham and Women's Hospital's Institutional Review Board. 2.2. Cognitive performance Participants completed 3 tasks based on the Bath TAP battery, 16 designed and controlled using E-Prime II professional software. 22 This battery was established because of the relationships between these measures of cognition and pain among healthy adults and those with headache and menstrual pain. 12 , 15 – 17 In the original report, 18 we investigated the impact of an acute mechanical pain stimulus on cognitive performance; here, we report performance on these cognitive tasks in the absence of externally applied noxious stimulation. All participants completed the cognitive tasks twice (ie, in the presence and absence of pain stimuli), and the order of the testing sessions was randomized. We also previously reported that we did not find a significant difference in attention span (n-back task) between patients with fibromyalgia and HC, 18 and thus, the present study focuses solely on the attentional switching and divided attention tasks. 2.2.1. Attentional switching To measure the ability to alternate between 2 tasks, participants saw a single digit number and made 1 of 2 decisions about these numbers based on a task-cue presented before each of 200 trials. For some trials, participants indicated whether the number was higher or lower than 5. On other trials, participants indicated whether the number was odd or even. 
Typically, when the task remains the same (2 consecutive odd/even trials), participants will perform faster and more accurately than when there is a switch between tasks (odd/even then low/high). This reduction in performance on switch trials is called a “switch-cost.” Outcome variables for this task were the differences in reaction time and accuracy between repeat and switch trials. For reaction time, positive scores reflect faster performance on repeat compared with switch trials; for accuracy, positive scores reflect more accurate performance on repeat compared with switch trials. 2.2.2. Divided attention task To measure participants' accuracy while processing >1 source of information concurrently (a measure of attention and executive function 6 ), participants performed 2 tasks simultaneously. Participants were presented with a chain of numbers in the center of the screen and 2 lines, either vertical or horizontal, at the periphery of the screen. Participants identified when 3 consecutive odd or even digits were presented and when the 2 lines were presented in different orientations (1 vertical and 1 horizontal). A total of 400 displays were presented, with 8 number targets and 8 line targets in every set of 80 displays. Number and line targets were never both presented on the same trial. The outcome variable for this task was accuracy. 2.3. Psychosocial factors The Patient-Reported Outcomes Measurement Information System short forms, which have demonstrated good reliability and validity, 3 , 11 measured sleep disturbance, anxiety, depression, and pain severity. 4 2.4. Data analysis As previously reported, 18 data were checked for normality and outliers, and outlying data were excluded case-wise. Independent samples t tests were conducted to test for differences in cognitive performance and psychosocial factors between patients with fibromyalgia and HC. 
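As a concrete illustration of the switch-cost outcomes defined in section 2.2.1, the costs are simple differences between repeat-trial and switch-trial means. All trial values below are invented for illustration:

```python
import numpy as np

# Hypothetical trial data: reaction times (ms) and accuracy
# (1 = correct, 0 = incorrect) on repeat trials (same task as the
# previous trial) vs switch trials (task changed).
rt_repeat = np.array([520.0, 540.0, 510.0, 530.0])
rt_switch = np.array([610.0, 590.0, 640.0, 600.0])
acc_repeat = np.array([1, 1, 1, 0])
acc_switch = np.array([1, 0, 1, 0])

# Reaction-time switch-cost: positive when repeat trials are faster.
rt_cost = rt_switch.mean() - rt_repeat.mean()
# Accuracy switch-cost: positive when repeat trials are more accurate.
acc_cost = acc_repeat.mean() - acc_switch.mean()
print(rt_cost, acc_cost)  # 85.0 0.25
```

A participant with little or no switch-cost (scores near zero) performs comparably whether or not the task changes between trials.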
Pearson correlations were conducted to examine associations between psychosocial factors and cognitive performance among the whole sample. Psychosocial factors significantly related to cognitive performance were explored as potential mediators of group differences in cognitive performance. Mediation analysis was conducted using the PROCESS macro with bias-corrected 5000 bootstrapped resamples. 20 A post hoc power analysis indicated that a sample of 47 participants was sufficient to detect a medium- to large-sized effect (f 2 = 0.22), assuming power is 0.80 and α = 0.05.
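For readers who wish to reproduce this kind of analysis without the PROCESS macro, the product-of-coefficients logic can be sketched in a few lines of Python. The sketch below is a minimal illustration only: it uses a percentile bootstrap rather than the bias-corrected intervals PROCESS reports, and the group, sleep, and accuracy values are invented, not the study data.

```python
import random

def solve(a_mat, rhs):
    """Solve a small linear system by Gauss-Jordan elimination with pivoting."""
    n = len(a_mat)
    m = [row[:] + [rhs[i]] for i, row in enumerate(a_mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        if abs(m[piv][col]) < 1e-12:
            raise ZeroDivisionError("singular system")
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[col][c]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols(rows, y):
    """Least-squares coefficients via the normal equations X'X b = X'y."""
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(xtx, xty)

def indirect_effect(x, med, y):
    a = ols([[1.0, xi] for xi in x], med)[1]                  # a path: M ~ X
    b = ols([[1.0, xi, mi] for xi, mi in zip(x, med)], y)[2]  # b path: Y ~ X + M
    return a * b

# Invented data: group (0 = control, 1 = fibromyalgia), sleep disturbance, accuracy.
group = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
sleep = [48, 50, 47, 52, 49, 51, 58, 61, 57, 60, 63, 59]
acc = [0.92, 0.90, 0.93, 0.88, 0.91, 0.89, 0.84, 0.81, 0.85, 0.82, 0.79, 0.83]

point = indirect_effect(group, sleep, acc)

random.seed(1)
boots = []
while len(boots) < 2000:
    s = random.choices(range(len(group)), k=len(group))
    try:
        boots.append(indirect_effect([group[i] for i in s],
                                     [sleep[i] for i in s],
                                     [acc[i] for i in s]))
    except ZeroDivisionError:  # skip degenerate resamples (eg, one group absent)
        pass
boots.sort()
lo, hi = boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots)) - 1]
print(f"indirect effect = {point:.3f}, 95% percentile CI ({lo:.3f}, {hi:.3f})")
```

On a real data set, `indirect_effect` would be fed the group indicator, the sleep-disturbance score, and divided-attention accuracy; the indirect effect is deemed significant when the bootstrap CI excludes zero.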
3. Results Participants (N = 50) had a mean age of 38 years (SD = 12.6), and 90% were female. Participants were White (82%), African American/Black (8%), Asian (8%), and more than one race (2%). There were no differences in age, sex, or race between patients with fibromyalgia and HC. 18 Patients with fibromyalgia demonstrated poorer accuracy for divided attention compared with HC ( P = 0.028; Table 1 ). Based on raw accuracy scores on the attentional switching task, patients with fibromyalgia performed similarly across repeat and switch trials, whereas HC showed a larger benefit in accuracy from repeat than switch trials. In other words, HC showed a greater switch-cost for accuracy compared with patients with fibromyalgia ( P = 0.009). There was no difference in switch-cost reaction time. Patients with fibromyalgia reported greater sleep disturbance, anxiety, depression, and pain severity compared with HC. Correlations (Table 2 ) showed that greater sleep disturbance was significantly associated with lower accuracy on the divided attention task ( P = 0.01), but sleep was not related to accuracy or reaction time on the attentional switching task ( P s > 0.05). Anxiety and depression were not related to cognitive performance on any task ( P s > 0.05). We conducted a mediation analysis to explore whether the group difference in accuracy for divided attention performance was mediated by sleep disturbance (Fig. 1 ). Because sleep was not related to attentional switching, mediation analyses were not conducted for those outcomes. Similarly, as anxiety and depression were not related to cognitive performance, they were not explored as potential mediators. There was a significant indirect effect of group on divided attention through sleep disturbance (b = −0.05, 95% CI [−0.11, −0.001]). The group difference in divided attention performance was no longer significant when sleep disturbance was included in the model ( P > 0.05; Fig. 1 ). 
This suggests that greater sleep disturbance among patients with fibromyalgia accounted, at least in part, for their poorer accuracy on the divided attention task.
4. Discussion In the present study, patients with fibromyalgia demonstrated poorer accuracy for divided attention compared with healthy controls, which may suggest that patients with fibromyalgia found it difficult to perform this task both quickly and accurately. In the current context where participants performed the timed task with rapidly presented stimuli, it may not have been possible for patients to use compensatory strategies to maintain accuracy. Although we observed a greater switch-cost for accuracy among healthy controls, raw scores on the switching task suggested a slightly larger benefit from repeat trials among healthy controls, whereas patients with fibromyalgia performed similarly across repeat and switch trials. Research has shown that sleep disturbance is associated with cognitive difficulties, including impaired executive function and attention, 9 and one study demonstrated that poor sleep mediated the association between pain severity and impaired attentional performance. 8 In the present study, greater sleep disturbance was associated with poorer accuracy for divided attention. Furthermore, we found that sleep disturbance mediated the group difference in divided attention, such that patients with fibromyalgia reported greater sleep disturbance, and in turn, poorer accuracy for divided attention. Sleep disturbance is a modifiable factor that may be targeted among patients with fibromyalgia to improve cognition. Indeed, randomized controlled trials (RCTs) have shown that cognitive behavioral therapy (CBT) is effective for improving sleep among patients with fibromyalgia. 13 , 14 , 21 Moreover, a meta-analysis found that CBT for insomnia improves sleep quality, while also reducing pain severity, anxiety, and depression, 5 indicating it may be a beneficial treatment for a variety of comorbid symptoms. 
Additionally, one study showed that CBT improved executive functioning among patients with fibromyalgia, and that improvements in executive functioning were associated with improved sleep. 14 More RCTs are needed to examine whether reducing sleep disturbance among patients with fibromyalgia leads to improved cognition, specifically the ability to simultaneously process multiple pieces of information (ie, divided attention). There are limitations to consider when interpreting our findings. Our sample was majority female and White, limiting the generalizability of our findings. We only included patients with fibromyalgia and the factors that explain cognitive difficulties may differ for other chronic pain conditions. Our measures of cognitive performance mainly assessed aspects of attention and executive function. Future studies should assess other types of cognition, including memory. Despite these limitations, our findings highlight the importance of considering symptoms of sleep disturbance when investigating cognitive performance, particularly executive attention, among patients with fibromyalgia.
Patients with fibromyalgia reported greater sleep disturbance, which contributed to reduced accuracy on a divided attention task compared with healthy controls. Abstract Introduction: Patients with fibromyalgia show impaired cognitive performance compared with healthy, pain-free controls. Sleep disturbance, anxiety, and depression are highly prevalent among patients with fibromyalgia, and each is associated with impaired cognitive performance. Yet, limited work has explored whether psychosocial factors contribute to group differences in cognitive performance. Objectives: This secondary data analysis investigated differences in cognitive performance between patients with fibromyalgia and healthy controls, and whether psychosocial factors accounted for these differences. Methods: Adults with fibromyalgia (N = 24) and healthy, pain-free controls (N = 26) completed 2 cognitive tasks and the Patient-Reported Outcomes Measurement Information System sleep disturbance, anxiety, and depression short forms. Independent samples t tests were used to test for differences in cognitive performance between patients with fibromyalgia and healthy controls. Pearson correlations were conducted to examine associations between psychosocial factors and cognitive performance. Psychosocial factors significantly related to cognitive performance were explored as potential mediators of group differences in cognitive performance. Results: Patients with fibromyalgia demonstrated poorer accuracy for divided attention compared with healthy controls, and sleep disturbance mediated this group difference. On the attentional switching task, healthy controls showed a greater switch-cost for accuracy compared with patients with fibromyalgia, but there was no group difference in reaction time. Anxiety and depression were not related to cognitive performance. 
Conclusion: We found that patients with fibromyalgia reported greater sleep disturbance and, in turn, had poorer accuracy on the divided attention task. Sleep disturbance is modifiable with behavioral interventions, such as cognitive behavioral therapy, and may be a target for improving sleep quality and cognitive performance among patients with fibromyalgia.
Disclosures The authors have no conflicts of interest to declare.
Acknowledgements This research is partly supported by an unrestricted grant for research from Purdue Pharmaceutical (D.J.M.) and the National Institutes of Health K24NS126570 (R.R.E.). Data availability statement: The data that support the findings of this study are available from the corresponding author (D.J.M.), upon reasonable request.
Pain Rep. 2024 Jan 12; 9(1):e1
Introduction Intra-Vehicular Activity (IVA) and Extra-Vehicular Activity (EVA) safety is susceptible to numerous factors, with astronaut fatigue and cognitive load being primary concerns, often closely interlinked. Astronaut fatigue chiefly stems from the considerable weight (>100 kg), pressurization, and restricted mobility of spacesuits [1] . High cognitive load arises from the abundance of error-sensitive tasks, compounded by sensory deprivation [2] , [3] . EVA spacesuits, comprising up to 16 material layers, impose limitations on touch and visibility (e.g., 2023 Artemis spacesuits obstruct foot view, and block sound [4] ). The consequence of this degradation of senses is an increase in cognitive load [5] , [6] and the effect on mission safety can be as critical as that of fatigue. This effect was evidenced in a 2021 analysis that showed a correlation between cognitive load and increased falls during the Apollo missions [7] . Ergonomics and Safety Astronauts suffer a surprisingly high rate of musculoskeletal injury [8] . Various authors have addressed the ergonomics of suits from different perspectives [9] , [10] , [11] . Further efforts beyond ergonomics have been proposed in diverse areas, such as hypercapnia prevention, thermal regulation, waste management, radiation shielding, and further injury prevention [12] , [6] , [13] . Solutions to mitigate sensory degradation have also been proposed via the use of various interfaces, such as multimodal screens [14] , Augmented Reality [15] , new glove designs [16] , and passive electro-stimulation for haptic sensory substitution [17] , among others. Compared to other solutions such as haptics [18] , sound design [19] has been an overlooked area that offers potential for improvement with minimal engineering drawbacks. Auditory Comfort EVA and IVA suits offer life support [21] , [22] but tend to dampen external sounds, especially those associated with EVA activities. 
Consequently, the protective features of these suits limit the natural auditory experience that humans typically encounter in their daily environment. The suits often incorporate multiple layers of insulating and protective materials, which can attenuate sound transmission and lead to muffled or reduced audio clarity. For example, when wearing Final Frontier Design's training suit with visor down (Fig. 2 ), students reported an inability to understand speech. Consequently, individuals wearing an IVA suit might face challenges in perceiving and interpreting auditory information, which could have a substantial impact on their cognitive performance and problem-solving abilities in situations where sound affects cognition [3] , [7] . In the case of an EVA occurring in the absence of atmosphere, the sound and vibrations that the astronaut feels are transmitted by the suit itself from the suit's contact with objects, or contact and friction between the human skin and the suit's inner shell. Therefore, the absence of atmosphere does not imply silence, as the suit (which is analogous to a pressurized human-sized balloon) transmits sound and vibrations to its wearer. In addition, on a hypothetical Mars walk the rarefied Martian atmosphere would provide faint audio feedback, which has a significantly different signature from feedback heard on Earth. Regardless of the fact that on Mars the suit would block external sounds too, it is feasible to process the Martian sounds to make them sound as they would on Earth by means of simple sound processing and linear filtering techniques. Evaluation of Sound Transparency Our primary objective is to assess the impact of sound transparency on cognitive performance in subjects wearing an IVA spacesuit. We hypothesize that the implementation of transparency leads to enhanced cognitive performance in tasks that require manipulation as well as problem-solving skills. 
To test this hypothesis, we designed an experimental setup with two conditions: a standard spacesuit and one where an audio system provides sound transparency. We measure cognitive performance using the Koh block test, which involves solving a 9-block puzzle. It requires manual dexterity as well as problem-solving skills. It was chosen because it is representative of the tasks an astronaut faces, such as during a spacewalk (assembly, tool manipulation, thinking, stress, and time constraints). We considered various tests. In our related work [20] , we applied the Fukuda step test to measure cognitive improvement related to proprioception in preventing falls during EVA; however, the results were inconclusive as the Fukuda test is primarily designed to detect vestibular damage. Another alternative is the NASA Task Load Index; although widely used in various fields [21] , including to assess sound comfort in offices [22] , it is not a well-defined test per se but rather a qualitative assessment guide that can be applied to any activity. Thus, we opted for the Koh block test, which is quantitative. As it is timed, it facilitates a numerical comparison between different suit configurations (sound transparency ON/OFF).
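The simple linear filtering mentioned earlier in connection with Martian acoustics can be illustrated with a toy sketch: make-up gain plus a first-order high-frequency emphasis to counter the low-pass character of a thin atmosphere. The filter form, the coefficients, and the function names `one_pole_highpass` and `earthify` are illustrative assumptions, not a validated model of Mars audio.

```python
def one_pole_highpass(samples, alpha=0.9):
    """First-order high-pass filter: y[n] = alpha * (y[n-1] + x[n] - x[n-1])."""
    out, y, prev = [], 0.0, 0.0
    for x in samples:
        y = alpha * (y + x - prev)
        prev = x
        out.append(y)
    return out

def earthify(samples, gain=8.0, hf_boost=0.5):
    """Apply make-up gain plus a high-frequency emphasis, sample by sample."""
    highs = one_pole_highpass(samples)
    return [gain * (x + hf_boost * h) for x, h in zip(samples, highs)]
```

A production system would use measured transfer functions of the Martian atmosphere and the suit shell to design the equalizer, but the structure would be the same: a fixed linear filter applied to the external microphone signal.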
Materials and Methods Experimental Setup An IVA training suit (Terra-Suit) from Final Frontier Design [30] , [31] (NY) was rented from startup Zero2infinity S.A, Barcelona, Spain, and connected to a Stanley FATMAX air compressor (30 L/min, 59 dB noise). A digital air regulator (LEMATEC DAR02B) maintained a constant pressure inside the suit between 1.01–1.03 bars. Test days had temperatures of 22–24 °C, and the suit had a sound insulating characteristic of 21 dB. Apple AirPods TM (1st generation) paired with a microphone Android app were used to provide sound transparency, with a one-way (mic-2-ear) delay of 144 ms. The experiments were carried out in room E1-3071 on the campus of the first author. This room was windowless and had a relatively high background noise level of 42.5 dB stemming from the HVAC, which was always on. During the experiments, the noise level was 49.5 dB, owing to the compressor that sat 3 m away. The compressor was always on. Flooring: carpet. Lighting: fluorescent bulbs. Table used: standard, 1 m by 40 cm. Chair: standard-issue wood and steel chair. See Fig. 2 Koh Block Test A description of the Koh block protocol can be found in [24] . Procedure: Participants were briefed, given consent forms, and informed of the procedure. They were instructed to solve three different Koh block puzzles in sequence (Figs. 2 - 3 ). All participants solved the puzzles in the same order, with an approximate duration of 23 minutes per participant. Sequence Briefing and consent form signing. First puzzle: Participants familiarized themselves with the Koh block test mechanics by solving the ‘offset diamond’ puzzle without wearing the suit . Spacesuit donning; AirPods that relay exterior sound via Bluetooth are placed in participants’ ears . Visor down. Second puzzle (control experiment): Both groups A and B solve the ‘diagonal stripes’ puzzle. Sound transparency is on . Visor up. AirPods removed from Group A participants. 
Feedback is not requested but noted down if any is received spontaneously . Visor down. Third puzzle: Both groups A and B solve the ‘checkered pattern’ puzzle . Visor up. Qualitative feedback is requested and collected. Removal of the suit . Participant Demographic 39 participants were recruited from UAE University (36 female, 3 male), with a mean age of 21.5 (SD=4.6, min 18, max 33). Three participants were left-handed, six wore contact lenses or prescription glasses, and various preexisting conditions were reported. Participants signaled puzzle completion with a hand gesture. Time was recorded, and the experiment ended. If a participant completed the puzzle incorrectly, their data were not used; this occurred for seven participants across the three puzzles. Remaining statistics for groups A and B: Group A (18 females, mean age 20.4, five wore prescription glasses); Group B (16 females, 2 males, mean age 20.0, two wore prescription glasses). Male performance was similar to or worse than mean female performance. Data Availability The data set is available at IEEEDataPort https://dx.doi.org/10.21227/12jp-pq48 Ethics This research, titled "Space Suit Haptics Project," adhered to IEEE and UAEU ethical guidelines. The study received ethical approval from the UAEU Social Sciences Ethics Committee - Research / Course (Application No: ERSC_2023_2408) on January 20, 2023. Key ethical considerations were addressed as follows: 1. Informed Consent: Participants provided written consent after receiving a detailed explanation of the project. 2. Privacy and Confidentiality: Participant data was securely stored and anonymized to maintain privacy. 3. Minimization of Potential Harm: Precautions were taken to avoid undue physical, psychological, or emotional risks to participants. 4. Fair Treatment: All participants were treated fairly and without discrimination during recruitment and throughout the study. 5. 
Transparency and Accountability: The research team disclosed methods, findings, and potential conflicts of interest.
Results We divided the participants into two groups. Fig. 3(a) compares groups' puzzle completion times using sound transparency ON for both. Fig. 3(b) compares puzzle completion times when sound transparency is ON for one group and OFF for the other; statistical significance (p<0.05) is indicated with an asterisk. See also the sequence section in the materials and methods. Effect of Sound Transparency on Completion Time Welch's t-test shows a significant difference (t = 2.38, p = 0.012) in completion time between the groups with sound transparency ON (mean = 101 s) and OFF (mean = 159 s), with a 95% CI (16.6 s, Inf). The results support the argument that providing astronauts with sound transparent suits improves cognitive performance. A summary of qualitative comments from participants is shown in Table I , listing the most common feedback aggregated by topic.
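The statistic above can be reproduced from raw completion times. The sketch below re-implements Welch's unequal-variance t-test; the completion times listed are invented placeholders, not the study data.

```python
import math
import statistics

def welch_t(x, y):
    """Return (t, df) for Welch's unequal-variance t-test."""
    n1, n2 = len(x), len(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)  # sample variances
    se2 = v1 / n1 + v2 / n2
    t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Invented completion times in seconds (transparency ON vs OFF).
on = [88, 95, 101, 99, 110, 104, 112, 97]
off = [140, 165, 150, 172, 158, 149, 166, 171]
t, df = welch_t(on, off)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The reported comparison would feed the two groups' recorded times to `welch_t` and look up the one-sided p value at the resulting degrees of freedom.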
Discussion Of the more than 500 astronauts that have flown to space, about 11% have been women. The median age was 46 years old. The Gender Gap in Space In the past, astronauts belonged to a very specific demographic of race and gender. While this distribution has changed in recent missions with improvements in gender [23] and race distributions, society is still far away from gender equality. This research pioneers a gendered approach in that it focused on a sample that was 95% female and younger than the average astronaut. This approach arose from our own logistical limitations. Generalization of Results Gender The literature related to our setup does not suggest that comparable results would differ for a male-only population [24] . However, more study might be needed to account for consequences of long exposure to the space environment. Age The average age of our sample is 22 years. The youngest astronaut to fly was Gherman Titov, who was 25 at the time of his 1961 orbital flight. We acknowledge that the results may not be generalizable across different age groups. Age can have a significant impact on cognitive performance, with research suggesting changes in cognitive abilities, such as processing speed, working memory, and problem-solving, over the course of one's lifetime [25] , [26] . Future work could investigate the effects of age on problem-solving efficiency and compare the results with our findings. Grade Point Average Finally, we observed no correlation (R < 2%) between self-declared Grade Point Average (a numerical score of academic performance) and time to complete the test. This finding is in line with current literature. While some studies have identified a correlation between Intelligence Quotient and academic outcomes, they also highlighted that other factors, such as self-discipline, have a greater influence [27] . 
Further studies concluded that Intelligence Quotient alone does not account for the differences in academic performance among students, with motivation being the primary factor [28] .
Conclusion This study highlights the importance of sound transparency in spacesuit audio systems, demonstrating its impact on cognitive performance. Our findings demonstrate that the implementation of transparency improves cognitive performance, with potential implications for spacesuit design and safety. Another area of application of these findings is underwater welding, which involves similar sensory impairments and records one of the highest occupational fatality rates in the world [29] .
The review of this article was arranged by Editor Ilaria Cinelli. CORRESPONDING AUTHOR: JOSE BERENGUERES (e-mail: [email protected] ) Spacesuits may block external sound. This induces sensory deprivation; a side effect is lower cognitive performance. This can increase the risk of an accident. This undesirable effect can be mitigated by designing suits with sound transparency. If the atmosphere is available, as on Mars, sound transparency can be realized by augmenting and processing external sounds. If no atmosphere is available, such as on the Moon, then an Earth-like sound can be re-created via generative AR techniques. We measure the effect of adding sound transparency in an Intra-Vehicular Activity suit by means of the Koh Block test. The results indicate that participants complete the test more quickly when wearing a suit with sound transparency.
Acknowledgment The authors are grateful to all the volunteers involved in the study. They thank Tony Ng for suggesting the Koh block test, Jose Mariano Urdiales (zero2infinity) for loaning the suit, and Julian Lopez for assisting in the transportation of the suit.
IEEE Open J Eng Med Biol. 2023 Jun 22; 4:190-194
Introduction Publishing papers in academic journals is a cornerstone of many physician careers. From sharing groundbreaking scientific discoveries to advancing professional aspirations, dermatologists and other physicians contribute to scientific progress through publications. Editors and editorial boards serve as the gatekeepers of medical literature; through scientific review, they determine which articles are published and which are not. Most editors are also active scientific researchers creating their own academic work and submitting it for publication. A potential conflict of interest can arise when an editor or editorial board member submits their work to their own journal, even if other editors are reviewing the work. Some journals publish work by their own editors; others do not [ 1 ]. Some journals may encourage editors and board members to submit their work to their own journals. While some medical specialties have examined the effect of editorial board membership on the likelihood of manuscript publication, this has been minimally studied in dermatology [ 1 - 5 ]. Within some journals, a small number of authors, frequently editorial board members, have been found to have a disproportionately high number of publications in their own journal, as well as shorter wait times for manuscript processing and publication decisions [ 6 , 7 ]. We investigated the publication patterns of 67 editorial board members at three leading dermatology journals to identify patterns between editorial board membership and publication rates and compare publication rates within their own journals versus external journals.
Materials and methods Using Scopus, Elsevier’s author search tool, we identified editorial board members who served continuously between January 2019 and December 2021 at JAMA Dermatology (JAMA Derm), the British Journal of Dermatology (BJD), and the Journal of the American Academy of Dermatology (JAAD). These journals have consistently ranked in the top five dermatology journals by impact factor over the last five years, according to Clarivate Web of Science citation reports [ 8 ]. We considered editorial board members with titles of Associate Editor, Assistant Section Editor, and higher, and with more than six total publications over the period studied. Editorial staff members who were not dermatologists and publishing staff were excluded. Editorial articles were also excluded due to their regular publication cadence. Initial data were collected from 104 authors, with 67 authors in the final analysis based on exclusion criteria. All data were collected from publicly available sources, and no humans were contacted during any part of the research process. Data were collected on each author’s h-index, editorial board title, number/type of articles in their own journal, number/type of articles in other top two journals, total publications, and affiliations between January 2019 and December 2021. Average and median publications were calculated; t-tests were run to examine differences in publication rates and differences in the percentage of total publications in their own journal. P-values <0.05 were considered significant.
Results For the 67 authors included, the average percentages of members’ total publications (excluding editorials) appearing within their own journal were 13.4% (JAMA Derm), 23.6% (BJD), and 35.0% (JAAD). The average percentages of a member’s total publications appearing in the other top two journals were 19.8% (JAMA Derm), 6.6% (BJD), and 4.56% (JAAD). The percentages of total publications within a member’s own journal as compared to the number of publications in all three top journals were 42.4% (JAMA Derm), 78.3% (BJD), and 82.8% (JAAD) (Table 1 ). Within the examined period, the mean difference in the number of publications within a member’s own journal compared to those published in the other top two journals was significantly higher for JAAD (8.6 [95% CI 2.0 to 15.2]; P = 0.013) and BJD (4.3 [95% CI, 2.3 to 6.2]; P = 1.4E-05), but not for JAMA Derm (-3.8 [95% CI, -1.53 to 9.0]; P = 0.07). The mean difference in the percent of total publications appearing in a member’s own journal compared to the percent appearing in the other top two journals was significantly higher for JAAD (30.5% [95% CI, 17% to 44%]; P = 0.00016) and BJD (17.0% [95% CI, 9.2% to 24.7%]; P = 6.7E-05), but not for JAMA Derm (-6.3% [95% CI, -15.7% to 3.1%]; P = 0.18) (Table 2 ).
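The per-author quantities behind these comparisons can be reconstructed from simple counts. In the sketch below the counts are invented for illustration (they are not the collected data), and the paired t statistic is one plausible reading of the t-tests described in the methods, not necessarily the exact procedure used.

```python
import math
import statistics

own = [12, 5, 9, 14, 7]      # publications in the member's own journal
other = [3, 4, 1, 2, 5]      # publications in the other two top journals
total = [30, 22, 25, 41, 28] # total publications, editorials excluded

# Share of each author's total output appearing in their own journal.
share_own = [o / tot * 100 for o, tot in zip(own, total)]
# Paired difference: own-journal count minus other-top-two count.
diff = [o - x for o, x in zip(own, other)]
mean_diff = statistics.mean(diff)
t_paired = mean_diff / (statistics.stdev(diff) / math.sqrt(len(diff)))
print(f"mean own-journal share: {statistics.mean(share_own):.1f}%")
print(f"mean difference: {mean_diff:.1f}, paired t = {t_paired:.2f}")
```

With the real per-author counts, this yields the mean differences and percentages reported above; the p value would then come from the t distribution with n − 1 degrees of freedom.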
Discussion While it may not be surprising to see higher publication rates for editorial board members within their own journals, this has not yet been well-studied in dermatology [ 1 - 5 ]. A large systematic review examining the phenomenon of “self-publishing” found considerable variability across different fields, journals, and editors [ 1 ]. Some editors never publish in their own journals, while others publish extensively in their own journals [ 1 ]. Editorial board members often have a strong track record of publication within a field. They are frequently regarded as key thought leaders, have high h-indexes, and may be more aware of journal priorities and trending topics. All of these factors may contribute to a higher likelihood that their papers are accepted for publication. However, the role of editorial board members as “gatekeepers” of publication can raise concerns about potential bias, favoritism, and conflicts of interest [ 2 ]. It is important to be clear that we make no claims about bias or irregular practices within the three journals examined in this article. Nevertheless, the high proportion of in-journal publications for editorial board members is worth further consideration. The pressure to publish seems to intensify every year, yet the ability to publish is increasingly challenging for many. As dermatology strives to prioritize diversity, equity, and inclusion, this emphasis should extend to manuscript review and publication, ensuring that a variety of voices is being sought out and heard. It is crucial to balance the voices of the influential with fresh perspectives. To address concerns of preferential treatment or bias, Helgesson et al. propose that journals be transparent about the criteria employed in the review process and the review of editorial board member submissions to the journal. They suggest that journals exclude editors from any formal influence over the review and acceptance of their own submissions [ 1 ]. 
Additionally, they advocate for blinding the identities of editorial board members from reviewers as part of the manuscript review process. Helgesson et al. further recommend that editors-in-chief, and perhaps associate editors, should avoid publishing within their own journals [ 1 ].
Conclusions The findings of this study may suggest that further reflection on the manuscript review process is warranted. Increasing transparency about the factors considered during the review process for all submissions, especially those by editorial board members, could alleviate concerns about potential favoritism or bias. It is important not to assume improper practices from these data but rather to use them as an opportunity to conduct meaningful introspection of journal review practices as we strive to eliminate bias in publication. Having a small number of authors with a disproportionately high number of publications over a long period of time should prompt a review to ensure that journals include a diverse set of voices. This study is limited by its short time frame of three years, its inclusion of only three journals, and its inability to establish causation. Further examination of editorial review and publication practices should be conducted with a larger cohort of journals over a more extended period of time. A comparison of publication rates before and after becoming an editorial board member may also be useful.
Introduction While some medical specialties have examined the effect of editorial board membership on the likelihood of manuscript publication, this has been minimally studied in dermatology. We investigated the publication patterns of 67 editorial board members at three leading dermatology journals to identify any discernible patterns between editorial board membership and publication rates. Materials and methods Using Scopus, Elsevier’s author search tool, we identified editorial board members who served continuously over a three-year period between January 2019 and December 2021 at JAMA Dermatology (JAMA Derm), the British Journal of Dermatology (BJD), and the Journal of the American Academy of Dermatology (JAAD). All data are from publicly available sources. Results The mean difference in the number of publications within a member’s own journal compared to those published in the other top two journals was significantly higher for JAAD (8.6 [95% CI 2.0 to 15.2]; P = 0.013) and BJD (4.3 [95% CI, 2.3 to 6.2]; P = 1.4E-05), but not for JAMA Derm (-3.8 [95% CI, -1.53 to 9.0]; P = 0.07). The mean difference in the percent of total publications appearing in a member’s own journal compared to the percent appearing in the other top two journals was significantly higher for JAAD (30.5% [95% CI, 17% to 44%]; P = 0.00016) and BJD (17.0% [95% CI, 9.2% to 24.7%]; P = 6.7E-05), but not for JAMA Derm (-6.3% [95% CI, -15.7% to 3.1%]; P = 0.18). Discussion Although we make no claims about irregular practices, the role of editorial board members as “gatekeepers” of publication can lead to allegations of potential bias, favoritism, and conflicts of interest. The high proportion of in-journal publications for editorial board members of JAAD and BJD is, therefore, worth further consideration. Conclusion These results may indicate that reflection on the manuscript review and publication process is warranted to ensure equity and inclusivity. 
Some limitations of this study include the short time interval of three years, the inclusion of only three journals, and the lack of established causation. Further examination of editorial review and publication practices should be undertaken.
CC BY
no
2024-01-16 23:47:20
Cureus.; 15(12):e50518
oa_package/1c/94/PMC10789472.tar.gz
PMC10789473
38226108
Introduction Eating disorders (ED) involve an excessive concern regarding one's diet, appearance, weight, and body shape. This obsession can be nearly fatal in some cases, as it may cause extreme changes in dietary and exercise habits that harm one's physical and mental well-being and decrease quality of life. Anorexia nervosa, bulimia nervosa, and binge-eating disorder are considered common ED [ 1 ]. It is still unclear what exactly causes ED; however, they are theorized to be related to anxiety and depression [ 1 ]. The general strain theory has been used to explain how negative emotions can influence behavior; some are exhibited outwardly, while others internalize the negativity, which can manifest as anxiety, depression, and disordered eating behavior [ 1 ]. ED are also believed to occur more often in women, as many studies have found a higher incidence of ED in women [ 2 ], triggered by biological, psychological, environmental, and social factors [ 3 ]. Psychological factors such as depression, anxiety, and body-image dissatisfaction can alter self-perception and lead one to pursue disordered eating behaviors as a result [ 1 ]. Environmental and sociocultural factors can also lead to risky eating behaviors: negative parental feedback regarding a child's appearance, a critical family environment, and comparing one's appearance to the ideals displayed in media and accepted as such by society at large and by closer relations (family, friends, peers, etc.), ideals that usually reflect the popularized standard of thinness as the desirable female beauty ideal, pursued in order to achieve that "ideal" image [ 1 , 2 ]. Sociocultural attitudes regarding appearance and body image represent the internalization of self-value by identifying the standards of sociocultural beauty through societal, familial, peer, and mass media influence, and the perception individuals hold of their body image is affected by many variables [ 4 , 5 ]. 
Interaction with family members tends to be one's first source of integration with society; it often influences self-esteem and the perception of one's body image, and may lead to adopting disturbed attitudes toward eating and exercising habits that can affect one's health [ 6 ]. Growing up surrounded by peers who hold their own ideas about body image, and possibly extreme dieting and exercising habits, some people may be influenced or feel pressured to adopt those ideas and perceive them as the norm, which further disturbs their relationship with their body [ 7 ]. Another important factor related to the high likelihood of developing ED in young women is the perceived ideal standard of beauty and body shape promoted through mass media [ 7 ]. Studies have shown that the sociocultural influence of Western media might have brought newly established standards of beauty and body shape to Eastern societies such as the Middle East, which has increased body dissatisfaction and poor eating habits in young females [ 2 ]. The correlation between sociocultural attitudes, including parental factors, mass media, and body image apprehensions resulting from peer pressure, and the onset of ED among females is not often discussed in the literature, especially in our region. For that reason, our aim is to explore how society, socially imposed standards of appearance and beauty, and the internalization and acceptance of these standards by young women can affect and possibly increase the probability of developing ED in young women in Almadinah Almunawarah. 
This study's objectives are to explore concerns regarding body image and unhealthy eating behaviors among young women in Almadinah Almunawarah; to identify the important social and cultural factors contributing to the development of ED; to review the discrepancies between idealized standards of body weight in society and normal BMI (18.5 to 24.9); to assess risky behaviors suggestive of a higher likelihood of developing an ED; and to discuss how sociocultural attitudes toward appearance can play a role in the development of ED in young women in Almadinah Almunawarah.
Materials and methods Study setting and population A cross-sectional study was conducted between April 2022 and May 2023. The target population was young females living in Almadinah Almunawarah, Saudi Arabia. A total of 384 women aged 18-30 years participated in this study. The inclusion criteria comprised young women aged 18 to 30 residing in Almadinah, whereas the exclusion criteria encompassed males, individuals in prison, pregnant women, females below 18 or above 30 years old, and those with cognitive or mental disabilities. Ethical statement The study's protocol and questionnaires were approved by the Research Ethics Committee, College of Medicine, Taibah University, Almadinah Almunawarah, Saudi Arabia. Ethical approval was obtained in January 2022 (Study ID: STU-21-027; Reference Number: IORG0008716-IRB00010413). Participants' confidentiality and anonymity were assured, and consent was obtained from those who agreed to participate. The sample size was determined using the Krejcie & Morgan equation, which minimizes standard error. The population census of women in Almadinah Almunawarah in 2017 was 239,234. According to Krejcie & Morgan (1970), the required sample size for a population of N >= 100,000 is 384 participants at a confidence interval (CI) of 95%, and this study recruited 384 participants. Data collection tools Using a web-based survey, data were collected anonymously via a secure server using Google Forms. The link https://forms.gle/FZpVjKaSWpRxEcFm8 was distributed among participants on-site (gyms, schools, malls, etc.) and through social media platforms. The participants responded to the survey after giving their consent to the collection of personal data, including age, weight, and height. Body mass index (BMI) was calculated from self-reported measures of height and weight submitted by participants. 
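The Krejcie & Morgan sample-size calculation cited above can be sketched as follows. The defaults (chi-square = 3.841 for one degree of freedom at 95% confidence, assumed proportion p = 0.5, margin of error d = 0.05) are the standard assumptions behind that published table, not values stated by the authors:

```python
import math

def krejcie_morgan(population, chi2=3.841, p=0.5, d=0.05):
    """Required sample size per Krejcie & Morgan (1970).

    chi2: chi-square for df=1 at 95% confidence; p: assumed population
    proportion (0.5 maximizes the required sample); d: margin of error.
    """
    num = chi2 * population * p * (1 - p)
    den = d ** 2 * (population - 1) + chi2 * p * (1 - p)
    return math.ceil(num / den)

# 2017 census of women in Almadinah Almunawarah, as cited above
print(krejcie_morgan(239_234))  # → 384
```

For the cited census figure the formula yields 384, matching the study's recruited sample; the result plateaus near 384 for any population of 100,000 or more, as the methods note.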
Using the World Health Organization's cutoff criteria for BMI, participants were classified as underweight, normal weight, overweight, or obese. Validated questionnaires were used in the survey and translated from the original English into Arabic by a certified translator. Questionnaires included the following validated scales. The Sociocultural Attitudes Toward Appearance Questionnaire-4 (SATAQ-4): SATAQ-4 is a 22-item questionnaire collecting data on social pressure from media, family, and peers that may affect the prevalence of eating disorders; each question is measured on a 1-5 scale. The questionnaire explores four sociocultural domains: self-pressure to be "thin or muscular"; family pressure (parents, siblings, close and distant relatives); peer pressure (friends, colleagues, and other social relations); and media pressure (internet and social media, movies and television, advertisements) regarding weight and appearance [ 8 ]. The Eating Attitudes Test (EAT-26): EAT-26© is used for recognizing people at risk of developing disordered eating behaviors. The EAT-26 has proven effective and is used widely across various countries and ages. It is a self-reported assessment tool with answers ranked on a six-point scale. Participants with an overall score of 20 points or more were classified as at risk of developing disordered eating attitudes and habits; additional assessment by a mental health expert is advised for those with high scores [ 9 ]. Statistical analysis After collecting the data through the survey and converting them to an Excel sheet, we used the Statistical Package for the Social Sciences (IBM SPSS Statistics for Windows, IBM Corp., Version 26.0, Armonk, NY) for data analysis. Mean and standard deviation (±SD) were used to represent continuous data, while categorical variables were represented as frequencies and percentages. Data normality was tested using the Kolmogorov-Smirnov test. 
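The BMI derivation and WHO classification step described above can be sketched as a small helper. This is a minimal illustration: the function name is ours, and the cutoffs are the standard WHO values referenced in the text:

```python
def bmi_category(weight_kg, height_cm):
    """Classify BMI using the WHO cutoffs referenced above."""
    bmi = weight_kg / (height_cm / 100) ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    return "obese"

# Sample averages reported in this study: 58 kg, 158.4 cm
print(bmi_category(58, 158.4))  # → normal weight
```

Applied to the sample means reported in the Results, this yields a BMI of about 23.1, i.e., normal weight, consistent with half the sample falling in that category.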
Consequently, the Mann-Whitney U test was used to determine the relation between the SATAQ-4 subscales and the risk of ED. The Wilcoxon signed-ranks test was used to review the discrepancies between idealized standards of body weight in society and normal BMI. Spearman's rank correlation was used to determine the link between disordered eating risk (measured with the EAT-26) and BMI. Categorical variables were analyzed using Pearson's chi-squared test to assess significant differences between participants' age groups with regard to EAT-26 scores. A multiple logistic regression model was constructed with ED risk as its dependent binary variable to predict the factors affecting it. To verify the assumptions of logistic regression, the data were examined to ensure they were not correlated. A p-value less than 0.05 was considered statistically significant, and the CI was 95%.
Results The total number of participants was 384, all of them female. Table 1 reflects the sociodemographic characteristics of the sample. Approximately 57% (n= 218) of the participants were 18 to 22 years old. The average height of the participants was 158.4 cm (SD± 6.2), and their average weight was 58 kg (SD± 14.4). Half of the sample had normal weight, and approximately 33.1% (n=127) of the participants scored 20 or more on the EAT-26 scale and were considered susceptible to ED. Figure 1 illustrates the participants' distribution on the EAT-26; the highest prevalence of risk was in binge eating behavior, with around 40.6% (n=156) of the participants at risk, while the lowest was in risky exercise behavior at 3.4% (n=13). The participants' responses and means for each statement of the SATAQ-4 for the whole study sample (n = 384) show high averages for self-pressure, scoring between 3.4 and 3.5 out of 5 for internalization of muscularity and internalization of thinness, respectively. Lower averages (ranging between 2.0 and 2.8 out of 5) were observed for the family pressure (2.6), peer pressure (2.0), and media pressure (2.8) subscales. The average response on the total score of the SATAQ-4 was 2.9 out of 5 (Figure 2 ). For statements 1 to 25 of part 1 of the EAT-26, the options "sometimes," "rarely," and "never" were the answers most frequently chosen by participants, while for statement 26, "enjoy trying new rich foods," the options "always," "usually," and "often" were chosen most frequently. Tables 2 , 3 show participants' responses to ED risk in part 2 of the EAT-26. 
As shown in the table, for the behavior question "Gone on eating binges where you feel that you may not be able to stop," the "never" option was chosen most often, by 33.3% (n=128) of participants, followed by "once a month or less" at 26% (n=100), while "once a day or more" was chosen least often, at 3.9% (n=15). For the behavioral questions "Ever made yourself sick (vomited) to control your weight or shape" and "Ever used laxatives, diet pills, or diuretics (water pills) to control your weight or shape," the "never" option was chosen most often, by 83.6% (n= 321) and 81.5% (n=313) of participants, respectively. About 3.4% (n=13) of participants answered that they exercised for more than 60 minutes once a day or more. Finally, 15.9% (n=61) of participants reported losing 20 pounds or more in the past six months. Table 4 shows the difference in mean ranks of ED risk in relation to the SATAQ‐4. As shown in the table, internalization of thinness, internalization of muscularity, family pressure, peer pressure, media pressure, and total scores on the SATAQ-4 were all considerably greater among at-risk participants (P < 0.05). Furthermore, internalization of thinness showed the greatest magnitude of difference between females at risk (mean rank = 249.2, n= 127) and females who were not (mean rank = 164.5, n=257). In contrast, the influence of peer attitudes toward appearance on ED risk showed the least magnitude of difference. Figure 3 illustrates ED risk by participants' BMI; females who were obese demonstrated the greatest prevalence of ED risk, as the majority of obese participants (20 out of 44) were at risk of ED. Additionally, overweight participants accounted for 23.6% (n=30) of those at risk of ED, and normal-weight participants for 46.5% (n=59). Underweight females had the lowest representation among those at risk, at 14.2% (n=18). 
In addition, there was a weak but statistically significant positive correlation between eating attitudes test scores and BMI, r(382) = .172, p = .001. There was no significant relationship between EAT scores and BMI within the ED risk sample, r(125) = .143, p = .109, as evident in Figure 4 . Table 5 shows a comparison between idealized standards of body weight and normal BMI (18.5-24.9). A Wilcoxon signed-ranks test indicated a significant difference between idealized standards of body weight and normal BMI, p = .001, for the whole study sample, and likewise for the sample without ED risk.
Discussion The objective of this paper is to explore the association of certain sociocultural attitudes with the development of ED among young females in Almadinah Almunawarah, Saudi Arabia. This study found that approximately 57% (n=218) of the participants were 18 to 22 years old, and 33.1% (n=127) scored 20 or more on the EAT-26 scale and were considered at risk for ED. The proportion of individuals with a higher probability of developing an ED was one-third of the participants, which is relatively similar to another study conducted on students at the University of Sharjah, UAE [ 6 ]. Women of younger age groups are more vulnerable to the influence of sociocultural attitudes and are thus more susceptible to developing risky eating behaviors [ 2 ]. One reason is that, at this age, young women start to develop an idea of their self-representation under the effect of family pressure, social media pressure, and peer pressure [ 6 ]. In this study, family pressure regarding body shape and weight showed a higher incidence among participants who were more at risk of developing ED (P<0.001). Similar results have been observed in another study of female students in Jeddah, where 45% (n=226) of the participants expressed family pressure to lose weight [ 10 ], and 50% (n=328) did so in another study of female students in Dammam [ 11 ]. Different factors relate to the effect of family on developing disordered eating behaviors, including being labeled by family members as "overweight" earlier in life [ 12 ] and having mothers who exhibit disordered eating behavior themselves [ 13 ]. Furthermore, studies show a positive correlation between high educational attainment in both parents and grandparents and the risk of developing ED in their offspring [ 14 , 15 ]. 
Another observed factor was family income; a study by Hunger and Tomiyama suggests a strong positive association between family income and ED [ 12 ]. Regarding peer pressure, which is also a form of environmental influence, Dr. Harris, in a review of the developmental literature, states that peer groups are one of the main factors influencing the development of disordered eating patterns. Pressure to fit in and meet group norms is one of the most potent ways that peers can modify personality characteristics. This is not a direct obligation for friends to copy one another; rather, they subtly wish to share meaningful experiences with their peers that form part of their group identity [ 16 ]. Findings of other studies also indicated that peer interaction (e.g., discussing eating habits and dieting) and a desire for acceptance and popularity (i.e., believing that becoming thinner will make them more likable) are both factors of peer influence strongly associated with disordered eating. According to other studies, the significant association between peer pressure and disordered eating behaviors can be attributed to body comparison. Conversations between adolescents about appearance, weight, and dieting can implant the idea that the ideal beauty standard in the community is to be thin, which in turn can provoke the internalization of body image issues and dissatisfaction and lead adolescents to adopt unhealthy, disordered eating behaviors deemed successful by their peers in order to achieve that image [ 17 ]. In the present study, the influence of media attitudes toward appearance revealed a high magnitude of difference between females at risk of developing ED and those who were not (P<0.001), and, as shown in Figure 2, media influence ranked higher than family and peer factors, respectively. 
These results seem to reinforce the traditional belief that exposure to media advertising the thin bodies of social-media influencers triggers symptoms of ED among women who feel insecure about their bodies [ 18 ]. Another study proposes that women are more influenced by media due to the usual depiction of femininity with thin, slim bodies and virtues appealing to society, such as self-discipline, assertiveness, and wealth, with whatever defies the norm or opposes these virtues illustrated as ugly and unattractive [ 18 ]. Another study suggested that having a high BMI in a society that values a thin body, influenced by media exposure, is likely to increase the self-relevance of thinness and body dissatisfaction [ 19 ]. On the other hand, a study conducted in Kuwait states that the dissatisfaction with body image arising from media in Eastern society was initially adopted from Western media and Western lifestyles [ 20 ]. The discrepancy between actual body image (the current physical size as perceived) and ideal body image (the desired body size) is frequently used as a measure of body dissatisfaction [ 21 , 22 ]. Most frequently, this discrepancy consists of desiring a thinner body. This study showed a difference between the idealized body weight and what is considered a normal BMI across the whole sample, regardless of ED risk, which indicates body dissatisfaction in the sample in general. Figure 3 suggests that a higher BMI can lead to a higher risk of ED, as participants classified as obese had the highest ED risk. To confirm this, the relationship between BMI and EAT-26 score was tested, and the results in Figure 4A for the whole sample (n=384) showed a significant but weak positive correlation between BMI and the risk of developing an ED. 
The participants reported to be at risk of ED were also assessed in Figure 4B, which showed no correlation between BMI and ED risk. A wide-ranging literature provides evidence that this desire for a thinner body is generalized, particularly among women [ 20 ], and that the desire for an underweight BMI is caused by viewing thin bodies as the ideal [ 23 ]. This mindset was evident in our study, which showed a great difference between what was viewed as the "ideal" or "desired" body weight and what normal BMI actually is. Similar outcomes were seen in a previous study measuring females' understanding of normal BMI, especially in relation to their own idealized bodies, to determine whether women intentionally glorify bodies classified as "underweight" [ 23 ]. The aim of that study was to gain insight into whether women were aware that they were idealizing underweight bodies, or whether they did so unintentionally because they had an incorrect perception of what a categorically "normal" weight BMI looks like. Participants frequently predicted the bodies' BMIs incorrectly; however, they did so to a larger extent when they viewed bodies as an extension of their own, i.e., following the figure rating scale task. These results imply that women have inaccurate conceptions of the ideal body size and frequently have greater misperceptions of the bodies of those around them, which may cause people to idealize underweight bodies [ 23 ]. The goal of this study was to look deeper into the association of certain sociocultural attitudes with the development of ED among females in Almadinah Almunawarah, and it found that approximately 33.1% (n=127) of the participants scored 20 or more on the EAT-26 scale and were considered at risk for ED. 
Regarding risk factors of ED, we discussed media attitudes toward appearance, followed by family pressure regarding body shape and weight, and then peer pressure, all of which played a substantial part in the development of ED at such a young age. The discrepancy between actual body image (the current physical size as perceived) and ideal body image (the desired body size) is frequently used as a measure of body dissatisfaction [ 21 , 22 ]. This discrepancy is typically expressed as a desire for a slimmer body. This mindset was evident in our study, which showed a great difference between what was viewed as the "ideal" or "desired" body weight and what normal body weight and BMI actually are. The main limitation of this study is that BMI calculations were based on self-reported measures of weight and height, introducing response bias that may have led to inaccuracies in the recorded responses and results. The cross-sectional design of the study is another limitation, as it captured associations between risk factors without establishing causation, so the results cannot be generalized. For future research, we recommend the inclusion of in-person interviews and documented height and weight measurements to limit the possibility of bias in data collection.
Conclusions In conclusion, this study highlights the role of sociocultural attitudes in the development of ED among young females in Almadinah Almunawarah, Saudi Arabia. Results show a high prevalence of risky eating behaviors, particularly among those who experience family and media pressure regarding body shape and weight. Peer pressure was also identified as a significant risk factor. These findings emphasize the need for interventions that target sociocultural attitudes and provide support for vulnerable individuals. Education and awareness campaigns can play a crucial role in reducing harmful behaviors and promoting a healthy body image. Further research is required to better understand the complex interplay between sociocultural factors and ED in this region.
Background: Women are believed to be more susceptible to eating disorders (ED) due to varied factors involving dissatisfaction with their body and appearance. The exact cause of ED is not known, but they may be triggered by biological, psychological, environmental, and social factors. Objectives: The current study aims to explore the body dissatisfaction of women from Almadinah Almunawarah and the factors that may contribute to the risk of developing ED, and to assess the discrepancies between desired and healthy BMI. Methods: The Sociocultural Attitudes Toward Appearance Questionnaire-4 (SATAQ-4) surveyed 384 females to explore family, peer, and media pressure, followed by the Eating Attitudes Test-26 (EAT-26) questionnaire to identify those at risk of developing ED. The body dissatisfaction of the sample was measured by the difference between the healthy BMI and the desired BMI. Results: A total of 127 participants were reported to have a high probability of developing an ED; the highest-scoring factor on the SATAQ-4 was media exposure, with a p-value less than 0.001. The study showed a difference between the ideal body and what is considered a healthy BMI. Results showed no correlation between BMI and developing ED. Discussion: Women of younger age groups are more vulnerable to the influence of sociocultural attitudes and are thus more susceptible to developing risky eating behaviors. This can be affected by family, peer, and media factors. Conclusion: The findings of this study show a high prevalence of risky eating behaviors, particularly among those who experience family and media pressure regarding body shape and weight. Peer pressure was also identified as a significant risk factor. These findings emphasize the need for interventions that target sociocultural attitudes and provide support for vulnerable individuals.
CC BY
no
2024-01-16 23:47:20
Cureus.; 15(12):e50576
oa_package/8c/af/PMC10789473.tar.gz
PMC10789474
38226111
Introduction Sneddon syndrome is a rare, non-inflammatory, occlusive vasculopathy affecting small- and medium-sized arteries of the brain and skin [ 1 , 2 ]. Sneddon syndrome can present with stroke, transient ischemic attack, livedo reticularis, and progressive dementia [ 1 , 2 ]. Very few cases have reported headache as one of the symptoms, and with no clear association. Here, we present an unusual case of Sneddon syndrome with a possible association with paroxysmal hemicrania.
Discussion Sneddon syndrome is a non-inflammatory vasculopathy involving small- and medium-sized arteries, with various clinical features, including neurological and non-neurological symptoms [ 3 , 4 ]. The disease commonly affects women aged 20-40 years. Multiple isolated familial cases have been reported, suggesting genetic causes [ 4 ]. The most important neurological manifestations are recurrent ischemic stroke manifestations, such as hemiparesis, sensory disturbances, aphasia, and visual field defects [ 3 ]. The most commonly involved territory is the superficial middle cerebral artery (MCA) [ 3 - 5 ]. MRI usually demonstrates small, multifocal lesions in the white matter, mostly located in the periventricular area. MRI findings are nonspecific for Sneddon syndrome and must be interpreted in correlation with clinical manifestations [ 6 ]. Cerebral angiograms usually show stenosis and/or occlusion of the small- and medium-sized arteries and may show leptomeningeal and transdural collateral networks [ 3 ]. Other reported neurological symptoms include diffuse cortical atrophy, early-onset dementia, psychiatric disturbances, migraines, epilepsy, and intracranial hemorrhage [ 3 , 4 ]. Cognitive dysfunction has been shown to occur in almost 77% of patients with Sneddon syndrome [ 5 ]. Headache is the most commonly reported symptom, occurring in almost 50% of patients [ 3 , 6 ]. Two cases have been reported describing the headaches experienced by these patients [ 1 , 2 ]. In our observation, the initial headache attacks were holocephalic, with no reported nausea, vomiting, photophobia, or phonophobia. Our patient showed new headache features in the subsequent Sneddon attacks, involving severe retro-orbital pain associated with a throbbing headache, directing the diagnosis toward paroxysmal hemicrania. Martinelli et al. concluded that headache attacks can be the first isolated neurological symptom of the syndrome, occurring before the development of other symptoms [ 1 ]. 
Headaches are described as dull and diffuse and usually precede the neurological symptoms by two months to 15 years [ 6 ]. The most common non-neurological manifestation is livedo racemosa, an erythematous, netlike, broken, irregular rash that persists on rewarming due to occlusion of small- and medium-sized arteries, in comparison to livedo reticularis, which has a continuous netlike pattern that resolves with rewarming [ 3 , 5 ]. Livedo racemosa usually appears many years before the onset of stroke [ 3 ]. Other dermatological manifestations include Raynaud's phenomenon and widespread cutaneous discoloration due to systemic angiomatosis [ 3 , 7 ]. Laboratory findings in Sneddon syndrome patients are similar to those of patients with antiphospholipid syndrome. Elevated antiphospholipid antibody (aPL) levels are found in almost 57% of patients with Sneddon syndrome [ 7 ]. Thrombocytopenia is commonly found in patients with positive antiphospholipid antibodies [ 7 ]. Skin biopsy can help direct the diagnosis of Sneddon syndrome; histopathological findings include endotheliitis, fibrin thrombi occluding vessels, and intima and media proliferation without evidence of vasculitis, involving mostly arteries of the reticular dermis [ 4 ]. The diagnosis of Sneddon syndrome requires the combination of neurological, dermatological, and neuroradiological findings, which together may raise the suspicion of a Sneddon syndrome diagnosis [ 6 ]. In aPL-positive patients, no significant difference was found between anticoagulants and antiplatelets in decreasing the risk of stroke recurrence, although anticoagulants seem to be more effective [ 3 ]. In aPL-negative patients, no difference was found between anticoagulants and antiplatelets, so the choice of antithrombotic treatment should be individualized [ 5 ]. In our case, the patient was compliant with aspirin, with good follow-up and no stroke recurrence.
Conclusions In conclusion, Sneddon syndrome is a non-inflammatory occlusive vasculopathy of small- and medium-sized vessels with neurological and non-neurological symptoms. Only a few reports have reviewed the relationship between Sneddon syndrome and headaches. The nature of the disease is still not completely understood, and unfortunately, there is no cure or specific treatment.
This clinical case report aims to highlight an unusual presentation of Sneddon syndrome with a possible association with paroxysmal hemicrania. A medical record review was performed at a tertiary hospital in Riyadh, Saudi Arabia. Data collected included clinical evaluations and laboratory and imaging results. Informed consent was obtained. Here, we present a 27-year-old female who presented with multiple stroke attacks, along with severe headaches involving right retro-orbital pain, and an eight-year history of spotted skin lesions. Initial unenhanced computed tomography (CT) of the brain in the emergency department showed left insular cortex hypodensity, revealing an acute ischemic insult. Subsequent magnetic resonance imaging (MRI) and magnetic resonance angiography (MRA) revealed an acute ischemic infarct in the territory of the left middle cerebral artery (MCA) involving the insula and frontoparietal lobe. Further investigations were performed, including cerebrospinal fluid (CSF) analysis and autoimmune and infectious workups, which were unrevealing. Skin biopsy of the lesions showed subcutaneous fat necrosis with nonspecific scattered fibrinogen positivity and was labeled as livedo reticularis vs. livedo racemosa. A Sneddon syndrome diagnosis can be very challenging, requiring a high index of suspicion to direct the diagnostic investigations. Moreover, the presence of a severe headache is an unusual phenomenon that needs further study.
Case presentation We report the case of a 27-year-old Saudi female, a known case of primary Raynaud's disease for more than 10 years on a nitroglycerin patch. She was in her usual state of health until a week prior to her presentation, when she started to have on-and-off episodes of vertigo along with right upper limb numbness. She presented to our emergency department with a one-day history of dysarthria, facial asymmetry, and right upper limb weakness. She denied having visual symptoms, unsteady gait, abnormal movements, or loss of consciousness. She also complained of a long-standing holocephalic headache, throbbing in nature, lasting for hours before resolving, not associated with nausea, vomiting, photophobia, or phonophobia, and not affecting her daily activity. A review of systems revealed right index finger swelling and joint pain. At her first presentation, she was vitally stable. On examination, she was alert, attentive, and oriented to time, place, and person. She showed impaired naming and reading but intact comprehension, repetition, and writing. Her neurological examination revealed reduced pinprick sensation over the right trigeminal branches (V1, V2, and V3), an absent gag reflex, right facial droop sparing the forehead, reduced power and sensation in the right upper limb (RUL) with spastic tone, a positive Hoffman's sign in the RUL, normal reflexes throughout but brisker in the RUL, and normal gait and coordination. Her pupils were equally reactive and non-dilated, measuring 3 mm bilaterally. Other system examinations revealed redness over her cheeks and nose and cold feet with blue discoloration over the right big toe. Her laboratory investigations were unremarkable. Unenhanced computed tomography (CT) of the brain showed left insular cortex hypodensity, concerning for an acute ischemic insult, with no intracranial hemorrhage. The patient was therefore started on aspirin 81 mg. Her working diagnosis was stroke at a young age. 
CT arterial and venous angiography showed patent major intracranial arteries, with no flow-limiting stenosis and no evidence of cerebral venous thrombosis. An extensive workup was done. A lumbar puncture was performed; the sampled cerebrospinal fluid (CSF) was clear, with normal cell counts. The echocardiogram and Holter monitor were both normal. An autoimmune workup (antinuclear antibodies, antineutrophil cytoplasmic antibodies, anti-proteinase 3 antineutrophil cytoplasmic antibody, myeloperoxidase, complement levels, inflammatory markers) was unrevealing. A thrombophilia workup was unremarkable, except for protein S, which was 39% (lower limit of normal: 55%). An infectious workup was unremarkable. Magnetic resonance imaging (MRI) (Figures 1 - 2 ) and magnetic resonance angiography (MRA) (Figure 3 ) of the brain showed an acute ischemic infarct in the territory of the left middle cerebral artery (MCA) involving the insula and the frontoparietal lobe, extending to the pre- and postcentral gyri. There was no acute intracranial or petechial hemorrhage, but there were features of chronic small vessel disease. Assessment by interventional radiology was unremarkable, with no occlusion identified. 3-Tesla MRI and MRA showed the expected evolution of the left MCA subacute infarction, with multiple old lacunar infarcts in the cerebral white matter and deep gray matter and associated bilateral cerebral atrophy, predominantly involving the high frontal and parietal convexities. There was callosal involvement, and features of retinal and vestibulocochlear involvement were appreciated. During her stay, the patient showed improvement in her weakness and dysarthria. By discharge, her deficits had resolved almost completely, apart from slight facial deviation, decreased sensation over the trigeminal branches, and residual right-sided weakness. She was referred to occupational and physical therapy for assessment and was started on amitriptyline.
She presented to the emergency department multiple times with the same triad of symptoms: her headache was associated with right eye pain, phonophobia, and right-sided numbness involving her upper and lower limbs. The patient was started on dual antiplatelet therapy (DAPT) for 21 days, after which she was to continue aspirin for life. CT/CT angiography (CTA) showed no acute insult and no large vessel occlusion. MRI of the brain showed no new ischemic insult and no diffusion restriction. The patient was also evaluated by the dermatology team for spotted skin lesions of almost eight years' duration along the extremities, labeled as livedo reticularis vs. livedo racemosa. A skin biopsy was performed and showed subcutaneous fat necrosis with nonspecific scattered fibrinogen positivity. The query of Sneddon syndrome +/- paroxysmal hemicrania was raised. On follow-up, the patient reported no recurrence of symptoms. She had mild spasticity over the right upper limb with improved power, as shown in Table 1 .
Cureus 15(12):e50562
Introduction and background Chronic liver diseases (CLDs) represent a formidable global health challenge, affecting millions of individuals and imposing a substantial burden on healthcare systems worldwide. Defined by a spectrum of conditions ranging from viral hepatitis to metabolic disorders like non-alcoholic fatty liver disease (NAFLD), CLDs progressively lead to liver inflammation, fibrosis, and impaired function. The staggering prevalence of CLDs necessitates a holistic understanding of their impact, extending beyond the traditional realms of hepatology. One pivotal aspect that often stands out in the lived experience of individuals grappling with CLDs is the presence of persistent pain, casting a shadow on their quality of life and overall well-being. As we explore challenges and opportunities in developing tailored pain management strategies for liver patients, it is crucial to grasp the vastness of the global burden posed by CLDs. An estimated 844 million people were affected by CLDs in 2019, with a staggering two million succumbing to liver-related complications annually [ 1 ]. Viral hepatitis, alcohol-related liver diseases, and NAFLD are among the leading contributors to this growing epidemic, underlining the urgent need for comprehensive management strategies. Within the intricate tapestry of CLDs, the prevalence of pain emerges as a profound and often underestimated aspect of the patient experience. Pain in liver diseases can manifest in various forms, from the dull ache associated with fibrosis to the intense abdominal discomfort experienced in advanced stages such as cirrhosis. The impact of pain extends beyond the physical realm, permeating into the emotional and psychological dimensions of patients' lives. Understanding the significance of pain in liver patients requires a multifaceted approach, acknowledging the physiological mechanisms and the psychosocial intricacies involved. 
Chronic pain not only compromises the patient's ability to perform daily activities but also contributes to mental health challenges, including anxiety and depression [ 2 ]. Furthermore, the persistent nature of pain can result in a diminished quality of life, making effective pain management a crucial component of holistic care for individuals with CLDs. Amidst the landscape of pain management, a critical paradigm shift is underway, one that recognizes the need for tailored strategies customized to the unique challenges posed by liver diseases. Conventional approaches to pain management, often rooted in general principles, may fall short of addressing the specific nuances associated with CLDs. Tailoring interventions to the individual characteristics of liver patients becomes imperative, considering factors such as the underlying etiology of liver disease, the severity of liver damage, and the presence of comorbid conditions [ 2 ]. Precision in pain management involves not only the selection of appropriate pharmacological agents but also the incorporation of non-pharmacological interventions and a patient-centered approach to care. The advent of precision medicine has opened new avenues for understanding the interplay between genetic factors, environmental influences, and the response to pain medications in liver patients. By deciphering these intricate relationships, healthcare providers can optimize pain management strategies, minimizing adverse effects and maximizing efficacy [ 3 ]. Beyond the scientific and clinical dimensions, the journey of pain management in liver patients is inherently human. Each patient traverses a unique path, shaped by personal experiences, cultural influences, and social contexts. Recognizing the human aspect of pain in CLDs is not only empathetic but also essential for tailoring interventions that resonate with patients' individual needs. The significance of a patient-centered approach cannot be overstated. A study by Hauser et al. 
demonstrated that incorporating patient perspectives into pain management decision-making not only improves treatment outcomes but also fosters a sense of empowerment and engagement in the healthcare process [ 4 ]. By understanding the lived experiences of liver patients, healthcare providers can create tailored interventions that align with patients' values, preferences, and goals. As we navigate the intricacies of challenges and opportunities in developing tailored pain management strategies for liver patients, we must ground our exploration in the rich tapestry of human experiences. This narrative review aims not only to elucidate the scientific complexities but also to weave a narrative that resonates with the diverse and profound stories of individuals facing the dual challenges of CLDs and persistent pain. In the subsequent sections, we will delve into the pathophysiology of pain in liver diseases, the inherent difficulties in pain management, and the emerging opportunities that hold promise for a future where pain relief is effective and uniquely tailored to each patient's needs. In summary, CLDs present a global health crisis, with pain emerging as a poignant and multifaceted aspect of the patient experience. Recognizing the significance of pain in liver patients and understanding the imperative for tailored strategies sets the stage for a comprehensive exploration of the challenges and opportunities in this crucial healthcare domain. By embracing the human touch in pain management, we unravel the complexities, bridging the gap between scientific understanding and the lived experiences of individuals facing the dual burden of liver diseases and persistent pain.
Conclusions To summarize, our investigation into customized pain control for individuals with liver conditions reveals a complex environment filled with difficulties and possibilities, emphasizing the delicate relationship between pain and chronic liver ailments. The various challenges, which include the complex causes of liver-related pain, the complexities of having multiple health conditions, and how medications interact with each other, emphasize the urgent need for a fundamental change in how we approach clinical practices. In the face of these difficulties, the possibilities that arise from interdisciplinary cooperation, precision medicine, and continuous research indicate a potential for significant change in comprehensive pain management. A resounding plea for action echoes throughout the research and clinical practice arenas, advocating a unified dedication to enhancing customized therapies for patients with liver conditions. This call to action is founded upon the amalgamation of hepatologists' understanding of liver diseases, pain specialists' proficiency in intricate pain management, and the prospective advancements derived from ongoing research. As we explore this field, driven by empathy and creativity, the future of pain treatment for liver patients urges us to adopt customized approaches that relieve symptoms and reframe the story of strength and self-empowerment. Our shared dedication creates a path toward a future where personalized and empathetic care for liver-related pain goes beyond traditional boundaries and strives for holistic wellness.
Chronic liver illnesses pose a substantial worldwide health challenge, with various causes that span from viral infections to metabolic problems. Individuals suffering from liver problems frequently face distinct difficulties in pain control, requiring a customized strategy that takes into account both the fundamental disease and the complexities of liver function. The liver, a vital organ responsible for metabolic control and detoxification, is pivotal in multiple physiological processes. Chronic liver illnesses, such as cirrhosis and non-alcoholic fatty liver disease (NAFLD), are marked by a gradual process of inflammation and fibrosis, resulting in reduced liver function. These disorders often come with pain, varying from internal discomfort to intense abdominal pain, which impacts the quality of life and general well-being of patients. The review explores the complex aspects of pain perception in liver illnesses, including inflammation, modified neuronal signaling, and the influence of comorbidities. It highlights the significance of a detailed comprehension of the pain experience in individuals with hepatic conditions for the implementation of successful pain management treatments. In addition, the review emphasizes the difficulties involved in treating pain in this group of patients, such as the possible complications linked to commonly prescribed pain relievers and the necessity for collaboration between hepatologists, pain specialists, and other healthcare professionals. Moreover, it examines new possibilities in the domain, such as the significance of innovative pharmacological substances, non-pharmacological treatments, and personalized medicine strategies designed for specific patient characteristics. This study thoroughly analyzes the difficulties and possibilities involved in creating personalized pain management approaches for individuals with liver conditions. 
Its purpose is to guide physicians, researchers, and healthcare providers, enabling them to implement more efficient and patient-focused interventions. As our comprehension of liver-related pain progresses, the potential for enhancing the quality of life for persons with chronic liver disorders through tailored pain management measures becomes more and more encouraging.
Review Methodology Conducting a narrative review on the challenges and opportunities in developing tailored pain management strategies for liver patients involves a comprehensive approach to gathering relevant information. The first step in our methodology involved a thorough search of scientific databases, including PubMed, Scopus, and Cochrane Library. Keywords such as "chronic liver diseases," "pain management," and "tailored strategies" were employed to identify articles published up to November 2023. In addition to electronic databases, manual searches were conducted through relevant journals, textbooks, and grey literature to ensure a comprehensive understanding of the current landscape of pain management in liver patients. Articles selected for review focused on CLDs, pain management strategies, and the customization of interventions. Studies addressing challenges and opportunities in tailoring pain relief for liver patients were prioritized. Information was synthesized in a narrative format, organizing findings into coherent themes related to the pathophysiology of pain in liver diseases, current pharmacological and non-pharmacological interventions, and advancements in precision medicine. Given the nature of a narrative review, ethical approval was not required. However, strict adherence to the principles of transparency, accuracy, and respect for intellectual property rights guided the presentation of information. Acknowledging the inherent limitations of narrative reviews, including potential publication bias and subjectivity in data synthesis, we strived to maintain rigor through systematic search strategies and critical appraisal of included studies. However, readers should interpret the findings in light of these inherent limitations. Pathophysiology of liver diseases and pain CLDs cast a profound shadow on the health of millions worldwide, ushering in a cascade of challenges that extend beyond impaired liver function. 
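Returning briefly to the literature search described in the methodology above: the three keyword groups it names can be combined into a single boolean query in PubMed syntax. The sketch below is purely illustrative; the authors' actual query string and date filter are not reported in the review, so the `build_query` helper and its output format are assumptions.

```python
# Hypothetical helper composing the keyword groups named in the methodology
# into a PubMed-style boolean query. The quoting and [PDAT] date-range syntax
# follow PubMed conventions, but the authors' actual query is not reported.

def build_query(keyword_groups, cutoff_year=2023):
    """AND together the quoted keyword groups and cap the publication date."""
    quoted = ['"{}"'.format(group) for group in keyword_groups]
    date_filter = '("1900"[PDAT] : "{}"[PDAT])'.format(cutoff_year)
    return " AND ".join(quoted) + " AND " + date_filter

query = build_query(["chronic liver diseases", "pain management",
                     "tailored strategies"])
print(query)
```

Such a string could then be pasted into the PubMed search box or passed to a programmatic search interface; manual searches of journals and grey literature, as the methodology notes, would still be needed alongside it.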
Among the intricate facets of CLDs, pain emerges as a significant and often overlooked aspect, influencing the well-being and quality of life of affected individuals. This section delves into the pathophysiology of liver diseases and pain, unraveling the mechanisms that underlie the often debilitating experience of pain in this patient population. Mechanisms of Pain in Chronic Liver Conditions Pain in chronic liver conditions is a complex phenomenon with multifactorial origins. Unlike acute pain, which serves as a warning signal in response to tissue injury, chronic pain in liver diseases often lingers and becomes a persistent companion to the affected individuals. The mechanisms driving this chronic pain are intricately woven into the fabric of liver pathophysiology. A crucial contributor to pain in chronic liver conditions is the distortion of liver architecture due to ongoing inflammation and fibrosis. As inflammation takes root within the liver parenchyma, it creates a cascade of events that ultimately leads to tissue scarring or fibrosis. This distorted tissue architecture contributes to the mechanical strain on the liver capsule, a sensitive structure surrounding the liver, resulting in a dull, persistent ache [ 5 ]. Moreover, the inflammatory milieu within the liver creates an environment rich in mediators such as cytokines and chemokines. These biochemical signals contribute to the activation of immune cells and sensitize nerve endings, heightening their responsiveness to stimuli and lowering the pain threshold [ 5 ]. In essence, the inflammatory processes within the liver create a fertile ground for the genesis and perpetuation of pain. Inflammation, a cornerstone of liver diseases, plays a dual role in pain perception. On one hand, it directly stimulates nerve endings within the liver, contributing to the sensation of pain. 
On the other hand, inflammatory mediators released during the immune response can sensitize the central nervous system, amplifying pain perception even without direct liver stimulation. Fibrosis, the progressive scarring of liver tissue, further exacerbates the pain experienced in chronic liver conditions. As the liver undergoes architectural changes, the once-supple organ becomes rigid and less compliant. This rigidity can lead to increased pressure within the liver, affecting the surrounding structures and triggering pain signals. Additionally, fibrosis disrupts the normal blood flow within the liver, contributing to congestion and ischemia, both recognized sources of pain [ 6 ]. The impact of inflammation and fibrosis on pain perception is not confined to the liver alone; it extends its tendrils into neighboring structures. The liver is richly innervated, and the pain experienced may radiate to the right upper quadrant, abdomen, or even the back. This referred pain phenomenon further complicates the clinical picture, making pinpointing the exact source of discomfort challenging. Neural Signaling Alterations in Liver Diseases Neural signaling alterations represent another layer in the intricate interplay between liver diseases and pain. The nervous system undergoes adaptive changes in response to the persistent inflammatory milieu and structural alterations within the liver. The peripheral nerves that innervate the liver, known as hepatic nerves, become hypersensitive in CLDs. This heightened sensitivity is fueled by the release of neurotransmitters such as substance P and calcitonin gene-related peptide (CGRP) from nerve endings. These substances not only contribute to the transmission of pain signals but also play a role in promoting inflammation within the liver [ 6 ]. The brain processes and interprets pain signals at the central nervous system level. 
In chronic liver conditions, maladaptive changes occur in the central nervous system, leading to a phenomenon known as central sensitization. This process involves amplifying pain signals, making individuals more susceptible to experiencing pain even with mild stimuli. Central sensitization intensifies the perception of pain originating from the liver and contributes to the development of widespread pain and hypersensitivity [ 7 ]. The pain mechanisms in CLDs are not isolated events but interconnected processes reflecting the intricate dance between inflammation, fibrosis, and neural signaling alterations. As we decipher the scientific underpinnings of pain in liver diseases, we must recognize that these mechanisms are not merely academic concepts but tangible experiences for individuals grappling with these conditions. The persistent ache, the nuanced discomfort, and the challenges posed by referred pain are not abstract; they are part of the daily reality for those navigating the labyrinth of CLDs. Challenges in pain management for liver patients CLDs bring with them a myriad of challenges, and one of the most pervasive is the persistent presence of pain. As we embark on a journey to understand the difficulties in pain management for liver patients, it becomes apparent that the intricate interplay between liver pathophysiology and pain perception presents unique hurdles. This section explores the common pain challenges faced by liver patients, delves into the complications associated with traditional analgesics, and highlights the crucial considerations in the context of comorbidities and medication interactions. Overview of Common Pain Challenges Faced by Liver Patients Pain, in the context of CLDs, manifests in diverse forms, casting a broad shadow on the lives of affected individuals. 
From the dull, persistent ache associated with fibrosis to the more intense abdominal pain witnessed in advanced stages like cirrhosis, the spectrum of pain challenges faced by liver patients is extensive. A notable challenge lies in accurately assessing and characterizing the pain experienced by liver patients. The nature of liver pain is often visceral, meaning it originates from the internal organs, making it inherently challenging to localize and describe. Patients may use terms like heaviness, pressure, or discomfort rather than sharp or stabbing sensations commonly associated with other types of pain [ 7 ]. Liver pain's subjective and elusive nature complicates the diagnostic and management landscape. Moreover, the experience of pain is not uniform across liver patients. Factors such as the underlying etiology of liver disease, the extent of liver damage, and the presence of comorbidities contribute to the heterogeneity in pain presentation. Tailoring interventions to meet each patient's unique needs becomes paramount, necessitating a personalized approach to pain management. In the pursuit of alleviating pain in liver patients, healthcare providers are often confronted with the challenge of selecting appropriate analgesic agents. Traditional analgesics, such as nonsteroidal anti-inflammatory drugs (NSAIDs) and opioid medications, pose significant complications in the context of liver diseases. NSAIDs, commonly used for pain relief and anti-inflammatory effects, exert their actions by inhibiting enzymes involved in the synthesis of prostaglandins. However, this mechanism has a notable drawback in the context of liver diseases. The liver, already compromised in function, may struggle to metabolize NSAIDs, leading to an increased risk of hepatotoxicity [ 8 ]. Individuals with cirrhosis, in particular, face heightened susceptibility to complications associated with NSAID use. 
The impaired liver function compromises the clearance of these medications, amplifying the risk of adverse effects, including gastrointestinal bleeding and acute liver injury [ 8 ]. As such, the seemingly innocuous choice of an over-the-counter pain reliever becomes a delicate balancing act in the context of liver patients. Opioids, potent analgesics widely used for pain management, present another layer of complexity in liver patients. While opioids may offer effective relief, their use raises concerns about the potential for exacerbating underlying liver conditions and contributing to complications such as hepatic encephalopathy [ 9 ]. Furthermore, liver patients may exhibit altered drug metabolism, affecting the pharmacokinetics of opioids. This altered metabolism, coupled with the potential for drug interactions, necessitates cautious prescribing and close monitoring to prevent unintended consequences. The opioid crisis and concerns related to addiction further complicate the landscape of opioid use in liver patients. Striking a balance between providing adequate pain relief and minimizing the risk of opioid-related adverse effects poses a substantial challenge for healthcare providers. Considerations for Comorbidities and Medication Interactions Beyond the complexities associated with analgesic choices, the challenges in pain management for liver patients extend to considerations of comorbidities and potential interactions with other medications. Liver patients often grapple with multimorbidity, meaning they contend with the presence of multiple chronic conditions simultaneously. The challenge lies not only in managing liver-related pain but also in addressing the intricacies introduced by comorbidities such as diabetes, cardiovascular disease, or renal impairment [ 10 ]. Each comorbidity adds a layer of complexity to the overall care plan, requiring a nuanced understanding of the interplay between different conditions and their respective treatments. 
Healthcare providers must navigate a delicate balance, ensuring that interventions for pain management do not exacerbate other chronic conditions while still addressing the unique challenges posed by liver diseases. Liver patients often find themselves on many medications to manage various aspects of their health. The potential for drug interactions, where one drug influences the effectiveness or safety of another, becomes a critical consideration in pain management. For example, medications metabolized by the liver may experience altered clearance in the context of impaired liver function. This altered metabolism can lead to drug accumulation, increasing the risk of adverse effects. Moreover, medications with similar metabolism pathways may compete for the limited enzymatic resources within the compromised liver, further complicating the pharmacokinetic landscape [ 10 ]. Anticipating and managing these potential interactions requires a comprehensive understanding of the patient's medication regimen, emphasizing the importance of communication and collaboration among healthcare providers involved in the patient's care. Patient-centered pain assessment Pain, a multifaceted and subjective experience, weaves itself into the fabric of the human condition. For individuals grappling with CLDs, pain becomes an integral part of their daily narrative. This section explores patient-centered pain assessment, delving into the importance of personalized approaches, examining tools and methods tailored for evaluating pain in liver patients, and addressing the intricate psychosocial aspects that shape the pain experience. Importance of Personalized Pain Assessment At the heart of patient-centered care lies the recognition that individuals are not merely carriers of diseases; they are unique beings with distinct experiences, values, and aspirations. This recognition is particularly crucial when it comes to pain assessment in the context of CLDs. 
Pain is inherently subjective, varying between individuals and within the same person over time. Adopting a standardized approach to pain assessment often falls short of capturing the nuances of the pain experience in liver patients. A numerical rating on a pain scale may convey the intensity but falls short of encapsulating the quality, impact on daily life, and the emotional toll of pain. A personalized approach to pain assessment acknowledges the uniqueness of each patient's pain experience. Cultural background, personal beliefs, and coping mechanisms are pivotal in shaping how pain is perceived and expressed [ 10 ]. Tailoring assessments to align with these individual nuances fosters a deeper understanding of the patient's experience, laying the foundation for targeted interventions that resonate with their values and goals. In pain assessment for liver patients, employing tools and methods that capture the multidimensional nature of pain becomes imperative. Traditional measures, such as the Visual Analog Scale (VAS) or Numeric Rating Scale (NRS), provide a quantitative snapshot of pain intensity but may fall short of encapsulating the broader aspects of the pain experience. Comprehensive pain inventories offer a more holistic approach to pain assessment, delving into the qualitative aspects of pain. The McGill Pain Questionnaire (MPQ) and the Brief Pain Inventory (BPI) are examples of tools that go beyond a numerical rating, capturing the sensory and affective components of pain [ 11 ]. For liver patients, whose pain may manifest in various forms, employing such comprehensive tools allows for a more nuanced understanding. Patient-reported outcomes (PROs) play a pivotal role in patient-centered pain assessment. By directly soliciting the patient's perspective, PROs provide insights into the impact of pain on daily functioning, emotional well-being, and overall quality of life. 
The use of PROs, such as the Pain Impact Questionnaire (PIQ-6) or the Patient-Reported Outcomes Measurement Information System (PROMIS), empowers patients to participate in their care actively and ensures that assessments align with their priorities [ 11 ]. Dynamic Assessments: Pain Diaries and Ecological Momentary Assessment (EMA) Recognizing the dynamic nature of pain, especially in chronic conditions, calls for assessments that capture fluctuations over time. Pain diaries, where patients record their pain experiences regularly, offer a longitudinal perspective, allowing healthcare providers to discern patterns and triggers. Using mobile technology, ecological momentary assessment (EMA) enables real-time data collection, providing an in-the-moment snapshot of pain experiences in the natural environment [ 11 ]. Pain is not confined to the physical realm; it intertwines with psychosocial dimensions, influencing emotions, relationships, and overall well-being. A patient-centered approach to pain assessment in liver patients necessitates an exploration of these psychosocial aspects. Chronic pain often coexists with emotional challenges, including anxiety and depression. Understanding the dynamic landscape is essential, as it influences the perception of pain and shapes the individual's ability to cope. Incorporating validated tools for assessing emotional well-being, such as the Hospital Anxiety and Depression Scale (HADS), alongside pain assessments provides a more comprehensive view of the patient's mental health [ 11 ]. The significance of social support cannot be overstated in the context of chronic pain. Exploring the patient's support network, assessing coping mechanisms, and understanding how pain impacts relationships offer valuable insights. Tools like the Coping Strategies Questionnaire (CSQ) and the Multidimensional Scale of Perceived Social Support (MSPSS) provide avenues for probing into these psychosocial dimensions [ 12 ]. 
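To make the scoring of a multidimensional instrument such as the BPI, mentioned above, concrete: its severity dimension is commonly summarized as the mean of four 0-10 pain-intensity items (worst, least, average, and current pain). The sketch below uses hypothetical ratings and a simplified scoring rule for illustration only; it is not the instrument's official scoring manual.

```python
def bpi_severity(worst, least, average, now):
    """Mean of four 0-10 pain-intensity ratings, a common summary of the
    BPI severity subscale (simplified illustration, not official scoring)."""
    items = [worst, least, average, now]
    if any(not (0 <= item <= 10) for item in items):
        raise ValueError("BPI items are rated on a 0-10 scale")
    return sum(items) / len(items)

# Hypothetical liver patient with fluctuating visceral pain
print(bpi_severity(worst=7, least=2, average=4, now=3))  # 4.0
```

A single mean of this kind conveys intensity but, as the text stresses, says nothing about the quality of the pain or its emotional and functional impact, which is why it is paired with interference items and broader PROs in practice.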
Cultural influences shape the expression and interpretation of pain. A patient's cultural background plays a pivotal role in how they communicate pain, their expectations regarding pain management, and the impact of pain on their identity. Embracing cultural humility and employing culturally sensitive tools, such as the Cross-Cultural Pain Questionnaire (CCPQ), ensures that pain assessments are conducted with cultural competence [ 12 ]. In the symphony of patient-centered care, pain assessment emerges as a poignant movement that harmonizes the scientific precision of validated tools with the nuanced understanding of individual narratives. The tools and methods employed should not be mere instruments; they should be conduits for capturing the lived experiences of liver patients grappling with the dual burden of chronic diseases and persistent pain. By embracing the importance of personalized approaches, incorporating comprehensive tools, and delving into the psychosocial dimensions of pain, healthcare providers can compose a melody that resonates with the uniqueness of each patient. As we navigate the landscape of pain assessment for liver patients, we decode the language of pain and empower individuals to articulate their experiences, fostering a therapeutic alliance grounded in empathy and understanding. Current pharmacological interventions Pain, an unwelcome companion in the journey of CLDs, demands a nuanced approach to intervention. This section embarks on a detailed exploration of current pharmacological interventions for liver-related pain, traversing the landscape of existing analgesic options, critically examining their limitations and potential risks, and shedding light on emerging pharmaceutical approaches that hold promise in reshaping the narrative of pain management. 
Review of Existing Analgesic Options The pharmacological arsenal for managing liver-related pain encompasses a spectrum of options, each with its unique mechanisms of action and considerations. Understanding the landscape of existing analgesic options is crucial for healthcare providers navigating the complexities of pain management in the context of CLDs. Acetaminophen, often considered a first-line analgesic, presents a paradox in the realm of liver diseases. While it effectively alleviates pain and reduces fever, its metabolism occurs predominantly in the liver. In cases of compromised liver function, as seen in CLDs, the risk of hepatotoxicity escalates. The narrow therapeutic window of acetaminophen poses a challenge, requiring meticulous dosing and close monitoring to avert potential harm [ 13 ]. NSAIDs, with their potent anti-inflammatory and analgesic properties, are commonly utilized for pain relief. However, their use in liver patients is riddled with complexities. The risk of gastrointestinal bleeding, coupled with the potential to exacerbate renal dysfunction and induce hepatorenal syndrome, necessitates caution [ 13 ]. In particular, individuals with cirrhosis face an elevated risk of complications associated with NSAID use, emphasizing the need for vigilant risk-benefit assessments. Opioids, potent analgesics central to pain management, present a delicate balance between providing relief and navigating potential complications. The liver, a hub for drug metabolism, plays a pivotal role in opioid processing. In the context of CLDs, alterations in drug metabolism may lead to opioid accumulation, raising concerns about respiratory depression and other opioid-related adverse effects [ 13 ]. Furthermore, the opioid epidemic underscores the imperative to approach opioid use judiciously, considering the risk of addiction and the potential for unintended consequences. 
Striking a balance between adequate pain relief and minimizing the risk of opioid-related complications demands a personalized and vigilant approach. Beyond conventional analgesics, adjuvant medications play a role in addressing neuropathic pain, a common dimension of liver-related pain. Drugs such as gabapentin and pregabalin, initially designed to manage seizures, exhibit efficacy in dampening neuropathic pain signals. However, their use requires careful titration and monitoring due to potential side effects, including sedation and dizziness [ 14 ].

Limitations and Potential Risks Associated with Standard Medications

While existing analgesic options provide a foundation for pain management in liver patients, their utilization is fraught with limitations and potential risks intrinsic to the intricate interplay between liver function and drug metabolism. Hepatotoxicity, a common thread woven through many analgesic options, poses a substantial risk in the context of CLDs. Acetaminophen, despite its efficacy, becomes a potential culprit in liver injury, especially when consumed at doses exceeding recommended limits. NSAIDs, while offering anti-inflammatory benefits, may contribute to liver damage, particularly in individuals with advanced liver disease [ 14 ]. Opioids, with their metabolism intricately linked to liver function, pose a dual risk. Not only can they contribute to hepatotoxicity, but the altered drug metabolism in liver diseases may also lead to unpredictable pharmacokinetics, complicating dosing regimens and increasing the risk of adverse effects. The intricate interplay between liver and kidney function adds a layer of complexity to pain management. NSAIDs, notorious for their potential to induce renal impairment, pose a dual threat in liver patients who may already grapple with compromised renal function [ 15 ].
The delicate balance between maintaining adequate analgesia and preventing further renal damage necessitates vigilant monitoring and individualized approaches. Gastrointestinal complications, including bleeding, represent a significant risk associated with NSAID use. In liver patients who may already contend with portal hypertension and varices, the potential exacerbation of gastrointestinal bleeding amplifies the complexity of pain management decisions [ 15 ]. Striking a balance between pain relief and the risk of bleeding requires a meticulous evaluation of each patient's clinical status and potential contraindications. Opioids, while providing effective pain relief, introduce a spectrum of central nervous system effects that demand careful consideration. Sedation, respiratory depression, and the risk of opioid-induced hyperalgesia add layers of complexity to opioid use in liver patients [ 16 ]. The challenge lies in optimizing pain control while minimizing the potential for adverse effects, striking a delicate balance that requires ongoing monitoring and dose adjustments.

Emerging Pharmaceutical Approaches for Liver-Related Pain

The limitations and potential risks associated with standard medications underscore the need for innovative approaches in reshaping the landscape of pain management for liver patients. Emerging pharmaceutical strategies offer glimpses into the future, holding the potential to address pain with greater precision and reduced adverse effects. Novel pharmaceutical approaches explore the intricacies of peripheral pain mechanisms, aiming to minimize systemic effects. Drugs that selectively target peripheral nociceptors or modulate neurotransmitter release at the injury site represent a paradigm shift. Though still in the early stages of development, such approaches hold promise in providing localized pain relief without imposing undue stress on the liver or other organ systems [ 16 ].
The era of precision medicine heralds a new frontier in pain management. Tailoring interventions based on an individual's genetic makeup, metabolic profile, and specific pain mechanisms allows for a more refined and personalized approach [ 17 ]. The application of pharmacogenomics, in which genetic information guides medication selection and dosing, holds the potential to minimize adverse effects and optimize pain control for liver patients. Cannabinoids, compounds derived from the cannabis plant, are gaining attention for their potential in pain management. Cannabidiol (CBD) and tetrahydrocannabinol (THC), two prominent cannabinoids, exhibit analgesic properties and anti-inflammatory effects. While the potential hepatotoxicity of cannabinoids remains a subject of debate, preliminary evidence suggests a role in managing neuropathic pain, especially in conditions like cirrhosis [ 17 ]. Neurostimulation techniques, such as spinal cord and peripheral nerve stimulation, offer an alternative avenue for pain management. By modulating pain signals at the neural level, these interventions aim to disrupt the pain pathway without relying on systemic medications [ 17 ]. While still considered investigational, neurostimulation holds promise as a potential option for liver patients resistant to or intolerant of traditional analgesics.

Non-pharmacological approaches

CLDs have a widespread impact, and among the complex array of difficulties, pain emerges as a formidable opponent. This essay explores non-pharmacological methods for managing pain in individuals with chronic liver illnesses. We uncover a story beyond traditional pain management methods by examining the impact of lifestyle changes, psychological treatments, and integrative therapies. This approach promotes overall well-being for individuals dealing with liver-related pain.
Role of Lifestyle Modifications in Pain Management

Lifestyle alterations, which are often overlooked in the field of pain management, have the power to bring about significant changes for persons dealing with chronic liver disorders. These alterations go beyond the conventional limits of medical therapy, embracing elements of daily life that profoundly impact the pain experience. In the setting of liver illnesses, diet plays a crucial role in managing pain. Dietary choices are essential for reducing inflammation, treating nutritional shortages, and promoting optimal liver function. Consuming a diet abundant in antioxidants and omega-3 fatty acids, and low in processed carbohydrates, may decrease inflammatory indicators, which could help relieve pain [ 18 ]. In addition, for persons with liver problems, maintaining an ideal body weight is not just a matter of appearance but a deliberate method to reduce the mechanical stress on the liver capsule. Excessive weight leads to heightened pressure on the liver, which can worsen pain symptoms. Thus, incorporating a well-balanced and hepatoprotective diet is crucial to pain therapy. The interdependent connection between engaging in physical activity and effectively managing pain serves as a promising prospect for persons suffering from chronic liver disorders. Although exercise may at first raise concern in the setting of compromised health, customized physical activity can provide many advantages. Scientific evidence has demonstrated that regular physical activity can decrease inflammation, promote cardiovascular health, and enhance general well-being [ 18 ]. Individuals suffering from chronic liver disorders might alleviate discomfort and improve their sense of empowerment and resilience by participating in activities such as walking, swimming, or mild yoga. However, it is vital to customize exercise routines to match the individual's health condition and restrictions.
By fostering collaboration between physicians and fitness professionals, physical exercise can be transformed into a therapeutic asset rather than a burden. Within the field of pain management, the sometimes-disregarded element of sleep hygiene appears as a fundamental feature for patients suffering from chronic liver disorders. Insomnia, which is prevalent among individuals with liver conditions, not only intensifies the experience of pain but also contributes to a cycle of tiredness and reduced ability to recover. Implementing techniques to enhance sleep quality, such as adhering to a regular sleep routine, establishing a sleep-friendly atmosphere, and reducing the consumption of stimulating substances before bedtime, can significantly improve the rejuvenating nature of sleep. An adequately rested body is more capable of managing pain and participating in the necessary healing processes for persons dealing with CLDs [ 19 ].

Psychological Strategies for Managing Pain

The complex interaction between the mind and the perception of pain reveals a domain where psychological interventions arise as potent strategies for managing pain. These interventions go beyond the field of pharmacology and address the emotional and cognitive aspects of pain, providing patients with a sophisticated method for controlling the intricacies of their pain experience. Cognitive-behavioral therapy (CBT) is highly regarded as an effective psychological strategy for managing pain. Based on the premise that thoughts, feelings, and behaviors are interrelated, CBT seeks to reframe unhealthy thought patterns and develop effective coping strategies. CBT is a systematic approach to help people with CLDs who experience both physical pain and emotional suffering. It focuses on identifying and changing negative thought patterns and promoting resilience. CBT helps individuals by providing them with mental tools and methods to cope with pain.
This reduces the psychological impact of pain and leads to noticeable improvements in how pain is perceived [ 19 ]. Mindfulness meditation, an old practice rooted in contemplative traditions, is now recognized as a valuable tool for managing pain in modern times. Mindfulness encourages individuals to develop conscious awareness in the present moment, promoting a non-evaluative acceptance of ideas and experiences. Mindfulness meditation provides solace for individuals grappling with CLDs, where pain frequently becomes an enduring presence. Studies have shown that mindfulness-based therapies can reduce pain intensity, lessen pain's interference with daily activities, and improve general well-being [ 19 ]. By embracing the current moment, individuals can effectively handle the fluctuation of pain with tranquility and acceptance. Pain frequently manifests itself through muscle tension and elevated stress levels. Engaging in various relaxation techniques, such as progressive muscle relaxation and guided imagery, can be a beneficial therapeutic pursuit for persons looking to alleviate liver-related pain. These techniques mitigate the tension knots often accompanying chronic pain, relieving physical and mental strain. Incorporating relaxation practices into everyday routines equips individuals with practical strategies for pain management, empowering them to take control of their path [ 20 ].

Exploring the Advantages of Integrative Therapies

Pain management encompasses more than traditional medical and psychological treatments, encouraging patients to explore integrative therapies. These therapies, frequently based on holistic traditions, enhance the current approaches by providing a diverse range of possible advantages. Acupuncture, a time-honored technique based on traditional Chinese medicine, involves the precise insertion of slender needles into specific points on the body.
Acupuncture provides a distinct approach to addressing the complexities of energy pathways in the management of pain for persons with CLDs. Studies indicate that acupuncture can alleviate pain by regulating the release of neurotransmitters and impacting the body's perception of pain [ 20 ]. Acupuncture offers a comprehensive approach to pain management by targeting physical and energetic aspects, aligning with an individual's desire for complete alleviation. Massage treatment goes beyond relaxation and plays a role in managing discomfort for people suffering from CLDs. Engaging in tactile stimulation and manipulation of soft tissues helps relieve muscle tension and promotes the release of endorphins, which are the body's natural painkillers [ 20 ]. Integrating massage therapy into the pain treatment regimen provides patients with a tangible experience of therapeutic touch. In addition to the physical advantages, the emotional and psychological aspects of pain can be alleviated by the comforting touch of a proficient massage therapist. Individuals with CLDs are encouraged to consider herbal supplements as a complementary option to conventional interventions to benefit from nature's pharmacy. Although the scientific evidence about the effectiveness of herbal supplements may differ, specific herbs demonstrate anti-inflammatory and analgesic characteristics. Turmeric, containing the active component curcumin, shows the potential to reduce inflammation and offer analgesic effects. Likewise, ginger, renowned for its anti-inflammatory features, is a tasty inclusion in the collection of natural therapies [ 21 ]. Nevertheless, it is imperative to exercise prudence while dealing with herbal supplements, taking into account the possibility of interactions with pharmaceuticals and seeking advice from healthcare professionals.

Precision medicine in pain management

Chronic pain, a frequently encountered problem across many health conditions, requires a detailed and individualized approach.
Recently, the idea of precision medicine has gained attention as a promising approach to customizing pain management techniques based on the distinct attributes of each person. This essay aims to examine precision medicine in pain management thoroughly. It provides an overview of precision medicine concepts, explores the complexities of tailoring strategies to individual patient profiles, and discusses the advancements in pharmacogenomics and their significant implications.

Overview of Precision Medicine Concepts

Precision medicine aims to transcend the uniform approach by recognizing the intrinsic variations among individuals regarding their genetic composition, lifestyle, and environmental factors. Precision medicine seeks to customize therapies by considering the unique characteristics of individual patients instead of using standardized treatment approaches. The core principle of precision medicine is recognizing that our genetic composition significantly impacts how our bodies react to drugs and therapies. The Human Genome Project, a significant undertaking that fully mapped the complete human genome, established the basis for understanding the complex relationship between genetics and health [ 21 ]. Precision medicine utilizes genomic data to identify individual variances in drug metabolism, effectiveness, and probable adverse reactions. Healthcare professionals can customize interventions based on the distinct genetic profile of each patient, ensuring that the treatments are aligned with the individual's specific biological characteristics. Genomics is a fundamental aspect of precision medicine, but a comprehensive approach goes beyond genetics to include various elements that influence an individual's health profile. Lifestyle choices, environmental exposures, and socio-economic characteristics collectively affect the foundation on which precision medicine implements its customized interventions.
In pain management, adopting a holistic viewpoint is of utmost importance. Pain is not solely a genetic manifestation; it is interconnected with psychological, social, and environmental aspects. Precision medicine in pain treatment entails a thorough evaluation that considers the genetic predispositions and the numerous circumstances that impact an individual's pain perception [ 21 ].

Tailoring Pain Management Strategies Based on Individual Patient Profiles

Precision medicine emerges as a revolutionary framework in pain management, enabling healthcare providers to surpass the conventional trial-and-error method in treatment. Customizing pain management strategies according to specific patient profiles requires thoroughly comprehending the patient's distinct attributes and utilizing this knowledge to develop precise therapies. An essential principle of precision medicine in pain management is acknowledging that pain problems are not uniform. Different subcategories of pain syndromes may fall within a larger category, each with its own unique underlying causes and reactions to therapies. Within the context of chronic pain, there are many subtypes, such as neuropathic pain, inflammatory pain, and nociceptive pain. Each category requires a customized strategy [ 22 ]. Precision medicine enables the classification of pain problems according to their underlying mechanisms, facilitating the development of focused therapies targeting the fundamental causes. Conventional pain management methods often consist of a systematic increase in treatments, where drugs and therapies are administered uniformly to a wide range of patients. Precision medicine challenges this established model by promoting personalized treatment strategies that consider each patient's unique attributes. When dealing with chronic pain, it may be necessary to customize the selection of medications according to the individual's genetic tendency for drug metabolism.
Pharmacogenomic testing, which analyzes the impact of an individual's genes on their reaction to drugs, is a significant tool in determining the choice of analgesic agents [ 22 ]. Furthermore, non-pharmacological interventions, such as physical therapy or CBT, can be customized to match the patient's inclinations, way of life, and psychosocial circumstances. Precision medicine in pain management surpasses the fixed character of treatment programs, acknowledging that people may undergo variations in their pain profiles over time. Wearable technologies and mobile health applications enable real-time monitoring, which allows for the adjustment of interventions according to the changing needs of the patient [ 23 ]. For example, a patient suffering from persistent pain may employ a mobile application to monitor pain intensity, physical exertion, and sleep cycles. By utilizing this up-to-the-minute data, healthcare providers can obtain valuable information about the factors contributing to the worsening of pain and customize interventions accordingly. If sleep disruptions become a notable factor in causing pain, the therapeutic approach may shift toward addressing sleep hygiene and employing relaxing techniques.

Advances in Pharmacogenomics and their Implications

Pharmacogenomics, which investigates the genetic elements impacting drug reactions, is a notable advancement in precision medicine. Pharmacogenomics reveals how drugs can be chosen and how dosages can be improved for pain treatment by considering an individual's genetic characteristics. The metabolism of drugs, an intricate interaction of enzyme systems within the body, differs among individuals due to genetic variances. Pharmacogenomic testing examines crucial genes that produce enzymes involved in drug metabolism, providing insights into an individual's potential reaction to particular drugs.
One example is the cytochrome P450 family of enzymes, including CYP2D6 and CYP3A4, which are vital in breaking down certain pain-relieving drugs, such as opioids [ 24 ]. Genetic differences in these enzymes can lead to individuals being classified as poor metabolizers, extensive metabolizers, or ultra-rapid metabolizers. Every category has an impact on how the body metabolizes drugs, affecting both their effectiveness and the likelihood of experiencing negative side effects. The opioid crisis has highlighted the crucial necessity for implementing safer methods of prescribing in the field of pain management. By clarifying an individual's likely reaction to opioids, pharmacogenomics contributes to addressing this urgent matter. Genetic differences in opioid receptors, namely the mu-opioid receptor (OPRM1), have an impact on the effectiveness and possible adverse reactions of opioids [ 24 ]. Through the identification of genetic markers linked to heightened sensitivity or reduced response to opioids, healthcare providers can make well-informed choices regarding opioid prescriptions, thereby reducing the chances of overdose or insufficient pain management. The merging of pharmacogenomics and precision medicine facilitates the advancement of personalized analgesic treatment plans. Healthcare providers can utilize genetic information to tailor the selection of analgesic drugs, doses, and administration methods instead of using a standardized strategy for pain management. Recent research has investigated the application of pharmacogenomic testing to enhance pain management for illnesses such as osteoarthritis and postoperative pain [ 24 ]. Research indicates that integrating pharmacogenomics into pain management regimens not only enhances pain outcomes but also decreases the occurrence of adverse medication reactions. Although the potential of precision medicine in pain management is undoubtedly intriguing, significant ethical concerns need to be addressed.
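The metabolizer categories described above are often assigned from a CYP2D6 "activity score" summed over the two inherited alleles. The sketch below models that convention; the allele scores and cutoff bands follow the commonly used CPIC-style scheme but are included here purely for illustration, not as a clinical tool.

```python
# Illustrative CYP2D6 phenotype assignment from a two-allele
# activity score (CPIC-style bands, for illustration only).
ALLELE_ACTIVITY = {
    "*1": 1.0,     # normal-function allele
    "*10": 0.25,   # decreased-function allele
    "*4": 0.0,     # no-function allele
    "*1xN": 2.0,   # gene duplication (increased activity)
}

def cyp2d6_phenotype(allele_a, allele_b):
    """Map a diplotype's summed activity score to a metabolizer class."""
    score = ALLELE_ACTIVITY[allele_a] + ALLELE_ACTIVITY[allele_b]
    if score == 0:
        return "poor metabolizer"
    if score <= 1.0:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal (extensive) metabolizer"
    return "ultrarapid metabolizer"

print(cyp2d6_phenotype("*4", "*4"))    # poor metabolizer
print(cyp2d6_phenotype("*1", "*1xN"))  # ultrarapid metabolizer
```

A poor metabolizer may gain little analgesia from a prodrug opioid such as codeine, while an ultrarapid metabolizer risks toxicity at standard doses, which is exactly why such genotype-to-phenotype translation informs prescribing.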
Ensuring the responsible and fair incorporation of precision medicine into clinical practice requires careful consideration of matters of consent, privacy, and accessibility. Because precision medicine entails examining genetic data, obtaining informed consent becomes paramount. Patients must receive comprehensive information regarding the ramifications of genetic testing, encompassing the potential identification of non-pain-related ailments and the psychological and social consequences of genetic data. The intricate nature of genomic data introduces additional elements to obtaining informed consent, requiring explicit communication of the inherent unpredictability associated with genetic predictions. When dealing with the genomic landscape of precision medicine, it is crucial to address patient concerns, ensure understanding, and respect autonomy [ 25 ]. The abundance of genetic data produced by precision medicine gives rise to worries regarding privacy and data security. Genomic data is intrinsically sensitive, as it includes information about the individual and their biological relatives. Implementing strong procedures for encrypting, storing, and transmitting data is crucial to protect against unwanted access and potential misuse. Furthermore, the possibility of unforeseen outcomes, such as genetic discrimination by insurance companies or employers, emphasizes the necessity of legal and ethical structures that safeguard individuals from improper use of their genetic data [ 25 ]. The potential of precision medicine in pain management should not be limited to select segments of the population. It is essential to responsibly adopt measures that guarantee fair access to genetic testing and personalized interventions. To tackle inequalities in access, focusing on socio-economic variables, geographical issues, and cultural subtleties is necessary.
Incorporating precision medicine into pain management requires proactive efforts to eliminate obstacles that may contribute to ongoing health disparities [ 25 ]. The practice of precision medicine in pain management prioritizes the involvement of patients in the decision-making process. Providing patients with information about the possible advantages and restrictions of genetic testing promotes a setting where well-informed decisions can be made. Efforts to educate patients, openly discuss the consequences of genetic information, and engage in joint decision-making foster a collaborative relationship between healthcare practitioners and patients. Focusing on precision medicine empowers patients to actively engage in their pain treatment journey, thereby contributing to a collaborative decision-making process [ 26 ].

Interdisciplinary collaboration

CLDs have a widespread impact, and among the complex array of difficulties, pain poses a significant and demanding problem. Acknowledging the complex and varied nature of pain experienced by liver patients emphasizes the necessity for collaboration between different disciplines. Hepatologists and pain experts, with their distinct areas of expertise, come together to form a collaborative partnership focused on understanding the intricacies of liver-related pain. In the field of liver diseases, where pain frequently intersects with the complexities of inflammation, fibrosis, and comorbidities, the partnership between hepatologists and pain experts becomes crucial [ 26 ]. Hepatologists, equipped with the knowledge of fundamental liver disorders, aid in identifying pain triggers and developing precise treatment strategies. Pain specialists, for their part, bring extensive expertise in the intricacies of pain treatment and a wide range of interventions that go beyond the conventional use of medications.
Utilizing Collaborative Methods for Holistic Pain Management

To effectively manage pain in liver patients, it is crucial to move away from the fragmented approach to healthcare and instead coordinate a holistic pain care plan. The emergence of team-based methods involving the collaboration of hepatologists, pain specialists, nurses, psychiatrists, and physical therapists represents a paradigm change that surpasses the constraints of individual expertise. As primary healthcare professionals, nurses connect the hepatology and pain management fields. Their everyday encounters with patients offer a distinctive perspective for evaluating the comprehensive influence of pain on persons' lives. Nurses are crucial in ensuring ongoing care and support for liver patients experiencing discomfort. They achieve this by educating patients, monitoring their symptoms, and facilitating effective communication across different medical specialties [ 27 ]. The psychological aspects of pain experienced by liver patients necessitate the involvement of psychologists with specialized knowledge. Psychologists work closely with hepatologists and pain experts to explore the emotional aspects of patients' experiences, focusing on anxiety, sadness, and the effects of pain on their overall well-being. Incorporating psychological therapies, such as cognitive-behavioral therapy, becomes a fundamental aspect of the comprehensive care framework. Physical therapists have a complete awareness of the physical symptoms associated with liver-related discomfort, in addition to their expertise in rehabilitative exercises. Physical therapists work closely with hepatologists and pain specialists to design customized exercise programs that relieve pain and promote functional recovery. Their proficiency in managing musculoskeletal conditions and enhancing mobility adds to a comprehensive approach to pain management [ 27 ].
Improving Communication and Collaboration Among Healthcare Practitioners

Effective communication and coordination among healthcare providers is the foundation for successful interdisciplinary collaboration. Facilitating open communication, collaborative decision-making, and prioritizing the patient's needs is crucial for creating a harmonious relationship among different areas of expertise. Regular interdisciplinary meetings, where hepatologists, pain specialists, and allied healthcare providers gather, function as a central point for collaboration. These meetings serve as a forum for analyzing cases, exchanging perspectives, and making decisions together. By engaging in a collaborative process of generating ideas, healthcare providers can enhance treatment plans, tackle difficulties, and coordinate actions to maximize pain management for patients with liver conditions [ 28 ]. Electronic health record (EHR) integration serves as a technology infrastructure that optimizes the transmission of information between healthcare practitioners. Enabling hepatologists and pain experts to access patient records collectively guarantees a thorough comprehension of the patient's medical history, liver condition, and pain management treatments. The instantaneous transmission of information reduces communication gaps and improves the consistency of healthcare delivery. Adopting a patient-centered approach, which involves individuals actively engaging in decision-making and goal-setting, catalyzes improved communication. Creating care plans that align with the patient's preferences, values, and lifestyle ensures that the interventions are well-suited to the individual's specific requirements [ 28 ]. Enabling patients to participate in their pain management process actively enhances their sense of control and encourages them to follow the entire care plan.
Interdisciplinary collaboration thrives when healthcare practitioners possess a shared comprehension of one another's roles, knowledge, and viewpoints. Interprofessional education programs involving hepatologists, pain specialists, and other team members are crucial in promoting a culture of mutual respect and recognition for diverse contributions through cross-disciplinary learning. Workshops, seminars, and collaborative training programs help dismantle professional barriers and foster a culture of collaboration.

Future directions and opportunities: paving the path for transformative pain management

The field of pain management for liver patients is currently on the verge of significant advancements, as continuing research is uncovering new knowledge and potential revolutionary discoveries. The convergence of hepatology, pain management, and state-of-the-art research provides opportunities for pioneering approaches that target the unique difficulties presented by liver-related pain. Ongoing research investigates the complex relationship between neuroinflammation and pain in the context of CLDs. Gaining insight into the role of inflammation in the central nervous system in pain perception reveals new possibilities for focused therapies. Manipulating neuroinflammatory pathways using drugs or immunomodulatory methods can reduce pain at its origin [ 28 ]. Advanced neural imaging techniques, such as functional MRI (fMRI) and positron emission tomography (PET), provide insight into the pain circuits in the brain. Research efforts on the neurological basis of liver-related pain seek to decipher the complex signals and reactions that contribute to the experience of pain. Neuroimaging research in this field offers a detailed comprehension of how the brain contributes to the perception of pain, which allows for the development of precise therapies that can disrupt dysfunctional neural circuits [ 28 ].
Possible Advancements in Pain Control for Individuals with Liver Conditions

Immunomodulatory medications, initially developed for illnesses including rheumatoid arthritis and inflammatory bowel disease, are now becoming recognized as potentially transformative treatments for liver-related pain. These therapies, which focus on specific elements of the immune system, show potential in adjusting the inflammatory environment contributing to pain in CLDs [ 29 ]. The ongoing clinical trials are investigating the safety and effectiveness of immunomodulatory drugs. This could lead to a redefinition of how inflammation is managed in cases of liver-related pain. The emerging discipline of gene therapy provides a genetic toolbox for precise pain alleviation in individuals with liver conditions. Gene therapy utilizes viral vectors to transport therapeutic genes to regulate the expression of target molecules that play a role in pain pathways [ 30 ]. Preclinical investigations on the efficacy of gene therapy in animal models of CLDs demonstrate initial achievements, sparking hope for a future where genetic therapies serve as precise instruments in pain management [ 31 - 33 ].

Significance of Ongoing Clinical Trials and Studies

Clinical trials and research studies are the arena for testing new therapies and therapeutic approaches. Continuing efforts have far-reaching consequences that extend far beyond research facilities and medical centers, influencing pain management for individuals with liver conditions. Peripheral nerve therapies in clinical trials provide opportunities to explore pain transmission routes separate from the central nervous system [ 33 - 36 ]. Peripheral nerve blocks, radiofrequency ablation, and neuromodulation procedures interrupt pain signals before they arrive at the spinal cord and brain.
Continuing research examining the safety and effectiveness of these treatments in individuals with liver conditions offers valuable knowledge about their potential contribution to the comprehensive pain management approach [ 29 ]. Incorporating patient-reported outcomes (PROs) into clinical trials is a significant change that enhances the role of the patient in assessing the effectiveness of pain management therapies [ 37 - 39 ]. PROs offer a comprehensive perspective for evaluating the effectiveness of interventions by considering the patient's subjective experiences, pain levels, and quality of life. Continuing research that utilizes PROs helps to develop a detailed understanding of the patient's viewpoint. This knowledge is then used to improve therapies to match the individual's priorities better [ 40 ]. Amidst a period characterized by technological progress, it is essential to conduct continuous research to examine the effectiveness of telehealth interventions in alleviating pain for individuals with liver conditions. This research is vital for addressing disparities in healthcare accessibility. Telehealth services, which include virtual consultations, remote monitoring, and digital interventions, provide a crucial solution for persons who encounter obstacles in accessing conventional healthcare. The ongoing trials evaluate telemedicine's feasibility and efficacy in managing liver-related pain. These trials reveal telehealth's potential benefits in improving access to comprehensive care.
We extend our heartfelt gratitude to the Paolo Procacci Foundation for their unwavering support, which has greatly enriched the success of this paper.
CC BY
no
2024-01-16 23:47:20
Cureus.; 15(12):e50633
oa_package/bd/71/PMC10789475.tar.gz
PMC10789476
38226104
Introduction Pregnancy represents a significant period of transformation in a woman's life, characterized by notable manifestations of creative and nurturing capacities [ 1 ]. This is a critical phase during which maternal health significantly impacts the overall welfare of the developing fetus. This particular period is distinguished by notable physical and physiological transformations, as the human body adjusts to facilitate the development of the growing fetus within the uterus [ 2 ]. The physiological alterations in biomechanics, hormone regulation, and vascular dynamics that occur during pregnancy have been associated with a diverse array of musculoskeletal problems. The displacement of the uterus during pregnancy results in a redistribution of the body's center of gravity, hence imposing mechanical strain on the physiological system [ 3 ]. Hormonal variations during pregnancy contribute to joint laxity, while fluid retention can exert pressure on soft tissues, rendering pregnant women more vulnerable to musculoskeletal problems. Frequently reported issues encompass a range of common ailments, such as back pain, leg cramps, and peripheral neuropathies, with spinal pain being the predominant concern [ 4 ]. The occurrence of pregnancy-induced neuromechanical changes, including modifications in stride, posture, and sensory input, escalates the susceptibility to musculoskeletal problems and fall-related accidents [ 5 ]. As an example, the pelvis undergoes a tilting motion, causing the back to arch to sustain equilibrium, frequently resulting in suboptimal postural alignment. Moreover, the progressive increase in body mass and hormonal fluctuations experienced during pregnancy can have an impact on the foot, contributing to feelings of pain [ 6 ]. Recent research has indicated that musculoskeletal disorders exhibit the highest prevalence during the second and third trimesters of pregnancy. 
In the absence of appropriate therapy, these relatively mild disorders have the potential to intensify, thereby impacting the well-being of both the expectant mother and the developing fetus [ 3 ]. The dissemination of information regarding these matters to expectant mothers is of utmost importance, as it necessitates no specialized apparatus, but rather relies on the presence of a competent midwife educator and the receptiveness of the mothers to engage in attentive listening and adherence to instructions actively [ 7 ]. The global burden of musculoskeletal issues during pregnancy has been highlighted by the World Health Organization (WHO), prompting the organization to organize meetings aimed at enhancing rehabilitation services for these illnesses [ 8 ]. The incidence of pregnancy-related low back pain and pelvic girdle discomfort exhibits substantial global variation, exerting a notable impact on individuals' daily functioning and overall well-being. Within the Indian context, much research has been conducted to investigate the prevalence and consequential effects of musculoskeletal discomfort experienced during pregnancy [ 9 ]. A study conducted in Tamil Nadu revealed that a significant proportion of pregnant women encounter various physical discomforts, particularly during the advanced stages of pregnancy [ 10 , 11 ]. The researchers at the Institute of Obstetrics and Gynaecology in Chennai discovered that a notable percentage of primigravida women encounter musculoskeletal issues in the latter stages of pregnancy, specifically during the second and third trimesters. Developing comprehensive preventative and treatment strategies can be facilitated by gaining an understanding of the typical discomforts experienced during different trimesters [ 12 , 13 ]. Numerous studies conducted both in India and internationally have underscored the necessity of implementing such efforts. 
The authors propose that musculoskeletal discomforts, including lower back and hip pain, are widespread and have a substantial negative impact on the overall quality of life experienced by pregnant individuals [ 14 , 15 ]. The recognition of prenatal exercises in the management of various diseases is growing. These exercises have the objective of improving the overall physical and psychological health of pregnant women and reducing the occurrence of pregnancy-related disorders [ 16 , 17 ]. Typically, prenatal exercise routines consist of low-impact aerobic workouts and stretching, which are characterized by their ease of execution and ability to effectively alleviate discomfort and minimize the duration of childbirth preparation. Nevertheless, there exists a deficiency in understanding antenatal exercises, particularly among first-time pregnant women [ 18 ]. The objective of this study is to evaluate the efficacy of antenatal exercises in mitigating musculoskeletal disorders among primigravida women who are receiving care at an antenatal outpatient facility. This study aims to fill a gap in current knowledge and practice by examining the effects of prenatal exercises on the well-being of pregnant women, offering valuable insights into this area.
Materials and methods The primary objective of the research study was to assess the efficacy of antenatal exercises in reducing musculoskeletal discomfort experienced by primigravida mothers and to examine its association with selected sociodemographic variables. The study used a quantitative methodology, specifically utilizing a pre-experimental design known as the one-group pretest-posttest design. The antenatal exercises consisted of three components: abdominal tightness, pelvic tilting, and foot and ankle exercises, which served as the independent variable in the study. The dependent variable in this study was the occurrence and intensity of musculoskeletal ailments among first-time pregnant mothers. The research was carried out for three months at the antenatal outpatient department of the Maternity Tertiary Care Center located in Tamil Nadu. The study focused on primigravida mothers who were in their second and third trimesters and were attending the antenatal outpatient department. The researchers utilized a purposive sampling method, selecting a sample of 60 primigravida mothers who met certain inclusion criteria. These criteria included a willingness to participate, proficiency in either Tamil or English, and the presence of only mild to moderate musculoskeletal complaints. The exclusion criteria encompassed individuals with mental disability, high-risk medical disorders, prior antenatal exercise experience, severe musculoskeletal diseases, and utilization of pain treatment techniques. The instrument utilized for the collection of data was a pain scale that was designed and verified using input from medical professionals and authorities in the nursing department. The study incorporated demographic and obstetric factors, as well as a numerical pain rating scale that spanned from 0 (indicating the absence of pain) to 10 (representing intense pain). The reliability of the tool was validated through the attainment of a strong correlation coefficient of 0.92. 
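The 0-10 numerical rating scale described above can be summarized as a simple categorization. The exact cutoffs for "mild" and "moderate" pain are not stated in the text, so the bands below follow a common clinical convention and are an assumption, not the study's instrument:

```python
# Hypothetical categorization of a 0-10 numerical pain rating scale.
# The cutoffs (1-3 mild, 4-6 moderate, 7-10 severe) are a common clinical
# convention and are assumed here; the study does not publish its bands.

def pain_category(score: int) -> str:
    """Map a 0-10 pain score onto a verbal category."""
    if not 0 <= score <= 10:
        raise ValueError("score must be between 0 and 10")
    if score == 0:
        return "no pain"
    if score <= 3:
        return "mild"
    if score <= 6:
        return "moderate"
    return "severe"

print([pain_category(s) for s in (0, 2, 5, 9)])
# → ['no pain', 'mild', 'moderate', 'severe']
```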
The ethical aspects of the study were comprehensively handled, as evidenced by the permission received from the Institutional Ethics Committee of Madras Medical College and the acquisition of informed consent from all mothers involved. The pilot study, which included a sample size of 10 mothers, provided evidence to support the practicality of the primary study. The process of data collection encompassed several key steps, including establishing initial contact with participants, obtaining informed consent, and conducting a pre-assessment utilizing the numerical pain scale. The researcher demonstrated the exercises for 20 minutes, after which the mothers were asked to perform the activities themselves. The process was monitored and observed for two weeks. Subsequently, antenatal exercises were illustrated, followed by a post-assessment that took place two weeks later. The intervention protocol outlined the specific details of the location, exercises, duration, teaching approach, and posttest evaluation. The statistical techniques used in this study included descriptive statistics (frequency distributions, percentages, and mean) and inferential statistics (the chi-square test and the extended McNemar's test).
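For paired pre/post categorical data of the kind collected here, McNemar's test compares the participants whose category changed in each direction. The counts below are invented for illustration only; the study's raw paired data are not published in this text, and the study itself used an extended (multi-category) variant of the test:

```python
# Illustrative McNemar chi-square for paired binary outcomes, e.g. pain
# dichotomized as moderate vs mild before and after the intervention.
# Counts are hypothetical, not the study's data.

def mcnemar_statistic(b: int, c: int) -> float:
    """McNemar chi-square statistic.

    b = participants who improved (moderate before, mild after)
    c = participants who worsened (mild before, moderate after)
    Concordant pairs do not enter the statistic.
    """
    if b + c == 0:
        return 0.0
    return (b - c) ** 2 / (b + c)

# Suppose 25 mothers moved from moderate to mild back pain and 2 moved
# the other way (hypothetical numbers).
stat = mcnemar_statistic(25, 2)
print(round(stat, 2))  # → 19.59; large values indicate a significant change
```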
Results Sociodemographic and obstetric variables The study examined 60 first-time mothers (Table 1 ), most of whom were aged between 21 and 30 years (49, 81.67%). Twenty-six (43.33%) mothers had been married for one to two years, while the majority of them were in their first one to three years of marriage. The average level of education varied, with secondary education being the most common (31, 51.67%) and primary education being the next most common (23, 38.33%). Of the total number of participants, 49 (81.66%) were Hindu and 45 (75%) were homemakers. The majority of them (51, 85%) earned between Rs. 10,001 and Rs. 12,000 a month, and most of them lived in cities (34, 56.67%). The majority of the participants got married between the ages of 18 and 24 years. According to the obstetric data presented in Table 2 , the majority of mothers had a gestational age ranging from 29 to 32 weeks, accounting for 35 (56.32%) participants in this study. The distribution of antenatal checkups was evenly split between government hospitals and primary health centers, with each accounting for 46.67% (28) of the total. The majority of participants (42, 70%) reported attending three antenatal visits. The majority of heights fell within the range of 151-160 cm, accounting for 32 (53.33%) samples. Similarly, the most prevalent weight category was 51-60 kg, representing 34 (56.67%) participants. All participants were enrolled in the Perinatal and Infant Care Monitoring and Evaluation (PICME) system and received immunizations. No prior medical or obstetric complications were documented. Pre-intervention levels of musculoskeletal ailments Before the implementation of the antenatal exercise intervention, there was a notable prevalence of moderate musculoskeletal pain observed among the maternal population. Forty-five (75%) participants reported experiencing moderate back pain, while 48 (80%) reported pelvic pain and 47 (78.33%) reported leg cramps. 
The observed high prevalence of moderate pain levels suggests a significant burden of musculoskeletal disorders within the population under investigation (as evidenced by the data presented in Tables 3 - 5 ). Effectiveness of antenatal exercises The implementation of antenatal exercises demonstrated a significant decrease in musculoskeletal ailments. Following the session, a notable decrease in back pain was seen, with 40 (66.66%) participants reporting mild discomfort. This reduction was statistically significant compared to the participants' pre-intervention condition. In a similar vein, the prevalence of mild pelvic pain rose to 42 (70%) following the intervention, while 44 (73.33%) mothers reported experiencing mild leg cramps. The findings of this study highlight the efficacy of antenatal exercises in reducing the intensity of musculoskeletal pain experienced by first-time pregnant mothers (Tables 6 - 8 ). Association with sociodemographic variables The pain levels observed after the intervention had different associations with sociodemographic characteristics. The results of the study indicate that educated women ( P = 0.05) and homemakers ( P = 0.01) experienced a more substantial reduction in pain levels following the intervention. This finding suggests that the lifestyle of homemakers may have played a beneficial role in enhancing the efficiency of the exercises. There was a discernible association observed between the kind of family structure and the extent of reported improvements, with nuclear families ( P = 0.01) exhibiting more substantial progress. This phenomenon may be attributed to the presence of enhanced support systems or the provision of more individualized care within smaller family groups. Regarding obstetric characteristics, there was a notable association observed between the age at marriage ( P = 0.01) and weight of mothers ( P = 0.05) and the levels of pain experienced. 
Mothers with a body weight over 60 kg reported higher levels of moderate pain scores following the intervention, suggesting that the efficacy of prenatal workouts may be influenced by body weight.
Discussion The findings of the study offer significant insights into the effects of prenatal activities on musculoskeletal ailments in first-time pregnant mothers. The initial findings of the study indicated a significant incidence of musculoskeletal disorders among primigravida mothers, with a majority reporting moderate levels of back pain, pelvic discomfort, and leg cramps. The results align with the research conducted by Onyemaechi et al., wherein a similar pattern of elevated prevalence of musculoskeletal impairments, such as calf muscle cramps and low back pain, was observed among pregnant women, particularly during the advanced stages of pregnancy [ 19 ]. The findings from both studies highlight the escalating physical strain experienced during pregnancy, as seen by the heightened symptomatology observed in each successive trimester. The considerable reduction in pain levels post-intervention indicates the effectiveness of antenatal exercises. This correlates with the findings of Davenport et al., where a specialized exercise program resulted in lower pain intensity and better functional abilities in pregnant women with low back pain [ 20 ]. Similarly, the research by Stuge indicated that pelvic girdle exercises considerably reduced pelvic girdle pain and enhanced specific tasks. The unifying thread in this research is the emphasis on targeted exercises to treat specific musculoskeletal disorders, underlining the usefulness of such interventions in prenatal care [ 21 ]. The study also evaluated the association of pain reduction with demographic and obstetric factors. Interestingly, educated mothers, homemakers, younger mothers, and those from nuclear households exhibited more significant benefits. This shows that lifestyle characteristics, familial support systems, and age may play roles in how efficiently pregnant women can manage and lessen musculoskeletal discomfort. 
Additionally, the result echoes the research by Fiat et al., which indicated physical inactivity and body weight to be major factors influencing musculoskeletal pain during pregnancy. This confirms the view that a holistic approach, encompassing lifestyle and physical health, is vital in managing pregnancy-related ailments [ 22 ]. The results underline the crucial role of nurses in prenatal care, particularly in teaching and encouraging primigravida mothers to undertake antenatal exercises. Nurses can function as catalysts in promoting these activities, highlighting their benefits not just in lowering musculoskeletal illnesses but also in enhancing the general quality of life during pregnancy. This study, through its findings and comparisons with current literature, underlines the need for integrating exercise regimens into normal antenatal care, customized to the unique needs of pregnant women. While the study offers valuable information, further research might address the long-term impact of antenatal activities beyond immediate pain reduction, including postpartum healing and mental well-being. The limitation of the current study is its focus on a specific cohort (primigravida mothers), which may not be generalizable to all pregnant women. Further research should widen the demographic reach to include multigravida women and study diverse geographical and cultural situations to validate these findings more extensively.
Conclusions The study assessing the efficacy of antenatal exercises in alleviating musculoskeletal disorders among first-time pregnant women provides valuable insights into prenatal treatment strategies. The research findings provide clear evidence that implementing certain antenatal exercises effectively reduces the occurrence and intensity of musculoskeletal ailments, including back pain, pelvic discomfort, and leg cramps, in first-time pregnant women. The significant decrease in pain levels following the intervention highlights the need to integrate regular exercise regimens into prenatal care. The results of this study are supported by previous research, providing additional evidence for the significant impact of physical activity on the overall health and wellness of pregnant women. The findings of this study are of great significance not only for the field of clinical practice but also for informing the development of effective strategies for prenatal care. Additionally, the study emphasizes the crucial involvement of nurses and healthcare professionals in advocating for and implementing these exercise routines. By incorporating antenatal exercises into standard prenatal care and providing personalized coaching to pregnant women, healthcare practitioners have the potential to greatly enhance the well-being of expectant mothers. This methodology has the potential to yield improved health outcomes for both mothers and their children. The results of the study also suggest the need for additional research in this field, particularly regarding diverse populations and the long-term impacts of prenatal activities. The study serves to confirm the fundamental significance of engaging in physical activity during pregnancy and emphasizes the necessity for its wider implementation in prenatal healthcare.
During pregnancy, there are notable alterations in biomechanics, hormones, and vascular functioning, which frequently result in a range of musculoskeletal ailments, including back pain, leg cramps, and pelvic girdle discomfort. The significance of pregnancy-related musculoskeletal problems on women's daily functioning and general well-being is highlighted by their widespread occurrence worldwide, necessitating heightened focus and implementation of effective therapeutic approaches. The main aims of this study were to assess the effectiveness of prenatal exercises in reducing musculoskeletal discomfort and investigate the association between post-intervention levels of discomfort and certain demographic factors. A quantitative technique was used in this study, utilizing a pre-experimental design conducted for three months. A total of 60 primigravida mothers were selected as participants through purposive sampling. The study was conducted in a Maternity Tertiary Care Center located in Tamil Nadu. The intervention encompassed the provision of antenatal exercises, specifically focusing on abdominal tightness, pelvic tilting, and foot and ankle movements. The researcher demonstrated the exercises for 20 minutes, and afterward, mothers were asked to perform the activities themselves. The process was monitored and observed for two weeks. The findings were statistically significant, suggesting a noteworthy decrease in musculoskeletal disorders following the implementation of the intervention. The statistical analysis revealed a high level of significance ( P = 0.001), confirming the efficacy of the exercises. Before the implementation of the intervention, a significant proportion of mothers, namely, 45 (75%) reported experiencing moderate back pain. However, following the intervention, this percentage notably fell to 33.34% (20). The incidence of moderate pelvic pain decreased from 80% (48) to 30% (18), and a comparable pattern was observed in the reduction of leg cramps. 
Additionally, the research identified significant associations between the improvements and a range of demographic and obstetric factors, including the level of education, occupation, family structure, age at marriage, and weight of the mother. The results highlight the significance of incorporating antenatal exercises as a regular component of prenatal care to minimize musculoskeletal discomfort, hence promoting the overall health and well-being of expectant mothers.
The authors would like to thank the participating mothers for their commitment and invaluable contributions and their staff members for their unwavering support and expertise. Their combined efforts have been instrumental in the success of this study, enhancing our understanding and making a significant impact.
CC BY
no
2024-01-16 23:47:20
Cureus.; 15(12):e50494
oa_package/5c/96/PMC10789476.tar.gz
PMC10789477
38226082
Introduction Renal tubular acidosis (RTA) is one of the few causes of metabolic acidosis with a normal serum anion gap. RTA is mainly of three types: proximal, distal, and hyperkalemic RTA [ 1 ]. Distal renal tubular acidosis (RTA-1) can be hereditary due to a genetic mutation of enzymes and/or channels in the distal tubule and collecting ducts, but it can also be secondary to some systemic diseases, including but not limited to Sjögren's syndrome [ 2 ]. Distal renal tubular acidosis has variable clinical presentations and is often associated with hypokalemia. It rarely manifests as quadriplegia and respiratory failure. Furthermore, pregnancy has been reported to exacerbate RTA-1 due to its physiologic changes and volume overload on the kidneys [ 3 ]. Here, we present a case of a female patient with undiagnosed RTA-1 that was associated with Sjögren's syndrome. The patient was admitted for quadriplegia and eventually went into respiratory failure. She had a recent history of complicated pregnancy.
Discussion Distal renal tubular acidosis is a disorder of hyperchloremic normal anion gap metabolic acidosis. It causes a spectrum of clinical presentations depending upon the cause and severity of the disease. Severe RTA-1 with a genetic mutation can develop early in infancy or childhood, while the mild form can manifest in adolescence. Acquired RTA-1 secondary to autoimmune disorders, e.g., Sjögren's syndrome, is more commonly observed in adulthood [ 1 ]. RTA-1 is diagnosed when the urine pH is inappropriately alkaline, i.e., urine pH > 5.5, in the context of existing or induced metabolic acidosis [ 2 , 4 ]. The urine anion gap (UAG) is calculated to differentiate renal from extrarenal causes of hyperchloremic metabolic acidosis. The UAG remains positive in RTA-1 and negative if normal anion gap metabolic acidosis is due to an extrarenal etiology, e.g., diarrhea [ 1 , 2 ]. Moreover, the UAG becomes negative in proximal renal tubular acidosis (RTA-2) provided serum HCO3- is low. Acquired RTA-1 is associated with Sjögren's syndrome in 5-25% of cases [ 5 ]. Patients with Sjögren's syndrome present due to glandular and/or extraglandular involvement of the disease. RTA-1 is one of its extraglandular expressions. One large Indian study shows that only 8.1% of patients come to clinical attention by themselves due to their subjective complaint of sicca symptoms [ 6 ]. Sjögren's syndrome diagnosis is mostly made based on criteria set by the American European Consensus Group (AECG) [ 6 , 7 ]. Hypokalemia is commonly found in patients with RTA-1. One proposed theory is that since hydrogen is not excreted, potassium is wasted in order to maintain electroneutrality in the urine, which results in low serum potassium [ 5 ]. Caruana and Buckalew, in their study, demonstrated an association between low serum potassium and RTA-1 in 28% of patients [ 4 ]. 
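The urine anion gap reasoning above reduces to simple arithmetic: UAG = (urine Na+ + urine K+) − urine Cl−. A minimal sketch, with hypothetical electrolyte values (the case's actual urine chemistry is reported separately in the article's table):

```python
# Urine anion gap (UAG) calculation for hyperchloremic normal anion gap
# metabolic acidosis. A positive UAG points toward a renal cause such as
# RTA-1; a negative UAG suggests an extrarenal cause such as diarrhea.
# All values below are hypothetical and in mmol/L.

def urine_anion_gap(na: float, k: float, cl: float) -> float:
    """UAG = (urine Na+ + urine K+) - urine Cl-."""
    return (na + k) - cl

def interpret_uag(uag: float) -> str:
    if uag > 0:
        return "renal cause (e.g., RTA-1) likely"
    return "extrarenal cause (e.g., diarrhea) likely"

uag = urine_anion_gap(na=60, k=30, cl=70)  # hypothetical values
print(uag, "->", interpret_uag(uag))  # → 20 -> renal cause (e.g., RTA-1) likely
```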
Hypokalemia can cause symptoms of polyuria and polydipsia, but quadriplegia with impending respiratory failure is a rare occurrence [ 8 - 10 ]. One Asian study shows that 5.4% of patients with Sjögren's syndrome present with hypokalemic paralysis as their first manifestation [ 6 ]. In quite a few instances, undiagnosed asymptomatic RTA-1 is later uncovered and reported during pregnancy due to hypokalemia and associated complications [ 11 , 12 ]. It can be speculated from these cases that physiologic changes in pregnancy might have incited otherwise occult RTA-1. Our patient had a chronic history of nausea and vomiting starting in the second trimester of pregnancy. Her basic workup did not show any abnormality; however, she continued to have her problem. Her pregnancy was terminated, and the baby was delivered via c-section with a possible diagnosis of hyperemesis gravidarum. After delivery, she continued to have emesis, but the frequency decreased from before. Three months post c-section, she started developing weakness in the lower limbs, which progressed rapidly and involved the upper limbs. With a history of weakness and paralysis for three days, she was admitted to our intensive care unit, where she became severely short of breath and was put on a ventilator. After an extensive workup, she was diagnosed with hypokalemic paralysis with RTA-1 secondary to Sjögren's syndrome and was treated with alkali therapy and potassium supplements along with steroids. She improved and was discharged home on oral supplements. We believe that it is possible that recurrent vomiting in our patient during pregnancy could have been due to hyperemesis gravidarum since she started getting better post c-section. Her emesis after delivery was possibly secondary to RTA-1, which might have become apparent due to a complicated pregnancy and/or subsequent surgery.
Conclusions We suggest that any female patient with muscular weakness or paralysis and respiratory insufficiency during or after pregnancy should be investigated for possible RTA-1 and associated electrolyte imbalances. Once diagnosed, appropriate urgent steps should be taken to prevent devastating consequences. Furthermore, research studies are needed to understand the possible impact of pregnancy and its complications on RTA-1 exacerbation.
Renal tubular acidosis type 1 (RTA-1) is a disorder where kidneys are unable to acidify urine, which ultimately results in normal anion gap metabolic acidosis. Its initial presentations and subsequent clinical manifestations can vary depending on the underlying cause and severity of the disease. We report a case of a 26-year-old female with a recent history of complicated pregnancy. She presented to a tertiary care hospital with quadriplegia and shortness of breath and required ventilator support. The extensive workup revealed that the patient had RTA-1 in association with Sjögren's syndrome. There are only a few cases of RTA-1 reported where the diagnosis was made during the pregnancy. By reporting this case of RTA-1 with rare initial clinical presentation and a recent complicated pregnancy, we propose that further research studies should be carried out in this area to explore a possible statistically significant association between pregnancy (and its complications) and RTA-1 exacerbation.
Case presentation A 26-year-old female patient, mother of two children, with a history of cesarean section (c-section) three months ago, was admitted to the medical ICU with complaints of all four-limb weakness for three days and shortness of breath for one day. The weakness started in the lower limbs and progressed rapidly to involve her upper limbs. Over two days, it worsened to the point that she was unable to move her fingers. She became short of breath and ultimately required ventilator support. She also had a history of vomiting for seven months. The frequency of episodes of vomiting decreased from thrice a week to once a week after her c-section three months ago. An associated symptom was loss of appetite for the last seven months. Besides the above findings, a review of the other systems was unremarkable. Her obstetric history included two c-sections. She had two sons, both delivered through c-section. Her elder son was born at term and was healthy, while her second pregnancy was complicated, resulting in a preterm delivery at 28 weeks; the baby, however, survived without complications. She was not taking any medications except for multivitamins and, as needed, antiemetics. Her family history was non-contributory. Her parents and siblings were alive and healthy. She had neither exposure to pets such as dogs, cats, or birds nor a history of recent travel, swimming, tick bites, or hiking. She did not have any fever, neck rigidity, diplopia, joint problems, photosensitivity, alopecia, or other signs or symptoms to suggest a possible underlying autoimmune or infectious process. On examination, a thin, lean lady in respiratory distress was lying in bed. Her respiratory rate was 35 breaths per minute. Her blood pressure was 100/60 mmHg, and pulse rate was 125 beats per minute. She was conscious and oriented. Her limbs were flaccid with the power of 0/5 in all four limbs. 
Deep tendon reflexes were absent, but her peripheral sensations were intact. The chest was clear with vesicular breathing bilaterally, and the abdominal examination was benign. Laboratory investigations of the patient are given below (Table 1 ). Since SS-A/Ro antibodies were strongly positive, we therefore performed Schirmer's test to look for underlying Sjögren's syndrome as a cause of RTA-1, and the result was positive. Further workup, which included a thyroid profile, liver function tests, and cerebrospinal fluid analysis, was normal. The ECG did not reveal any abnormality except for sinus tachycardia.
CC BY
no
2024-01-16 23:47:20
Cureus.; 15(12):e50630
oa_package/e2/79/PMC10789477.tar.gz
PMC10789478
38224561
INTRODUCTION Musculoskeletal Disorders (MSDs) are a serious public health problem in Brazil and around the world, and identifying situations that may contribute to workers’ MSDs is essential to mitigating health risks ( 1 – 4 ) . Data from a European Agency for Safety and Health at Work (EU-OSHA) report show that approximately three out of every five workers in the 28 countries of the European Union (EU) had complaints related to musculoskeletal injuries ( 1 ) . In general, healthcare workers suffer more musculoskeletal injuries than other groups of professionals and have a four times greater risk of developing MSDs than those working in the industrial sector ( 5 ) . In the in-hospital context, nursing stands out for having high levels of musculoskeletal pain with or without comorbidity of MSD ( 6 , 7 ) , and the worsening of these conditions can be the result of intense activities, as proven in previous studies ( 8 ) . Thus, for this study, Material and Sterilization Centers (MSCs) stood out, whose purpose is to process and supply health products ( 9 ) and which, due to the specificities of the work process, such as picking up, washing and transporting heavy boxes of surgical instruments, working for a long time in an orthostatic position, at a fast, repetitive and stressful pace, make professionals more vulnerable to damage to their health ( 10 ) . Studies ( 6 , 11 – 13 ) suggest a relationship between pain, MSD and the work context of the MSC. This is due to the high physical demands of handling heavy materials, the need for agility when processing materials, occupational exposure to contaminated materials, high temperatures and the fast pace of work. The aim of this study was to identify the presence of musculoskeletal pain during the working day among nursing professionals in a MSC.
METHOD Type of Study This is a cross-sectional study. Study Site The study was carried out in three class II MSCs, which process complex and non-complex materials ( 10 ) , linked to a High Complexity Oncology Center (CACON) located in Rio de Janeiro, RJ, Brazil. These MSCs comprised the following units: hospital unit (HU) 1, with 19 MSC nursing professionals, serving the specialties of pediatric surgery, thoracic surgery, abdomino-pelvic surgery, urological surgery, head and neck surgery, bone marrow transplantation, neurosurgery and robotic surgery; HU 2, with 16 MSC nursing professionals, serving the specialties of gynecology and connective bone tissue (orthopedics); and HU 3, with 10 MSC nursing professionals, serving the specialties of mammoplasty and mastology. Study Sample The inclusion criterion was nursing professionals who had been working in the MSC for at least six months, on day and/or night shifts. The exclusion criteria were: professionals from other sectors who occasionally worked extra shifts in the unit, professionals who had been temporarily relocated, and professionals on sick leave during the data collection period. Of the 45 eligible nursing professionals identified in a census, nine were excluded after applying these criteria: six due to prolonged medical leave, two due to administrative leave because they belonged to COVID-19 risk groups, and one due to a sector transfer. The final sample thus comprised the 36 nursing professionals who took part. Data Collection Instrument The instrument included a questionnaire on sociodemographic, occupational and health-related conditions, as well as Corlett and Manenica's self-report diagram of painful areas ( 14 ) .
The questionnaire variables were: a) sociodemographic: age, sex assigned at birth, marital status, ethnicity/color, children and schooling; b) occupational: number of jobs, workload, professional category and time working in the MSC; c) health-related: medical diagnosis of MSD (self-reported). The questionnaire was assessed for clarity and quality, and a pilot test was carried out with nursing professionals from the MSCs who did not take part in this research, to ensure that the information was accurately understood. Corlett and Manenica's self-report diagram of painful areas was used to assess the presence, intensity and location of painful complaints ( 14 ) . It assesses postural discomfort by means of a map of body regions and allows the identification of areas of discomfort when using the workstation (furniture) studied. The most up-to-date version of the diagram was used, adapted for this research. In the current version, the back is assessed by a single "back" item, whereas in the original version it was divided into three areas. For this study, we divided this region into two areas, based on the original division: "dorsal spine" (upper back) and "lumbar spine" (lower back), in order to broaden the options of body areas and make the record more accurate without compromising applicability. It is an open-access instrument, widely used in ergonomic research in various areas, including nursing. At the end of the assessment, the instrument provides an overview of the professional's condition at the time of application in terms of the presence of painful complaints. Analyzing the intensity of pain was not one of the objectives of this study; to characterize the area of pain, we considered only the presence or absence of pain based on the illustrative drawing of the human body proposed in the diagram ( 14 ) .
To record the location of the pain, the drawing of the human body in a posterior position was used, divided into right and left sides, with 14 symmetrical body segments listed in pairs: 11 for left neck and 12 for right neck, 21 for left shoulder and 22 for right shoulder, 31 for left thoracic region and 32 for right thoracic region, 41 for left lumbar region and 42 for right lumbar region, 51 for left elbow and 52 for right elbow, 61 for left wrist and hand and 62 for right wrist and hand, 71 for right leg and foot and 72 for left leg and foot. The study considered only regions where pain was reported on at least one side (right or left). Data Collection Data collection was carried out by a nurse specialized in operating rooms and sterilization centers, a member of the MSC nursing team at hospital unit 1, from December 2019 to September 2020. The professionals were contacted in person, on a day previously agreed with their immediate supervisor, and collectively received all the information about the study. Those who agreed to take part were given the Informed Consent Form (ICF) and the Data Collection Instrument in a non-transparent envelope. They were first asked to fill in the questionnaire characterizing their profile and then the diagram of painful areas. The guidelines for this instrument recommend applying it only once, at the end of the workday ( 14 ) . In this study, however, it was applied at two times, the beginning and the end of the workday, to allow more accurate comparisons of the presence or absence of painful complaints. The instruments were thus filled out as follows: for daytime workers, at 7 a.m. and 4 p.m. or at 8 a.m. and 5 p.m.; for daytime shift workers, at 7 a.m. and 7 p.m. of the same shift; for nighttime shift workers, at 7 p.m. and 7 a.m. of the same shift; and for 24-hour shift workers, from 7 a.m. 
on the day the envelope was delivered until 7 a.m. the following day. As the final approach took place at the end of the working day, it was agreed with the participants and their immediate supervisor that they would have five days from the date of the approach to return the envelope with the completed instruments. When filling in the instruments, the participant marked the number that best represented the intensity of the pain felt in each region marked on the diagram; when there were no painful complaints in a region, a score of zero was recorded. On the day of instrument collection, all envelopes were readily available. Data Processing The data obtained were entered into Microsoft Excel® spreadsheets, double-entered, and validated for comparison and correction in the event of discrepancies. They were then transferred to the Statistical Package for the Social Sciences (SPSS), version 21.0, for processing. Descriptive analyses (frequency distribution) were carried out to identify the profile of the participants and the characteristics related to pain and the diagnosis of MSD. To verify the associations between the presence and location of pain and MSD, as well as the relationships between the number of body segments with painful manifestations and MSD, Pearson's chi-square test, the likelihood ratio test, or Fisher's exact test was used, as appropriate. The significance level adopted was 5%. Ethical Aspects This study complied with the ethical precepts of Resolution 466/12 for research involving human beings and used the ICF; it was approved by the Research Ethics Committees (CEP) of the proposing institutions (Opinion No. 3.636.843/2019) and the co-participating institution (Opinion No. 3.709.457/2019).
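The association tests above were run in SPSS. As an illustration of what Fisher's exact test computes for a 2×2 table, here is a minimal, self-contained Python sketch (the function name is ours, and the example table is hypothetical, built from the pain-by-MSD counts reported in the Results):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, col1)

    def p_table(x):
        # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # tolerate tiny floating-point error when comparing probabilities
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical 2x2 table: pain (yes/no) at the end of the shift by MSD
# diagnosis, using the counts from the Results (23/24 with MSD reported
# pain; 11/12 without MSD reported pain).
p = fisher_exact_2x2(23, 1, 11, 1)
print(f"two-sided p = {p:.3f}")  # prints: two-sided p = 1.000
```

With such small expected cell counts, the exact test is preferred over the chi-square approximation, which is presumably why the authors list it among their options.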
RESULTS The sample consisted of 36 nursing professionals: three nurses, 31 technicians and two nursing assistants. They were predominantly female (83.3%, n = 30), non-white (58.3%, n = 21), educated to high school level (41.7%, n = 15), with children (75%, n = 27), and living without a partner (55.6%, n = 20). The average age of the participants was 47.4 (±9.7) years. Most did not have another job (69.4%, n = 25), worked 40 hours a week (69.4%, n = 25) and had been working in the MSC for between one and five years (55.6%, n = 20). The overall prevalence of self-reported MSD among the nursing professionals was 66.7% (n = 24). Painful complaints were observed both at the beginning and the end of the working day. Among the participants diagnosed with MSD, 75% (n = 18) reported pain at the initial assessment, while 95.8% (n = 23) reported pain at the final assessment. Notably, 91.7% (n = 11) of the participants without a diagnosis of MSD reported pain both at the start and the end of the working day ( Table 1 ). Table 2 shows the distribution of the location of the painful complaints of nursing professionals with (n = 24) or without (n = 12) a diagnosis of MSD, according to the time of assessment. Comparing the two times of the working day showed an increase in the percentage of those with a diagnosis of MSD reporting pain in all the body areas analyzed, and in several areas for those without such a diagnosis, although without any statistically significant difference. Among the professionals diagnosed with MSD, there was an increase in the number of segments with pain, as shown in Table 3 . In the final assessment, compared to the initial one, 8.7% (n = 2) of these professionals had pain in two body segments, 21.7% (n = 5) in four body segments, and 21.7% (n = 5) in six body segments. 
The p-value was statistically significant, indicating an association between the number of body segments with pain and musculoskeletal disorders. It should be noted that not all participants necessarily had pain at both times or in all the segments investigated.
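The headline prevalences follow directly from the subgroup counts above. A short Python sketch (illustrative only, not part of the study's SPSS analysis) reproduces them and confirms that 24/36 rounds to 66.7%:

```python
# Subgroup counts reported in the Results (n = 36: 24 with an MSD
# diagnosis, 12 without).
with_msd, without_msd = 24, 12
pain_start = 18 + 11   # 75% of 24 with MSD, plus 11 of 12 without MSD
pain_end = 23 + 11     # 95.8% of 24 with MSD, plus 11 of 12 without MSD
n = with_msd + without_msd

for label, k in [("MSD diagnosis", with_msd),
                 ("pain at start of shift", pain_start),
                 ("pain at end of shift", pain_end)]:
    print(f"{label}: {k}/{n} = {100 * k / n:.1f}%")
# prints:
# MSD diagnosis: 24/36 = 66.7%
# pain at start of shift: 29/36 = 80.6%
# pain at end of shift: 34/36 = 94.4%
```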
DISCUSSION The research showed that painful symptoms and diagnoses of MSD among nursing professionals are a concern in the Material and Sterilization Centers investigated, corroborating previous studies that have demonstrated the same issue in other professional practice settings ( 6 – 8 , 15 – 17 ) . Studies show that nursing professionals are at high risk of MSD owing to their work dynamics and, as a result of this wear and tear, end up experiencing painful sensations during or at the end of the working day. This is exacerbated when they perform activities involving repetitive movements, forced postures, and prolonged standing and/or walking, as well as working in an unfavorable environment involving sudden movements, heavy loads and emotional stress. The findings of this study also corroborate the fact that professional health practice exposes the musculoskeletal system and, consequently, favors the appearance of pain in various segments of the body ( 12 , 18 , 19 , 20 ) . Research ( 13 , 16 , 17 ) indicates that musculoskeletal disorders and pain are the main causes of absenteeism and sick leave among nursing professionals. A study ( 17 ) investigating 2,761 absences among nursing professionals at a university hospital in Rio Grande do Sul, Brazil, found that MSDs were the main cause, accounting for 16.26% (n = 449) of absences. This total involved 220 (9.94%) professionals, of whom 48.9% had more than one absence, demonstrating the persistence of illness. These data are in line with those obtained in this study and in another national study ( 17 ) , reinforcing the understanding that body areas subject to greater repetitiveness and weight-bearing show greater evidence of MSD and painful sensations in nursing professionals. These inferences can be transposed to the context of working in the MSC. 
Nursing professionals in this sector often report painful sensations, probably associated with the repetitiveness of activities involving washing materials, carrying heavy boxes and baskets of materials prepared for sterilization, and transporting sterile materials to be dispensed to consumer units. As a result, they are a group of workers prone to developing occupational stress and illness related to the occupational risks inherent in the sector ( 21 ) . A study ( 22 ) carried out in Ecuador with MSC nursing professionals is consistent with the data collected here: it showed that 75% of the professionals reported making sudden movements and 42% reported feeling stressed during the working day, which contributed to the prevalence of painful areas in the shoulders and back, followed by pain in the neck, arms and waist. In the present study, among the MSC nursing professionals diagnosed with MSD, 75% reported pain at the beginning of their shift and 95.8% at the end. This suggests that, after physical, organizational and cognitive exposure to the work in the sector, the professionals responded to these stimuli with painful sensations. It is therefore possible to associate these pains with MSD, which in the long term can lead to work incapacity ( 1 , 23 ) . The regions most affected by painful sensations were the lumbar and dorsal spine, feet/legs, neck and shoulders, both at the beginning and the end of the working day. These data are corroborated by studies ( 24 , 25 ) that identified a higher prevalence of pain in the lumbar region, followed by the neck and shoulder, among nurses, nursing technicians and nursing assistants in material and sterilization centers and emergency departments. Similarly, the prevalence of MSD in a sample of 1,932 Chinese nurses was 79.52%, with painful symptoms in the waist region (64.83%), neck (61.83%) and shoulders (52.36%) ( 13 ) . 
In Iran, among 211 nurses aged 35–45, the most affected regions were the lumbar spine (88.33%), knee (83.33%), femoral region (71%) and neck (55%) ( 16 ) . Analysis of the data showed that the nursing professionals started their working day with pain, even those without a diagnosis of MSD, which is worrying from the perspective of these workers' health. In addition, the routine of MSC work first thing in the morning contributes to overload and may be related to the early onset of pain, as may the postures adopted by the workers. Moreover, in practice, the volume of demand is most intense in the morning: this is when the greatest number of procedures and surgeries take place, requiring the sector to be especially agile and prompt in preparing and releasing material for the rest of the day. In the final assessment, at the end of the shift, the results for the neck, shoulder, lumbar spine and wrist/hand regions remained similar to those of the first assessment, reflecting the continued use of these structures and the consequent painful sensation from repetitive movements. However, there was an increase for the dorsal spine, elbow/forearm and leg/foot. These results support the understanding that, by the end of the working day, professionals end up feeling pain. It can be inferred that this pain is triggered by fatigue from the constant use of the back and elbow/forearm throughout the working day, during activities involving cleaning, drying and transporting heavy materials, as well as preparing trays in a sitting or standing position, using the upper limbs to fold, check and transport them to the sterilization sector. This concurs with other studies ( 26 – 28 ) on the health of MSC workers, which state that ergonomic risks are present in this work environment and can affect productivity and quality of care, and that there is a high prevalence of MSD and migraines. 
Regarding the lower limbs, studies confirm that the legs and feet are responsible for supporting the body and that, consequently, long periods of standing can lead to painful sensations; reports of exhaustion due to maintaining the same posture for long periods and of physical exertion are recurrent in this group ( 27 , 28 ) . The results of the present study corroborate these observations. Illustrating this situation, a study ( 29 ) found a high prevalence of MSD associated with standing for long periods and manual handling of materials during the working day among nurses in Greece. Another study ( 30 ) reinforces this finding and adds that physically demanding work, involving the adoption of incorrect postures, contributes greatly to the development of MSD and the presence of pain. Thus, reports of pain in various regions of the body are common among nursing professionals, indicating that this professional group has a clear prevalence of pain and, consequently, work limitations associated with the work environment, repetitive movements, inappropriate postures, stress and a fast pace of work ( 1 , 7 , 30 ) . Over the long term, these combined conditions can compromise workers' functional capacity and lead to permanent musculoskeletal injuries. It should also be pointed out that working with pain limits professionals' work, making it stressful and demotivating. Even a single area affected by pain can have serious consequences for a professional's health; it is all the more difficult to continue working with several painful areas at the same time, as identified in this study. 
The limitations of this study relate to its design, which makes it impossible to establish causal relationships, and to the possibility that the professionals answered according to what was expected rather than according to their actual experience, which may have underestimated or overestimated the outcomes; this risk was heightened by the self-administered format of the instrument. On the other hand, the study advances scientific knowledge by investigating a field of nursing that remains largely invisible but has an impact on the whole process of caring for individuals. For a long time, MSCs were located in isolated areas of healthcare establishments, with no contact with other sectors, as they are closed environments in which the careful adoption of safety measures is essential. However, this has produced an institutional distancing that has left other members of staff unaware of the role and importance of the activities carried out there. It has also been reflected in the absence of, or only incipient, interest from researchers in developing studies on issues specific to that environment. This study thus fills a gap in the production of scientific knowledge, especially in relation to the illness processes of this group, broadening the contributions by pointing out not only the presence of MSD and pain but also the moments when these episodes occur. It highlights the need to identify early the potential risks of illness among nursing professionals who work in MSCs, as well as the professionals who are most vulnerable, in order to devise strategies to mitigate the damage that may occur to their health. This shows the importance of shared action between managers and workers to promote better working conditions, especially with regard to painful complaints and MSD, and of management and prevention programs for the ergonomic risks intrinsically related to this type of illness. 
As this was an exploratory study, it did not delve into the analysis of MSDs at this initial stage. The population was specific, which prevents generalizations, and no differential diagnosis was made between acute and chronic pain. These limitations should be addressed in future studies. The study highlights the importance of identifying the working conditions that may be contributing to the onset of pain and the need to adopt preventive measures, which could include training in ergonomic techniques, the use of support equipment or changes to the design of the workplace. It contributes to raising awareness among managers and to the creation of public policies for the prevention of MSD and pain in MSC nursing professionals, as well as treatment for those already affected; and it provides concrete data that can guide practical interventions, such as rotation of professionals among the MSC areas, regular breaks and stretching exercises.
CONCLUSION The presence of pain was reported by 29 (80.6%) of the participants at the beginning of the workday and by 34 (94.4%) at the end. The overall prevalence of MSD among the nursing professionals was 66.7% (n = 24). Among the professionals diagnosed with MSD, there was an increase in the number of participants reporting pain at the end of the shift. These data indicate that there may be a relationship between the work process and the development of pain, identified and verbalized by the participating professionals. The region with the highest prevalence of pain at both assessment times was the lumbar spine, followed by the legs/feet, dorsal spine and neck in the initial assessment; in the final assessment, the regions with the highest prevalence of pain were the legs/feet, neck and shoulders. At the end of the work shift, compared to the first assessment, there was a quantitative increase in the number of body segments in pain. Finally, the increase in the prevalence of pain among nursing professionals over the working day highlights the importance of paying attention to all areas of the body and the urgent need for interventions in workers' health. The continuous presence of pain related to musculoskeletal overload can increase the risk of developing MSD related to the dynamics of the service.
ASSOCIATE EDITOR: Vanessa de Brito Poveda ABSTRACT Objective: To identify the presence of musculoskeletal pain during the working day among nursing professionals in material and sterilization centers. Method: A cross-sectional study with 36 nursing professionals who answered a questionnaire for personal characterization and diagnosis of musculoskeletal disorders and Corlett and Manenica's diagram of painful areas at the beginning and end of the working day. Frequency distribution analysis, Fisher's exact test and likelihood ratio were carried out. Results: The presence of pain was reported by 80.6% (n = 29) of the participants at the start of the working day and 94.4% (n = 34) at the end, and the prevalence of musculoskeletal disorders was 66.7% (n = 24). There was a statistically significant difference in the number of segments with pain between professionals with and without a diagnosis of musculoskeletal disorders, in the initial and final assessments. The lumbar spine had a higher prevalence of pain in both assessments. Conclusion: The prevalence of pain increased towards the end of the working day and indicates that there may be a relationship between the work process and the development of pain. It is important to identify working conditions that may contribute to the onset of pain and to adopt preventive measures. RESUMO Objetivo: Identificar a presença de dor osteomuscular durante a jornada de trabalho, em profissionais de enfermagem de centros de material e esterilização. Método: Estudo transversal, com 36 profissionais de enfermagem, que responderam ao questionário para caracterização pessoal e de diagnóstico de distúrbios osteomusculares e ao diagrama de áreas dolorosas de Corlett e Manenica, no início e fim da jornada de trabalho. Realizou-se análise de distribuição de frequências, teste exato de Fisher e razão de verossimilhança. 
Resultados: A presença de dor foi referida por 80,6% (n = 29) dos participantes no início da jornada de trabalho e por 94,4% (n = 34) ao final, e a prevalência de distúrbios osteomusculares foi de 66,6% (n = 24). Houve diferença estatisticamente significativa na quantidade de segmentos com dor entre profissionais com e sem diagnóstico de distúrbios osteomusculares, na avaliação inicial e final. A coluna lombar, em ambas as avaliações, apresentou maior prevalência de dor. Conclusão: A prevalência de dor aumentou ao final da jornada de trabalho e indica que pode haver relação entre o processo de trabalho e o desenvolvimento de dor. É importante identificar condições de trabalho que podem contribuir para o surgimento da dor e adotar medidas de prevenção. RESUMEN Objetivo: Identificar la presencia de dolor musculoesquelético durante la jornada laboral entre los profesionales de enfermería de los centros de material y esterilización. Método: Estudio transversal con 36 profesionales de enfermería que respondieron a un cuestionario de caracterización personal y diagnóstico de trastornos musculoesqueléticos y al diagrama de Corlett y Manenica de zonas dolorosas al inicio y al final de la jornada laboral. Se analizaron la distribución de frecuencias, la prueba exacta de Fisher y los cocientes de probabilidad. Resultados: El 80,6% (n = 29) de los participantes declararon la presencia de dolor al inicio de la jornada laboral y el 94,4% (n = 34) al final, y la prevalencia de trastornos musculoesqueléticos fue del 66,6% (n = 24). Hubo una diferencia estadísticamente significativa en el número de segmentos con dolor entre los profesionales con y sin diagnóstico de trastornos musculoesqueléticos, en las evaluaciones inicial y final. La columna lumbar presentó una mayor prevalencia de dolor en ambas evaluaciones. 
Conclusión: La prevalencia de dolor aumentó hacia el final de la jornada laboral e indica que puede existir una relación entre el proceso de trabajo y el desarrollo de dolor. Es importante identificar las condiciones de trabajo que pueden contribuir a la aparición del dolor y adoptar medidas preventivas.
CC BY
Rev Esc Enferm USP.; 57:e20230019
PMC10789483
38226315
Introduction Sarcoidosis, marked by noncaseating granuloma formation, is a complex multisystem disorder with the potential to affect numerous organs. Consequently, the disease can present with a range of pictures, from no symptoms to severe organ dysfunction, corresponding to different clinical phenotypes [ 1 - 6 ]. Sarcoidosis occurs worldwide, with incidence rates varying across regions from 1 to 15 cases per 100,000 people; the highest rates, 11-15 cases per 100,000 people, are observed in individuals of African descent and in Northern European countries [ 1 , 7 ]. Variations in sarcoidosis incidence and prevalence rates are often attributed to disparities in genetics, environmental exposures, or differences in detection and diagnostic methods [ 1 , 5 , 8 , 9 ]. In many parts of the world, the true burden of sarcoidosis remains unclear due to diseases that mimic its symptoms (e.g., tuberculosis), inadequate access to diagnostic technology and expertise, and limited case recording [ 2 , 8 , 9 ]. The diagnosis can be overlooked when clinicians are not familiar with its varied presenting characteristics and the appropriate diagnostic evaluation, which involves imaging studies and tissue biopsies [ 8 , 10 ]. Sarcoidosis may be acute or chronic, with acute forms often carrying a favorable prognosis and frequently achieving complete remission within the first two years [ 8 ]. Certain clinical presentations exhibit such distinct symptoms that they have been recognized as syndromes. Löfgren syndrome (LS), consistently considered the best-established phenotype of sarcoidosis, is characterized by the coexistence of bilateral hilar adenopathy on chest radiography, bilateral ankle arthritis (typically in men), and/or erythema nodosum (EN) (typically in women) [ 4 ]. LS has the highest reported incidence in individuals of white ethnicity and is rarely diagnosed in black or Asian individuals. 
In Sweden, the syndrome comprises approximately 30% of all sarcoidosis cases [ 4 , 7 ]. The majority of patients with LS experience spontaneous resolution within three to six months [ 5 ]. We present a case of bilateral diffuse panniculitis in a 36-year-old woman with systemic involvement, whose presentation and clinical course proved to be a unique mosaic of sarcoidosis phenotypes. We reviewed the state of the art regarding clinical presentation, differential diagnosis, and natural history, highlighting LS. Given the idiosyncratic characteristics of this clinical scenario, we focused on cutaneous, subcutaneous, and musculoskeletal manifestations. We also discuss the timing of starting treatment and the expected evolution of the acute form of sarcoidosis.
Discussion The clinical case above describes a rare acute manifestation of sarcoidosis. The diagnostic criteria for sarcoidosis include a combination of clinical and radiological presentation, the presence of non-caseating granulomas, and the exclusion of alternative diseases [ 1 - 3 ]. The respective weight of each criterion varies depending on the presentation and evolution of sarcoidosis [ 5 ]. In 1952, a triad of sarcoidosis-related acute symptoms was described: bilateral hilar lymphadenopathy (BHL), EN, and/or bilateral ankle arthritis or periarticular inflammation (PAI), which became known as LS [ 8 , 10 ]. Unlike the chronic forms, which preferentially affect individuals of African descent, LS predominantly affects individuals of Caucasian European descent, with a significant presence in Sweden and the Netherlands [ 7 ]. Numerous research groups have therefore concentrated on delineating the phenotypic patterns in order to link them with distinct genetic backgrounds and underlying biological pathways, with the aim of predicting clinical progression and treatment response [ 11 - 13 ]. EN is a reactive nodular inflammatory panniculitis that develops in up to 25% of patients with sarcoidosis and serves as the primary cutaneous manifestation of LS. It is considered a non-specific cutaneous manifestation of sarcoidosis, as histological examination does not reveal granulomas [ 14 , 15 ]. EN is characterized by erythematous, violet, or brown subcutaneous nodules that are tender and warm to the touch. These nodules are typically found in the pre-tibial areas of the lower limbs and are frequently associated with symptoms such as arthralgia, periarthritis, lower limb edema, and fever [ 14 , 15 ]. Importantly, they have been associated with a favorable prognosis [ 1 - 10 , 14 , 15 ]. During the initial evaluation, our medical team assumed that the skin manifestations in our patient could represent EN. 
The absence of palpable nodules was attributed to the presence of significant periarticular edema; however, a subsequent ultrasound examination did not confirm this suspicion. The prominent PAI observed in our patient was described as non-septal panniculitis, a rare and non-specific subcutaneous manifestation of sarcoidosis [ 14 , 15 ]. Indeed, contrary to EN, the lesions in subcutaneous sarcoidosis are not tender, have a flesh-colored appearance, may persist for extended periods, and are strongly associated with mild systemic involvement [ 14 , 15 ]. Following this clinical suspicion, a chest X-ray was performed, which confirmed radiographic stage I according to the Scadding chest X-ray staging system, characterized by bilateral mediastinal and hilar adenopathy without pulmonary infiltrate, as observed in the majority of patients described in the medical literature (79%-82.5%) [ 13 ]. In fact, the presence of isolated PAI in conjunction with BHL (without EN) was documented by Caplan et al. and again in more recent studies, accounting for up to 6.4% of patients in some series [ 10 - 13 ]. In contrast to the typical presentation of LS, this variant is observed more frequently in men aged 25-40 and tends to manifest during the spring season, which could indicate a set of environmental associations related to its etiology [ 5 , 7 , 10 , 11 , 13 ]. Joint involvement in sarcoidosis is recorded in 2%-38% of cases [ 16 , 17 ]. Overall, joint involvement in the context of LS is described as migratory polyarthritis, often symmetrical and predominantly affecting the large joints, with the ankle involved in over 90% of the relevant cases. Joint pain is often the main complaint of patients with LS and the reason they seek medical attention. Most patients initially exhibit stiffness and pain, often described as a dull ache [ 18 , 19 ]. 
In this regard, the most notable clinical observation is the predominant location of the inflammation in the tissues surrounding the joints (periarthritis) [ 16 , 18 , 19 ]. This manifestation is characterized by moderate to severe soft tissue swelling and tenderness in 70% of patients with ankle involvement [ 16 , 20 ]. In many cases, the associated redness and warmth of the skin resemble cellulitis. Moreover, tenderness often persists even after the swelling subsides. In contrast to this PAI, intraarticular involvement is relatively minor. In the cases described in the medical literature, the pain remains minimal or non-existent during active or passive movement of the joint and does not compromise walking, even when the ankles are acutely inflamed [ 18 , 19 ]. Ultrasonography typically indicates swelling of the soft tissues surrounding the joints and tenosynovitis, with joint effusion or synovitis being observed less frequently [ 17 , 18 ]. In Caplan’s series, only one of the 19 patients reported severe pain during joint movements, similar to the complaints of our patient [ 10 ]. In the case presented in this report, the clinical and ultrasound findings of our patient appeared to align with the abovementioned rare subgroup described by Caplan et al. [ 10 ]. However, to confirm the diagnosis of sarcoidosis, it was still necessary to identify non-caseating granulomas and rule out other pathologies. Indeed, in cases where the triad that indicates LS is present, the diagnosis is usually straightforward, with high specificity (93%); in this regard, histological confirmation is not mandatory [ 20 ]. However, in some instances, the diagnosis of LS can be challenging due to the presence of variant forms of the syndrome or an incomplete initial presentation, both of which make it difficult to exclude other potential pathologies and establish the correct diagnosis. This was the exact situation in our case [ 8 ]. 
Notably, tuberculosis can pose a particularly challenging differential diagnosis, not only in pulmonary presentations but also in arthritic presentations, as exemplified by the need to differentiate it from Poncet’s disease [ 4 ]. Thus, given the prevalence of tuberculosis in Portugal and the related risk of potential occupational exposure for our patient, it was crucial to rule out tuberculosis at the outset. When performing a biopsy on the skin or joints, the typical findings often involve mild, non-specific inflammation of the synovium, characterized by the presence of mononuclear cells surrounding blood vessels within the synovial tissue or panniculitis; the presence of non-caseating granulomas is rare [ 9 ]. In our case, due to the limited precision of the skin and joint biopsies, we decided to proceed with a transbronchial biopsy. Here, one must note that endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is currently the preferred diagnostic procedure for sarcoidosis in many medical centers due to its high sensitivity and low complication rate; in fact, meta-analyses have indicated a diagnostic yield for sarcoidosis ranging from 54% to 93%, with an overall sensitivity of 79% [ 2 , 5 , 6 ]. Following this procedure, the biopsy specimen must undergo a thorough evaluation to exclude other potential causes of granulomatous inflammation, especially fungal and mycobacterial infections, as well as foreign body reactions [ 4 , 8 ]. This evaluation includes specific staining and culture tests for mycobacteria and fungi. Moreover, bronchoalveolar lavage (BAL) typically reveals a moderate lymphocytosis (20%-50%) in 80% of sarcoidosis cases and a CD4:CD8 T lymphocyte ratio greater than 3.5 in 50% of the cases, further supporting the diagnosis of sarcoidosis [ 4 ]. In our patient, the biopsies confirmed the presence of non-caseating granulomas and ruled out other conditions. 
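The numeric BAL thresholds quoted above (lymphocytosis of 20%-50% and a CD4:CD8 ratio greater than 3.5) lend themselves to a simple supportive-criteria check. The sketch below is purely illustrative — the function name and the boolean summary are our own, not part of any clinical guideline, and such findings support but do not establish the diagnosis:

```python
# Illustrative check of the BAL findings that support (but do not prove) a
# diagnosis of sarcoidosis, per the thresholds quoted in the discussion.
def bal_supports_sarcoidosis(lymphocyte_pct: float, cd4_cd8_ratio: float) -> dict:
    """Return which supportive BAL criteria are met (hypothetical helper)."""
    criteria = {
        "moderate_lymphocytosis": 20.0 <= lymphocyte_pct <= 50.0,
        "elevated_cd4_cd8_ratio": cd4_cd8_ratio > 3.5,
    }
    criteria["any_supportive_finding"] = any(
        criteria[k] for k in ("moderate_lymphocytosis", "elevated_cd4_cd8_ratio")
    )
    return criteria
```

A BAL showing 35% lymphocytes with a CD4:CD8 ratio of 4.0 would meet both criteria, whereas 10% lymphocytes with a ratio of 2.0 would meet neither.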
Regarding sarcoidosis treatment, the best-defined guidelines primarily address pulmonary manifestations, as they are more frequent and have a significant impact on prognosis [ 1 , 2 ]. Managing the extrapulmonary manifestations of sarcoidosis is a complex issue, mainly due to the challenge of predicting disease progression, which can resolve spontaneously even in advanced cases [ 8 , 9 ]. The therapeutic options include disease-modifying medications such as steroids, classical immunosuppressants (e.g., methotrexate, azathioprine), and biologics (e.g., infliximab) [ 5 , 6 ]. Furthermore, the initiation of treatment should consider the disease’s severity or its potential to worsen, in addition to assessing how the symptoms affect a given patient’s quality of life [ 7 , 9 ]. Prednisolone, a glucocorticoid, remains the primary agent used in the treatment of sarcoidosis. One proposed regimen includes the administration of prednisolone at a dose of 0.5-0.75 mg/kg of body weight daily for four weeks, followed by a tapering regimen that involves decreasing the dosage by 10 mg after every four-week period based on the disease response [ 4 ]. In many cases, treatment can be discontinued after six to 12 months, provided that the patients become asymptomatic and their lung function improves. Notably, regular monitoring is crucial, and experts recommend continued vigilant monitoring for at least three years after treatment discontinuation [ 4 ]. Most patients with LS achieve remission; however, chronic forms of LS do occur, remaining active two years after diagnosis; more rarely (3%-6%), recurrences occur after remission [ 20 ]. In our patient’s case, after months of remission, a subacute condition developed along with extreme fatigue and arthralgias [ 8 ]. 
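For illustration only, the arithmetic of the cited regimen (0.5-0.75 mg/kg/day for four weeks, then a 10 mg reduction every four weeks) can be written out as a schedule calculation. This is a didactic sketch, not a prescribing tool; the function and the example weight are hypothetical, and real tapering must follow disease response and clinical judgment:

```python
# Didactic sketch of the cited regimen: start at 0.5-0.75 mg/kg/day for four
# weeks, then reduce by 10 mg every four weeks (actual tapering depends on
# the disease response and clinical judgment).
def prednisolone_taper(weight_kg: float, mg_per_kg: float = 0.5,
                       step_mg: int = 10, weeks_per_step: int = 4):
    dose = round(weight_kg * mg_per_kg)
    schedule = []  # list of (daily dose in mg, duration in weeks)
    while dose > 0:
        schedule.append((dose, weeks_per_step))
        dose -= step_mg
    return schedule

# Example: a hypothetical 60 kg patient at 0.5 mg/kg/day.
print(prednisolone_taper(60))  # [(30, 4), (20, 4), (10, 4)]
```

For a 60 kg patient this yields 30 mg/day for four weeks, then 20 mg/day, then 10 mg/day — twelve weeks in total before discontinuation, consistent with the four-week steps described above.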
This situation could be compared to those described in the available medical literature, where one-third of patients reportedly exhibit constitutional symptoms, with fatigue being present in up to 90% of cases and significantly affecting their quality of life [ 1 ]. On the other hand, chronic arthritis is extremely rare, as inflammatory arthritis usually resolves within six weeks in most patients and within two years in almost all patients. A chronic course lasting for more than two years is observed in 8%-22.6% of patients with LS and is associated with advanced age, stage II sarcoidosis diagnosis, and the need for treatment [ 9 , 16 ]. Notably, chronic forms of sarcoid arthritis involve a less symmetrical distribution of joint and skin involvement compared to that of LS and are usually associated with either pulmonary or extrapulmonary parenchymal sarcoidosis [ 16 ]. Given the rarity of these forms, treatment decisions must be made on a case-by-case basis. In the case of our patient, as her symptoms had a substantial impact on functional capacity and the literature indicates favorable outcomes in managing cutaneous and joint manifestations, we opted to initiate weekly methotrexate therapy [ 2 , 5 ]. One must remember that sarcoidosis is associated with an increased risk of several comorbidities, including infections, heart failure, stroke, autoimmune diseases, and various types of cancer [ 20 ]. Since there are limited studies examining the contribution of glucocorticoids and other immunosuppressive agents as potential factors in the development of these comorbidities, it is challenging to distinguish the influence of the disease itself from the impact of treatment [ 4 ]. Still, the prognosis of sarcoidosis is generally favorable, with fewer than 10% of patients succumbing to the disease (mainly due to advanced lung involvement). 
Several factors, such as age at the time of diagnosis, disease presentation, pulmonary fibrosis, cardiac and neurological complications, and pulmonary hypertension, influence the prognosis of sarcoidosis [ 2 , 4 , 7 ]. Thus, careful management and monitoring are essential to improve patient outcomes.
Conclusions This case report aims to underscore the uncommon presentation of LS, characterized by the absence of EN but featuring diffuse panniculitis. It highlights the necessity of recognizing the diverse manifestations of sarcoidosis. Particularly noteworthy is the clinical recurrence experienced by our patient, a deviation from the typical cases documented in the medical literature. In light of this, we emphasize the critical importance of ongoing disease monitoring and the implementation of personalized treatment strategies for sarcoidosis.
Sarcoidosis is an immune-mediated multisystemic granulomatous disease with an unknown etiology. Löfgren syndrome (LS), an infrequent initial presentation of acute sarcoidosis, is characterized by the classic triad of acute arthritis, erythema nodosum (EN), and bilateral hilar lymphadenopathy (BHL). The presence of this triad offers high diagnostic specificity for sarcoidosis, eliminating the need for a confirmatory biopsy. Typically, LS follows a predictable, self-limiting clinical course. However, atypical presentations require early suspicion and closer monitoring. This case report highlights an unusual clinical manifestation of LS, marked by an incomplete presentation with acute panniculitis and joint lesions in the absence of EN. Acute sarcoidosis should be considered among the differential diagnoses when these clinical manifestations are present, and chest radiography should be performed to assess for BHL. In atypical cases, the disease course becomes less predictable, as exemplified in our case, where recurrence of the disease may occur, necessitating consistent monitoring.
Case presentation A 36-year-old Caucasian non-smoker female patient was referred to our internal medicine department for the investigation of pain and inflammatory signs in her bilateral tibiotarsal joints over the seven days that preceded her referral. The patient had initially noted painful swelling, redness, and stiffness in both ankles early in March. Subsequently, she reported a progressive worsening of these symptoms over the 48 hours preceding her referral, which impaired her ability to walk and compelled her to use a wheelchair for movement. The pain reportedly worsened at rest, especially at night, and the morning stiffness lasted for more than two hours. Notably, the patient did not report the occurrence of fever, night sweats, weight loss, or other accompanying symptoms. She did not have any history of trauma, recent infections, travel, or risk behavior for sexually transmitted diseases. She worked as a healthcare professional, with no relevant medical history, no record of drug addiction or medication use, and no known allergies prior to her referral. The patient’s physical examination revealed exuberant inflammatory signs in both her feet, her tibiotarsal joints, and the lower thirds of her legs. Specifically, there was a fat pad anterior to the lateral malleoli, soft tissue non-pitting edema, and the absence of a Stemmer sign (Figure 1 ). As mentioned before, the patient encountered pain at rest, and both passive and active movements were restricted in the affected joints. However, the examination of the other joints revealed no abnormality. Upon further examination, no palpable lymphadenopathies or organomegaly were detected in the accessible areas, and the patient’s cardiovascular, respiratory, and neurological systems appeared unremarkable. 
The patient’s admission blood samples showed an elevated erythrocyte sedimentation rate (71 mm/h), an elevated C-reactive protein (159 mg/L), and normal serum angiotensin-converting enzyme and protein electrophoresis. The autoimmune disease-related studies were negative for HLA-B27, antinuclear antibodies, anti-citrullinated protein antibodies, and rheumatoid factor (Table 1 ). The screenings for tuberculosis and other infectious diseases were also negative. Moreover, the ultrasound of the patient’s tibiotarsal joints showed the following: diffuse subcutaneous edema of the feet and tibiotarsal joints, with more significant edema on the left foot; moderate tenosynovitis of the peroneal and posterior tibial tendons; and the absence of joint effusion or appreciable synovial thickening of the tibiotarsal or intertarsal joints. No micro- or macro-vascular changes were observed (Figure 2 ). Notably, the patient’s chest X-ray film revealed bilateral hilar lymphadenopathy (BHL) (Figure 3 ). Furthermore, a contrast thoracic abdominal computed tomography (CT) revealed a classic "1-2-3 sign" with bilateral hilar and paratracheal lymphadenopathy but no other observable abnormality (Figure 4 ). To confirm the diagnosis and rule out other potential causes of lymph node enlargement, bronchoalveolar lavage (BAL) and endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) were conducted on the hilar and mediastinal lymph nodes. The obtained material underwent microbiological examination, including special staining for fungi and acid-fast bacilli, as well as cultures for tuberculosis and fungi, all of which returned negative results. A polymerase chain reaction (PCR) test for acid-fast bacilli was also performed and yielded a negative result. The CD4:CD8 T lymphocyte ratio was not assessed. 
Pathological analysis revealed well-formed non-necrotizing granulomas, characterized by epithelioid histiocytes accompanied by lymphocytes (Figure 5 ). After the diagnosis of sarcoidosis was established, the patient started prednisolone at a dosage of 0.5 mg/kg/day. Remarkably, there was a noticeable improvement in pain within two days, and signs of inflammation in the soft tissues improved over three weeks, resulting in the restoration of the patient's ability to walk. After one month of steroid treatment, the patient initiated the tapering process and eventually discontinued the treatment after six months. During the period of corticosteroid therapy, no infections were observed; however, the patient reported an 8% weight gain. Upon the six-month and nine-month follow-up evaluations, the patient exhibited full recovery, and the objective examination revealed no positive findings. An analytical assessment demonstrated a regression of inflammatory markers (Table 1 ), and a subsequent chest X-ray showed no signs of lymphadenopathy. The patient maintained remission for one year, free of symptoms or limitations, and with normal results in additional tests. However, after this period, the patient reported a recurrence of fatigue following minor exertion, along with panniculitis and morning arthralgia in the ankle. The stiffness persisted for more than two hours, significantly impacting daily functioning and hindering professional activities. A chest CT revealed bilateral lymphadenopathy, and laboratory data indicated increased inflammatory markers (Table 1 ). The patient expressed significant concerns about the professional implications of her limitations and the weight gain she had experienced in the past due to corticosteroid therapy. After carefully weighing the associated risks and benefits, a collaborative decision was made with the patient to initiate low-dose methotrexate at 10 mg once weekly. 
Remarkably, the patient's complaints were resolved within the next four weeks. Currently, the patient is under our ongoing care, and her symptoms are effectively controlled with continuous therapy.
We would like to express our heartfelt gratitude to the Pathology Department team at the Portuguese Oncology Institute in Lisbon for their kind assistance and generous provision of biopsy slide images. Your contribution significantly elevates the quality of our article and exemplifies the spirit of collaboration within the scientific community.
CC BY
Cureus. 16(1):e52317
PMC10789484
38226127
Introduction Lateral lumbar spinal canal stenosis is a degenerative disease resulting from the cumulative narrowing of the lateral recess and intervertebral foramen of the spinal canal, causing impingement on the nerve root. This narrowing occurs due to the hypertrophy of surrounding osseocartilaginous and ligamentous structures as part of the degenerative process. The anatomical elements of lateral lumbar stenosis can be categorized into lateral recess, foraminal, and extraforaminal stenosis. Degenerative spinal stenosis is a common presentation that can result in significant disability and have a negative impact on the patient's quality of life. Many patients present in their sixth decade of life, where degeneration plays a significant role, and most exhibit lateral canal stenosis affecting both sides of the L4/L5 and L5/S1 levels [ 1 , 2 ]. In clinical practice, magnetic resonance imaging (MRI) is considered the gold standard modality for diagnosing patients with lumbar spinal stenosis [ 3 , 4 ]. Accurate pre-intervention diagnosis is vital to achieving satisfactory treatment outcomes for patients [ 5 ]. Previous studies on lumbar stenosis have predominantly focused on patients with central canal stenosis [ 2 ], leaving a gap in clinical data and literature concerning the association between clinical symptoms and disability and the severity of lumbar spinal stenosis as determined by MRI. It was anticipated that lateral lumbar spinal stenosis would exhibit more pronounced clinical manifestations due to the limited anatomical space for nerve roots in comparison to central stenosis. However, the diagnosis and assessment of lateral stenosis are often overlooked or underestimated, as there is a lack of a well-defined association between the radiological degree of stenosis and the severity of pain and daily disability. 
The primary objective of this study is to assess the connection between clinical symptoms and disability and the anatomical gradation of lateral spinal stenosis, the magnitude of posterior disc height, and the extent of disc degeneration as determined through MRI assessment.
Materials and methods This research was approved by the Human Research Ethics Committee of the authors' affiliated institution with the approval code USM/JEPeM/17080369, and the patients provided written informed consent. Patients This was a cross-sectional study carried out at the University of Sciences, Malaysia, from February 2018 to December 2019. The study subjects comprised 121 patients aged 50 years and older who presented at the clinic with clinical presentations suggestive of lateral lumbar spinal stenosis. They underwent magnetic resonance imaging after assessment by orthopedic spine surgeons, following established treatment failure over three months of non-operative therapy. The exclusion criteria for patient selection included those who presented with only back pain, had a primary diagnosis of malignancy, experienced a recent spinal fracture within three months, underwent lumbosacral spinal surgery, had spondylitis, or had congenital spinal anomalies. Patients with cognitive impairment prohibiting completion of the questionnaires were also excluded from the study. On MRI, only patients with moderate central canal stenosis and clinically predominant radiculopathy and claudication were included; patients with severe central canal stenosis were excluded. Assessment of clinical symptoms The level of disability experienced by the patients was evaluated using the Oswestry Disability Index (ODI), validated as a response measure for chronic lower back pain by Fritz et al. [ 6 ]. It is considered the most effective instrument for the evaluation of persistent severe disability, as concluded by Davies et al. [ 7 ]. Most authors use the ODI to evaluate the association and correlation of the disability index with magnetic resonance imaging findings [ 8 - 12 ]. The overall current low back and leg pain severity was evaluated by a self-administered Visual Analog Scale (VAS) with a range of 0-100 mm during outpatient clinic follow-up, as validated by Delgado et al. 
[ 13 ]. Most authors use the VAS to determine the association and correlation of pain intensity with magnetic resonance imaging findings [ 2 , 11 , 12 ]. Completion of a data collection sheet for demographic data and assessment of clinical symptoms by the ODI and VAS questionnaires was done within three months after the MRI evaluation. Magnetic resonance imaging All patients underwent the same imaging protocol for study purposes. MR imaging of the lumbar spine was performed in a supine position with both knees flexed using a 3.0-T MRI system (Achieva 3.0T TX; Philips Healthcare, Best, Netherlands). Fast spin echo (FSE) T1-weighted and T2-weighted images were obtained in the axial and sagittal planes. The protocol comprised sagittal T1 FSE (TR 400 ms, TE 10 ms), sagittal T2 FSE (TR 3160 ms, TE 120 ms), and axial T2 FSE (TR 4740 ms, TE 120 ms). For all sequences, a 4 mm slice thickness was used. The intersection gap was 0.6-1.3 mm, and the echo train lengths were 6 and 30 for T1- and T2-weighted imaging, respectively. Imaging examinations MRI analysis was conducted with the assistance and guidance of a radiologist, involving the qualitative grading of nerve root compression in the lateral recess, foraminal, and extraforaminal areas. Additionally, it included the quantitative grading of posterior disc heights and the qualitative grading of disc degeneration at the bilateral L4/L5 and L5/S1 levels. The analysis was performed in a blinded manner, without knowledge of the clinical findings and radiological reports. Weishaupt et al. introduced a grading system for nerve root compression in the lateral recess, utilizing T2-weighted images at the axial inferior endplate. The grades were assigned as follows: 0 for no contact of the nerve root with the disc, 1 for nerve root contact without deviation, 2 for nerve root contact with deviation, and 3 for nerve root compression [ 14 ]. 
An illustrative example of MRI evaluation for lateral recess stenosis is depicted in Figure 1 , with a small red arrow indicating grade 1 stenosis (disc in contact with the nerve root without deviation) and a large red arrow indicating grade 2 stenosis (evident deviation of the nerve root). The qualitative assessment of foraminal stenosis, based on T1-weighted parasagittal images, was graded as follows: grade 0 for normal foramina, grade 1 for mild foraminal stenosis, grade 2 for moderate foraminal stenosis, and grade 3 for severe stenosis, as described by Wildermuth et al. [ 15 ]. An example of MRI evaluation for foraminal stenosis is shown in Figure 2 , with a small red arrow indicating grade 1 stenosis and a large red arrow indicating grade 2 stenosis (epidural fat only partly surrounding the nerve root). Extraforaminal nerve root entrapment was evaluated from T1-weighted axial images at the center of a disc, with an evident circumferential loss of the perineural fat signal, and was graded dichotomously as entrapment present or absent [ 16 - 18 ]. The MRI assessment of extraforaminal stenosis is depicted in Figure 3 , with a small red arrow indicating no stenosis and a large red arrow indicating extraforaminal stenosis (absence of the perineural fat signal). Cinotti et al. proposed a quantitative assessment of posterior disc height, calculated from a T2-weighted mid-sagittal view as the shortest distance between the adjacent superior and inferior endplates [ 19 ]. Pfirrmann's Grading System was used to score the degree of lumbar intervertebral disc degeneration from a T2-weighted midsagittal view. 
The grades were as follows: 1 for a homogeneous bright white disc with a clear distinction of nucleus and annulus structure, 2 for an inhomogeneous bright white disc with a clear distinction of nucleus and annulus structure, 3 for an inhomogeneous grey disc with an unclear distinction of nucleus and annulus structure and a slightly decreased disc height, 4 for an inhomogeneous grey to black disc with a loss of distinction of the nucleus and annulus structure and a moderately decreased disc height, and 5 for an inhomogeneous black disc with a loss of distinction of the nucleus and annulus structure and a collapsed disc space [ 20 , 21 ]. Statistical analysis Data analysis was conducted using the Statistical Package for the Social Sciences (SPSS) version 23.0 and STATA version 14.0. When patients had multilevel spinal stenosis, the level and side with the worst stenosis were selected for the study's association analysis. The same principle was applied for the assessment of posterior disc height and grade of disc degeneration at L4/L5 and L5/S1, with the worst score chosen for the association analysis. The correlation between posterior disc height and ODI and VAS scores was evaluated using Pearson's correlation test via SPSS version 23.0 (IBM Inc., Armonk, New York). The association between the extent of lateral stenosis and disc degeneration on MRI with ODI and VAS scores was determined by the Fisher exact test via STATA version 14.0 (StataCorp LLC, College Station, Texas). This test was chosen as a replacement for the chi-squared test, as it is more accurate for small cell sizes with expected values of less than five. Associations were considered statistically significant at a p-value of less than 0.05.
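To make the two tests concrete, the sketch below reimplements them in miniature on synthetic data — Pearson's r for a continuous pair (e.g., posterior disc height versus ODI score) and a two-sided Fisher exact test for a 2x2 contingency table. This is a didactic illustration, not the study's actual analysis, which used SPSS/STATA and contingency tables that may be larger than 2x2:

```python
from math import comb

def pearson_r(x, y):
    """Pearson's correlation coefficient for two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def prob(x):  # hypergeometric probability of a table with top-left cell x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # Sum the probabilities of all tables at least as extreme as the observed one.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)
```

The statistical packages named above report the same quantities; the manual versions simply make the arithmetic behind the reported r values and p-values explicit.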
Results A total of 121 patients were clinically evaluated for degenerative lateral lumbar spinal stenosis, and patient characteristics are summarized in Table 1 . The assessment of ODI scores showed that patient symptoms and disability ranged from a minimal score of 24% to a maximal score of 92%, with a mean value of 62.2% ± 10.7%. Based on the percentage disability score of the ODI, out of the 121 patients, one patient (0.8%) demonstrated moderate disability, 53 patients (43.8%) had a severe disability, 60 patients (49.6%) were crippled, and seven patients (5.8%) were bedridden. According to VAS scores, patient pain intensity ranged from a minimal score of 55 to a maximal score of 90, with a mean value of 79.3 ± 8.6. In the overall VAS scores, six patients (5.0%) had severe pain (scores 41-60), 72 (59.5%) had high pain (scores 61-80), and 43 (35.5%) had very high pain (scores 81-100). None of the patients had minimal to moderate pain. Table 2 summarizes the analysis of anatomical lateral stenosis at L4/L5, while Table 3 provides the analysis for L5/S1 based on MRI findings. Through MRI analysis, the posterior disc height at L4/L5 demonstrated a mean of 7.0 mm ± 1.7 mm, ranging from 2.3 mm to 11.9 mm. Similarly, at the L5/S1 level, the mean posterior disc height was 6.3 mm ± 1.8 mm, ranging from 1.5 mm to 10.4 mm. The evaluation of intervertebral disc degeneration at L4/L5 and L5/S1 is detailed in Table 4 . No statistically significant correlation was found between posterior disc height and the distribution of ODI and VAS scores. For the L4/L5 level, the Pearson's correlation coefficient (r) was 0.11 (p=0.22) for ODI and 0.06 (p=0.95) for VAS. At the L5/S1 level, the correlation coefficients were -0.41 (p=0.65) for ODI and 0.74 (p=0.41) for VAS. Figures 4 - 7 depict scatterplots illustrating the relationship between posterior disc height and the distribution of ODI and VAS scores. 
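The disability and pain bands reported above can be reproduced with a small scoring helper. The ODI cut-offs below follow the standard Oswestry interpretation bands (an assumption, consistent with the reported minimum of 24% falling in the moderate band), the VAS bands mirror the score ranges quoted in the Results, and the function names are our own illustrative choices:

```python
# Illustrative banding of ODI (%) and VAS (mm) scores into the categories
# used in the Results. The ODI cut-offs assume the standard Oswestry bands.
def classify_odi(pct):
    if pct <= 20:
        return "minimal"
    if pct <= 40:
        return "moderate"
    if pct <= 60:
        return "severe"
    if pct <= 80:
        return "crippled"
    return "bedridden"

def classify_vas(mm):
    if mm <= 40:
        return "minimal to moderate"
    if mm <= 60:
        return "severe"
    if mm <= 80:
        return "high"
    return "very high"
```

With these bands, the reported extremes classify as expected: an ODI of 24% is moderate, 92% is bedridden, a VAS of 55 is severe pain, and 90 is very high pain.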
Further statistical analysis, conducted in line with the study's objectives, revealed no statistically significant association between the distribution of ODI and VAS grading, the anatomical grading of lateral recess stenosis, foraminal stenosis, extraforaminal stenosis, or disc degenerative grading, as detailed in Table 5 .
Discussion Based on our findings, the mean age of the sample participants was 58.7 years. We specifically targeted patients in the sixth decade of life, as this period contributes significantly to the degenerative process, particularly due to relative estrogen deficiency, which results in a higher prevalence among females. The majority of the patients were married or cohabiting, which helps ensure comparable family, environmental, and psychosocial contributions across the sample. This similarity is reflected fairly in the severity of symptoms, as these contributions are a main determinant of the subjective experience of pain and disability among the patients. Most of the patients had an abnormal body mass index, with 76% of them being overweight or obese. This finding is consistent with the fact that obesity is strongly related to biomechanical changes contributing to degenerative lumbar stenosis. An increased body mass index will result in higher shear forces that overload the joints and torque on the lumbar disc, potentially leading to facet and disc degeneration. Analysis of lateral lumbar stenosis revealed a notably higher prevalence of lateral recess stenosis at the L4/L5 level, affecting 79.4% of patients with moderate to severe compression, compared to 52.9% at the L5/S1 level. The prevalence of moderate to severe foraminal stenosis was consistent at both levels, affecting 77.7% of the patients. These findings align with previous studies indicating a greater occurrence of severe lateral stenosis at the L4/L5 level and severe foraminal stenosis at the L5/S1 level based on MRI assessments and their association with clinical symptoms in the general population, as concluded by Ishimoto et al. [ 22 ]. This observation can be explained by the susceptibility of the lower lumbar region, especially the L4/L5 level, to high mechanical stress, connecting a mobile segment of the lumbar spine to a relatively rigid sacrum and pelvis. 
Examining the prevalence of extraforaminal stenosis, the L4/L5 level showed a higher involvement (44.6%) compared to the L5/S1 level (29.8%). This result is consistent with a prior study by Lee et al. that reported a 39.5% occurrence of extraforaminal stenosis [ 23 ]. The phenomenon is attributed to the loss of intervertebral disc height due to disc degeneration, leading to the anterosuperior subluxation of the superior articular process of the inferior vertebra, causing stenosis. The calculated mean posterior disc height in symptomatic patients was 7.0 ± 1.7 mm for the L4/L5 level and 6.3 ± 1.8 mm for the L5/S1 level, significantly lower than the measurements in normal subjects, which were 10.1 ± 1.0 mm and 8.5 ± 1.0 mm, respectively [ 24 ]. A separate cadaveric dissection study by Cinotti et al. showed an average posterior disc height of 6.55 ± 1.7 mm for L4/L5 and 5.29 ± 1.9 mm for L5/S1 [ 19 ]. The majority of our patients exhibited degenerated lumbar discs, with 64.5% graded as Pfirrmann grade 4 for L4/L5 and 59.5% for L5/S1. Additionally, 5.8% were graded as grade 5 for L4/L5, and 11.6% for L5/S1. In comparison, a previous study reported lower rates of disc degeneration, with Pfirrmann grade 4 at 34.5% for L4/L5 and 33.7% for L5/S1, while showing similar findings for L5/S1 disc degeneration, as published by Middendorp et al. [ 9 ]. The observed differences are likely due to degenerative changes within the intervertebral discs, characterized by the loss of water content, diminished nutritional transport, and reduced proteoglycan content. Disc aging leads to changes, particularly in the nucleus, which becomes less gelatinous and more fibrous. These significant changes can manifest as the loss of homogeneous brightness of the disc with a diminished clear distinction between the nucleus and annulus, as well as a decrease and collapse of the disc height as observed on MRI. 
Our study suggests challenges in reliably diagnosing lateral lumbar stenosis based solely on imaging findings, as there appears to be an inconsistency between clinical symptoms and imaging results. This inconsistency may be attributed to the limited capability of MRI in identifying nerve root compression adequately. Static images of canal dimensions might not predict a patient's symptoms without assessing the dynamic nature of the disease process. The degree of compression is dynamic and likely varies based on the patient's condition. The limitations of our study lie in conducting routine clinical MRI with patients in a supine position, which may not reflect symptoms that worsen in an upright position due to alterations in nerve element compression. Therefore, upright MRI imaging, especially under axial loading, becomes crucial for a comprehensive assessment, as it causes displacement of anatomical structures leading to nerve root compression, not observed in the supine position, as suggested by Beattie et al. [ 25 ]. The absence of association in our study might also be linked to the fluctuating nature of symptoms over time, potentially following a natural course that could either improve or remain stable, thereby affecting the perceived pain and disability of the patient [ 26 ]. Pain and disability experienced by the patient are subjective and influenced by emotional, psychological, and genetic factors. Although we evaluated disability using the ODI score, which is widely accepted and has strong psychometric properties, it remains subjective and may not consistently correlate with the severity of radiological spinal stenosis [ 27 ]. A comprehensive history and thorough physical examination are essential for diagnosing degenerative lateral spinal stenosis. While MRI evidence of nerve compression is necessary, it should be clinically assessed before being attributed solely as the cause of back pain. 
Therapy should be directed towards the patient's most disturbing symptoms rather than relying solely on the severity of radiographic narrowing. This study aimed to establish a robust predictive relationship between clinical symptoms, disability, and MRI findings. The study enrolled a selected elderly population aged 50 years and above, specifically targeting those with typical presentations while excluding patients with severe central stenosis. To ensure data quality, symptom recording and disability assessment were carried out solely by the principal investigator, while a detailed visual qualitative MRI analysis was conducted by an experienced radiologist. Several limitations were identified. As a cross-sectional study, it cannot provide conclusive evidence for the broader population. Furthermore, the study had a relatively small sample size recruited exclusively from our center. Additionally, MRI evaluations were not conducted at the peak of clinical symptoms and disability but within three months of presentation, owing to constraints in immediate MRI availability.
Conclusions In summary, our findings indicate no significant association between clinical symptoms, pain severity, the extent of daily disability, and the MRI findings for the anatomical grade of lateral spinal stenosis, posterior disc height, and the extent of disc degeneration. Lumbar spinal stenosis remains a clinical-radiological syndrome, and a comprehensive clinical evaluation remains essential for an accurate diagnosis, emphasizing the necessity of appropriately correlating MRI findings with their clinical significance. Nevertheless, MRI remains the gold-standard diagnostic tool for decision-making in the management and intervention of patients with spinal stenosis.
Introduction Degenerative lumbar spinal stenosis is a common problem in the sixth decade of life, most often involving the L4/L5 and L5/S1 levels. Lateral spinal stenosis is often underestimated because no consistent relationship has been established between clinical symptoms and MRI findings. We conducted a study to assess the association of the anatomical grade of lateral stenosis, posterior disc height, and disc degeneration on MRI with daily disability and pain severity in lateral lumbar spinal stenosis.

Methods This was a cross-sectional study of 121 patients with distinct clinical symptoms of lateral lumbar spinal stenosis evaluated from February 2018 to December 2019. Clinical data were evaluated using the Oswestry Disability Index (ODI) and Visual Analogue Scale (VAS), while magnetic resonance imaging (MRI) was assessed qualitatively for the anatomical grade of lateral spinal stenosis, posterior disc height, and the extent of disc degeneration. The correlation between posterior disc height and ODI and VAS scores was evaluated using Pearson's correlation test in SPSS version 23.0 (IBM Inc., Armonk, New York), and the association of the extent of lateral stenosis and disc degeneration on MRI with ODI and VAS scores was determined by Fisher's exact test in STATA version 14.0 (StataCorp LLC, College Station, Texas). Associations were considered statistically significant at a P-value of less than 0.05.

Results Among the 121 patients analyzed, the mean age was 58.7 ± 7.1 years. There were more female than male patients (52.9% vs. 47.1%). Of the patients, 97.5% were married or cohabiting, and 76.0% had an abnormal body mass index. The mean ODI and VAS scores were 62.2 ± 10.7% and 79.3 ± 8.6, respectively. On ODI assessment, 49.6% of patients presented with crippling disability, while 59.5% reported high pain intensity on VAS. MRI assessment of anatomical grading of lateral stenosis at the L4/L5 level revealed that 45.5% of patients had grade 2 lateral recess stenosis, 63.6% had grade 2 foraminal stenosis, and 44.6% had extraforaminal stenosis. At the L5/S1 level, 43.0% had grade 2 lateral recess stenosis, 62.0% had grade 2 foraminal stenosis, and 29.8% had extraforaminal stenosis. Grade 4 disc degeneration was present in 64.5% of patients at L4/L5 (mean posterior disc height 7.0 ± 1.7 mm) and in 59.5% at L5/S1 (mean posterior disc height 6.3 ± 1.8 mm). However, no statistically significant association between clinical symptoms and MRI findings was found.

Conclusions There was no significant association between the clinical symptoms of pain and disability and the MRI findings for the anatomical grade of lateral spinal stenosis, posterior disc height, and the extent of disc degeneration. A comprehensive clinical evaluation remains essential for an accurate diagnosis, emphasizing the necessity of appropriately correlating MRI findings with their clinical significance.
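The statistical approach summarized in the Methods (Pearson correlation for posterior disc height versus ODI/VAS, Fisher's exact test for the categorical gradings) can be sketched as below. The study used SPSS and STATA, so the scipy calls are a stand-in, and all data shown are hypothetical, not study data; note also that `scipy.stats.fisher_exact` handles only 2x2 tables, whereas exact tests on larger tables are possible in STATA.

```python
# Hedged sketch of the statistical analyses described in the Methods,
# using hypothetical data; scipy is an illustrative stand-in for SPSS/STATA.
import numpy as np
from scipy import stats

# Hypothetical per-patient values (NOT study data).
posterior_disc_height_mm = np.array([6.1, 7.4, 5.8, 8.0, 6.9, 7.2, 5.5, 6.6])
odi_percent = np.array([68, 55, 72, 50, 61, 58, 74, 63])

# Pearson's correlation between posterior disc height and ODI score.
r, p_pearson = stats.pearsonr(posterior_disc_height_mm, odi_percent)

# Fisher's exact test on a hypothetical 2x2 contingency table:
# rows = stenosis grade (high vs. low), cols = VAS intensity (high vs. low).
table = np.array([[12, 18], [15, 20]])
odds_ratio, p_fisher = stats.fisher_exact(table)

# Significance threshold used in the study.
alpha = 0.05
print(r, p_pearson, p_fisher, p_fisher < alpha)
```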
Appendices Below are the supplemental materials for the study, including the data collection sheet (Figure 9 ), the subject information and consent form, which incorporates the participant's material publication consent form (Figure 10 ), and the Human Research Ethics Committee approval letter (Figure 11).
Cureus. 15(12):e50475
PMC10789488
38224473
Introduction Frontotemporal lobar degeneration is the second most common type of early-onset dementia under the age of 65 years ( Harvey et al., 2003 ). Its most common subtype, behavioral variant frontotemporal dementia (bvFTD), is characterized by detrimental changes in personality and behavior ( Pressman and Miller, 2014 ). Patients can display both apathy and disinhibition, often combined with a lack of insight and executive and socioemotional deficits ( Schroeter et al., 2011 ; Schroeter et al., 2012 ). Despite striking and early symptoms, bvFTD patients are often (i.e. up to 50%) misdiagnosed as having a psychiatric illness rather than a neurodegenerative disease ( Woolley et al., 2011 ). In addition to the presence of symptoms, the diagnosis requires consideration of family history due to its frequent heritable component and examination of different neuroimaging modalities ( Pressman and Miller, 2014 ; Bang et al., 2015 ; Schroeter et al., 2014 ; Schroeter et al., 2008 ). Whereas atrophy in frontoinsular areas only occurs in later disease stages, glucose hypometabolism in frontal, anterior cingulate, and anterior temporal regions visible with fluorodeoxyglucose positron emission tomography (FDG-PET) is already detectable from an early stage onwards ( Bang et al., 2015 ; Diehl-Schmid et al., 2007 ). The fractional amplitude of low-frequency fluctuations (fALFF) is a resting-state functional magnetic resonance imaging (rsfMRI) derived measure with good test–retest reliability that closely correlates with FDG-PET ( Aiello et al., 2015 ; Holiga et al., 2018 ; Deng et al., 2022 ). In frontotemporal dementia (FTD) patients, fALFF was reduced in the inferior parietal and frontal lobes and the posterior cingulate cortex, and it holds great potential as an MRI biomarker ( Premi et al., 2014 ; Borroni et al., 2018 ). Low local fALFF activity in the left insula has been linked to symptom deterioration ( Day et al., 2013 ).
On a molecular level, frontotemporal lobar degeneration can be differentiated into three different subtypes based on abnormal protein deposition: tau (tau protein), transactive response DNA-binding protein with molecular weight 43 kDa (TDP-43), and FET (fused-in-sarcoma [FUS] and Ewing sarcoma [EWS] proteins, and TATA-binding protein-associated factor 15 [TAF15]) ( Bang et al., 2015 ; Haass and Neumann, 2016 ). Whereas tau and TDP pathologies each occur in half of the bvFTD patients, FUS pathology is very rare ( Whitwell et al., 2011 ). Several possible mechanisms are discussed in the literature for the spread of these proteins throughout the brain, from a selective neuronal vulnerability (i.e. specific neurons being inherently more susceptible to the underlying disease-related mechanisms) to prion-like propagation of the respective proteins ( Walsh and Selkoe, 2016 ; Hock and Polymenidou, 2016 ). The latter entails that misfolded proteins accumulate and induce a self-perpetuating process so that protein aggregates can spread and amplify, leading to gradual dysfunction and eventually death of neurons and glial cells ( Hock and Polymenidou, 2016 ). For example, tau can cause presynaptic dysfunction prior to loss of function or cell death ( Zhou et al., 2017 ), whereas overexpression of TDP-43 leads to impairment of presynaptic integrity ( Heyburn and Moussa, 2016 ). The role of FET proteins is not fully understood, although their involvement in gene expression suggests a mechanism of altered RNA processing ( Svetoni et al., 2016 ). Neuronal connectivity plays a key role in the spread of pathology, which is thought to transmit along neural networks. Supporting this notion, previous studies also found an association between tau levels and functional connectivity in functionally connected brain regions, for example across normal aging and Alzheimer’s disease ( Franzmeier et al., 2019 ).
In bvFTD, dopaminergic, serotonergic, glutamatergic, and GABAergic neurotransmission is also affected. More specifically, current research indicates a deficit of neurons and receptors in these neurotransmitter systems ( Hock and Polymenidou, 2016 ; Huey et al., 2006 ; Murley and Rowe, 2018 ). Furthermore, these deficits have been associated with clinical symptoms. For example, whereas GABAergic deficits have been associated with disinhibition, increased dopaminergic neurotransmission and altered serotonergic modulation of dopaminergic neurotransmission have been associated with agitated and aggressive behavior ( Engelborghs et al., 2008 ; Murley et al., 2020 ). Another study related apathy to glucose hypometabolism in the ventral tegmental area, a hub of the dopaminergic network ( Schroeter et al., 2011 ). Despite this compelling evidence of disease-related impairment at functional and molecular levels, the relationship between both remains poorly understood. It also remains unknown if the above neurotransmitter alterations reflect a disease-specific vulnerability of specific neuron populations or merely a consequence of the ongoing neurodegeneration. Based on the above findings, we hypothesize that the spatial distribution of fALFF and gray matter (GM) pathology in FTD will be related to the distribution of dopaminergic, serotonergic, and GABAergic neurotransmission. The aim of the current study was to gain novel insight into the disease mechanisms underlying functional and structural alterations in bvFTD by examining if there is a selective vulnerability of specific neurotransmitter systems. We evaluated the link between disease-related functional alterations and the spatial distribution of specific neurotransmitter systems and their underlying gene expression levels. In addition, we tested if these associations are linked to specific symptoms observed in this clinical population.
Materials and methods Subjects We included in this study 52 Caucasian patients with bvFTD (mean age = 61.5 ± 10.0 years; 14 females) and 22 Caucasian age-matched healthy controls (HC) (mean age = 63.6 ± 11.9 years; 13 females) examined in nine centers of the German Consortium for Frontotemporal Lobar Degeneration ( http://www.ftld.de ; Otto et al., 2011 ). Details regarding the distribution of demographic characteristics across centers are reported in Supplementary file 1a . Diagnosis was based on established international diagnostic criteria ( Rascovsky et al., 2011 ). Written informed consent was collected from each participant. The study was approved by the ethics committees of all universities involved in the German Consortium for Frontotemporal Lobar Degeneration (Ethics Committee University of Ulm approval number 20/10) and was in accordance with the latest version of the Declaration of Helsinki. The clinical and neuropsychological test data included the Mini Mental State Exam (MMSE), Verbal Fluency (VF; animals), Boston Naming Test (BNT), Trail Making Test B (TMT-B), Apathy Evaluation Scale (AES) (companion-rated) ( Glenn, 2005 ), Frontal Systems Behavior Scale (FrSBe) (companion-rated) incl. subscales (executive function [EF], inhibition, and apathy) ( Grace and Malloy, 2001 ), and Clinical Dementia Rating-Frontotemporal Lobar Degeneration scale‐modified (CDR-FTLD) ( Knopman et al., 2008 ). Demographic and neuropsychological test information for both groups is displayed in Table 1 . MRI acquisition and preprocessing of imaging data Structural T1-weighted magnetization-prepared rapid gradient-echo MRI and rsfMRI (TR = 2000 ms, TE = 30 ms, FOV = 64 × 64 × 30, voxel size = 3 × 3 × 5 mm, 300 volumes) were acquired on 3T devices. Table 2 reports center-specific imaging parameters confirming a high level of harmonization. All initial preprocessing of imaging data was performed using SPM12 ( Penny et al., 2011 ).
To calculate voxel-wise GM volume (GMV), structural images were segmented, spatially normalized to MNI space, modulated, and smoothed by a Gaussian convolution kernel with 6 mm full-width at half maximum (FWHM). RsfMRI images were realigned, unwarped, co-registered to the structural image, spatially normalized to MNI space, and smoothed with a Gaussian convolution kernel with 6 mm FWHM. A GM mask was applied to reduce all analyses to GM tissue. Images were further processed in the REST toolbox ( Song et al., 2011 ) version 1.8. Mean white matter and cerebrospinal fluid signals as well as 24 motion parameters (Friston-24) were regressed out before computing voxel-based measures of interest. fALFF was calculated at each voxel as the root mean square of the blood oxygen level-dependent signal amplitude in the analysis frequency band (here: 0.01–0.08 Hz) divided by the amplitude in the entire frequency band ( Song et al., 2011 ). fALFF is closely linked to FDG-PET and other measures of local metabolic activity as has been shown in healthy participants but also for example in Alzheimer’s disease ( Deng et al., 2022 ; Marchitelli et al., 2018 ). Contrast analyses of fALFF and GMV To test for fALFF alterations, group comparisons were performed in SPM12 using a flexible-factorial design with group (bvFTD or HC) as a factor and age, sex, and site (i.e. one dummy variable per site) as covariates ( Huotari et al., 2019 ). To test for group differences in GMV, the same design with addition of total intracranial volume (TIV) was used. Pairwise group t -contrasts (i.e. HC > bvFTD, bvFTD > HC) were evaluated for significance using an exact permutation-based cluster threshold (1000 permutations permuting group labels, p < 0.05) to control for multiple comparisons combined with an uncorrected voxel-threshold of p < 0.001.
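The fALFF definition given above (low-frequency amplitude relative to full-band amplitude) can be sketched for a single voxel's time series as follows. This is an illustrative stand-in, not the REST toolbox implementation, and normalization details (e.g. detrending, exact amplitude scaling) may differ from the cited software.

```python
# Hedged sketch of voxel-wise fALFF: amplitude in the 0.01-0.08 Hz band
# divided by the amplitude over the entire frequency band (illustrative only).
import numpy as np

def falff(ts, tr=2.0, low=0.01, high=0.08):
    """fALFF for one voxel's BOLD time series sampled every `tr` seconds."""
    n = len(ts)
    freqs = np.fft.rfftfreq(n, d=tr)             # frequency of each FFT bin
    amp = np.abs(np.fft.rfft(ts - np.mean(ts)))  # amplitude spectrum, demeaned
    band = (freqs >= low) & (freqs <= high)      # analysis band
    full = freqs > 0                             # full band, excluding DC
    return amp[band].sum() / amp[full].sum()

# A pure 0.04 Hz oscillation (inside the band) yields fALFF close to 1,
# matching the acquisition described above (300 volumes, TR = 2 s).
t = np.arange(300) * 2.0
signal = np.sin(2 * np.pi * 0.04 * t)
```

For broadband noise, by contrast, fALFF drops toward the fraction of spectral content falling inside the 0.01–0.08 Hz window.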
A permutation-based cluster threshold combined with an uncorrected voxel-threshold was used since standard correction methods such as a family wise error rate of 5% may lead to elevated false-positive rates ( Eklund et al., 2016 ). Spatial correlation with neurotransmitter density maps Confounding effects of age, sex, and site were regressed out from all images prior to further spatial correlation analyses. To test if fALFF alterations in bvFTD patients (relative to HC) are correlated with specific neurotransmitter systems, the JuSpace toolbox ( Dukart et al., 2021 ) was used. The JuSpace toolbox allows for cross-modal spatial correlations of different neuroimaging modalities with nuclear imaging derived information about the relative density distribution of various neurotransmitter systems. All neurotransmitter maps were derived as averages from an independent healthy volunteer population and processed as described in the JuSpace publication including rescaling and normalization into the Montreal Neurological Institute space. More specifically, we wanted to test if the spatial structure of fALFF maps in patients relative to HC is similar to the distribution of nuclear imaging derived neurotransmitter maps from independent healthy volunteer populations included in the toolbox (5-HT1a receptor [ Savli et al., 2012 ], 5-HT1b receptor [ Savli et al., 2012 ], 5-HT2a receptor [ Savli et al., 2012 ], serotonin transporter [5-HTT; Savli et al., 2012 ], D1 receptor [ Kaller et al., 2017 ], D2 receptor [ Sandiego et al., 2015 ], dopamine transporter [DAT; Dukart et al., 2018 ], Fluorodopa [FDOPA; García Gómez et al., 2018 ], γ-aminobutyric acid type A [GABAa] receptors [ Dukart et al., 2018 ; Myers et al., 2012 ], μ-opioid [MU] receptors [ Aghourian et al., 2017 ], and norepinephrine transporter [NET; Hesse et al., 2017 ]). Detailed information about the publicly available neurotransmitter maps is provided in Supplementary file 1c . 
In contrast to standard analyses of fMRI data, this analysis might provide novel insight into potential neurophysiological mechanisms underlying the observed correlations ( Dukart et al., 2021 ). Using the toolbox, mean values were extracted from both neurotransmitter and fALFF maps using GM regions from the Neuromorphometrics atlas. Extracted mean regional values of the patients’ fALFF maps were z -transformed relative to HC. Spearman correlation coefficients (Fisher’s z -transformed) were calculated between these z -transformed fALFF maps of the patients and the spatial distribution of the respective neurotransmitter maps. Exact permutation-based p-values as implemented in JuSpace (10,000 permutations randomly assigning group labels using orthogonal permutations) were computed to test if the distribution of the observed Fisher’s z -transformed individual correlation coefficients significantly deviated from zero. Furthermore, adjustment for spatial autocorrelation was performed by computing partial correlation coefficients between fALFF and neurotransmitter maps adjusted for local GM probabilities estimated from the SPM12-provided TPM.nii ( Dukart et al., 2021 ). All analyses were false discovery rate (FDR) corrected for the number of tests (i.e. the number of neurotransmitter maps). To further test if and how the observed fALFF co-localization patterns are explained by the underlying global atrophy, we repeated the co-localization analysis (p < 0.05) for the significant fALFF–neurotransmitter associations after controlling for total GMV. Additionally, the receiver operating characteristic (ROC) curves and corresponding areas under the curve (AUC) were calculated for patients (Fisher’s z -transformed Spearman correlations) vs. HC (leave-one-out Z -score maps) to examine discriminability of the resulting fALFF–neurotransmitter correlations. 
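The co-localization procedure described above (regional fALFF values z-scored against controls, Spearman-correlated with a neurotransmitter density map, Fisher z-transformed, and tested against zero) can be sketched as below. This is an illustrative stand-in for the JuSpace toolbox: all data are random placeholders, region counts are arbitrary, and a one-sample t-test replaces JuSpace's exact permutation test for brevity.

```python
# Hedged sketch of the spatial fALFF-neurotransmitter co-localization
# analysis; random stand-in data, NOT the JuSpace toolbox implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_regions, n_hc, n_pat = 119, 22, 52   # region count is arbitrary here

# Hypothetical regional mean fALFF values (rows = subjects, cols = regions).
hc = rng.normal(1.0, 0.1, (n_hc, n_regions))
patients = rng.normal(1.0, 0.1, (n_pat, n_regions))
receptor_map = rng.normal(0.0, 1.0, n_regions)  # e.g. 5-HT1b density map

# Z-transform each patient's regional fALFF relative to the HC group.
z = (patients - hc.mean(axis=0)) / hc.std(axis=0, ddof=1)

# Spearman rho per patient vs. the receptor map, then Fisher z-transform.
rhos = np.array([stats.spearmanr(zi, receptor_map)[0] for zi in z])
fisher_z = np.arctanh(rhos)

# Test whether the distribution of Fisher z values deviates from zero
# (JuSpace uses an exact permutation test; a t-test is shown for brevity).
t_stat, p = stats.ttest_1samp(fisher_z, 0.0)
```

A significantly negative mean Fisher z, as reported in the Results, would indicate stronger fALFF reductions in regions of higher receptor density.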
Correlation with structural data To test if the significant correlations observed between fALFF and neurotransmitter maps were driven by structural alterations (i.e. partial volume effects), the JuSpace analysis using the same parameters was repeated with local GMV incl. a correction for confounding effects of age, sex, site, and TIV. For further exploration, fALFF and GMV Fisher’s z -transformed Spearman correlations as computed by the JuSpace toolbox were correlated with each other for each patient over all neurotransmitters. The median of those correlation coefficients was squared to calculate the variance in fALFF explained by GMV. Correlation with clinical data To test if fALFF–neurotransmitter correlations are related to symptoms of bvFTD, we calculated Spearman correlation coefficients between significant fALFF–neurotransmitter correlations (Fisher’s z -transformed Spearman correlation coefficients from JuSpace toolbox output) and clinical scales and neuropsychological test data (see Table 1 ). All analyses were FDR corrected for the number of tests. In addition, to test for the specificity of these associations we examined the direct associations between fALFF and the neuropsychological tests by computing Spearman correlations with the Eigenvariates extracted from the largest cluster of the HC > bvFTD SPM contrast. Association with gene expression profile maps Furthermore, to test if fALFF alterations in bvFTD patients associated with specific neurotransmitter systems in the JuSpace analysis were also spatially correlated with their underlying mRNA gene expression profile maps, the MENGA toolbox ( Rizzo et al., 2016 ; Rizzo et al., 2014 ) was used. Z -scores were calculated for the patients relative to HC using the confound-corrected images. 
The analyses were performed using 169 regions of interest and genes corresponding to each significantly associated neurotransmitter from the JuSpace analysis (5-HT1b: HTR1B ; 5-HT2a: HTR2A ; GABAa (19 subunits): GABRA1–6 , GABRB1–3 , GABRG1–3 , GABRR1–3 , GABRD , GABRE , GABRP , GABRQ ; NET: SLC6A2 ). More specifically, Spearman correlation coefficients were calculated between the genomic values and re-sampled image values in the regions of interest for each patient and for each mRNA donor from the Allen Brain Atlas ( Hawrylycz et al., 2012 ) separately. The Fisher’s z -transformed correlation coefficients were averaged over the six mRNA donors. Bonferroni-corrected one-sample t -tests were performed for each neurotransmitter to examine, whether the correlation coefficient differed significantly from zero. Neurotransmitter-genomic correlations and gene differential stability To further examine the association of fALFF–neurotransmitter correlations and mRNA gene expression profile maps, we explored the relationship between neurotransmitter maps included in the JuSpace toolbox and mRNA maps provided in the MENGA toolbox. The MENGA analysis was repeated using the same parameters to obtain Fisher’s z -transformed Spearman correlation coefficients between the neurotransmitter maps and the mRNA gene expression profile maps. To evaluate the robustness of the mRNA maps between donors, gene differential stability was estimated by computing the Fisher’s z -transformed Spearman correlation coefficients between the genomic values of each of the six mRNA donors, which were then averaged ( Hawrylycz et al., 2012 ).
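The gene differential stability estimate described above (pairwise Spearman correlations of regional expression between donors, Fisher z-transformed and averaged) can be sketched as below; the donor expression values are random stand-ins, not Allen Brain Atlas data.

```python
# Hedged sketch of gene differential stability across mRNA donors;
# random stand-in data, NOT the MENGA toolbox implementation.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_donors, n_regions = 6, 169  # six Allen Brain donors, 169 regions of interest

# Hypothetical regional expression of one gene for each donor.
donor_expression = rng.normal(0.0, 1.0, (n_donors, n_regions))

def differential_stability(expr):
    """Mean Fisher z-transformed Spearman rho over all donor pairs."""
    zs = [np.arctanh(stats.spearmanr(expr[i], expr[j])[0])
          for i, j in combinations(range(len(expr)), 2)]
    return float(np.mean(zs))

ds = differential_stability(donor_expression)
```

High values (e.g. the r = 0.92 reported for GABRB1) indicate that the regional expression pattern is consistent across donors, lending more confidence to the corresponding mRNA map.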
Results Contrast analysis of fALFF and GMV First, we tested for group differences in fALFF between HC and patients. Compared to HC, bvFTD patients showed a significantly reduced fALFF signal in frontoparietal and frontotemporal regions ( Figure 1A ). Furthermore, patients also showed reduced GMV in medial and lateral prefrontal, insular, temporal, anterior caudate, and thalamic regions in comparison to HC ( Figure 1B ). For a detailed representation of the thresholded fALFF and GMV t-maps, see Figure 1—figure supplement 1 . Cluster size, peak-level MNI coordinates, and corresponding anatomical regions incl. the additional fALFF analysis with correction for total GMV are reported in Supplementary file 1d . For the distribution of Eigenvariates for the two groups in both modalities, see Figure 1—figure supplement 2 . Spatial correlation with neurotransmitter maps We performed correlation analyses to test if fALFF alterations in bvFTD significantly co-localize with the spatial distribution of specific neurotransmitter systems. fALFF alterations in bvFTD as compared to HC were significantly associated with the spatial distribution of 5-HT1b (mean r = −0.21, p < 0.001), 5-HT2a (mean r = −0.16, p = 0.0014), GABAa (mean r = −0.12, p = 0.0149), and NET (mean r = −0.13, p = 0.0157) (p FDR = 0.0157; Figure 2A ). The directionality of these findings (i.e. a negative correlation) suggests that bvFTD patients displayed stronger reductions in fALFF relative to HC in areas associated with a higher non-pathological density of the respective receptors and transporters. When controlling for total GMV, the co-localization findings remained significant except for the co-localization with GABAa. The AUC resulting from the ROC curves between Spearman correlation coefficients of patients and controls revealed a good discrimination for 5-HT1b (AUC = 0.74) and 5-HT2a (AUC = 0.71) and a fair discrimination for GABAa (AUC = 0.68) and NET (AUC = 0.67) ( Figure 3A ).
Next, we tested if similar co-localization patterns are observed with GMV. GMV alterations in bvFTD were not significantly associated with any of the neurotransmitter systems ( Figure 2B ). fALFF–neurotransmitter and GMV–neurotransmitter correlations displayed a positive yet weak association, with structural alterations explaining only 10% of the variance in the fALFF alterations ( Figure 3B ). All correlations and their corresponding permutation-based p-values incl. the analysis utilizing fALFF images additionally corrected for total GMV are provided in Supplementary file 1c . To exclude a potential bias caused by the collection of imaging data at different sites, we performed a Kruskal–Wallis test to examine differences on the Fisher’s z -transformed correlation coefficients across sites. No significant differences (χ² = 6.34, p = 0.50, df = 7) were found among the sites. Relationship to clinical symptoms Furthermore, we tested if the significant fALFF–neurotransmitter correlation coefficients are also associated with symptoms or test results of bvFTD. After FDR correction (p = 0.0085), the strength of fALFF co-localization with NET distribution was significantly associated with VF (mean r = 0.37, p = 0.0086; N = 49; Figure 2C ) and MMSE (mean r = 0.40, p = 0.0039; N = 50; Figure 2D ). The positive correlation coefficients suggest that more negative correlations between fALFF and neurotransmitter maps were associated with lower test performance, that is, the stronger the fALFF reductions in areas with high neurotransmitter density, the lower the test performance. Associations with other neuropsychological tests were not significant ( Supplementary file 1c ). We also tested if Eigenvariates extracted from the largest cluster of the HC > bvFTD contrast correlated with the specific symptoms of bvFTD ( Supplementary file 1f ). None of the correlations remained significant after correction for multiple comparisons.
Association with gene expression profile maps Next, we evaluated if co-localization of fALFF is also observed with mRNA gene expression underlying the significantly associated neurotransmitter systems. For genes encoding the 19 GABAa subunits, we first evaluated the variability between the subunits regarding their fALFF–mRNA correlations, their correlation with GABAa density and their mRNA autocorrelations (see Figure 2—figure supplement 1 and Figure 3—figure supplement 1 ). As the variability between the genes was high, we limited the analyses to genes encoding the three main subunits (GABRA1, GABRB1, and GABRG1). Correlations of fALFF alterations with mRNA gene expression profile maps in bvFTD relative to HC differed significantly from zero for HTR1B (encoding the 5-HT1b receptor; mean r = −0.02, p = 0.0144), HTR2A (encoding the 5-HT2a receptor; mean r = −0.04, p < 0.001), GABRB1 (encoding subunit of the GABAa receptor; mean r = −0.08, p < 0.001) and SLC6A2 (encoding NET; mean r = 0.06, p < 0.001), but not for GABRA1 (encoding subunit of the GABAa receptor; mean r = 0.02, p = 0.1414) and GABRG1 (encoding subunit of the GABAa receptor; mean r = −0.03, p = 0.0730) ( Figure 2G ). Thereby, correlations were negative for HTR1B , HTR2A , and GABRB1 , that is fALFF was reduced in areas with higher expression of respective genes, and positive for SLC6A2 . Furthermore, we tested if there was an association between the neurotransmitter maps included in the JuSpace toolbox and the mRNA gene expression profile maps provided in the MENGA toolbox that were both derived from independent healthy volunteer populations. 
The correlations between spatial distributions of 5-HT1b, 5-HT2a, GABAa, and NET, and corresponding mRNA gene expression profile maps were positive (5-HT1b/ HTR1B : mean r = 0.12; 5-HT2a/ HTR2A : mean r = 0.20; GABAa/ GABRA1 : mean r = 0.14; GABAa/ GABRB1 : mean r = 0.14; NET/ SLC6A2 : mean r = 0.02) with exception of the GABRG1 gene (GABAa/ GABRG1 : mean r = −0.13) ( Figure 3C ). Positive correlation coefficients suggest that higher neurotransmitter density was associated with higher expression of the corresponding genes. Lastly, to evaluate the robustness of the mRNA analyses (i.e. gene differential stability), genomic autocorrelations were calculated. The genomic autocorrelation was high for GABRB1 (mean r = 0.92) and GABRG1 (mean r = 0.64), small for HTR1B (mean r = 0.23), SLC6A2 (mean r = 0.22), and GABRA1 (mean r = 0.21), and very small for HTR2A (mean r = 0.05) ( Figure 3D ).
Discussion In the current study, we examined if there is a selective vulnerability of specific neurotransmitter systems in bvFTD to gain novel insight into the disease mechanisms underlying functional and structural alterations. More specifically, we evaluated if fALFF alterations in bvFTD co-localize with specific neurotransmitter systems. We found a significant spatial co-localization between fALFF alterations in patients and the in vivo derived distribution of specific receptors and transporters covering serotonergic, norepinephrinergic, and GABAergic neurotransmission. These fALFF–neurotransmitter associations were also observed at the mRNA expression level and their strength correlated with specific clinical symptoms. All of the observed co-localizations with in vivo derived neurotransmitter estimates were negative with lower fALFF values in bvFTD being associated with a higher density of the respective receptors and transporters in health. The directionality of these findings supports the notion of higher vulnerability of respective networks to disease-related alterations. These findings are also largely in line with previous research concerning FTD showing alterations in all of the respective neurotransmitter systems ( Huey et al., 2006 ; Murley and Rowe, 2018 ). The in vivo co-localization findings might also support the notion that propagation of proteins involved in bvFTD may align with specific neurotransmitter systems ( Hock and Polymenidou, 2016 ). With regard to other brain disorders, linking functional connectivity with receptor density and expression, recent studies found an association between functional connectivity and receptor availability in schizophrenia, and an association between structural–functional decoupling and receptor gene expression in Parkinson’s disease ( Zarkali et al., 2021 ; Horga et al., 2016 ). 
A potential mechanism for the selective vulnerability of specific neurotransmitter systems is the propagation of proteins along functionally connected networks that has been previously demonstrated for various neurodegenerative diseases ( Zhou et al., 2012 ; Seeley et al., 2009 ). For example, in Alzheimer’s disease and normal aging, tau levels closely correlated with functional connectivity ( Franzmeier et al., 2019 ). We found moderate to large AUC when using the strength of the identified co-localizations for differentiation between patients and HC suggesting that these findings may represent a measure of the affectedness of respective neurotransmitter systems. In bvFTD, neurodegeneration is thought to progress through the salience network involved in socioemotional tasks, which comprises the anterior cingulate and frontoinsular cortex, as well as the amygdala and the striatum ( Bang et al., 2015 ; Hock and Polymenidou, 2016 ). The three neurotransmitter systems found to be deficient in our sample are relevant for the functioning of these structures (anterior cingulate cortex: e.g. serotonin and norepinephrine, Tian et al., 2017 ; Koga et al., 2020 ; amygdala: e.g. GABA and serotonin, Castro-Sierra et al., 2005 ; striatum: e.g. GABA, Semba et al., 1987 ). Although spread of misfolded proteins through the salience network provides a potential disease mechanism, further research of the exact mechanisms involved is needed. For GMV, we did not find any significant co-localization with specific neurotransmitter systems. As the correlations with GMV showed a distinct pattern to fALFF and the variance explained by GMV in the observed fALFF–neurotransmitter associations was small, the observed associations with fALFF seem to be driven indeed by functional alterations and not by the underlying atrophy of respective regions. 
As propagation of misfolded proteins leads to a gradual dysfunction and eventually cell death ( Hock and Polymenidou, 2016 ), some regions displaying high density of a specific neurotransmitter might suffer dysfunction (i.e. functional alterations), whereas others might already be exposed to cell death (i.e. structural alterations/atrophy). An interesting future direction might be the integration of structural connectivity as measured by diffusion tensor imaging. A study by Dopper et al., 2014 showed reduced fractional anisotropy in healthy individuals carrying mutations compared to non-carriers ( Dopper et al., 2014 ). Given that there were structural connectivity differences even before disease onset, it would be of interest to re-examine structural connectivity differences between HC and patients (i.e. after disease onset). Repeating the neurotransmitter analyses might facilitate understanding of the underlying disease mechanism. The strength of co-localization of fALFF with NET was correlated with VF and MMSE, both being impaired in patients with bvFTD ( Schroeter et al., 2012 ; Diehl and Kurz, 2002 ; Schroeter et al., 2018 ). Thereby, a stronger negative co-localization (i.e. lower fALFF in patients in high-density regions in health) was moderately associated with decreased test performance. Similarly, a correlation between MMSE and NE plasma concentration has been previously reported in Alzheimer’s disease ( Pillet et al., 2020 ). Combined, these findings point to a potentially more general role of norepinephrinergic neurotransmission in cognitive decline observed across different dementia syndromes. This interpretation is in line with the recently proposed role of the locus coeruleus, the source of norepinephrine in the brain, in regulating processes of learning, memory, and attention ( Tsukahara and Engle, 2021 ). 
In contrast to the study by Murley et al., 2020, who reported an association of GABA concentrations in the inferior frontal gyrus in FTD with disinhibition, we did not find this association. Besides the use of different methodology, a potential explanation may be the use of different inhibition measures. Whereas we measured disinhibition using the FrSBe, Murley et al., 2020 used a stop-signal task. Although, except for α1 and γ1 GABAa subunits, all of the co-localizations with fALFF identified with in vivo estimates were also significant at the respective mRNA gene expression level, we found correlation coefficients of both directionalities. Interestingly, whereas these correlations were solely negative for the in vivo derived maps, the correlations with gene expression profile maps were positive for NET, and negative for 5-HT1b, 5-HT2a, and β1 GABAa subunit. Thus, for NET, we observed higher fALFF values in bvFTD patients in areas with high mRNA gene expression in health, whereas for 5-HT1b, 5-HT2a, and β1 GABAa subunit we observed lower fALFF values in bvFTD patients in areas with high mRNA gene expression in health. One explanation for these seemingly contradictory findings is that mRNA gene expression seems to vary strongly between individuals. In our mRNA gene expression profile maps, the autocorrelation between mRNA donors was low for 5-HT1b, 5-HT2a, and α1 GABAa subunit, and NET, limiting the confidence in some of these findings. Additionally, the association of mRNA expression with protein products may also vary greatly between genes, being not associated at all or even negatively associated for some, and strongly correlated for others ( Koussounadis et al., 2015 ; Moritz et al., 2019 ). Similarly, a previous study found the correspondence between receptor density and mRNA expression to be low ( Hansen et al., 2022 ). 
Potential reasons for the lack of, or even negative, correlations may be a decoupling in time, or other levels of regulation overriding the transcriptional level ( Koussounadis et al., 2015 ). We observed a similar phenomenon in our data with the correlation of neurotransmitter density maps with their underlying mRNA gene expression being weak for all neurotransmitters except β1 and γ1 GABAa subunits. Our findings support the notion of fALFF as a useful marker for assessing bvFTD-related decline in brain function. In line with previous literature in bvFTD, we observe fALFF reductions mainly in frontal and temporal lobes, but also in the parietal lobe ( Premi et al., 2014 ; Borroni et al., 2018 ). These findings support the notion of fALFF being a useful marker of metabolic impairment ( Bang et al., 2015 ; Diehl-Schmid et al., 2007 ). Moreover, we found a clear association of fALFF with several neurotransmitter systems pointing to a selective neurotransmitter vulnerability in bvFTD, as suggested in previous research ( Huey et al., 2006 ; Murley and Rowe, 2018 ). In particular, the co-localization of fALFF with NET was associated with VF and MMSE, suggesting the sensitivity of fALFF to reflect modality-specific cognitive decline. The current study was limited by the unavailability of medication information. Therefore, we were not able to control for its potential confounding effects. However, as bvFTD medication is typically restricted to serotonin reuptake inhibitors, its effects should be primarily associated with availability of 5-HTT and directionally negate the effects of the disease. Furthermore, as the included PET maps were derived from healthy subjects, the applied approach only tests for co-localization of imaging changes with the non-pathological distribution of the respective neurotransmitter systems. 
Similarly, the reliability of the co-localization analyses is partly limited by the number of healthy volunteers used to derive the respective neurotransmitter average maps. Finally, the current study was limited by the availability of neurotransmitter maps included in the JuSpace toolbox. To summarize, we found fALFF reductions in bvFTD to co-localize with the in vivo and ex vivo derived distribution of serotonergic, GABAergic, and norepinephrinergic neurotransmitter systems, pointing to a crucial vulnerability of these neurotransmitters. The strength of these associations was linked to some of the neuropsychological deficits observed in this disease. We propose a combination of spread of pathology through neuronal connectivity and more specifically, through the salience network, as a disease mechanism. Thereby, these findings provide novel insight into the mechanisms underlying the spatial constraints observed in progressive functional and structural alterations in bvFTD. Our data-driven method might even be used to generate new hypotheses for pharmacological intervention in neuropsychiatric diseases beyond this disorder.
Background: Aside from clinical changes, behavioral variant frontotemporal dementia (bvFTD) is characterized by progressive structural and functional alterations in frontal and temporal regions. We examined if there is a selective vulnerability of specific neurotransmitter systems in bvFTD by evaluating the link between disease-related functional alterations and the spatial distribution of specific neurotransmitter systems and their underlying gene expression levels. Methods: Maps of fractional amplitude of low-frequency fluctuations (fALFF) were derived as a measure of local activity from resting-state functional magnetic resonance imaging for 52 bvFTD patients (mean age = 61.5 ± 10.0 years; 14 females) and 22 healthy controls (HC) (mean age = 63.6 ± 11.9 years; 13 females). We tested if alterations of fALFF in patients co-localize with the non-pathological distribution of specific neurotransmitter systems and their coding mRNA gene expression. Furthermore, we evaluated if the strength of co-localization is associated with the observed clinical symptoms. Results: Patients displayed significantly reduced fALFF in frontotemporal and frontoparietal regions. These alterations co-localized with the distribution of serotonin (5-HT1b and 5-HT2a) and γ-aminobutyric acid type A (GABAa) receptors, the norepinephrine transporter (NET), and their encoding mRNA gene expression. The strength of co-localization with NET was associated with cognitive symptoms and disease severity of bvFTD. Conclusions: Local brain functional activity reductions in bvFTD followed the distribution of specific neurotransmitter systems indicating a selective vulnerability. These findings provide novel insight into the disease mechanisms underlying functional alterations. Our data-driven method opens the road to generate new hypotheses for pharmacological interventions in neurodegenerative diseases even beyond bvFTD. 
Funding: This study has been supported by the German Consortium for Frontotemporal Lobar Degeneration, funded by the German Federal Ministry of Education and Research (BMBF; grant no. FKZ01GI1007A).
Funding Information This paper was supported by the following grants: http://dx.doi.org/10.13039/501100002347 Bundesministerium für Bildung und Forschung FKZ01GI1007A to Karsten Mueller. http://dx.doi.org/10.13039/501100001659 Deutsche Forschungsgemeinschaft SCHR 774/5-1 to Matthias L Schroeter. http://dx.doi.org/10.13039/501100006298 Sächsische Aufbaubank eHealthSax Initiative to Matthias L Schroeter. http://dx.doi.org/10.13039/501100007601 Horizon 2020 - Research and Innovation Framework Programme TheVirtualBrain-Cloud 826421 to Juergen Dukart. http://dx.doi.org/10.13039/100013278 EU Joint Programme – Neurodegenerative Disease Research GENFI-prox to Markus Otto. Acknowledgements This study has been supported by the German Consortium for Frontotemporal Lobar Degeneration, funded by the German Federal Ministry of Education and Research (BMBF; grant no. FKZ01GI1007A). MLS has been furthermore supported by the German Research Foundation (DFG; SCHR 774/5-1) and the eHealthSax Initiative of the Sächsische Aufbaubank (SAB). Accordingly, this study is co-financed with tax revenue based on the budget approved by the Saxon state parliament. JD has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement no. 826421, 'TheVirtualBrain-Cloud'. This work was further supported by the JPND grant 'GENFI-prox' (by DLR/BMBF to MLS, joint with MO). We would like to acknowledge the Clinic for Cognitive Neurology in Leipzig, Annerose Engel, Anke Marschhauser, and Maryna Polyakova. Additional information Additional files Data availability The original data and their derivatives cannot be made publicly available as the study includes sensitive patient data and public data sharing was not covered in the informed consent. The original data supporting the findings of this study are available from the senior author (Matthias L. Schroeter) upon reasonable request. 
All derived statistical measures used here are available from the first author upon request. The software applied is publicly available at https://github.com/juryxy/JuSpace (JuSpace, Juryxy, 2023 ) and https://github.com/FAIR-CNS/MENGA (MENGA, Rizzo, 2016 ). The code for the main analyses is publicly available at https://github.com/liha-coding/Neurotransmitter-vulnerability-in-bvFTD (copy archived at Hahn, 2023 ). Processed data used for the creation of the figures are available as supplementary material.
CC BY
eLife; 13:e86085
PMC10789490
38224341
Introduction In many situations of uncertainty, some outcomes are more probable than others. Knowing the probability distributions of the possible outcomes provides an edge that can be leveraged to improve and speed up decision making and perception ( Summerfield and de Lange, 2014 ). In the case of choice reaction-time tasks, it was noted in the early 1950s that human reactions were faster when responding to a stimulus whose probability was higher ( Hick, 1952 ; Hyman, 1953 ). In addition, faster responses were obtained after a repetition of a stimulus (i.e., when the same stimulus was presented twice in a row), even in the case of serially-independent stimuli (i.e., when the preceding stimulus carried no information on subsequent ones; Hyman, 1953 ; Bertelson, 1965 ). The observation of this seemingly suboptimal behavior has motivated in the following decades a profuse literature on ‘sequential effects’, i.e., on the dependence of reaction times on the recent history of presented stimuli ( Kornblum, 1967 ; Soetens et al., 1985 ; Cho et al., 2002 ; Yu and Cohen, 2008 ; Wilder et al., 2009 ; Jones et al., 2013 ; Zhang et al., 2014 ; Meyniel et al., 2016 ). These studies consistently report a recency effect whereby the more often a simple pattern of stimuli (e.g. a repetition) is observed in recent stimulus history, the faster subjects respond to it. In tasks in which subjects are asked to make predictions about sequences of random binary events, sequential effects are also observed and they have given rise since the 1950s to a rich literature ( Jarvik, 1951 ; Edwards, 1961 ; McClelland and Hackenberg, 1978 ; Matthews and Sanders, 1984 ; Gilovich et al., 1985 ; Ayton and Fischer, 2004 ; Burns and Corpus, 2004 ; Croson and Sundali, 2005 ; Bar-Eli et al., 2006 ; Oskarsson et al., 2009 ; Plonsky et al., 2015 ; Plonsky and Erev, 2017 ; Gökaydin and Ejova, 2017 ). 
Sequential effects are intriguing: why do subjects change their behavior as a function of the recent past observations when those are in fact irrelevant to the current decision? A common theoretical account is that humans infer the statistics of the stimuli presented to them, but because they usually live in environments that change over time, they may believe that the process generating the stimuli is subject to random changes even when it is in fact constant ( Yu and Cohen, 2008 ; Wilder et al., 2009 ; Zhang et al., 2014 ; Meyniel et al., 2016 ). Consequently, they may rely excessively on the most recent stimuli to predict the next ones. In several studies, this was heuristically modeled as a ‘leaky integration’ of the stimuli, that is, an exponential discounting of past observations ( Cho et al., 2002 ; Yu and Cohen, 2008 ; Wilder et al., 2009 ; Jones et al., 2013 ; Meyniel et al., 2016 ). Here, instead of positing that subjects hold an incorrect belief on the dynamics of the environment and do not learn that it is stationary, we propose a different account, whereby a cognitive constraint is hindering the inference process and preventing it from converging to the correct, constant belief about the unchanging statistics of the environment. This proposal calls for the investigation of the kinds of choice patterns and sequential effects that would result from different cognitive constraints at play during inference. We derive a framework of constrained inference, in which a cost hinders the representation of belief distributions (posteriors). This approach is in line with a rich literature that views several perceptual and cognitive processes as resulting from a constrained optimization: the brain is assumed to operate optimally, but within some posited limits on its resources or abilities. 
The ‘efficient coding’ hypothesis in neuroscience ( Ganguli and Simoncelli, 2016 ; Wei and Stocker, 2015 ; Wei and Stocker, 2017 ; Prat-Carrabin and Woodford, 2021c ) and the ‘rational inattention’ models in economics ( Sims, 2003 ; Woodford, 2009 ; Caplin et al., 2019 ; Gabaix, 2017 ; Azeredo da Silveira and Woodford, 2019 ; Azeredo da Silveira et al., 2020 ) are examples of this approach, which has been called ‘resource-rational analysis’ ( Griffiths et al., 2015 ; Lieder and Griffiths, 2019 ). Here, we investigate the proposal that human inference is resource-rational, i.e., optimal under a cost. As for the nature of this cost, we consider two natural hypotheses: first, that a higher precision in belief is harder for subjects to achieve, and thus that more precise posteriors come with higher costs; and second, that unpredictable environments are difficult for subjects to represent, and thus that they entail higher costs. Under the first hypothesis, the cost is a function of the belief held, while under the second hypothesis the cost is a function of the inferred environment. We show that the precision cost predicts ‘leaky integration’: in the resulting inference process, remote observations are discarded. Crucially, beliefs do not converge but fluctuate instead with the recent stimulus history. By contrast, under the unpredictability cost, the inference process does converge, although not to the correct (Bayesian) posterior, but rather to a posterior that implies a biased belief on the temporal structure of the stimuli. In both cases, sequential effects emerge as the result of a constrained inference process. We examine experimentally the degree to which the models derived from our framework account for human behavior, with a task in which we repeatedly ask subjects to predict the upcoming stimulus in sequences of Bernoulli-distributed stimuli. 
Most studies on sequential effects only consider the equiprobable case, in which the two stimuli have the same probability. However, the models we consider here are more general than this singular case and they apply to the entire range of stimulus probability. We thus manipulate in separate blocks of trials the stimulus generative probability (i.e., the Bernoulli probability that parameterizes the stimulus) to span the range from 0.05 to 0.95 by increments of 0.05. This enables us to examine in detail the behavior of subjects in a large gamut of environments from the singular case of an equiprobable, maximally-uncertain environment (with a probability of 0.5 for both stimuli) to the strongly-biased, almost-certain environment in which one stimulus occurs with probability 0.95. To anticipate our results, the predictions of subjects depend on the stimulus generative probability, but also on the history of stimuli. We examine whether the occurrence of a stimulus, in past trials, increases the proportion of predictions identical to this stimulus (‘attractive effect’), or whether it decreases this proportion (‘repulsive effect’). The two costs presented above reproduce qualitatively the main patterns in subjects’ data, but they make distinct predictions as to the modulations of the recency effect as a function of the history of stimuli, beyond the last stimulus. We show that the responses of subjects exhibit an elaborate, and at times counter-intuitive, pattern of attractive and repulsive effects, and we compare these to the predictions of our models. Our results suggest that the brain infers a stimulus generative probability, but under a constraint on the precision of its internal representations; the inferred generative process may be more general than the actual one, and include higher-order statistics (e.g. transition probabilities), in contrast with the Bernoulli-distributed stimulus used in the experiment. 
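A minimal way to quantify such attractive effects in response data is to compare the proportion of predictions A after observing A versus after observing B. The sketch below does this for a simulated "sticky" responder; all numbers (the generative probability, the repeat probability, the fallback bias) are illustrative, not fitted to any subject.

```python
# Sketch: measuring an attractive effect of the preceding stimulus in
# simulated binary prediction data (True = stimulus/prediction A).
import random

random.seed(0)
p = 0.7
stimuli = [random.random() < p for _ in range(10_000)]

# Toy responder: repeats the last stimulus with probability 0.8, otherwise
# predicts A with probability 0.6 -- a strong attractive recency effect.
predictions = [None]
for t in range(1, len(stimuli)):
    if random.random() < 0.8:
        predictions.append(stimuli[t - 1])
    else:
        predictions.append(random.random() < 0.6)

after_a = [predictions[t] for t in range(1, len(stimuli)) if stimuli[t - 1]]
after_b = [predictions[t] for t in range(1, len(stimuli)) if not stimuli[t - 1]]
frac = lambda xs: sum(xs) / len(xs)
print(round(frac(after_a), 2), round(frac(after_b), 2))  # higher after A
```

A repulsive effect would show the opposite ordering: fewer predictions A after observing A than after observing B.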
We present the behavioral task and we examine the predictions of subjects — in particular, how they vary with the stimulus generative probability, and how they depend, at each trial, on the preceding stimulus. We then introduce our framework of inference under constraint, and the two costs we consider, from which we derive two families of models. We examine the behavior of these models and the extent to which they capture the behavioral patterns of subjects. The models make different qualitative predictions about the sequential effects of past observations, which we compare with subjects’ data. We find that the predictions of subjects are qualitatively consistent with a model of inference of conditional probabilities, in which precise posteriors are costly.
Methods Task and subjects The computer-based task was programmed using the Python library PsychoPy ( Peirce, 2008 ). The experiment comprised ten blocks of trials, which differed by the stimulus generative probability, p, used in all the trials of each block. The probability p was chosen randomly among the ten values ranging from 0.50 to 0.95 by increments of 0.05, excluding the values already chosen; and with probability 1/2 the probability 1 − p was used instead, so that the generative probability effectively spanned the range from 0.05 to 0.95. Each block started with 200 passive trials, in which the subject was only asked to look at the 200 stimuli sampled with the block’s probability and successively presented. No action from the subject was required for these passive trials. The subject was then asked to predict, in each of 200 trials, the next location of the stimulus. Subjects provided their responses by a keypress. The task was presented as a game to the subjects: the stimulus was a lightning symbol, and predicting correctly whether the lightning would strike the left or the right rod resulted in the electrical energy of the lightning being collected in a battery ( Figure 1 ). A gauge below the battery indicated the amount of energy accumulated in the current block of trials ( Figure 1a ). Twenty subjects (7 women, 13 men; age: 18–41, mean 25.5, standard deviation 6.2) participated in the experiment. All subjects completed the ten blocks of trials, except one subject who did not finish the experiment and was excluded from the analysis. The study was approved by the ethics committee Île de France VII (CPP 08–021). Participants gave their written consent prior to participating. The number of blocks of trials and the number of trials per block were chosen as a trade-off between maximizing the statistical power of the study, scanning the values of the generative probability parameter from 0.05 to 0.95 with a satisfying resolution, and maintaining the duration of the experiment under a reasonable length of time. 
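The sampling scheme of the block design can be sketched as follows. The details (shuffling the base values without replacement, a fair coin flip deciding between p and 1 − p, and independent Bernoulli sampling of the stimuli) are assumptions consistent with this section and with the 0.05–0.95 range stated in the Introduction.

```python
# Sketch of the assumed block design: ten blocks, each with a generative
# probability p drawn without replacement, flipped to 1 - p with prob. 1/2,
# followed by 200 passive and 200 active Bernoulli trials (True = stimulus A).
import random

random.seed(1)
base_probs = [round(0.50 + 0.05 * k, 2) for k in range(10)]   # 0.50 ... 0.95
random.shuffle(base_probs)                                    # no value reused

blocks = []
for p in base_probs:
    if random.random() < 0.5:       # with probability 1/2, use 1 - p instead
        p = round(1 - p, 2)
    passive = [random.random() < p for _ in range(200)]  # shown, no response
    active = [random.random() < p for _ in range(200)]   # trials with predictions
    blocks.append((p, passive, active))

print(len(blocks), len(blocks[0][1]), len(blocks[0][2]))  # 10 200 200
```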
The number of subjects was chosen consistently with similar studies and so as to capture individual variability. Throughout the study, we conduct Student’s t-tests when comparing the subjects’ proportion of predictions A to a given value (e.g. 0.5). When comparing two proportions of predictions A obtained under different conditions (e.g. depending on whether the preceding stimulus is A or B), we accordingly conduct Fisher exact tests. The trials in which subjects failed to respond within the limit of 1 s were not included in the analysis. They represented 1.27% of the trials, on average (across subjects); and for 95% of the subjects these trials represented less than 2.5% of the trials. Sequential effects of the models We run simulations of the eight models and look at the predictions they yield. To reproduce the conditions faced by the subjects, which included 200 passive trials, we start each simulation by showing to the model subject 200 randomly sampled stimuli (without collecting predictions at this stage). We then show an additional 200 samples, and obtain a prediction from the model subject after each sample. The sequential effects of the most recent stimulus, with the different models, are shown in Figure 7 . With the precision-cost models, the posterior distribution of the model subject does not converge, but fluctuates instead with the recent history of the stimulus. This results in attractive sequential effects ( Figure 7a ), including for the Bernoulli observer, who assumes that the probability of A does not depend on the most recent stimulus. With the unpredictability-cost models, the posterior of the model subject does converge. With Markov observers, it converges toward a parameter vector that implies that the probability of observing A depends on the most recent stimulus, resulting in the presence of sequential effects of the most recent stimulus ( Figure 7b , second to fourth row). 
With a Bernoulli observer, the posterior of the model subject converges toward a value of the stimulus generative probability that does not depend on the stimulus history. As more evidence is accumulated, the posterior narrows around this value but not without some fluctuations that depend on the sequence of stimuli presented. In consequence, the model subject’s estimate of the stimulus generative probability is also subject to fluctuations, and depends on the history of stimuli (including the most recent stimulus), although the width of the fluctuations tends to zero as more stimuli are observed. After the 200 stimuli of the passive trials, the sequential effects of the most recent stimulus resulting from this transient regime appear small in comparison to the sequential effects obtained with the other models ( Figure 7b , first row). Figure 7 also shows the behaviors of the models when augmented with a propensity to repeat the preceding response: we comment on these in the section dedicated to these models, below. Turning to higher-order sequential effects, we look at the influence on predictions of the second- and third-to-last stimulus ( Figure 8 ). As mentioned, only precision-cost models of Markov observers yield repulsive sequential effects, and these occur only when the third-to-last stimulus is followed by BA. They do not occur with the second-to-last stimulus, nor with the third-to-last stimulus when it is followed by AA ( Figure 8a ); and they do not occur in any case with the unpredictability-cost models ( Figure 8b ). Derivation of the approximate posteriors We derive the solution to the constrained optimization problem, in the general case of a ‘hybrid’ model subject who bears both a precision cost, with weight , and an unpredictability cost, with weight . Thus the subject minimizes the loss function in which we have included a Lagrange multiplier, μ, corresponding to the normalization constraint, . 
Taking the functional derivative of and setting to zero, we obtain and thus we write the approximate posterior as where is the Bayesian update of the preceding belief, , i.e., Setting the weight of the unpredictability cost to zero (i.e., ), we obtain the posterior in presence of the precision cost only, as The main text provides more details about the posterior in this case ( Equation 4 ), in particular with a Bernoulli observer ( ; Equation 5 , Equation 6 ). For the hybrid model (in which both and are potentially different from zero), we obtain With , the sum in the exponential is equal to , and the precision-cost posterior, , is the Bayesian posterior, , and thus we obtain the posterior in presence of the unpredictability cost only (see Equation 8 ). Hybrid models The hybrid model, described above, features both a precision cost and an unpredictability cost, with respective weights and . As with the models that include only one type of cost, we consider a Bernoulli observer ( ), and three Markov observers ( and 3). As for the response-selection strategy, we use, here also, the generalized probability-matching strategy parameterized by . We thus obtain four new models; each one has three parameters ( , , and ), while the non-hybrid models (featuring only one type of cost) have only two parameters. We fit these models to the responses of subjects. For 68% of subjects, the BIC of the best-fitting hybrid model is larger than the BIC of the best-fitting non-hybrid model, indicating a worse fit, by this measure. This suggests that for these subjects, allowing for a second type of cost results in a modest improvement of the fit that does not justify the additional parameter. For the remaining 32% of subjects, the hybrid models yield a better fit (a lower BIC) than the non-hybrid models, although for half of these, the difference in BICs is lower than 6, which is only weak evidence in favor of the hybrid models. 
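The BIC comparison used here can be made concrete with a small sketch. The likelihood values and trial count below are hypothetical; the formula is the standard one, BIC = k ln n − 2 ln L̂, with k parameters, n observations, and maximized likelihood L̂ (lower BIC is better).

```python
# Sketch: comparing a 2-parameter (non-hybrid) and a 3-parameter (hybrid)
# model by BIC, with hypothetical maximized log-likelihoods over 2000 trials.
from math import log

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L_hat)."""
    return n_params * log(n_obs) - 2 * log_likelihood

bic_nonhybrid = bic(log_likelihood=-1105.0, n_params=2, n_obs=2000)
bic_hybrid = bic(log_likelihood=-1103.5, n_params=3, n_obs=2000)
print(bic_nonhybrid < bic_hybrid)  # True: the extra parameter is not justified
```

In this invented example the hybrid model's slightly better likelihood does not offset the penalty for its third parameter, mirroring the pattern reported for most subjects.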
Moreover, we compute the exceedance probability, defined below in the section ‘Bayesian Model Selection’, of the hybrid models (together with the complementary probability of the non-hybrid models). We find that the exceedance probability of the hybrid models is 8.1% while that of the non-hybrid models is 91.9%, suggesting that subjects best-fitted by non-hybrid models are more prevalent. In summary, we find that for more than two thirds of subjects, allowing for a second cost type does not much improve the fit to the behavioral data (the BIC is higher with the best-fitting hybrid model). These subjects are best-fitted by non-hybrid models, that is, by models featuring only one type of cost, instead of ‘falling in between’ the two cost types. This suggests that for most subjects, only one of the two costs, either the precision cost or the unpredictability cost, dominates the inference process. Alternative response-selection strategy, and repetition or alternation propensity In addition to the generalized probability-matching response-selection strategy presented in the main text, in our investigations we also implement several other response-selection strategies. First, a strategy based on a ‘softmax’ function that smoothes the optimal decision rule; it does not yield, however, a behavior substantially different from that of the generalized probability-matching response-selection strategy. Second, we examine a strategy in which the model subject chooses the optimal response with a probability that is fixed across conditions, which we fit onto subjects’ choices. No subject is best-fitted by this strategy. Third, another possible strategy proposed in the game-theory literature ( Nowak and Sigmund, 1993 ) is ‘win-stay, lose-shift’: it prescribes to repeat the same response as long as it proves correct and to change otherwise. 
In the context of our binary-choice prediction task, it is indistinguishable from a strategy in which the model subject chooses a prediction equal to the outcome that last occurred. This strategy is a special case of our Bernoulli observer hampered by a precision cost whose weight is large combined with the optimal response-selection strategy ( ). Since the generalized probability-matching strategy parameterized by the exponent appears either more general than, better than, or indistinguishable from those other response-selection strategies, we selected it to obtain the results presented in the main text. Furthermore, we consider the possibility that subjects may have a tendency to repeat their preceding response, or, conversely, to alternate and choose the other response, independently from their inference of the stimulus statistics. Specifically, we examine a generalization of the response-selection strategy, in which a parameter , with , modulates the probability of a repetition or of an alternation. With probability , the model subject chooses a response with the generalized probability-matching response-selection strategy, with parameter . With probability , the model subject repeats the preceding response, if is positive; or chooses the opposite of the preceding response, if is negative. With , there is no propensity for repetition nor alternation, and the response-selection strategy is the same as the one we have considered in the main text. We have allowed for alternations ( ) in this model for the sake of generality, but for all the subjects the best-fitting value of is non-negative, thus henceforth we only consider the possibility of repetitions, i.e., non-negative values of the parameter ( ). 
We note that with a repetition probability , such that , the unconditional probability of a prediction A, which we denote by , is not different from the unconditional probability of a prediction A in the absence of a repetition probability , , as in the event of a repetition, the response that is repeated is itself A with probability ; formally, , which implies the equality . Now turning to sequential effects, we note that with a repetition probability , the probability of a prediction conditional on an observation A is In other words, when introducing the repetition probability , the resulting probability of a prediction A conditional on observing A is a weighted mean of the unconditional probability of a prediction A and of the conditional probability of a prediction A in the absence of a repetition probability. Figure 7 (dotted lines) illustrates this for the eight models, with . Consequently, the sequential effects with this response-selection strategy are more modest ( Figure 7 , light-red dots). We fit (by maximizing their likelihoods) our eight models now equipped with a propensity for repetition (or alternation) parameterized by . The average best-fitting value of , across subjects, is 0.21 (standard deviation: 0.19; median: 0.18); as mentioned, no subjects have a negative best-fitting value of . In order to assess the degree to which the models with repetition propensity are able to capture subjects’ data, in comparison with the models without such propensity, we use the Bayesian Information Criterion (BIC) ( Schwarz, 1978 ), which penalizes the number of parameters, as a comparative metric (a lower BIC is better). For 26% of subjects, the BIC with this response-selection strategy (allowing for ) is higher than with the original response-selection strategy (which sets the repetition parameter to zero), suggesting that the responses of these subjects do not warrant the introduction of a repetition (or alternation) propensity. 
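The invariance claim above, that a repetition probability leaves the unconditional proportion of predictions A unchanged, can be checked numerically. In the sketch below, the prediction probability, the repetition parameter, and the sample size are arbitrary illustrative values.

```python
# Numeric check (sketch): mixing in a probability rho of repeating the
# preceding response does not change the unconditional proportion of
# predictions A (True), because the repeated response is itself A with the
# same stationary probability.
import random

random.seed(2)
p_a, n = 0.7, 200_000            # hypothetical prediction probability, sample size

def simulate(rho):
    preds = [random.random() < p_a]
    for _ in range(n - 1):
        if random.random() < rho:
            preds.append(preds[-1])              # repeat the preceding response
        else:
            preds.append(random.random() < p_a)  # fresh probability-matching draw
    return sum(preds) / n

print(round(simulate(0.0), 2), round(simulate(0.3), 2))  # both close to 0.70
```

Only the conditional proportions (after A vs. after B) are pulled toward the unconditional one, which is why the sequential effects become more modest but the overall prediction rates are unaffected.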
In addition, for these subjects the best-fitting inference model, characterized by a cost type and a Markov order, is the same when the response-selection strategy allows for repetition or alternation ( ) and when it does not ( ). For 47% of subjects, the BIC is lower when including the parameter (suggesting that allowing for results in a better fit to the data), and importantly, here also the best-fitting inference model (cost type and Markov order) is the same with and with . For 11% of subjects, a better fit (lower BIC) is obtained with ; and the best-fitting inference models, with and with , belong to the same family of models, that is, they have the same cost type (precision cost or unpredictability cost), and only their Markov orders differ. Finally, only for the remaining 16% does the cost type change when allowing for . In other words, for 84% of subjects the best-fitting cost type is the same whether or not is allowed to differ from 0. Furthermore, the best-fitting parameters and are also stable across these two cases. Among the 73% of subjects whose best-fitting inference model (including both cost type and Markov order) remains the same regardless of the presence of a repetition propensity, we find that the best-fitting values of , with and with , differ by less than 10% for 93% of subjects, and the best-fitting values of differ by less than 10% for 71% of subjects. For these two parameters, the correlation coefficient (between the best-fitting value with and the best-fitting value with ) is above 0.99 (with p-values lower than 1e-19). The responses of a majority of subjects are thus better reproduced by a response-selection strategy that includes a probability of repeating the preceding response. The impact of this repetition propensity on sequential effects is relatively small in comparison to the magnitude of these effects ( Figure 7 ). 
For most subjects, moreover, the best-fitting inference model, characterized by its cost type and its Markov order, is the same with or without the repetition propensity, and the best-fitting parameters and are very close in the two cases. Therefore, this analysis supports the results of the model-fitting and model-selection procedure, and validates its robustness. We conclude that the models of costly inference are essential in reproducing the behavioral data, notwithstanding a positive repetition propensity in a fraction of subjects. Computation of the models’ likelihoods Model fitting is conducted by maximizing, for each model, the likelihood of the subject’s choices. With the precision-cost models, the likelihood can be derived analytically and thus easily computed: the model’s posterior is a Dirichlet distribution of order , whose parameters are exponentially filtered counts of the observed sequences of length . With a Bernoulli observer, i.e., , this is the Beta distribution presented in Equation 5 . The expected probability of a stimulus A, conditional on the sequence of stimuli most recently observed, is a simple ratio involving the exponentially filtered counts, for example in the case of a Bernoulli observer. This probability is then raised to the power and normalized (as prescribed by the generalized probability-matching response-selection strategy) in order to obtain the probability of a prediction A. As for the unpredictability-cost models, the posterior is given in Equation 8 up to a normalization constant. Unfortunately, the expected probability of a stimulus A implied by this posterior does not have a closed-form expression. Thus, we compute the (unnormalized) posterior on a discretized grid of values of the vector . The dimension of the vector is , and each element of is in the segment . 
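The size of this grid grows doubly exponentially with the Markov order: the parameter vector has 2^m conditional probabilities, so b bins per dimension give b^(2^m) grid points. The arithmetic below assumes 100 bins per dimension and float64 storage (our assumption, chosen to be consistent with the 10^16 grid points and 80,000 terabytes quoted for m = 3):

```python
def grid_points(bins_per_dim, markov_order):
    # the parameter vector has 2**m dimensions, one conditional probability each
    return bins_per_dim ** (2 ** markov_order)

def grid_terabytes(bins_per_dim, markov_order, bytes_per_value=8):
    # memory needed to hold one float64 per grid point
    return grid_points(bins_per_dim, markov_order) * bytes_per_value / 1e12
```

This is what motivates the coarser grid increments used for larger Markov orders.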
If we discretize each dimension into bins, we obtain different possible values of the vector ; for each of these, at each trial, we compute the unnormalized value of the posterior (as given by Equation 8 ). As increases, this becomes computationally prohibitive: for instance, with bins and , the multidimensional grid of values of contains 10^16 numbers (with a typical computer, this would represent 80,000 terabytes). In order to keep the needed computational resources within reasonable limits, we choose a lower resolution of the grid for larger values of . Specifically, for we choose a grid (over ) with increments of 0.01; for , increments of 0.02 (in each dimension); for , increments of 0.05; and for , increments of 0.1. We then compute the mean of the discretized posterior and pass it through the generalized probability-matching response-selection model to obtain the choice probability. To find the best-fitting parameters and , we maximize the likelihood with the L-BFGS-B algorithm ( Byrd et al., 1995 ; Zhu et al., 1997 ). These computations were run in Python using the libraries NumPy and SciPy ( Harris et al., 2020 ; Virtanen et al., 2020 ). Symmetries and relations between conditional probabilities Throughout the paper, we leverage the symmetry inherent in the Bernoulli prediction task to present results in a condensed manner. Specifically, in our analysis, the proportion of predictions A when the probability of A (the stimulus generative probability) is , which we denote here by , is equal to the proportion of predictions B when the probability of A is , which we denote by ; i.e., . More generally, the predictions conditional on a given sequence when the probability of A is are equal to the predictions conditional on the ‘mirror’ sequence (in which A and B have been swapped), when the probability of A is , for example, extending our notation, . 
Here, we show how this results in the symmetries in Figure 2 , and in the fact that in Figures 5 and 6 , it suffices to plot the sequential effects obtained with only a fraction of all the possible sequences of two or three stimuli. First, we note that which implies the symmetry of in Figure 2a (grey line). Turning to conditional probabilities (and thus sequential effects), we have As a result, the lines representing (blue) and (orange) in Figure 2a are reflections of each other. In addition, these equations result in the equality which implies the symmetry in Figure 2b . As for the sequential effect of the second-to-last stimulus, we show in Figures 5a and 6a the difference in the proportions of predictions A conditional on two past sequences of two stimuli, AA and BA; i.e., . There are two other possible sequences of two stimuli: and . The difference in the proportions conditional on these two sequences is implied by the former difference, as: As for the sequential effect of the third-to-last stimulus, we show in Figures 5b and 6b the difference in the proportions conditional on the sequences AAA and BAA, and in Figures 5c and 6c the difference in the proportions conditional on the sequences ABA and BBA. The differences in the proportions conditional on the sequences AAB and BAB, and conditional on the sequences ABB and BBB, are recovered as a function of the former two, as Bayesian model selection We implement the Bayesian model selection (BMS) procedure described in Stephan et al., 2009 . Given models, this procedure aims at deriving a probabilistic belief on the distribution of these models among the general population. This unknown distribution is a categorical distribution, parameterized by the probabilities of the models, denoted by , with . With a finite sample of data, one cannot determine with infinite precision the values of the probabilities . 
The BMS, thus, computes an approximation of the Bayesian posterior over the vector , as a Dirichlet distribution parameterized by the vector , i.e., the posterior distribution Computing the parameters of this posterior makes use of the log-evidence of each model for each subject, i.e., the logarithm of the joint probability, , of a given subject’s responses, , under the assumption that a given model, , generated the responses. We use the model’s maximum likelihood to obtain an approximation of the model’s log-evidence, as ( Balasubramanian, 1997 ) where denotes the parameters of the model, is the likelihood of the model when parameterized with , is the dimension of , and is the size of the data, that is, the number of responses. (The well-known Bayesian Information Criterion Schwarz, 1978 is equal to this approximation of the model’s log-evidence, multiplied by .) In our case, there are models, each with parameters: . The posterior distribution over the parameters of the categorical distribution of models in the general population, , allows for the derivation of several quantities of interest; following Stephan et al., 2009 , we derive two types of quantities. First, given a family of models, that is, a set of different models (for instance, the prediction-cost models, or the Bernoulli-observer models), the expected probability of this class of model, that is, the expected probability that the behavior of a subject randomly chosen in the general population follows a model belonging to this class, is the ratio We compute the expected probability of the precision-cost models (and the complementary probability of the unpredictability-cost models), and the expected probability of the Bernoulli-observer models (and the complementary probability of the Markov-observer models; see Results). Second, we estimate, for each family of models , the probability that it is the most likely, i.e., the probability of the inequality which is called the ‘exceedance probability’. 
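Both quantities are straightforward to obtain from the Dirichlet parameters. The sketch below uses our own function names and assumes a two-way split of model families (as in the comparisons reported here, e.g., precision-cost versus unpredictability-cost models); the exceedance probability is estimated by Monte-Carlo sampling of the Dirichlet posterior:

```python
import numpy as np

def family_expected_prob(alpha, family):
    """Expected frequency of a family of models under a Dirichlet(alpha)
    posterior: the ratio of the family's alpha parameters to their total."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha[list(family)].sum() / alpha.sum()

def family_exceedance_prob(alpha, family, n_samples=1_000_000, seed=0):
    """Monte-Carlo estimate of the probability that the family's total
    frequency exceeds that of the complementary family (two-way split)."""
    rng = np.random.default_rng(seed)
    r = rng.dirichlet(alpha, size=n_samples)   # sampled model frequencies
    f = r[:, list(family)].sum(axis=1)         # family frequency per sample
    return float(np.mean(f > 0.5))             # frequencies sum to 1
```

Since the sampled frequencies sum to 1, a family is more frequent than its complement exactly when its total frequency exceeds 0.5.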
We compute an estimate of this probability by sampling one million times the Dirichlet belief distribution ( Equation 21 ), and counting the number of samples in which the inequality is verified. We estimate in this way the exceedance probability of the precision-cost models (and the complementary probability of the unpredictability-cost models), and the exceedance probability of the Bernoulli-observer models (and the complementary probability of the Markov-observer models; see Results). Unpredictability cost for Markov observers Here we derive the expression of the unpredictability cost for Markov observers as a function of the elements of the parameter vector . For an observer of Markov order 1 ( ), the vector has two elements, which are the probability of observing A at a given trial conditional on the preceding outcome being A, and the probability of observing A at a given trial conditional on the preceding outcome being B, which we denote by and , respectively. The Shannon entropy, , implied by the vector , is the average of the conditional entropies implied by each conditional probability, i.e., where and are the unconditional probabilities of observing A and B, respectively (see below), and where is A or B. The unconditional probabilities and are functions of the conditional probabilities and . Indeed, at trial , the marginal probability of the event , , is a weighted average of the probabilities of this event conditional on the preceding stimulus, , as given by the law of total probability: i.e. Solving for , we find: The entropy implied by the vector is obtained by substituting these quantities in Equation 25 . Similarly, for and 3, the elements of the vector are the parameters and , respectively, where , and where is the probability of observing A at a given trial conditional on the two preceding outcomes being the sequence ‘ ’, and is the probability of observing A at a given trial conditional on the three preceding outcomes being the sequence ‘ ’. 
The Shannon entropy, , implied by the vector , is here also the average of the conditional entropies implied by each conditional probability, as where and are the unconditional probabilities of observing the sequence ‘ ’, and of observing the sequence ‘ ’, respectively. These unconditional probabilities satisfy a system of linear equations whose coefficients are given by the conditional probabilities. For instance, for , we have the relation i.e., The system of linear equations can be written as The solution is the eigenvector corresponding to the eigenvalue equal to 1 of the matrix in the equation above, with the additional constraint that the unconditional probabilities must sum to 1, i.e., . We find: For , we find the relations: Together with the normalization constraint , these relations allow us to determine the eight unconditional probabilities , and thus the expression of the Shannon entropy.
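For the order-1 observer, these expressions reduce to a few lines of code (a sketch with our own names; entropy is taken in nats, and we assume a non-degenerate chain so that the stationary distribution exists):

```python
import numpy as np

def h(p):
    """Entropy of a Bernoulli(p) outcome, in nats."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def markov1_entropy(p_a_given_a, p_a_given_b):
    """Average conditional entropy of an order-1 Markov chain over {A, B}.
    Solving pA = pA*p(A|A) + (1 - pA)*p(A|B), from the law of total
    probability, gives the stationary probability of A."""
    p_stat_a = p_a_given_b / (1.0 - p_a_given_a + p_a_given_b)
    return p_stat_a * h(p_a_given_a) + (1.0 - p_stat_a) * h(p_a_given_b)
```

When the two conditional probabilities coincide, the chain reduces to a Bernoulli process and the expression reduces to the Bernoulli entropy; for orders 2 and 3, the stationary probabilities are obtained analogously, as the normalized unit eigenvector of the transition matrix.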
Results Subjects’ predictions of a stimulus increase with the stimulus probability In a computer-based task, subjects are asked to predict which of two rods the lightning will strike. On each trial, the subject first selects by a key press the left- or right-hand-side rod presented on screen. A lightning symbol (which is here the stimulus) then randomly strikes either of the two rods. The trial is a success if the lightning strikes the rod selected by the subject ( Figure 1a ). The location of the lightning strike (left or right) is a Bernoulli random variable whose parameter (the stimulus generative probability) we manipulate across blocks of 200 trials: in each block, is a multiple of 0.05 chosen between 0.05 and 0.95. Changes of block are explicitly signaled to the subjects: each block is presented as a different town exposed to lightning strikes. The subjects are not told that the locations of the strikes are Bernoulli-distributed (in fact no information is given to them regarding how the locations are determined). Moreover, in order to capture the ‘stationary’ behavior of subjects, which presumably prevails after ample exposure to the stimulus, each block is preceded by 200 passive trials in which the stimuli (sampled with the probability chosen for the block) are successively shown with no action from the subject ( Figure 1b ); this is presented as a ‘useful track record’ of lightning strikes in the current town. (To verify the stationarity of subjects’ behavior, we compare their responses in the first and second halves of the 200 trials in which they are asked to make predictions. In most cases we find no significant differences. See Appendix.) We provide further details on the task in Methods. The behavior of subjects varies with the stimulus generative probability, . In our analyses, we are interested in how the subjects’ predictions of an event (left or right strike) vary with the probability of this event, regardless of its nature (left or right). 
Thus, for instance, we would like to pool together the trials in which a subject makes a rightward prediction when the probability of a rightward strike is 0.7, and the trials in which a subject makes a leftward prediction when the probability of a leftward strike is also 0.7. Therefore, throughout the paper, we do not discuss whether subjects predict ‘right’ or ‘left’; instead, we discuss whether they predict the event ‘A’ or the complementary event ‘B’: in different blocks of trials, A (and similarly B) may refer to different locations; but importantly, B always corresponds to the location opposite to A, and denotes the probability of A (thus B has probability ). This allows us, given a probability , to pool together the responses obtained in blocks of trials in which one of the two locations has probability . One advantage of this pooling is that it reduces the noise in the data. Looking at the unpooled data, however, does not change our conclusions; see Appendix. Turning to the behavior of subjects, we denote by the proportion of trials in which a subject predicts the event A. In the equiprobable condition ( ), the subjects predict either side on about half the trials ( , subjects pooled; standard error of the mean (sem): 0.008; p-value of t-test of equality with 0.5: 0.59). In the non-equiprobable conditions, the optimal behavior is to predict A on none of the trials ( ) if , or on all trials ( ) if . The proportion of predictions A adopted by the subjects also increases as a function of the stimulus generative probability (Pearson correlation coefficient between and , subjects pooled: 0.97; p-value: 3.3e-6; correlation between the ‘logits’, : 0.994; p-value: 5.7e-9), but not as steeply: it lies between the stimulus generative probability , and the optimal response 0 (if ) or 1 (if ; Figure 2a ). 
First-order sequential effects: attractive influence of the most recent stimulus on subjects’ predictions The sequences presented to subjects correspond to independent, Bernoulli-distributed random events. Having shown that the subjects’ predictions follow (in a non-optimal fashion) the stimulus generative probability, we now test whether they also exhibit the independence of consecutive trials featured by the Bernoulli process. Under this hypothesis and in the stationary regime, the proportion of predictions A conditional on the preceding stimulus being A, , should be no different from the proportion of predictions A conditional on the preceding stimulus being B, . (Here and below, denotes the proportion of predictions X conditional on the preceding observation being Y, and not on the preceding response being Y. For the possibility that subjects’ responses depend on the preceding response, see Methods.) In other words, conditioning on the preceding stimulus should have no effect. In subjects’ responses, however, these two conditional proportions are markedly different for all stimulus generative probabilities (Fisher exact test, subjects pooled: all p-values < 1e-10; Figure 2a ). Both quantities increase as a function of the stimulus generative probability, but the proportions of predictions A conditional on an A are consistently greater than the proportions of predictions A conditional on a B, i.e., ( Figure 2b ). (We note that because the stimulus is either A or B, it follows that, symmetrically, the proportions of predictions B conditional on a B are consistently greater than the proportions of predictions B conditional on an A.) In other words, the preceding stimulus has an ‘attractive’ sequential effect. In addition, this attractive sequential effect seems to be stronger for values of the stimulus generative probability closer to the equiprobable case (p = 0.5), and to decrease for more extreme values ( closer to 0 or to 1; Figure 2b ). 
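The conditional proportions underlying this analysis can be computed directly from paired sequences of stimuli and responses (a sketch with our own names; note that it conditions on the preceding stimulus, not the preceding response):

```python
import numpy as np

def conditional_prediction_props(stimuli, responses):
    """Proportions of predictions 'A' (coded 1) conditional on the preceding
    stimulus being A or B. Returns (p(pred A | prev A), p(pred A | prev B))."""
    prev = np.asarray(stimuli[:-1])    # stimulus on the preceding trial
    resp = np.asarray(responses[1:])   # response on the current trial
    return resp[prev == 1].mean(), resp[prev == 0].mean()
```

An attractive effect of the preceding stimulus shows up as the first proportion exceeding the second.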
The results in Figure 2 are obtained by pooling together the responses of the subjects. Results derived from an across-subjects analysis are very similar; see Appendix. A framework of costly inference The attractive effect of the preceding stimulus on subjects’ responses suggests that the subjects have not correctly inferred the Bernoulli statistics of the process generating the stimuli. We investigate the hypothesis that their ability to infer the underlying statistics of the stimuli is hampered by cognitive constraints. We assume that these constraints can be understood as a cost, bearing on the representation, by the brain, of the subject’s beliefs about the statistics. Specifically, we derive an array of models from a framework of inference under costly posteriors ( Prat-Carrabin et al., 2021a ), which we now present. We consider a model subject who is presented on each trial with a stimulus (where 0 and 1 encode for B and A, respectively) and who uses the sequence of stimuli to infer the stimulus statistics, over which she holds the belief distribution . A Bayesian observer equipped with this belief and observing a new observation would obtain its updated belief through Bayes’ rule. However, a cognitive cost hinders our model subject’s ability to represent probability distributions . Thus, she approximates the posterior through another distribution that minimizes a loss function defined as where is a measure of distance between two probability distributions, and is a coefficient specifying the relative weight of the cost. (We are not proposing that subjects actively minimize this quantity, but rather that the brain’s inference process is an effective solution to this optimization problem.) Below, we use the Kullback-Leibler divergence for the distance (i.e. ). 
If , the solution to this minimization problem is the Bayesian posterior; if , the cost distorts the Bayesian solution in ways that depend on the form of the cost borne by the subject (we detail further below the two kinds of costs we investigate). In our framework, the subject assumes that the preceding stimuli ( with ) and a vector of parameters jointly determine the distribution of the stimulus at trial , . Although in our task the stimuli are Bernoulli-distributed (thus they do not depend on preceding stimuli) and a single parameter determines the probability of the outcomes (the stimulus generative probability), the subject may admit the possibility that more complex mechanisms govern the statistics of the stimuli, for example, transition probabilities between consecutive stimuli. Therefore, the vector may contain more than one parameter and the number of preceding stimuli assumed to influence the probability of the following stimulus, which we call the ‘Markov order’, may be greater than 0. Below, we call ‘Bernoulli observer’ any model subject who assumes that the stimuli are Bernoulli-distributed ( ); in this case, the vector consists of a single parameter that determines the probability of observing A, which we also denote by for the sake of concision. The bias and variability in the inference of the Bernoulli observer are studied in Prat-Carrabin et al., 2021a . We call ‘Markov observer’ any model subject who posits that the probability of the stimulus depends on the preceding stimuli ( ). In this case, the vector contains the conditional probabilities of observing A after observing each possible sequence of stimuli. For instance, with the vector is the pair of parameters denoting the probabilities of observing a stimulus A after observing, respectively, a stimulus A and a stimulus B. 
In the absence of a cost, the belief over the parameter(s) eventually converges towards the parameter vector that is consistent with the generative Bernoulli statistics governing the stimulus (except if the prior precludes this parameter vector). Below, we assume a uniform prior. To understand how the costs distort the inference process, it is useful to have in mind the solution to the ‘unconstrained’ inference problem (with ), i.e., the Bayesian posterior, which we denote by . In the case of a Bernoulli observer ( ), after trials, the Bayesian posterior is a Beta distribution, where is the number of stimuli observed up to trial , that is, , and . As more evidence is accumulated, the Bayesian posterior gradually narrows and converges towards the value of the stimulus generative probability ( Figure 3c and d , grey lines). The ways in which the Bayesian posterior is distorted, in our models, depend on the nature of the cost that weighs on the inference process. Although many assumptions could be made about the kind of constraint that hinders human inference, and about the cost it would entail in our framework, here we examine two costs that stem from two possible principles: that the cost is a function of the beliefs held by the subject, or that it is a function of the environment that the subject is inferring. We detail, below, these two costs. Precision cost A first hypothesis about the inference process of subjects is that the brain mobilizes resources to represent probability distributions, and that more ‘precise’ distributions require more resources. We write the cost associated with a distribution, , as the negative of its entropy, which is a measure of the amount of certainty in the distribution. Wider (less concentrated) distributions provide less information about the probability parameter and are thus less costly than narrower (more concentrated) distributions ( Figure 3b ). As an extreme case, the uniform distribution is the least costly. 
With this cost, the loss function ( Equation 1 ) is minimized by the distribution equal to the product of the prior and the likelihood, raised to the exponent , and normalized, i.e., Since is strictly positive, the exponent is positive and lower than 1. As a result, the solution ‘flattens’ the Bayesian posterior, and in the extreme case of an unbounded cost ( ) the posterior is the uniform distribution. Furthermore, in the expression of our model subject’s posterior, the likelihood is raised after trials to the exponent ; it thus decays to zero as the number of new stimuli increases. One can interpret this effect as gradually forgetting past observations. Specifically, we recover the predictions of leaky-integration models, in which remote patterns in the sequence of stimuli are discounted through an exponential filter ( Yu and Cohen, 2008 ; Meyniel et al., 2016 ); here, we do not posit the gradual forgetting of remote observations, but instead we derive it as an optimal solution to a problem of constrained inference. We illustrate leaky integration in the case of a Bernoulli observer ( ): in this case, the posterior after trials, , is a Beta distribution, where and are exponentially-filtered counts of the number of stimuli A and B observed up to trial , i.e., In other words, the solution to the constrained inference problem, with the precision cost, is similar to the Bayesian posterior ( Equation 2 ), but with counts of the two stimuli that gradually ‘forget’ remote observations (in the absence of a cost, that is, , we have and , and thus we recover the Bayesian posterior). As a result, these counts fluctuate with the recent history of the stimuli. Consequently, the posterior is dominated by the recent stimuli: it does not converge, but instead fluctuates with the recent stimulus history ( Figure 3c and d , purple lines; compare with the green and gray lines). 
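The qualitative contrast described here, a converging Bayesian posterior versus a fluctuating precision-cost posterior, can be reproduced with a short simulation (a sketch: the retention factor `gamma` and the exact filtering convention are our assumptions):

```python
import numpy as np

def posterior_mean_trace(stimuli, gamma):
    """Running expected probability of A under a Beta posterior with
    exponentially filtered counts (gamma = 1: no forgetting, i.e., Bayesian)."""
    nA = nB = 0.0
    trace = []
    for x in stimuli:
        nA = gamma * nA + x        # leaky count of stimuli A
        nB = gamma * nB + (1 - x)  # leaky count of stimuli B
        trace.append((1.0 + nA) / (2.0 + nA + nB))
    return np.array(trace)

rng = np.random.default_rng(0)
stimuli = (rng.random(1000) < 0.7).astype(int)
bayes = posterior_mean_trace(stimuli, gamma=1.0)
leaky = posterior_mean_trace(stimuli, gamma=0.9)
# the Bayesian estimate settles near 0.7; the leaky one keeps fluctuating
```

With gamma below 1, the effective evidence window is finite, so the posterior mean keeps tracking the recent stimulus history instead of converging.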
Hence, this model implies predictions about subsequent stimuli that depend on the stimulus history, i.e., it predicts sequential effects. Unpredictability cost A different hypothesis is that the subjects favor, in their inference, parameter vectors that correspond to more predictable outcomes. We quantify the outcome unpredictability by the Shannon entropy ( Shannon, 1948 ) of the outcome implied by the vector of parameters , which we denote by . (In the Bernoulli-observer case, ; for the Markov-observer cases, see Methods.) The cost associated with the distribution is the expectation of this quantity averaged over beliefs, i.e., which we call the ‘unpredictability cost’. For a Bernoulli observer, a posterior concentrated on extreme values of the Bernoulli parameter (toward 0 or 1), thus representing more predictable environments, comes with a lower cost than a posterior concentrated on values of the Bernoulli parameter close to 0.5, which correspond to the most unpredictable environments ( Figure 3a ). After trials, the loss function ( Equation 1 ) under this cost is minimized by the posterior i.e., the product of the Bayesian posterior, which narrows with around the stimulus generative probability, and of a function that is larger for values of that imply less entropic (i.e. more predictable) environments (see Methods). In short, with the unpredictability cost the model subject’s posterior is ‘pushed’ towards less entropic values of . In the Bernoulli case ( ), the posterior after stimuli has a global maximum, , that depends on the proportion of stimuli A observed up to trial . As the number of presented stimuli grows, the posterior becomes concentrated around this maximum. The proportion naturally converges to the stimulus generative probability, , thus our subject’s inference converges towards the value which is different from the true value , in the non-equiprobable case ( ). 
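The direction of this bias can be illustrated on a grid in the Bernoulli case. The sketch below applies a single entropy-penalty factor to the likelihood rather than the full trial-by-trial accumulation of Equation 8, so it is a simplification (the penalty weight `lam` and all names are ours):

```python
import numpy as np

def penalized_posterior(n_a, n_b, lam):
    """Unnormalized posterior proportional to p^nA (1-p)^nB exp(-lam*H(p)),
    on a grid: the Bernoulli likelihood times an entropy penalty."""
    p = np.linspace(0.001, 0.999, 999)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    log_post = n_a * np.log(p) + n_b * np.log(1.0 - p) - lam * entropy
    post = np.exp(log_post - log_post.max())  # subtract max for stability
    return p, post / post.sum()

p, post0 = penalized_posterior(70, 30, lam=0.0)   # no cost
p, post = penalized_posterior(70, 30, lam=20.0)   # entropy-penalized
```

With `lam` equal to 0, the peak sits at the empirical frequency 0.7; with a positive `lam`, it moves toward 1, i.e., toward a more predictable environment, as described above.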
The equiprobable case ( ) is singular, in that with a weak cost ( ) the inferred probability is unbiased ( ), while with a strong cost ( ) the inferred probability does not converge but instead alternates between two values above and below 0.5; see Prat-Carrabin et al., 2021a . In other words, except in the equiprobable case, the inference converges but it is biased, i.e., the posterior peaks at an incorrect value of the stimulus generative probability ( Figure 3c and d , green lines). This value is closer to the extremes (0 and 1) than the stimulus generative probability, that is, it implies an environment more predictable than the actual one ( Figure 3d ). In the case of a Markov observer ( ), the posterior also converges to a vector of parameters which implies not only a bias but also that the conditional probabilities of a stimulus A (conditioned on different stimulus histories) are not equal. The prediction of the next stimulus being A on a given trial depends on whether the preceding stimulus was A or B: this model therefore predicts sequential effects. We further examine below the behavior of this model in the cases of a Bernoulli observer and of different Markov observers. We refer the reader interested in more details on the Markov models, including their mathematical derivations, to the Methods section. In short, with the unpredictability-cost models, when , the inference process converges to an asymptotic posterior which does not itself depend on the history of the stimulus, but that is biased ( Figure 3c, d , green lines). In particular, for Markov observers ( ), the asymptotic posterior corresponds to an erroneous belief about the dependency of the stimulus on the recent stimulus history, which results in sequential effects in behavior. Overview of the inference models Although the two families of models derived from the two costs both potentially generate sequential effects, they do so by giving rise to qualitatively different inference processes. 
Under the unpredictability cost, the inference converges to a posterior that, in the Bernoulli case ( ), implies a biased estimate of the stimulus generative probability ( Figure 3d , green lines), while in the Markov case ( ) it implies the belief that there are serial dependencies in the stimuli: predictions therefore depend on the recent stimulus history. By contrast, the precision cost prevents beliefs from converging ( Figure 3c , purple lines). As a result, the subject’s predictions vary with the recent stimulus history ( Figure 3d ). This inference process amounts to an exponential discount of remote observations, or equivalently, to the overweighting of recent observations ( Equation 6 ). To investigate in more detail the sequential effects that these two costs produce, we implement two families of inference models derived from the two costs. Each model is characterized by the type of cost (unpredictability cost or precision cost), and by the assumed Markov order ( ): we examine the case of a Bernoulli observer ( ) and three cases of Markov observers (with 1, 2, and 3). We thus obtain models of inference. Each of these models has one parameter controlling the weight of the cost. (We also examine a ‘hybrid’ model that combines the two costs; see below.) Response-selection strategy We assume that the subject’s response on a given trial depends on the inferred posterior according to a generalization of ‘probability matching’ implemented in other studies ( Battaglia et al., 2011 ; Yu and Huang, 2014 ; Prat-Carrabin et al., 2021b ). In this response-selection strategy, the subject predicts A with the probability , where is the expected probability of a stimulus A derived from the posterior, i.e., . 
The single parameter controls the randomness of the response: with the subject predicts A and B with equal probability; with the response-selection strategy corresponds to probability matching, that is, the subject predicts A with probability ; and as increases toward infinity the choices become optimal, that is, the subject predicts A if the expected probability of observing a stimulus A, , is greater than 0.5, and predicts B if it is lower than 0.5 (if the subject chooses A or B with equal probability). In our investigations, we also implement several other response-selection strategies, including one in which subjects have a propensity to repeat their preceding response, or conversely, to alternate; these analyses do not change our conclusions (see Methods). Model fitting favors Markov-observer models Each of our eight models has two parameters: the factor weighting the cost, , and the exponent of the generalized probability-matching, . We fit the parameters of each model to the responses of each subject by maximizing their likelihoods. We find that 60% of subjects are best fitted by one of the unpredictability-cost models, while 40% are best fitted by one of the precision-cost models. When pooling the two types of cost, 65% of subjects are best fitted by a Markov-observer model. We implement a ‘Bayesian model selection’ procedure ( Stephan et al., 2009 ), which takes into account, for each subject, the likelihoods of all the models (and not only the maximum among them) in order to obtain a Bayesian posterior over the distribution of models in the general population (see Methods). The derived expected probability of unpredictability-cost models is 57% (and 43% for precision-cost models), with an exceedance probability (i.e., the probability that unpredictability-cost models are more frequent in the general population) of 78%. 
The expected probability of Markov-observer models, regardless of the cost used in the model, is 70% (and 30% for Bernoulli-observer models), with an exceedance probability (i.e. the probability that Markov-observer models are more frequent in the general population) of 98%. These results indicate that the responses of subjects are generally consistent with a Markov-observer model, although the stimuli used in the experiment are Bernoulli-distributed. As for the unpredictability-cost and the precision-cost families of models, Bayesian model selection does not provide decisive evidence in favor of either model, indicating that they both capture some aspects of the responses of the subjects. Below, we examine more closely the behaviors of the models, and point to qualitative differences between the predictions resulting from each model family. Before turning to these results, we validate the robustness of our model-fitting procedure with several additional analyses. First, we estimate a confusion matrix to examine the possibility that the model-fitting procedure could misidentify the models which generated test sets of responses. We find that the best-fitting model corresponds to the true model in at least 70% of simulations (the chance level is 12.5%, i.e., 1/8 models), and in fact in more than 90% for the majority of models (see Appendix). Second, we seek to verify whether the best-fitting cost factor, , that we obtain for each subject is consistent across the range of probabilities tested. Specifically, we fit the models separately to the responses obtained in the blocks of trials whose stimulus generative probability was ‘medium’ (between 0.3 and 0.7, inclusive) on the one hand, and to the responses obtained when the probability was ‘extreme’ (below 0.3, and above 0.7) on the other hand; and we compare the values of the best-fitting cost factors in these two cases.
More precisely, for the precision-cost family, we look at the inverse of the decay time, , which is the inverse of the characteristic time over which the model subject ‘forgets’ past observations. With both families of models, we find that on a logarithmic scale the parameters in the medium- and extreme-probabilities cases are significantly correlated across subjects (Pearson’s , precision-cost models: 0.75, p-value: 1e-4; unpredictability-cost models: , p-value: 0.036). In other words, if a subject is best fitted by a large cost factor in medium-probabilities trials, he or she is likely also to be best fitted by a large cost factor in extreme-probabilities trials. This indicates that our models capture idiosyncratic features of subjects that generalize across conditions instead of varying with the stimulus probability (see Appendix). Third, as mentioned above, we examine a variant of the response-selection strategy in which the subject sometimes repeats the preceding response, or conversely alternates and chooses the other response, instead of responding based on the inferred probability of the next stimulus. This propensity to repeat or alternate does not change the best-fitting inference model of most subjects, and the best-fitting values of the parameters and are very stable whether or not we allow for this propensity. This analysis supports the results we present here, and speaks to the robustness of the model-fitting procedure (see Methods). Finally, as the unpredictability-cost family and the precision-cost family of models both seem to capture the responses of a sizable share of the subjects, one might assume that the behavior of most subjects actually falls ‘somewhere in between’, and would be best accounted for by a hybrid model combining the two costs.
In our investigations, we have implemented such a model, whereby the subject’s approximate posterior results from the minimization of a loss function that includes both a precision cost, with weight , and an unpredictability cost, with weight (and the response-selection strategy is the generalized probability matching, with parameter ). We do not find that most subjects’ responses are better fitted (as measured by the Bayesian Information Criterion; Schwarz, 1978) by a combination of the two costs: instead, for more than two thirds of subjects, the best-fitting model features just one cost (see Methods). In other words, the two costs seem to capture different aspects of the behavior that are predominant in different subpopulations. Below, we examine the behavioral patterns resulting from each cost type, in comparison with the behavior of the subjects.

Models of costly inference reproduce the attractive effect of the most recent stimulus

We now examine the behavioral patterns resulting from the models. All the models we consider predict that the proportion of predictions A, , is a smooth, increasing function of the stimulus generative probability (when and ; Figure 4a–d, grey lines); thus we focus, here, on the ability of the models to reproduce the subjects’ sequential effects. With the unpredictability-cost model of a Bernoulli observer ( ), the belief of the model subject, as mentioned above, asymptotically converges in non-equiprobable cases to an erroneous value of the stimulus generative probability (Figure 3d, green lines). After a large number of observations (such as the 200 ‘passive’ trials, in our task), the sensitivity of the belief to new observations becomes almost imperceptible; as a result, this model predicts practically no sequential effects (Figure 4b), that is, . With the unpredictability-cost model of a Markov observer (e.g.
), the belief of the model subject also converges, but to a vector of parameters that implies a sequential dependency in the stimulus, that is, , resulting in sequential effects in predictions, that is, . The parameter vector yields a more predictable (less entropic) environment if the probability conditional on the more frequent outcome (say, A) is less entropic than the probability conditional on the less frequent outcome (B). This is the case if the former is greater than the latter, resulting in the inequality , that is, in sequential effects of the attractive kind ( Figure 4d ). (The case in which B is the more frequent outcome results in the inequality , i.e., , i.e., the same, attractive sequential effects.) Turning to the precision-cost models, we have noted that in these models the posterior fluctuates with the recent history of the stimuli ( Figure 3c ): as a result, sequential effects are obtained, even with a Bernoulli observer ( ; Figure 4a ). The most recent stimulus has the largest weight in the exponentially filtered counts that determine the posterior ( Equation 6 ), thus the model subject’s prediction is biased towards the last stimulus, that is, the sequential effect is attractive ( ). With the traditional probability-matching response-selection strategy (i.e. ), the strength of the attractive effect is the same across all stimulus generative probabilities (i.e. the difference is constant; Figure 4a , dotted lines and light-red dots). With the generalized probability-matching response-selection strategy, if , proportions below and above 0.5 are brought closer to the extremes (0 and 1, respectively), resulting in larger sequential effects for values of the stimulus generative probability closer to 0.5 ( Figure 4a , solid lines and red dots; the model is simulated with , a value representative of the subjects’ best-fitting values for this parameter). 
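The exponentially filtered counts underlying the precision-cost observers can be sketched as follows; the decay parameter, the Laplace-style mapping from counts to a predictive probability, and all names are our illustrative assumptions rather than the paper’s exact formulation:

```python
def predict_next_bernoulli(seq, gamma):
    """Precision-cost-style Bernoulli observer: exponentially filtered counts
    of A (coded 1) and B (coded 0), with decay factor gamma in (0, 1),
    mapped to P(next = A) by a Laplace-style rule."""
    n_a = n_b = 0.0
    for x in seq:
        n_a = gamma * n_a + (x == 1)
        n_b = gamma * n_b + (x == 0)
    return (n_a + 1.0) / (n_a + n_b + 2.0)

def predict_next_markov1(seq, gamma):
    """Markov-order-1 variant: filtered counts of the four pairs of stimuli;
    the prediction conditions on the last observed stimulus."""
    counts = {(a, b): 0.0 for a in (0, 1) for b in (0, 1)}
    for prev, cur in zip(seq, seq[1:]):
        for pair in counts:
            counts[pair] *= gamma  # all counts decay at each step
        counts[(prev, cur)] += 1.0
    last = seq[-1]
    n_a, n_b = counts[(last, 1)], counts[(last, 0)]
    return (n_a + 1.0) / (n_a + n_b + 2.0)
```

In this sketch the Bernoulli observer’s prediction is pulled towards the last stimulus (an attractive effect), and in the Markov variant the counts conditional on a rare stimulus stay close to zero, so the prediction after that stimulus stays close to 0.5, consistent with the behaviors described in the text.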
We also find stronger sequential effects closer to the equiprobable case in subjects’ data ( Figure 2b ). The precision-cost model of a Markov observer ( ) also predicts attractive sequential effects ( Figure 4c ). While the behavior of the Bernoulli observer (with a precision cost) is determined by two exponentially-filtered counts of the two possible stimuli ( Equation 6 ), that of the Markov observer with depends on four exponentially filtered counts of the four possible pairs of stimuli. After observing a stimulus B, the belief that the following stimulus should be A or B is determined by the exponentially filtered counts of the pairs BA and BB. If is large, i.e., if the stimulus B is infrequent, then the BA and BB pairs are also infrequent and the corresponding counts are close to zero: the model subject thus behaves as if only very little evidence had been observed about the transitions B to A and B to B in this case, resulting in a proportion of predictions A conditional on a preceding B, , close to 0.5 ( Figure 4c , orange line). Consequently, the sequential effects are stronger for values of the stimulus generative probabilities closer to the extreme ( Figure 4c , red dots). Both families of costs are thus able to produce attractive sequential effects, albeit with some qualitative differences. (In Figure 4a–d we show the behaviors resulting from the two costs for a Bernoulli observer and a Markov observer of order ; the Markov observers of higher order exhibit qualitatively similar behaviors; see Methods.) As the model fitting indicates that different groups of subjects are best fitted by models belonging to the two families, we examine separately the behaviors of the subjects whose responses are best fitted by each of the two costs ( Figure 4e and f ), in comparison with the behaviors of the corresponding best-fitting models ( Figure 4g and h ). This provides a finer understanding of the behavior of subjects than the group average shown in Figure 2 . 
For the subjects best fitted by precision-cost models, the proportion of predictions A, , when the stimulus generative probability is close to 0.5, is a less steep function of this probability than for the subjects best fitted by unpredictability-cost models (Figure 4e and f, grey lines); furthermore, their sequential effects are larger (as measured by the difference ), and do not depend much on the stimulus generative probability (Figure 4e and f, red dots). The corresponding models reproduce the behavioral patterns of the subjects that they best fit (Figure 4g and h). Each family of models seems to capture specific behaviors exhibited by the subjects: when fitting the unpredictability-cost models to the responses of the subjects that are best fitted by precision-cost models, and conversely when fitting the precision-cost models to the responses of the subjects that are best fitted by unpredictability-cost models, the models do not reproduce well the subjects’ behavioral patterns (Figure 4i and j). The precision-cost models, however, seem slightly better than the unpredictability-cost models at capturing the behavior of the subjects that they do not best fit (Figure 4, compare panel j to panel f, and panel i to panel e). Substantiating this observation, the examination of the distributions of the models’ BICs across subjects shows that when fitting the models onto the subjects that they do not best fit, the precision-cost models fare better than the unpredictability-cost models (see Appendix).

Beyond the most recent stimulus: patterns of higher-order sequential effects

Notwithstanding the quantitative differences just presented, both families of models yield qualitatively similar attractive sequential effects: the model subjects’ predictions are biased towards the preceding stimulus. Does this pattern also apply to the longer history of the stimulus, i.e., do more distant trials also influence the model subjects’ predictions?
To investigate this question, we examine the difference between the proportion of predictions A after observing a sequence of length that starts with A and the proportion of predictions A after the same sequence but starting with B, i.e., , where is a sequence of length , and and denote the same sequence preceded by A and by B. This quantity enables us to isolate the influence of the -to-last stimulus on the current prediction. If the difference is positive, the effect is ‘attractive’; if it is negative, the effect is ‘repulsive’ (in this latter case, the presentation of an A decreases the probability that the subject predicts A in a later trial, as compared to the presentation of a B); and if the difference is zero there is no sequential effect stemming from the -to-last stimulus. The case corresponds to the immediately preceding stimulus, whose effect we have shown to be attractive, i.e., , in the responses both of the best-fitting models and of the subjects (Figures 2b, 4g and h). We investigate the effect of the -to-last stimulus on the behavior of the two families of models, with , , and . We present here the main results of this investigation; we refer the reader to Methods for a more detailed analysis. With unpredictability-cost models of Markov order , there are non-vanishing sequential effects stemming from the -to-last stimulus only if the Markov order is greater than or equal to the distance from this stimulus to the current trial, i.e., if . In this case, the sequential effects are attractive (Figure 5). With precision-cost models, the -to-last stimuli yield non-vanishing sequential effects regardless of the Markov order, . With , the effect is attractive, i.e., . With (second-to-last stimulus), the effect is also attractive, i.e., in the case of the pair of sequences AA and BA, (Figure 5a). By symmetry, the difference is also positive for the other pair of relevant sequences, AB and BB (e.g.
we note that , and that when the probability of A is is equal to when the probability of A is . We detail in Methods such relations between the proportions of predictions A or B in different situations. These relations result in the symmetries of Figure 2 , for the sequential effect of the last stimulus, while for higher-order sequential effects they imply that we do not need to show, in Figure 5 , the effects following all possible past sequences of two or three stimuli, as the ones we do not show are readily derived from the ones we do.) As for the third-to-last stimulus ( ), it can be followed by four different sequences of length two, but we only need to examine two of these four, for the reasons just presented. We find that for the precision-cost models, with all the Markov orders we examine (from 0 to 3), the probability of predicting A after observing the sequence AAA is greater than that after observing the sequence BAA, i.e., , that is, there is an attractive sequential effect of the third-to-last stimulus if the sequence following it is AA (and, by symmetry, if it is BB; Figure 5b ). So far, thus, we have found only attractive effects. However, the results are less straightforward when the third-to-last stimulus is followed by the sequence BA. In this case, for a Bernoulli observer ( ), the effect is also attractive: ( Figure 5c , white circles). With Markov observers ( ), over a range of stimulus generative probability , the effect is repulsive: , that is, the presentation of an A decreases the probability that the model subject predicts A three trials later, as compared to the presentation of a B ( Figure 5c , filled circles). The occurrence of the repulsive effect in this particular case is a distinctive trait of the precision-cost models of Markov observers ( ); we do not obtain any repulsive effect with any of the unpredictability-cost models, nor with the precision-cost model of a Bernoulli observer ( ). 
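The sequential-effect statistic examined here (the difference in the proportion of predictions A between stimulus histories that differ only in their first stimulus) can be computed from trial data along the following lines; the coding of stimuli, names, and bookkeeping are illustrative assumptions:

```python
def nth_to_last_effect(stimuli, predictions, n, following=()):
    """Difference between the proportion of predictions A (coded 1) when the
    n-to-last stimulus was A and when it was B (coded 0), conditioning on the
    n-1 intervening stimuli matching `following`.
    Positive -> attractive effect; negative -> repulsive effect."""
    assert len(following) == n - 1
    proportions = {}
    for first in (1, 0):
        pattern = (first,) + tuple(following)
        picks = [predictions[t] for t in range(n, len(stimuli))
                 if tuple(stimuli[t - n:t]) == pattern]
        proportions[first] = sum(picks) / len(picks)
    return proportions[1] - proportions[0]
```

For instance, with predictions that simply copy the preceding stimulus, the statistic equals 1 for the last stimulus (pure attraction) and 0 for the second-to-last stimulus, since such predictions ignore everything but the last observation.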
Subjects’ predictions exhibit higher-order repulsive effects

We now examine the sequential effects in subjects’ responses, beyond the attractive effect of the preceding stimulus ( ; discussed above). With (second-to-last stimulus), for the majority of the 19 stimulus generative probabilities, we find attractive sequential effects: the difference is significantly positive (Figure 6a; p-values <0.01 for 11 stimulus generative probabilities, <0.05 for 13 probabilities; subjects pooled). With (third-to-last stimulus), we also find significant attractive sequential effects in subjects’ responses for some of the stimulus generative probabilities, when the third-to-last stimulus is followed by the sequence AA (Figure 6b; p-values <0.01 for four probabilities, <0.05 for seven probabilities). When it is instead followed by the sequence BA, we find that for eight stimulus generative probabilities, all between 0.25 and 0.75, there is a significant repulsive sequential effect: (p-values <0.01 for six probabilities, <0.05 for eight probabilities; subjects pooled). Thus, in these cases, the occurrence of A as the third-to-last stimulus increases (in comparison with the occurrence of a B) the proportion of the opposite prediction, B. For the remaining stimulus generative probabilities, this difference is in most cases also negative, although not significantly different from zero (Figure 6c). (An across-subjects analysis yields similar results; see Supplementary Materials.) Figure 6d summarizes subjects’ sequential effects, and exhibits the attractive and repulsive sequential effects in their responses (compare solid and dotted lines). (In this tree-like representation, we show averages across the stimulus generative probabilities; a figure with the individual ‘trees’ for each probability is provided in the Appendix.)
The repulsive sequential effect of the third-to-last stimulus in subjects’ predictions only occurs when the third-to-last stimulus is A followed by the sequence BA. It is also only in this case that the repulsive effect appears with the precision-cost models of a Markov observer (while it never appears with the unpredictability-cost models). This qualitative difference suggests that the precision-cost models offer a better account of sequential effects in subjects. However, model-fitting onto the overall behavior presented above showed that a fraction of the subjects is better fitted by the unpredictability-cost models. We investigate, thus, the presence of a repulsive effect in the predictions of the subjects best fitted by the precision-cost models, and of those best fitted by the unpredictability-cost models. For the subjects best fitted by the precision-cost models, we find (expectedly) that there is a significant repulsive sequential effect of the third-to-last stimulus ( ; p-values <0.01 for two probabilities, <0.05 for four probabilities; subjects pooled; Figure 6e, left panel). For the subjects best fitted by the unpredictability-cost models (a family of models that does not predict any repulsive sequential effects), we also find, perhaps surprisingly, a significant repulsive effect of the third-to-last stimulus (p-values <0.01 for three probabilities, <0.05 for five probabilities; subjects pooled), which demonstrates the robustness of this effect (Figure 6e, right panel). Thus, in spite of the results of the model-selection procedure, some sequential effects in subjects’ predictions support only one of the two families of models.
Regardless of the model that best fits their overall predictions, the behavior of the subjects is consistent only with the precision-cost family of models with Markov order equal to or greater than 1, that is, with a model of inference of conditional probabilities hampered by a cognitive cost weighing on the precision of belief distributions.
Discussion

We investigated the hypothesis that sequential effects in human predictions result from cognitive constraints hindering the inference process carried out by the brain. We devised a framework of constrained inference, in which the model subject bears a cognitive cost when updating its belief distribution upon the arrival of new evidence: the larger the cost, the more the subject’s posterior differs from the Bayesian posterior. The models we derive from this framework make specific predictions. First, the proportion of forced-choice predictions for a given stimulus should increase with the stimulus generative probability. Second, most of these models predict sequential effects: predictions also depend on the recent stimulus history. Models with different types of cognitive cost result in different patterns of attractive and repulsive effects of the past few stimuli on predictions. To compare the predictions of constrained inference with human behavior, we asked subjects to predict each next outcome in sequences of binary stimuli. We manipulated the stimulus generative probability in blocks of trials, exploring exhaustively the probability range from 0.05 to 0.95 in increments of 0.05. We found that subjects’ predictions depend on both the stimulus generative probability and the recent stimulus history. Sequential effects exhibited both attractive and repulsive components, which were modulated by the stimulus generative probability. This behavior was qualitatively accounted for by a model of constrained inference in which the subject infers the transition probabilities underlying the sequences of stimuli and bears a cost that increases with the precision of the posterior distributions. Our study proposes a novel theoretical account of sequential effects in terms of optimal inference under cognitive constraints, and it uncovers the richness of human behavior over a wide range of stimulus generative probabilities.
The notion that human decisions can be understood as resulting from a constrained optimization has gained traction across several fields, including neuroscience, cognitive science, and economics. In neuroscience, a voluminous literature that started with Attneave, 1954 and Barlow, 1961 investigates the idea that perception maximizes the transmission of information, under the constraint of costly and limited neural resources ( Laughlin, 1981 ; Laughlin et al., 1998 ; Simoncelli and Olshausen, 2001 ); related theories of ‘efficient coding’ account for the bias and the variability of perception ( Ganguli and Simoncelli, 2016 ; Wei and Stocker, 2015 ; Wei and Stocker, 2017 ; Prat-Carrabin and Woodford, 2021c ). In cognitive science and economics, ‘bounded rationality’ is a precursory concept introduced in the 1950s by Herbert Simon, who defines it as “rational choice that takes into account the cognitive limitations of the decision maker — limitations of both knowledge and computational capacity” ( Simon, 1997 ). For Gigerenzer, these limitations promote the use of heuristics, which are ‘fast and frugal’ ways of reasoning, leading to biases and errors in humans and other animals ( Gigerenzer and Goldstein, 1996 ; Gigerenzer and Selten, 2002 ). A range of more recent approaches can be understood as attempts to specify formally the limitations in question, and the resulting trade-off. The ‘resource-rational analysis’ paradigm aims at a unified theoretical account that reconciles principles of rationality with realistic constraints about the resources available to the brain when it is carrying out computations ( Griffiths et al., 2015 ). In this approach, biases result from the constraints on resources, rather than from ‘simple heuristics’ (see Lieder and Griffiths, 2019 for an extensive review). 
For instance, in economics, theories of ‘rational inattention’ propose that economic agents optimally allocate resources (a limited amount of attention) to make decisions, thereby proposing new accounts of empirical findings in the economic literature (Sims, 2003; Woodford, 2009; Caplin et al., 2019; Gabaix, 2017; Azeredo da Silveira and Woodford, 2019; Azeredo da Silveira et al., 2020). Our study puts forward a ‘resource-rational’ account of sequential effects. Traditional accounts since the 1960s attribute these effects to a belief in sequential dependencies between successive outcomes (Edwards, 1961; Matthews and Sanders, 1984), potentially ‘acquired through life experience’ (Ayton and Fischer, 2004), and more generally to the incorrect models that people assume about the processes generating sequences of events (see Oskarsson et al., 2009 for a review; similar rationales have been proposed to account for suboptimal behavior in other contexts, for example in exploration-exploitation tasks; Navarro et al., 2016). This traditional account was formalized, in particular, by models in which subjects carry out a statistical inference about the sequence of stimuli presented to them, and this inference assumes that the parameters underlying the generating process are subject to changes (Yu and Cohen, 2008; Wilder et al., 2009; Zhang et al., 2014; Meyniel et al., 2016). In these models, sequential effects are thus understood as resulting from a rational adaptation to a changing world. Human subjects indeed dynamically adapt their learning rate when the environment changes (Payzan-LeNestour et al., 2013; Meyniel and Dehaene, 2017; Nassar et al., 2010), and they can even adapt their inference to the statistics of these changes (Behrens et al., 2007; Prat-Carrabin et al., 2021b). However, in our task and in many previous studies in which sequential effects have been reported, the underlying statistics are in fact not changing across trials.
The models just mentioned thus leave unexplained why subjects’ behavior, in these tasks, is not rationally adapted to the unchanging statistics of the stimulus. What underpins our main hypothesis is a different kind of rational adaptation: one, instead, to the ‘cognitive limitations of the decision maker’, which we assume hinder the inference carried out by the brain. We show that rational models of inference under a cost yield rich patterns of sequential effects. When the cost varies with the precision of the posterior (measured here by the negative of its entropy, Equation 3), the resulting optimal posterior is proportional to the product of the prior and the likelihood, each raised to an exponent (Equation 4). Many previous studies on biased belief updating have proposed models that adopt the same form, except with different exponents applied to the prior and to the likelihood (Grether, 1980; Matsumori et al., 2018; Benjamin, 2019). Here, with the precision cost, both quantities are raised to the same exponent, and we note that in this case the inference of the subject amounts to an exponentially decaying count of the patterns observed in the sequence of stimuli, which is sometimes called ‘leaky integration’ in the literature (Yu and Cohen, 2008; Wilder et al., 2009; Jones et al., 2013; Meyniel et al., 2016). The models mentioned above, which posit a belief in changing statistics, are indeed well approximated by models of leaky integration (Yu and Cohen, 2008; Meyniel et al., 2016), which shows that the exponential discount can have different origins. Meyniel et al., 2016 show that the precision-cost, Markov-observer model with (named the ‘local transition probability model’ in that study) accounts for a range of other findings, in addition to sequential effects, such as biases in the perception of randomness and patterns in the surprise signals recorded through EEG and fMRI.
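The equivalence noted here, between raising prior and likelihood to a common exponent and exponentially discounting past observations, can be checked numerically on a grid; the grid, decay factor, and function names below are illustrative assumptions:

```python
import numpy as np

THETA = np.linspace(0.001, 0.999, 999)  # grid over the Bernoulli parameter

def tempered_posterior(obs, beta):
    """Iterate posterior_t proportional to (posterior_{t-1} * likelihood_t)^beta,
    starting from a uniform prior (obs: 1 codes A, 0 codes B)."""
    log_post = np.zeros_like(THETA)
    for x in obs:
        log_like = np.log(THETA) if x == 1 else np.log(1.0 - THETA)
        log_post = beta * (log_post + log_like)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

def leaky_count_posterior(obs, beta):
    """Same posterior obtained from exponentially filtered counts of A and B."""
    n_a = n_b = 0.0
    for x in obs:
        n_a = beta * (n_a + (x == 1))
        n_b = beta * (n_b + (x == 0))
    log_post = n_a * np.log(THETA) + n_b * np.log(1.0 - THETA)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()
```

Both routes yield the same normalized posterior, which makes explicit how a cost on the precision of the posterior, in this tempered form, amounts to a leaky integration of the observed stimuli.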
Here we reinterpret these effects as resulting from an optimal inference subject to a cost, rather than from a suboptimal erroneous belief in the dynamics of the stimulus’ statistics. In our modeling approach, the minimization of a loss function (Equation 1) formalizes a trade-off between the distance to optimality of the inference, and the cognitive constraints under which it is carried out. We stress that our proposal is not that the brain actively solves this optimization problem online, but instead that it is endowed with an inference algorithm (whose origin remains to be elucidated) which is effectively a solution to the constrained optimization problem. By grounding the sequential effects in the optimal solution to a problem of constrained optimization, our approach opens avenues for exploring the origins of sequential effects, in the form of hypotheses about the nature of the constraint that hinders the inference carried out by the brain. With the precision cost, more precise posterior distributions are assumed to take a larger cognitive toll. The intuitive assumption that it is costly to be precise finds a more concrete realization in neural models of inference with probabilistic population codes: in these models, the precision of the posterior is proportional to the average activity of the population of neurons and to the number of neurons (Ma et al., 2006; Seung and Sompolinsky, 1993). More neural activity and more neurons arguably come with a metabolic cost, and thus more precise posteriors are more costly in these models. Imprecision in computations, moreover, has been shown to successfully account for decision variability and adaptive behavior in volatile environments (Findling et al., 2019; Findling et al., 2021). The unpredictability cost, which we introduce, yields models that also exhibit sequential effects (for Markov observers), and that fit several subjects better than the precision-cost models.
The unpredictability cost relies on a different hypothesis: that the cost of representing a distribution over different possible states of the world (here, different possible values of ) resides in the difficulty of representing these states. This could be the case, for instance, under the hypothesis that the brain runs stochastic simulations of the implied environments, as proposed in models of ‘intuitive physics’ (Battaglia et al., 2013) and in Kahneman and Tversky’s ‘simulation heuristics’ (Kahneman et al., 1982). More entropic environments imply more possible scenarios to simulate, giving rise, under this assumption, to higher costs. A different literature explores the hypothesis that the brain carries out a mental compression of sequences (Simon, 1972; Chekaf et al., 2016; Planton et al., 2021); entropy in this context is a measure of the degree of compressibility of a sequence (Planton et al., 2021), and thus, presumably, of its implied cost. As a result, the brain may prefer predictable environments over unpredictable ones. Human subjects indeed exhibit a preference for predictive information (Ogawa and Watanabe, 2011; Trapp et al., 2015), while unpredictable stimuli have been shown not only to increase anxiety-like behavior (Herry et al., 2007), but also to induce more neural activity (Herry et al., 2007; den Ouden et al., 2009; Alink et al., 2010), a presumably costly increase, which may result from the encoding of larger prediction errors (Herry et al., 2007; Schultz and Dickinson, 2000). We note that both costs (precision and unpredictability) can predict sequential effects, even though neither carries ex ante an explicit assumption that presupposes the existence of sequential effects. They both reproduce the attractive recency effect of the last stimulus exhibited by the subjects. They make quantitatively different predictions (Figure 4); we also find this diversity of behaviors in subjects.
The precision cost, as mentioned above, yields leaky-integration models which can be summarized by a simple algorithm in which the observed patterns are counted with an exponential decay. The psychology and neuroscience literature proposes many similar ‘leaky integrators’ or ‘leaky accumulators’ models ( Smith, 1995 ; Roe et al., 2001 ; Usher and McClelland, 2001 ; Cook and Maunsell, 2002 ; Wang, 2002 ; Sugrue et al., 2004 ; Bogacz et al., 2006 ; Kiani et al., 2008 ; Yu and Cohen, 2008 ; Gao et al., 2011 ; Tsetsos et al., 2012 ; Ossmy et al., 2013 ; Meyniel et al., 2016 ). In connectionist models of decision-making, for instance, decision units in abstract network models have activity levels that accumulate evidence received from input units, and which decay to zero in the absence of input ( Roe et al., 2001 ; Usher and McClelland, 2001 ; Wang, 2002 ; Bogacz et al., 2006 ; Tsetsos et al., 2012 ). In other instances, perceptual evidence ( Kiani et al., 2008 ; Gao et al., 2011 ; Ossmy et al., 2013 ) or counts of events ( Sugrue et al., 2004 ; Yu and Cohen, 2008 ; Meyniel et al., 2016 ) are accumulated through an exponential temporal filter. In our approach, leaky integration is not an assumption about the mechanisms underpinning some cognitive process: instead, we find that it is an optimal strategy in the face of a cognitive cost weighing on the precision of beliefs. Although it is less clear whether the unpredictability-cost models lend themselves to a similar algorithmic simplification, they consist in a distortion of Bayesian inference, for which various neural-network models have been proposed ( Deneve et al., 2001 ; Ma et al., 2008 ; Ganguli and Simoncelli, 2014 ; Echeveste et al., 2020 ). 
Turning to the experimental results, we note that in spite of the rich literature on sequential effects, the majority of studies have focused on equiprobable Bernoulli environments, in which the two possible stimuli both had a probability equal to 0.5, as in tosses of a fair coin ( Soetens et al., 1985 ; Cho et al., 2002 ; Yu and Cohen, 2008 ; Wilder et al., 2009 ; Jones et al., 2013 ; Zhang et al., 2014 ; Ayton and Fischer, 2004 ; Gökaydin and Ejova, 2017 ). In environments of this kind, the two stimuli play symmetric roles and all sequences of a given length are equally probable. In contrast, in biased environments one of the two possible stimuli is more probable than the other. Although much less studied, this situation breaks the regularities of equiprobable environments and is arguably very frequent in real life. In our experiment, we explore stimulus generative probabilities from 0.05 to 0.95, thus allowing us to investigate the behavior of subjects in a wide spectrum of Bernoulli environments: from those with ‘extreme’ probabilities (e.g. p = 0.95), to those only slightly different from the equiprobable case (e.g. p = 0.55), to the equiprobable case itself (p = 0.5). The subjects are sensitive to the imbalance of the non-equiprobable cases: while they predict A in half the trials of the equiprobable case, a probability of just p = 0.55 suffices to prompt the subjects to predict A in about 60% of trials, a significant difference ( ; sem: 0.008; p-value of t-test against null hypothesis that : 1.7e-11; subjects pooled). The well-known ‘probability matching’ hypothesis ( Herrnstein, 1961 ; Vulkan, 2000 ; Gaissmaier and Schooler, 2008 ) suggests that the proportion of predictions A matches the stimulus generative probability: . This hypothesis is not supported by our data: we find that in the non-equiprobable conditions these two quantities are significantly different (all p-values <1e-11, when ).
More precisely, we find that the proportion of predictions A is more extreme than the stimulus generative probability (i.e. when , and when ; Figure 2a ). This result is consistent with the observations made by Edwards, 1961 ; Edwards, 1956 and with the conclusions of a more recent review ( Vulkan, 2000 ). In addition to varying with the stimulus generative probability, the subjects’ predictions depend on the recent history of stimuli. Recency effects are common in the psychology literature; they have been reported in domains ranging from memory ( Ebbinghaus et al., 1913 ) to causal learning ( Collins and Shanks, 2002 ) and inference ( Shanteau, 1972 ; Hogarth and Einhorn, 1992 ; Benjamin, 2019 ). In many studies, recency effects are obtained in the context of reaction tasks, in which subjects must identify a stimulus and quickly provide a response ( Hyman, 1953 ; Bertelson, 1965 ; Kornblum, 1967 ; Soetens et al., 1985 ; Cho et al., 2002 ; Yu and Cohen, 2008 ; Wilder et al., 2009 ; Jones et al., 2013 ; Zhang et al., 2014 ). Although our task is of a different kind (subjects must predict the next stimulus), we find some evidence of recency effects in the response times of subjects: after observing the less frequent of the two stimuli (when ), subjects seem slower to provide a response (see Appendix). In prediction tasks (like ours), both attractive recency effects, also called the ‘hot-hand fallacy’, and repulsive recency effects, also called the ‘gambler’s fallacy’, have been reported ( Jarvik, 1951 ; Edwards, 1961 ; Ayton and Fischer, 2004 ; Burns and Corpus, 2004 ; Croson and Sundali, 2005 ; Oskarsson et al., 2009 ). The observation of both effects within the same experiment has been reported in a visual identification task ( Chopin and Mamassian, 2012 ) and in risky choices (the ‘wavy recency effect’; Plonsky et al., 2015 ; Plonsky and Erev, 2017 ).
As to the heterogeneity of these results, several explanations have been proposed; two important factors seem to be the perceived degree of randomness of the predicted variable and whether it relates to human performance ( Ayton and Fischer, 2004 ; Burns and Corpus, 2004 ; Croson and Sundali, 2005 ; Oskarsson et al., 2009 ). In any event, most studies focus exclusively on the influence of ‘runs’ of identical outcomes on the upcoming prediction, for example, in our task, on whether three As in a row increase the proportion of predictions A. With this analysis, Edwards ( Edwards, 1961 ), in a task similar to ours, concluded in favor of an attractive recency effect (which he called ‘probability following’). Although our results are consistent with this observation (in our data, three As in a row do increase the proportion of predictions A), we provide a more detailed picture of the influence of each stimulus preceding the prediction, whether it is in a ‘run’ of identical stimuli or not, which allows us to expose the non-trivial finer structure of the recency effects, one that is often overlooked. Up to two stimuli in the past, the recency effect is attractive: observing A at trial or at trial induces, all else being equal, a higher proportion of predictions A at trial (in comparison to observing B; Figures 2 and 6a ). The influence of the third-to-last stimulus is more intricate: it can yield either an attractive or a repulsive effect, depending on the second-to-last and the last stimuli. For a majority of probability parameters, , while an A followed by the sequence AA has an attractive effect (i.e. ), an A followed by the sequence BA has a repulsive effect (i.e. ; Figure 6b and c ). How can this reversal be intuited? Only one of our models, the precision-cost model with a Markov order 1 ( ), reproduces this behavior; we show how it provides an interpretation for this result.
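The finer-grained analysis described here — conditioning the proportion of predictions A on each pattern of recent stimuli, rather than on runs only — can be sketched as follows (the function name and the encoding of stimuli and responses as 'A'/'B' strings are our own conventions):

```python
from collections import defaultdict

def recency_conditioned_proportions(stimuli, predictions, k=3):
    """Proportion of predictions 'A' conditioned on the k most recent stimuli.

    `stimuli` and `predictions` are strings over {'A','B'}; predictions[t]
    is the prediction made after observing stimuli[:t]. Returns a mapping
    from each observed k-long pattern to the proportion of 'A' predictions
    that followed it.
    """
    tallies = defaultdict(lambda: [0, 0])  # pattern -> [count of 'A', total]
    for t in range(k, len(predictions)):
        pattern = stimuli[t - k:t]
        tallies[pattern][1] += 1
        if predictions[t] == "A":
            tallies[pattern][0] += 1
    return {pat: a / n for pat, (a, n) in tallies.items()}
```

Comparing, for instance, the proportions after patterns ending `...AA` and `...BA` is what reveals the attractive and repulsive components discussed above.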
From the update equation of this model ( Equation 4 ), it is straightforward to show that the posterior of the model subject (a Dirichlet distribution of order 4) is determined by four quantities, which are exponentially-decaying counts of the four two-long patterns observed in the sequence of stimuli: BB, BA, AB, and AA. The higher the count of a pattern, the more likely the model subject deems this pattern to happen again. In the equiprobable case ( ), after observing the sequence AAA, the count of AA is higher than after observing BAA, thus the model subject believes that AA is more probable, and accordingly predicts A more frequently, i.e., . As for the sequences ABA and BBA, both result in the same count of AA, but the former results in a higher count of AB — in other words, the short sequence ABA suggests that A is usually followed by B, but the sequence BBA does not — and thus the model subject predicts more frequently B, i.e., less frequently A ( ). In short, the ability of the precision-cost model of a Markov observer to capture the repulsive effect found in behavioral data suggests that human subjects extrapolate the local statistical properties of the presented sequence of stimuli in order to make predictions, and that they pay attention not only to the ‘base rate’ — the marginal probability of observing A, unconditional on the recent history — as a Bernoulli observer would do, but also to the statistics of more complex patterns, including the repetitions and the alternations, thus capturing the transition probabilities between consecutive observations. Wilder et al., 2009 , Jones et al., 2013 , and Meyniel et al., 2016 similarly argue that sequential effects result from an imperfect inference of the base rate and of the frequency of repetitions and alternations. Dehaene et al., 2015 argue that the knowledge of transition probabilities is a central mechanism in the brain’s processing of sequences (e.g. 
in language comprehension), and infants as young as 5 months were shown to be able to track both base rates and transition probabilities (see Saffran and Kirkham, 2018 for a review). Learning of transition probabilities has also been observed in rhesus monkeys ( Meyer and Olson, 2011 ). The deviations from perfect inference, in the precision-cost model, originate in the constraints faced by the brain when performing computation with probability distributions. In spite of the success of the Bayesian framework, we note that human performance in various inference tasks is often suboptimal ( Nassar et al., 2010 ; Hu et al., 2013 ; Acerbi et al., 2014 ; Prat-Carrabin et al., 2021b ; Prat-Carrabin and Woodford, 2022 ). Our approach suggests that the deviations from optimality in these tasks may be explained by the cognitive constraints at play in the inference carried out by humans. Other studies have considered the hypothesis that suboptimal behavior in inference tasks results from cognitive constraints. Kominers et al., 2016 consider a model in which Bayesian inference comes with a fixed cost; the observer can choose to forgo updating her belief, so as to avoid the cost. In some cases, the model predicts ‘permanently cycling beliefs’ that do not converge; but in general the model predicts that subjects will choose not to react to new evidence that is unsurprising under the current belief. The significant sequential effects we find in our subjects’ responses, however, seem to indicate that they are sensitive to both unsurprising (e.g. outcome A when p>0.5) and surprising (outcome B when p>0.5) observations, at least across the values of the stimulus generative probability that we test ( Figure 2 ). Graeber, 2020 considers costly information processing as an account of subjects’ neglect of confounding variables in an inference task, but concludes instead that the suboptimal behavior of subjects results from their misunderstanding of the information structure in the task. 
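The exponentially decaying counts of two-long patterns, and the intuition for the reversal described above, can be made concrete with a short sketch (the decay rate is an illustrative value; in the full model these counts follow from the update equation, Equation 4):

```python
def decayed_pattern_counts(seq, decay=0.8):
    """Exponentially decaying counts of the patterns BB, BA, AB, and AA.

    Each observed transition (previous stimulus, current stimulus) adds 1
    to its pattern's count; all counts decay by `decay` at every step, so
    recent transitions weigh more. The model subject deems patterns with
    higher counts more likely to happen again.
    """
    counts = {p: 0.0 for p in ("BB", "BA", "AB", "AA")}
    for prev, cur in zip(seq, seq[1:]):
        for p in counts:
            counts[p] *= decay
        counts[prev + cur] += 1.0
    return counts

# After AAA the count of AA exceeds its count after BAA, so the model
# predicts A more often: the third-to-last A acts attractively.
assert decayed_pattern_counts("AAA")["AA"] > decayed_pattern_counts("BAA")["AA"]

# ABA and BBA yield equal AA counts, but ABA yields a higher AB count
# (A is usually followed by B), so after ABA the model predicts B more
# often: the third-to-last A acts repulsively.
assert decayed_pattern_counts("ABA")["AA"] == decayed_pattern_counts("BBA")["AA"]
assert decayed_pattern_counts("ABA")["AB"] > decayed_pattern_counts("BBA")["AB"]
```

The prediction after a final A then compares the AA and AB counts, which is exactly where the two short histories diverge.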
A model close to ours is the one proposed in Azeredo da Silveira and Woodford, 2019 and Azeredo da Silveira et al., 2020 , in which an information-theoretic cost limits the memory of an otherwise optimal and Bayesian decision-maker, resulting, here also, in beliefs that fluctuate and do not converge, and in an overweighting, in decisions, of the recent evidence. Taking a different approach, Dasgupta et al., 2020 implement a neural network that learns to approximate Bayesian posteriors. Possible approximate posteriors are constrained not only by the structure of the network, but also by the fact that the same network is used to address a series of different inference problems. Thus the network’s parameters must be ‘shared’ across problems, which is meant to capture the brain’s limited computational resources. Although this constraint differs from the ones we consider, we note that in this study the distance function (which the approximation aims to minimize) is the same as in our models, namely, the Kullback-Leibler divergence from the optimal posterior to the approximate posterior, . Minimizing this divergence (under a cost) allows the model subject to obtain a posterior as close as possible (at least by this measure) to the optimal posterior given the most recent stimulus and the subject’s belief prior to observing the stimulus, which in turn enables the subject to perform reasonably well in the task. In principle, rewarding subjects with a higher payoff when they make a correct prediction would change the optimal trade-off (between the distance to the optimal posterior and the cognitive costs) formalized in Equation 1 , resulting in ‘better’ posteriors (closer to the Bayesian posterior), and thus in higher performance in the task. At the same time, incentivization is known to influence, also in the direction of higher performance, the extent to which choice behavior is close to probability matching ( Vulkan, 2000 ).
The interesting question of the respective sensitivities of the subjects’ inference process and of their response-selection strategy in response to different levels of incentives is beyond the scope of this study, in which we have focussed on the sensitivity of behavior to different stimulus generative probabilities. In any case, the approach of minimizing the Kullback-Leibler divergence from the optimal posterior to the approximate posterior is widely used in the machine learning literature, and forms the basis of the ‘variational’ family of approximate-inference techniques ( Bishop, 2006 ). These techniques have inspired various cognitive models ( Sanborn, 2017 ; Gallistel and Latham, 2022 ; Aridor and Woodford, 2023 ); alternatively, a bound on the divergence, known as the ‘evidence bound’, or, in neuroscience, as the negative of the ‘free energy’, is maximized ( Moustafa, 2017 ; Friston et al., 2006 ; Friston, 2009 ). (We note that the ‘opposite’ divergence, , is minimized in a different machine-learning technique, ‘expectation propagation’ ( Bishop, 2006 ), and in the cognitive model of causal reasoning of Icard and Goodman, 2015 .) In these techniques, the approximate posterior is chosen within a convenient family of tractable, parameterized distributions; other distributions are precluded. This can be understood, in our framework, as positing a cost that is infinite for most distributions, but zero for the distributions that belong to some arbitrary family ( Prat-Carrabin et al., 2021a ). The precision cost and the unpredictability cost, in comparison, are ‘smooth’, and allow for any distribution, but they penalize, respectively, more precise belief distributions, and belief distributions that imply more unpredictable environments. 
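The family-constrained minimization discussed here can be sketched in a few lines: distributions are plain probability vectors, the divergence is computed directly, and the best member of an arbitrary candidate family is selected (the example family below is purely illustrative).

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) between discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def best_in_family(p_opt, family):
    """Member of a constrained family closest to the optimal posterior in KL.

    Restricting the search to `family` amounts, in the cost framework, to
    an infinite cost outside the family and zero cost inside it.
    """
    return min(family, key=lambda q: kl(p_opt, q))
```

For instance, with `p_opt = [0.6, 0.3, 0.1]` and a two-member family containing the uniform distribution and `[0.5, 0.3, 0.2]`, the latter is selected, since it is closer to `p_opt` in this divergence.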
Our study shows that inference, when subject to either of these costs, yields an attractive sequential effect of the most recent observation; and with a precision cost weighing on the inference of transition probabilities (i.e., ), the model predicts the subtle pattern of attractive and repulsive sequential effects that we find in subjects’ responses.
Department of Psychology, Harvard University, Cambridge, United States.

An abundant literature reports on ‘sequential effects’ observed when humans make predictions on the basis of stochastic sequences of stimuli. Such sequential effects represent departures from an optimal, Bayesian process. A prominent explanation posits that humans are adapted to changing environments, and erroneously assume non-stationarity of the environment, even if the latter is static. As a result, their predictions fluctuate over time. We propose a different explanation in which sub-optimal and fluctuating predictions result from cognitive constraints (or costs), under which humans however behave rationally. We devise a framework of costly inference, in which we develop two classes of models that differ by the nature of the constraints at play: in one case the precision of beliefs comes at a cost, resulting in an exponential forgetting of past observations, while in the other beliefs with high predictive power are favored. To compare model predictions to human behavior, we carry out a prediction task that uses binary random stimuli, with probabilities ranging from 0.05 to 0.95. Although in this task the environment is static and the Bayesian belief converges, subjects’ predictions fluctuate and are biased toward the recent stimulus history. Both classes of models capture this ‘attractive effect’, but they depart in their characterization of higher-order effects. Only the precision-cost model reproduces a ‘repulsive effect’, observed in the data, in which predictions are biased away from stimuli presented in more distant trials. Our experimental results reveal systematic modulations in sequential effects, which our theoretical approach accounts for in terms of rationality under cognitive constraints.
Funding Information

This paper was supported by the following grants:

Alfred P. Sloan Foundation Grant G-2020-12680 to Rava Azeredo da Silveira.

http://dx.doi.org/10.13039/501100004794 CNRS UMR8023 to Rava Azeredo da Silveira.

http://dx.doi.org/10.13039/501100009627 Fondation Pierre-Gilles de Gennes pour la recherche Ph.D. Fellowship to Arthur Prat-Carrabin.

Acknowledgements

We thank Doron Cohen and Michael Woodford for inspiring discussions. This work was supported by the Alfred P Sloan Foundation through grant G-2020–12680 and the CNRS through UMR8023. A.P.C. was supported by a Ph.D. fellowship of the Fondation Pierre-Gilles de Gennes pour la Recherche. We acknowledge computing resources from Columbia University’s Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010.

Additional information

Additional files

Data availability

The behavioral data for this study and the computer code used for data analysis are freely and publicly available through the Open Science Framework repository at https://doi.org/10.17605/OSF.IO/BS5CY . The following dataset was generated: Prat-Carrabin A, Meyniel F, Azeredo da Silveira R. 2022. Resource-Rational Account of Sequential Effects in Human Prediction: Data & Code. Open Science Framework. 10.17605/OSF.IO/BS5CY

Appendix 1

Stability of subjects’ behavior throughout the experiment

To validate the assumption that we capture, in our experiment, the ‘stationary’ behavior of subjects, we compare their responses in the first half of the task (first 100 trials) to their responses in the second half (last 100 trials). We find that the unconditional proportions of prediction A in these two cases are not significantly different, for most values of the stimulus generative probability.
The sign of the difference (regardless of its statistical significance) implies that the proportions of predictions A in the second half of the experiment are slightly closer to 1 when the probability of the stimulus A is greater than 0.5; this would mean that the responses of subjects are slightly closer to optimality in the second half of the experiment ( Appendix 1—figure 1a , grey lines). Regarding the sequential effects, we also obtain very similar behaviors in the first and second halves of the experiment ( Appendix 1—figure 1 ). We conclude that for our analysis it is reasonable to assume that the behavior of subjects is stationary throughout the task.

Robustness of the model fitting

To evaluate the ability of the model-fitting procedure to correctly identify the model that generated a given set of responses, we compute a confusion matrix of the eight models. For each model, we simulate 200 runs of the task (each with 200 passive trials followed by 200 trials in which a prediction is obtained), with values of and close to values typically obtained when fitting the subjects’ responses (for precision-cost models, ; for unpredictability-cost models, ; and for both families of models). We then fit each of the eight models to each of these simulated datasets, and count how many times each model best fits each dataset ( Appendix 1—figure 2a ). To further test the robustness of the model-fitting procedure, we randomly introduce errors in the simulated responses: for 10% of the responses, randomly chosen in each dataset, we substitute the response with its opposite (i.e., B for A, and A for B), and compute a confusion matrix using these new responses ( Appendix 1—figure 2b ). In both cases, the model-fitting procedure identifies the correct model a majority of times (i.e., the best-fitting model is the model that generated the data; Appendix 1—figure 2 ).
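The perturbation used in the robustness check — substituting a random 10% of binary responses with their opposite — can be sketched as follows (the function name and fixed seed are illustrative choices):

```python
import random

def flip_responses(responses, frac=0.10, rng=None):
    """Substitute a random fraction of binary 'A'/'B' responses with their opposite.

    Exactly round(frac * len(responses)) positions are flipped, chosen
    without replacement; a seeded generator keeps the perturbation reproducible.
    """
    rng = rng or random.Random(0)
    responses = list(responses)
    flip_idx = rng.sample(range(len(responses)), k=round(frac * len(responses)))
    for i in flip_idx:
        responses[i] = "B" if responses[i] == "A" else "A"
    return "".join(responses)
```

Applied to a 200-trial simulated run, this yields 20 flipped responses, after which the model-fitting procedure is re-run on the corrupted dataset.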
Finally, to examine the robustness of the weight of the cost, , we consider for each subject its best-fitting model in each family (the precision-cost family and the unpredictability-cost family), and we fit separately each model to the subject’s responses obtained in trials in which the stimulus generative probability was medium ( ) and those in which it was extreme ( ). Appendix 1—figure 3 shows the correlation between the best-fitting parameters obtained in these two cases.

Distribution of subjects’ BICs

Subjects’ sequential effects — tree representation

Subjects’ sequential effects — unpooled data

As mentioned in the main text, we pool together the predictions that correspond, in different blocks of trials, to either event (left or right), as long as these events have the same probability. Appendix 1—figure 6 , below, is the same as Figure 2 , but without such pooling. Given a stimulus generative probability, , all the subjects experience one (and only one) block of trials in which either the event ‘right’ or the event ‘left’ had probability . For one group of subjects the ‘right’ event has probability and for the remaining subjects it is the ‘left’ event that has probability . The responses of these subjects are not pooled together in Appendix 1—figure 6 , while they were in Figure 2 . This also applies to any other stimulus generative probability, . However, we note that the two groups of subjects for whom was the probability of a ‘right’ event or a ‘left’ event are not the same as the two groups just mentioned in the case of the probability . As a result, from one proportion shown in Appendix 1—figure 6 to another, the underlying group of subjects changes. In Figure 2 , each proportion is computed with the responses of all the subjects. This illustrates another advantage of the pooling that we use in the main text.

Subjects’ response times

Across-subjects results
CC BY
2024-01-16 23:47:20
eLife.; 13:e81256
PMC10789491
38063302
Introduction

The intestinal tract is lined by a cellular monolayer which is folded to form invaginations, called crypts, and protrusions, called villi, in the small intestine. The stem cell niche is formed by intermingling Paneth and stem cells located at the base of the crypt ( Barker et al., 2007 ). Stem cells divide symmetrically, forming a pool of equipotent cells that replace each other following neutral drift dynamics ( Lopez-Garcia et al., 2010 ). Continuously dividing stem cells at the base of the crypt give rise to secretory and proliferative absorptive progenitors that migrate towards the villus, driven by proliferation-derived forces ( Parker et al., 2017 ). The transit-amplifying region above the stem cell niche fuels the rapid renewal of the epithelium. The equilibrium of this dynamic system is maintained by cell shedding from the villus tip into the gut lumen ( Wright and Alison, 1984 ). Epithelial cell dynamics is orchestrated by tightly regulated signalling pathways. Two counteracting gradients run along the crypt–villus axis: the Wnt gradient, secreted by mesenchymal and Paneth cells at the bottom of the crypt, and the bone morphogenetic protein (BMP) gradient generated in the villus mesenchyme, with BMP inhibitors secreted by myofibroblasts and smooth muscle cells located around the stem cell niche ( Gehart and Clevers, 2019 ). These two signalling pathways are also the target of stabilizing negative feedback loops comprising the turnover of Wnt receptors ( Hao et al., 2012 ; Koo et al., 2012 ; Clevers, 2013b ; Clevers and Bevins, 2013c ) and the modulation of BMP secretion ( Büller et al., 2012 ; van den Brink et al., 2004 ). Paneth cells and mesenchymal cells surrounding the niche also secrete other proliferation-enhancing molecules such as epidermal growth factor (EGF) and transforming growth factor-α ( Gehart and Clevers, 2019 ).
In addition, Notch signalling-mediated lateral inhibition mechanisms are essential for stem cell maintenance and differentiation into absorptive and secretory progenitors ( Gehart and Clevers, 2019 ). There is also an increasing awareness of the importance of the mechanical regulation of cell proliferation through the Hippo signalling pathway interplaying with several of the key signals, such as EGF, WNT, and Notch, although the exact mechanisms are not currently fully understood ( Gehart and Clevers, 2019 ). The imbalance of this tightly orchestrated system contributes to pathological conditions, including microbial infections, intestinal inflammatory disorders, extra-intestinal autoimmune diseases, and metabolic disorders ( Chelakkot et al., 2018 ). In addition, critically ill patients and patients receiving chemotherapy/radiotherapy often show severely compromised intestinal barrier integrity ( Chelakkot et al., 2018 ). For instance, oncotherapeutics-induced gastrointestinal toxicity is frequently a life-threatening condition that leads to dose reduction, delay, and cessation of treatment and presents a constant challenge for the development of efficient and tolerable cancer treatments ( Stein et al., 2010 ; Saltz et al., 2000 ; Saltz et al., 2001 ; McQuade et al., 2016 ). This intestinal toxicity often results from the interaction of the drug with its intended molecular target, such as cell cycle proteins ( Zhang et al., 2021 ), or from the disruption of the cycle through DNA damage ( Helleday et al., 2008 ). Multiscale models integrating our knowledge of how the epithelium maintains homeostasis and responds to injury can contribute to understanding epithelial biology and to quantifying the risk of intestinal toxicity during drug development. Several agent-based models (ABMs) have been proposed to describe the complexity and dynamic nature of the intestinal crypt. Early models were used as in silico platforms to study the dynamics and cellular organization of the crypt.
For instance, one of the pioneering ABMs was used to study the distribution and organization of labelling and mitotic indices ( Meineke et al., 2001 ). This model comprises a fixed ring of Paneth cells beneath a row of stem cells, which divide asymmetrically to produce a stem cell and a transit-amplifying cell that terminally differentiates after a fixed number of divisions. Some subsequent models are lattice-free, recapitulate the neutral drift of equipotent stem cells, and describe proliferation and cell fate regulated by a fixed Wnt signalling spatial gradient, which is defined by the distance from the crypt base, with proliferating cells progressing through discrete phases of the cell cycle and showing variable duration of the G1 phase ( Pitt-Francis et al., 2009 ). Further refinements can be seen in the model of Buske et al., 2011 , with stochastic cell growth and division time, Wnt levels defined by the fixed local curvature of the crypt, and lateral inhibition driven by Notch signalling. Here, we present a lattice-free ABM that describes the spatiotemporal dynamics of single cells in the small intestinal crypt driven by the interaction of surface-tethered Wnt signals, cell–cell Notch signalling, BMP-diffusive signals, RNF43/ZNRF3-mediated feedback mechanisms, and the cell cycle protein network responding to the crypt mechanical environment. We show that our computational model enables the simulation of the ablation and recovery of the stem cell niche as well as of how drug-induced molecular perturbations trigger a cascade of disruptive events spanning from the cell cycle to single-cell arrest and/or apoptosis, altered cell migration and turnover, and ultimately loss of epithelial integrity.
Materials and methods

Mouse experiments

We used BrdU tracking and Ki-67 immunostaining data from previously published experiments in healthy mice ( Parker et al., 2017 ; Parker et al., 2019 ) and following 5-FU treatment ( Jardi et al., 2023 ). The samples from this latter study ( Jardi et al., 2023 ) were analysed again to count Ki-67-positive cells at each position along the longitudinal crypt axis for 30–50 individual hemi crypt units per tissue section per mouse, as previously described ( Williams et al., 2016 ).

ABM development

A comprehensive description of the model can be found in Appendix 1 and Appendix 1—table 1 . The model has been made available through BioModels (MODEL2212120002) ( Malik-Sheriff et al., 2020 ).
Results

Modelling a self-organizing crypt using an ABM

We have modelled the mouse intestinal crypt as a self-organizing system where cell dynamics and cell composition arise from local interactions between single cells and the mesenchyme through signalling pathways with behaviours (proliferation, differentiation, fate decision, migration, etc.) determined largely by endogenous intracellular and intercellular interactions. The model describes the spatiotemporal dynamics of stem cells and progenitors undergoing division cycles and responding to intercellular signalling to differentiate into Paneth, goblet, and enteroendocrine cells and enterocytes ( Figure 1A ). All cells interact physically and biochemically in the geometry of the crypt. Stem cells intermingle with Paneth cells at the bottom of the crypt and randomly replace each other. Progenitors and mature cells migrate towards the villus driven by proliferation forces ( Figure 1A ). To achieve a stable crypt cell composition under constant cell renewal dynamics, we have implemented several signalling mechanisms which include the Wnt, Notch, and BMP pathways essential for morphogenesis and homeostasis of the intestinal crypt ( Gehart and Clevers, 2019 ; Fevr et al., 2007 ; VanDussen et al., 2012 ; Pellegrinet et al., 2011 ; He et al., 2004 ), the YAP-Hippo signalling pathway responding to mechanical forces and modulating contact inhibition of proliferation ( Gjorevski et al., 2016 ), and a ZNRF3/RNF43-like-mediated feedback mechanism between Paneth and stem cells to regulate the size of the stem cell niche according to experimental reports ( Hao et al., 2012 ; Koo et al., 2012 ; Farin et al., 2016 ; Figure 1B ). The Wnt pathway is the primary pathway associated with stem cell maintenance and cell proliferation in the crypt ( Fevr et al., 2007 ; van der Flier and Clevers, 2009 ).
Our model implements two sources of Wnt signals described in the crypt: Paneth cells ( Sato et al., 2011 ) and mesenchymal cells surrounding the stem cell niche at the crypt base ( Stzepourginski et al., 2017 ). Wnt signalling is modelled as a short-range field around Wnt-emitting Paneth and mesenchymal cells with Wnt signals tethered to receptive cells as previously reported ( Farin et al., 2016 ; Clevers and Nusse, 2012 ). Surface-tethered signals are split between daughter cells upon cell division ( Gehart and Clevers, 2019 ; Farin et al., 2016 ), which results in a gradual depletion of tethered Wnt signals as cells divide and migrate towards the villus away from Wnt sources ( Figure 1A and B ). Notch signalling is also implemented in the model with Notch ligands expressed by secretory cells binding to Notch receptors on neighbouring cells and preventing them from differentiating into secretory fates, in a process known as lateral inhibition, that leads to a checkerboard/on-off pattern of Paneth and stem cells in the niche ( VanDussen et al., 2012 ). Specifically, in our model, high Wnt and Notch signalling environments are required to maintain stemness, as reported in the literature ( Tian et al., 2015 ) while under low Notch and high Wnt signalling, stem cells differentiate into secretory cells, including Paneth cells. On the other hand, Notch signalling also mediates the process of Paneth cell de-differentiation into stem cells to regenerate the niche as previously reported ( Mei et al., 2020 ; Yu et al., 2018 ). Stem cells with decreased levels of Wnt signalling, usually located outside the niche, differentiate into absorptive proliferating progenitors or alternatively into secretory progenitors in the absence of Notch signals ( Figure 1C ). 
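The checkerboard/on-off pattern produced by lateral inhibition can be illustrated with a deterministic toy (our own simplification: a ring of cells whose fates are decided in a single sweep, whereas the real model resolves fates through Notch ligand–receptor dynamics between neighbours):

```python
def lateral_inhibition_pattern(n_cells):
    """Toy lateral inhibition on a ring of cells.

    A cell adopts the secretory fate ('S') only if no already-decided
    neighbour is secretory; otherwise the neighbour's Notch ligands
    inhibit it and it becomes absorptive ('A'). A single left-to-right
    sweep yields an alternating on-off pattern.
    """
    fates = [None] * n_cells
    for i in range(n_cells):
        left, right = fates[i - 1], fates[(i + 1) % n_cells]
        fates[i] = "A" if "S" in (left, right) else "S"
    return "".join(fates)
```

For an even-sized ring this produces a strict checkerboard (e.g. `SASASA` for six cells), with no two secretory cells adjacent, which is the qualitative outcome lateral inhibition enforces in the niche.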
In our model, mechanical stimuli, captured through the YAP-Hippo signalling pathway ( Gjorevski et al., 2016 ; Halder et al., 2012 ; Aragona et al., 2013 ; Low et al., 2014 ), indirectly interact with the Notch and Wnt signalling pathways. We recapitulate YAP-mediated contact inhibition of proliferation by using cell compression to modulate the duration of the division cycle which increases when cells are densely squeezed, such as in the stem cell niche, and decreases if cell density falls, for instance, in the transit-amplifying compartment or in cases of crypt damage ( Figure 1A and B ). In agreement with experimental reports ( Pin et al., 2015 ), in our model, Paneth cells are assumed to be stiffer and larger than other epithelial cells, requiring higher forces to be displaced and generating high intercellular pressure in the niche. Due to the increased mechanical pressure, cells in the niche have longer division cycles and can accumulate more Wnt and Notch signals. These premises imply that Paneth cells enhance their own production by generating Wnt signals and inducing prolonged division times, which increases stem and Paneth cell production and could lead to unlimited expansion of the niche recapitulating the phenotype seen in ZNRF3/RNF43 knockout mice ( Koo et al., 2012 ; see Appendix 1, Section 1.11). To generate a niche of stable size, we implemented a negative Wnt-mediated feedback loop that resembles the reported stem cell production of RNF43/ZNRF3 ligands to increase the turnover of Wnt receptors in nearby cells ( Hao et al., 2012 ; Koo et al., 2012 ; Clevers, 2013b ; Clevers and Bevins, 2013c ). Similarly, in our model, a number of stem cells in excess of the homeostatic value reduces cell tethering of Wnt ligands and hence inhibits Paneth and stem cell generation ( Figure 1A and B ). 
The Wnt gradient in the crypt is opposed by a gradient of bone morphogenetic protein (BMP) that inhibits cell proliferation and promotes differentiation (Qi et al., 2017). We assume that enterocytes secrete diffusing signals, resembling Indian Hedgehog signals (Büller et al., 2012), that induce mesenchymal cells to generate a BMP signalling gradient that prevents proliferative cells from reaching the villus (Figure 1A and B). Based on experimental evidence, we also assume that BMP activity is counteracted by BMP antagonist-secreting mesenchymal cells surrounding the stem cell niche (McCarthy et al., 2020). Proliferative absorptive progenitors migrating towards the villus lose Wnt signals at every division and eventually encounter values of BMP that overcome the proliferation-inducing effect of Wnt signalling (He et al., 2004). We found that a homeostatic crypt cell composition is achieved when the BMP and Wnt differentiation thresholds result in progenitors dividing approximately four times before differentiating into enterocytes (Figure 1C). In our model, the BMP signalling gradient responds dynamically to the number of enterocytes, giving rise to a negative feedback loop between enterocytes on the villus and their proliferative progenitors in the crypt that recapitulates the enhanced crypt proliferation observed after epithelial damage (Büller et al., 2012; Pont and Yan, 2018; Sprangers et al., 2021). For instance, a decreased number of enterocytes results in reduced production of BMP, which enables progenitor cells to divide and migrate further up the crypt before meeting BMP levels higher than the differentiation threshold. Altogether, our model describes single cells that generate and respond to signals and mechanical pressures in the crypt–villus geometry to give rise to a self-organizing crypt which has a stable spatial cell composition over time (Figure 1D) and reproduces reported experimental data (Buske et al., 2011).
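The interplay of Wnt dilution and the differentiation threshold, which yields roughly four progenitor divisions, can be sketched as follows. The function and its threshold values are hypothetical, chosen only so that the toy reproduces the four-division figure stated above:

```python
def divisions_before_differentiation(wnt, wnt_threshold):
    """Count the divisions a progenitor undergoes before its diluted Wnt
    level drops below the differentiation threshold set by the opposing
    BMP gradient (values hypothetical)."""
    n = 0
    while wnt >= wnt_threshold:
        wnt /= 2  # tethered Wnt is split between daughters at division
        n += 1
    return n

# With these illustrative values a progenitor divides four times
# (16 -> 8 -> 4 -> 2 -> 1) before differentiating into an enterocyte.
n_divisions = divisions_before_differentiation(16.0, 1.5)
```

A lower effective BMP level (e.g. after villus damage) would correspond to a lower threshold here, allowing extra divisions, which mirrors the negative feedback loop described in the text.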
An extended description of these modelling features is provided in Appendix 1.

The cell cycle protein network governs proliferation in each single cell of the ABM and responds to mechanical cues

We have used the model of Csikász-Nagy et al., 2006, which builds on the seminal work of Novak and Tyson (Novak and Tyson, 1993; Novak et al., 2001; Novák and Tyson, 2004) and is available in BioModels (Le Novère and Csikasz-Nagy, 2006), to recreate the dynamics of the main proteins governing the mammalian cell cycle in each single proliferative cell of the ABM. In this model, a dividing cell begins in G1, with low levels of cyclins A, B, and E and a high level of Wee1, and progresses to S-phase when cyclin E increases. S-phase ends and G2 begins when Wee1 falls. The decrease in cyclin A expression defines the start of M-phase, while falling cyclin B marks the end of M-phase, when the cell divides into two daughter cells with half the final mass value and re-enters the cell cycle (Figure 2A–D). To implement YAP-Hippo-mediated contact inhibition of proliferation, we have modified the dynamics of the proteins of the Csikasz-Nagy model to respond to mechanical cues encountered by cells migrating along the crypt. Crowded, constrained environments result in longer cycles, as in stem cells in the niche, while decreased intercellular forces lead to shortened cycles as cells migrate towards the villus, in agreement with experimental reports (Wright and Alison, 1984; Marshman et al., 2002; Potten et al., 1997). The shorter cycle duration in absorptive progenitors has been mainly associated with shortening/omission of G1, while the duration of S-phase is less variable (Wright and Alison, 1984). Using the model of Csikász-Nagy et al., 2006, we modulated the duration of G1 through the production rate of the p27 protein.
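As a toy illustration of the qualitative phase boundaries just described (not the authors' ODE implementation), the phase can be read off high/low protein levels in the narrative order: G1 has low cyclins and high Wee1, S starts when cyclin E rises, G2 starts when Wee1 falls, M starts when cyclin A falls, and division occurs when cyclin B falls:

```python
def classify_phase(cyc_e_high, cyc_a_high, cyc_b_high, wee1_high):
    """Toy read-out of the cycle phase from qualitative protein levels,
    following the ordering of events in the Csikasz-Nagy model as
    summarized in the text."""
    if wee1_high:
        # Wee1 is high throughout G1 and S; cyclin E separates the two.
        return "S" if cyc_e_high else "G1"
    if cyc_a_high:
        # Wee1 has fallen but cyclin A is still high: G2.
        return "G2"
    # Cyclin A has fallen; cyclin B distinguishes M from division.
    return "M" if cyc_b_high else "division"
```

The real model works with 14 continuous protein concentrations rather than binary levels, so this classifier only captures the sequence of peaks and troughs, not their dynamics.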
The p27 protein has been reported to regulate the duration of G1 by preventing the activation of cyclin E-Cdk2, which induces DNA replication and the beginning of S-phase (Morgan and Morgan, 2007). We, hence, hypothesized that rapidly cycling absorptive progenitors located in regions of low mechanical pressure outside the stem cell niche have low levels of p27, which bring forward the start of S-phase to shorten G1 (Figure 2D). In support of this hypothesis, it has been demonstrated that p27 inhibition has no effect on the proliferation of absorptive progenitors (Zheng et al., 2008; see Appendix 1 for a full description). These new features of the cell cycle model are updated dynamically and continuously to respond to changes in mechanical pressure experienced by each cell as it migrates along the crypt. To demonstrate the ability of the model to reproduce the spatiotemporal cell dynamics and composition of a homeostatic crypt, we simulated previously published mouse experiments (Parker et al., 2017; Parker et al., 2019) comprising 5-bromo-2′-deoxyuridine (BrdU) tracking (Figure 2E) and Ki-67 staining (Figure 2F). BrdU is a thymidine analogue often used to track proliferative cells and their descendants along the crypt–villus axis (Nowakowski et al., 1989; Gratzner, 1982). BrdU is incorporated into the newly synthesized DNA of dividing cells during S-phase and transmitted to daughter cells, regardless of whether they proliferate. If the exogenous administration of this molecule is discontinued, the cell label content is diluted by each cell division and is no longer detected after 4–5 generations (Wilson et al., 2008). To simulate the BrdU chase experiment after a single BrdU pulse, we assumed that any cell in S-phase during the first 120 min after BrdU injection incorporated BrdU permanently into its DNA, and that BrdU cell content was diluted upon cell division such that after five cell divisions BrdU was no longer detectable.
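The BrdU labelling and dilution assumptions above reduce to a simple detectability rule; this sketch is illustrative (the five-division limit comes from the text, the function itself is hypothetical):

```python
def brdu_detectable(labelled_at_pulse, divisions_since_pulse, limit=5):
    """A cell is BrdU-positive if it (or its ancestor) was in S-phase
    during the pulse window and its lineage has divided fewer than
    `limit` times since, after which the halved label falls below the
    detection threshold."""
    return labelled_at_pulse and divisions_since_pulse < limit
```

In the full simulation the pulse window is the first 120 min after injection and the label is inherited by both daughters at each division, so this predicate would be evaluated per cell along the crypt-villus axis.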
See Appendix 1 for a complete description. The BrdU chase simulation showed that our model replicated the observed initial distribution of cells in S-phase as well as the division, differentiation, and migration of BrdU-positive cells over time (Figure 2E). Ki-67 is a protein produced by actively proliferating cells during the S-, G2-, and M-phases of the division cycle (Sobecki et al., 2017). Due to the time required for this protein to be catabolized (Miller et al., 2018), Ki-67 is also detected in quiescent or non-proliferative cells after exiting the cycle (Miller et al., 2018) and during G1 in continuously cycling cells (Sobecki et al., 2017). Our simulations assumed that Ki-67 is detected in continuously cycling cells, in cells re-entering the cycle after arrest except during G1, as well as in differentiated cells that were cycling within the past 6 hr and in recently drug-arrested cells. See Appendix 1 for a complete description. Similarly, we observed that the ABM-simulated spatial distribution of Ki-67-positive cells along the crypt recapitulated observations in mouse ileum (Figure 2F). In summary, proliferative cells in the ABM respond to mechanical cues by adjusting the cell cycle protein network to dynamically change the duration of the cycle while migrating along the crypt. With this feature, the model replicates spatiotemporal patterns of cell proliferation, differentiation, and migration observed in mouse experiments.

Cell plasticity/de-differentiation enables crypt regeneration following damage of the stem cell niche

Marker-based lineage-tracing studies have demonstrated numerous potential sources available for intestinal stem cell regeneration (Hageman et al., 2020). In line with these studies, our model assumes that cell fate decisions are reversible and that both secretory and absorptive cells are able to revert into stem cells when regaining sufficient Wnt and Notch signals.
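The reversibility assumption can be condensed into a single predicate. The function name and threshold values are hypothetical; only the rule (a non-stem cell reverts when it regains sufficient Wnt and Notch signals) comes from the text:

```python
def reverts_to_stem(cell_type, wnt, notch, wnt_threshold=4.0, notch_threshold=2.0):
    """Toy plasticity rule: a secretory or absorptive cell reverts to a
    stem cell when it regains enough Wnt and Notch signal; stem cells
    are unaffected. Thresholds are illustrative, not calibrated."""
    return cell_type != "stem" and wnt >= wnt_threshold and notch >= notch_threshold
```

In the full model, regaining Wnt requires being near a Wnt source (i.e. back in the niche), which is why dedifferentiation is coupled to the retrograde movement discussed below.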
To investigate the potential of the ABM to describe and explore cell plasticity dynamics, we simulated the repeated ablation of intestinal stem cells resembling a previously published study ( Tan et al., 2021 ). Following the experimental setup in that study, we simulated the diphtheria toxin receptor-mediated conditional targeted ablation of stem cells for four consecutive days considering that ablation was completed after the first 24 hr ( Saito et al., 2001 ) and persistently inducing stem cell death during the remaining days of treatment ( Figure 3A–C ). Our simulations showed that 6 hr after the last induction, stem cells were not detected, Paneth cells decreased by 75–100% ( Figure 3B ), and the villus length was reduced by about 10–20% ( Figure 3C ) which was similar to the reported experimental findings ( Tan et al., 2021 ). Simulated proliferative absorptive progenitors were indirectly affected by stem cell ablation and their decrease was followed by a reduction in mature enterocytes. The progenitors recovered after treatment interruption to later reach values above baseline when responding to the negative feedback signalling from mature enterocytes ( Figure 3A ). In our simulations, enhanced crypt proliferation was not accompanied by simultaneous villus recovery, which started later. Tan et al., 2021 reported similar results with increased crypt proliferation replenishing first the crypt and not contributing immediately to villus recovery. See Video 1 to visualize the response of the crypt. We next studied the type of cells that were dedifferentiating during the simulated repeated ablation of stem cells and found that in agreement with experimental reports, Paneth cells ( Yu et al., 2018 ), absorptive progenitors ( Tetteh et al., 2016 ), and quiescent stem cells located just above the stem cell niche at the fourth cell position from the crypt base ( Tian et al., 2011 ) dedifferentiated into stem cells. 
Specifically, of all dedifferentiated cells, about 60% were Paneth cells, 30% absorptive progenitors, and 10% secretory progenitors, which are considered quiescent stem cells as previously suggested (Buczacki et al., 2013). Furthermore, we used our model to explore the retrograde motion, reported using intravital microscopy (Azkanaz et al., 2022), of cells returning to the niche to de-differentiate into stem cells. For cells outside the niche, movement is retrograde when their velocity in the z direction is negative, that is, when they move towards the niche along the longitudinal crypt–villus axis. For cells in the hemispherical niche, we consider a cell to move forwards, towards the villus, or backwards, towards the crypt base, if the rate of change of its polar angle is positive or negative, respectively. This implies that cells can be recorded to move backwards despite being located at the crypt base. We observed that the frequency of retrograde, or backward, movements is relatively high at low positions in a crypt in homeostasis (Figure 3D) and increases further after stem cell ablation, reflecting increased retrograde cellular motion as cells repopulate the niche. In homeostasis, the progeny of a stem cell generally differentiates into a cascade of absorptive and secretory progenitors that migrate towards the villus and eventually leave the crypt (Figure 3E). In contrast, following the interruption of stem cell ablation, absorptive progenitors return to the niche during recovery and dedifferentiate to regenerate multiple stem and Paneth cells as well as progenitors (Figure 3E). Taken together, our model recapitulates cellular reprogramming of both multipotent precursors and committed progeny in the crypt and replicates the reported crypt injury dynamics following persistent ablation of stem cells (Tan et al., 2021).
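The two-part definition of backward movement used above can be written as a small predicate; this is a sketch of the stated definition, with argument names of our own choosing:

```python
def moves_backwards(in_niche, dz_dt=0.0, dtheta_dt=0.0):
    """Backward (retrograde) movement as defined in the text: outside the
    hemispherical niche, a negative velocity along the longitudinal
    crypt-villus (z) axis; inside the niche, a negative rate of change
    of the polar angle (towards the crypt base)."""
    if in_niche:
        return dtheta_dt < 0
    return dz_dt < 0
```

Note that, as the text points out, a cell at the crypt base can still register as moving backwards whenever its polar angle is decreasing.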
Disturbance of cell cycle proteins spans across scales to impact crypt and villus organization

The model of Csikász-Nagy et al., 2006 enables the simulation of the disruption of the main proteins governing the cell cycle in each single proliferative cell of the ABM. CDKs play important roles in the control of cell division (Malumbres, 2014), and the development of CDK inhibitors for cancer treatment is an active field of research (Zhang et al., 2021). To explore the effect of cell cycle disruption on epithelial integrity, we simulated the inhibition of CDK1 for 6 hr, every 12 hr, for four consecutive days, resembling the epithelial toxicity of a theoretical drug. CDK1 is reported to be the only CDK essential for the cell cycle in mammals (Santamaría et al., 2007). CDK1 triggers the initiation of cytokinesis by inducing the nuclear localization of mitotic cyclins A and B (Pesin and Orr-Weaver, 2008), and its inhibition has been proposed as a cancer therapy with potentially higher efficacy than the inactivation of other CDKs (Diril et al., 2012). To mimic CDK1 inhibition, we added a term to the CycA/CDK1,2 and CycB/CDK1 differential equations of the Csikasz-Nagy model (Csikász-Nagy et al., 2006) that strongly reduces the production of both CycA/CDK1,2 and CycB/CDK1 during the CDK1 inhibition period (Figure 4A–E; Appendix 1). It has been experimentally demonstrated that the selective inhibition of CDK1 activity in cells programmed to endoreduplicate (i.e. cells that can duplicate their genome in the absence of intervening mitosis) leads to the formation of stable non-proliferating giant cells, whereas the same treatment triggers apoptosis in cells that are not developmentally programmed to endoreduplicate (Ullah et al., 2008).
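One hypothetical shape for the added inhibition term is a production rate gated by the dosing windows; the 5% residual production and the function are illustrative, only the 6 hr on / 12 hr period / 4 day schedule comes from the text:

```python
def cyclin_production_rate(base_rate, t, windows, inhibition_factor=0.05):
    """Sketch of the term added to the CycA/CDK1,2 and CycB/CDK1
    equations: production is strongly reduced (here to 5%, a value we
    chose for illustration) whenever t falls inside an inhibition
    window; otherwise the basal rate applies."""
    for start, end in windows:
        if start <= t < end:
            return base_rate * inhibition_factor
    return base_rate

# 6 hr of inhibition every 12 hr for four consecutive days, as simulated.
windows = [(12 * k, 12 * k + 6) for k in range(8)]
```

In the ABM this rate would enter the right-hand side of the cyclin ODEs, so cells hitting a window late in G2 or in M fail to divide, producing the oversized-cell and mitotic-death fates described next.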
Although endoreduplication is not expected in crypt cells, enlarged polynucleated cells have been reported to remain in the epithelium without dying in a recent light-sheet organoid imaging study tracking the progeny of a cell after cytokinesis failure induced by the inhibition of LATS1 (de Medeiros et al., 2022), which is phosphorylated by CDK1 during mitosis (Furth and Aylon, 2017). Thus, we chose to replicate this phenotype to show the capacity of our model to predict possible complex responses in the intestine. Following CDK1 inhibition, we detected oversized cells in the ABM (Figure 4A). The inhibition of the activation of cyclins A and B altered the modelled protein profiles, disturbing progression through G2 and M-phase and preventing the cell mass from dividing before reinitiating a new cycle (Figure 4B). Thus, a cell could (i) be unaffected if it was at the early stages of the cycle (Figure 4C); (ii) restart the cell cycle if CDK1 was inhibited while the cell was at the end of G2 and unable to enter M-phase, or in M-phase and unable to complete cytokinesis; in this case, the inhibition of cyclins A and B led to an early increase in cyclin E and the premature restart of G1 with the generation of oversized cells, which were ultimately arrested (Figure 4D); or (iii) undergo mitotic death if the cell was in M-phase and the reduction of cyclins A and B severely disrupted the protein network (Figure 4E). Hence, the failure to complete M-phase resulted in cell death or the generation of oversized, non-proliferating cells, which reduced the overall cell number in the crypt (Figure 4F) and the turnover of villus cells (Figure 4G). Appendix 1—figure 1 shows the response of all cell lineages to CDK1 inhibition, and Video 2 shows the 3-D visualization of the crypt during this treatment.
Altogether, our ABM enables the simulation of how disruptions of the cell cycle protein network span across scales to generate complex phenotypes, such as giant cells, and impact the integrity of the crypt and villus structure.

A practical application of the ABM to describe 5-fluorouracil (5-FU)-induced epithelial injury at multiple scales

5-FU is a well-studied and commonly administered cancer drug (Longley et al., 2003) with a reported high incidence of gastrointestinal adverse effects in treated patients (Stein et al., 2010). 5-FU is a pyrimidine antimetabolite cytotoxin with multiple mechanisms of action upon conversion to several nucleotides that induce DNA and RNA damage (Longley et al., 2003). Antimetabolites resemble nucleotides and nucleotide precursors; they inhibit nucleotide metabolism pathways, and hence DNA synthesis, and impair replication fork progression after being incorporated into the DNA (Helleday et al., 2008). To explore the performance of our ABM in predicting epithelial injury, we used results from experiments in mice dosed with 50 and 20 mg/kg of 5-FU every 12 hr for 4 d to achieve drug exposures similar to those observed in patients (Jardi et al., 2023). 5-FU is metabolized into three active metabolites: FUTP, FdUMP, and FdUTP (Longley et al., 2003). Based on previous reports, we assumed that FUTP is incorporated into the RNA of proliferative cells, leading to global changes in cell cycle proteins (Pritchard et al., 1997), while FdUTP is incorporated into DNA (Longley et al., 2003) during S-phase, resulting in the accumulation of damaged DNA. In our model, DNA and/or RNA damage can be repaired or lead to cell arrest or apoptosis (Figure 5A). We did not implement the inhibition of thymidylate synthase (TS) by FdUMP because the impact of this mechanism on intestinal toxicity is not completely understood (Pritchard et al., 1997).
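The damage-to-fate logic of Figure 5A can be caricatured as a threshold rule. The thresholds and the three-way outcome below are hypothetical simplifications of the model's checkpoint behaviour, not its actual parameters:

```python
def cell_fate(dna_damage, rna_damage, death_threshold=10.0, slow_threshold=3.0):
    """Toy decision rule: DNA damage above a high threshold triggers
    death at the G2-M checkpoint; sub-lethal DNA damage and/or RNA
    damage slows the next cycle; otherwise the cell cycles normally.
    Threshold values are illustrative."""
    if dna_damage >= death_threshold:
        return "apoptosis"
    if dna_damage >= slow_threshold or rna_damage > 0:
        return "slowed cycle"
    return "normal cycle"
```

This mirrors the two cases described next for Figure 5C and D: a cell challenged early in S-phase accumulates enough DNA damage to die at the checkpoint, whereas one challenged late in S-phase survives but restarts its next cycle at a slower rate.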
A previously published 5-FU PK model (Gall et al., 2023) was integrated into the ABM to describe the dynamic profile of the concentrations of 5-FU and its metabolites in plasma and GI epithelium after dosing (Figure 5B). Figure 5C shows the cell cycle protein dynamics and fate decision when the 5-FU challenge took place at the beginning of S-phase and led to the accumulation of relatively high levels of DNA damage, which triggered cell death at the G2-M-phase checkpoint. When the challenged cell was at the end of S-phase, the accumulated level of DNA damage was not high enough to be detected at the G2-M-phase checkpoint and the cell finished the cycle and restarted a new cycle at a slower rate due to concurrent RNA damage and a relatively low level of DNA damage (Figure 5D). Figure 5E shows that predicted and observed Ki-67-positive cells declined gradually over time at all positions in the crypt during the high-dose 5-FU treatment. However, the numbers recovered, reaching values above baseline, 2 d after the interruption of 5-FU administration. The increased rebound of the proliferative crypt compartment after treatment was captured in our ABM by the implemented BMP-mediated feedback mechanism from mature enterocytes to proliferative cells (see Appendix 1, Section 1.7.4). For this treatment, both the simulated and observed total numbers of cells in the crypt followed the same pattern as the proliferative compartment (Figure 5F), while the decline in villus cells started later and took longer to achieve full recovery (Figure 5G). Appendix 1—figure 2A and B shows the response of all cell lineages during this treatment, and Video 3 shows the 3-D visualization of the simulated crypt and the changes in signalling pathways and cell composition during the high-dose 5-FU challenge. The low dose of 5-FU had a minor impact on crypt proliferation and villus integrity, which was also recapitulated by the model (Appendix 1—figure 2C–E).
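For readers unfamiliar with PK coupling, a minimal one-compartment sketch with first-order absorption and elimination conveys the idea of the dynamic metabolite profile driving the damage model. This is not the published Gall et al., 2023 model, which is more detailed; all parameters here are hypothetical:

```python
import math

def metabolite_concentration(dose, ka, ke, t):
    """One-compartment profile with first-order absorption (ka) and
    elimination (ke), the classic Bateman form; requires ka != ke.
    Units and parameter values are illustrative only."""
    return dose * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))
```

In the ABM, each cell would sample such a concentration curve at every time step to accumulate DNA/RNA damage in proportion to its exposure and cycle phase.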
Overall, the ABM recapitulates the DNA and RNA damage, and resulting cell cycle disruption, associated with 5-FU administration and describes the propagation of the injury across scales to disturb epithelial integrity. The loss of epithelial barrier integrity is widely accepted to be the triggering event of chemotherapy-induced diarrhoea (McQuade et al., 2016), which is reported in mice at the doses used in this study (Jardi et al., 2023) as well as observed in patients undergoing equivalent treatments (Morawska et al., 2018).
Discussion

We have built a multi-scale ABM of the small intestinal crypt with self-organizing, stable behaviour that emerges from the dynamic interaction of the Wnt, Notch, BMP, and ZNRF3/RNF43 pathways, which orchestrate cellular fate and feedback regulatory loops, and that includes contact inhibition of proliferation, RNA and DNA metabolism, and the cell cycle protein interaction network regulating progression across the division stages. In our model, the stability of the niche is achieved by a negative feedback mechanism from stem cells to Wnt-responsive cells that resembles the reported turnover of Wnt receptors by ZNRF3/RNF43 ligands secreted by stem cells (Hao et al., 2012; Koo et al., 2012; Clevers, 2013b; Clevers and Bevins, 2013c). Wnt signals generated by mesenchymal cells and Paneth cells at the bottom of the crypt are tethered to receptive cells and divided between daughter cells upon division, which forms a decreasing Wnt gradient towards the villi that stimulates cell proliferation and ensures stemness maintenance (Farin et al., 2016; Sato et al., 2011). The model also implements the BMP signalling counter-gradient along the crypt–villus axis by resembling the production of diffusive BMP signals by mesenchymal telocytes, abundant at the villus base, as well as the activity of BMP antagonist molecules secreted by trophocytes located just below crypts (McCarthy et al., 2020). This BMP signalling gradient forms an additional negative feedback mechanism that regulates the size of the crypt proliferative compartment and recapitulates the modulation of BMP secretion by mesenchymal cells via villus cell-derived hedgehog signalling (Büller et al., 2012; van den Brink et al., 2004). Another novel feature of our model is the inclusion of the dynamics of the protein network governing the phases of cell division (Csikász-Nagy et al., 2006).
Moreover, in our model, the cell cycle protein network responds to environmental mechanical cues by adapting the duration of the cycle phases. Cells in crowded environments subjected to higher mechanical pressure, such as stem cells in the niche, exhibit longer cell cycles (Wright and Alison, 1984; Marshman et al., 2002; Potten et al., 1997), while progenitors in the transit-amplifying compartment adapt their cell cycle protein dynamics to mainly shorten G1-phase (Wright and Alison, 1984; Carroll et al., 2018) and proliferate more rapidly. This model feature recapitulates the widely reported YAP-mediated mechanism of contact inhibition of proliferation under physical compression (Halder et al., 2012; Aragona et al., 2013; Low et al., 2014). Interestingly, it has been reported that stiff matrices initially enhance YAP activity and the proliferation of in vitro cultured intestinal stem cells by promoting cellular tension (Gjorevski et al., 2016); however, that study also proposes that the resulting colony growth within a stiff confining environment may give rise to compression-mediated YAP inactivation, retarding growth and morphogenesis (Gjorevski et al., 2016). Furthermore, our model considers that the mechanical regulation of the cell cycle interacts with signalling pathways to maintain epithelial homeostasis, but also to trigger cell dedifferentiation if required. Cells with longer cycles accumulate more Wnt and Notch signals, leading to the maintenance of the highly dynamic niche by the replacement of Paneth and stem cells. Cells located outside the niche exhibit shorter cycles and cannot accumulate enough Wnt signals to dedifferentiate into stem cells in homeostatic conditions.
However, in case of niche perturbation, progenitor cells reaching the niche as well as existing Paneth cells in the niche are able to dedifferentiate into stem cells after regaining enough Wnt signals, which replicates the injury recovery mechanisms observed in the crypt ( Hageman et al., 2020 ; Tetteh et al., 2016 ). Our model also concurs with experimental results suggesting that Lgr5+ stem cells are essential for intestinal homeostasis and that their persistent ablation compromises epithelial integrity ( Tan et al., 2021 ). Altogether, our model implements qualitative and quantitative behaviours to better simulate the functional heterogeneity of the intestinal epithelium at multiple scales. One of the important applications of our modelling approach lies in the development of safer oncotherapeutics. The model enables the prediction of intestinal injury associated with efficacious dosing schedules in order to minimize toxicity while maintaining the efficacy of investigational drugs. We demonstrated the application of our model to predict potential intestinal toxicity phenotypes induced by CDK1 inhibition as well as describe the disruption of the epithelium at multiple scales triggered by RNA and DNA damage, leading to the loss of integrity of the intestinal barrier and diarrhoea following 5-FU treatment. The drug-induced perturbation of other cell cycle proteins or signalling pathways, already integrated into the model, is straightforward to simulate with the current version of the model while the resolution of molecular networks can be increased, or new pathways incorporated into the ABM, to describe additional drug mechanisms of action. While most of the crypt biology understanding integrated in our model derives from mouse epithelial studies, human-derived intestinal organoids and microphysiological systems, now routinely used in research, can provide highly precise information at the single-cell level to inform ABM development. 
In return, ABMs can help test hypotheses behind organoid responses in health and disease conditions. Our work highlights the importance of novel modelling strategies that are able to integrate the dynamics of processes regulating the functionality of the intestinal epithelium at multiple scales in homeostasis and following perturbations to provide unprecedented insights into the biology of the epithelium with practical application to the development of safer novel drug candidates.
The maintenance of the functional integrity of the intestinal epithelium requires a tight coordination between cell production, migration, and shedding along the crypt–villus axis. Dysregulation of these processes may result in loss of the intestinal barrier and disease. With the aim of generating a more complete and integrated understanding of how the epithelium maintains homeostasis and recovers after injury, we have built a multi-scale agent-based model (ABM) of the mouse intestinal epithelium. We demonstrate that stable, self-organizing behaviour in the crypt emerges from the dynamic interaction of multiple signalling pathways, such as the Wnt, Notch, BMP, ZNRF3/RNF43, and YAP-Hippo pathways, which regulate proliferation and differentiation, respond to environmental mechanical cues, form feedback mechanisms, and modulate the dynamics of the cell cycle protein network. The model recapitulates the crypt phenotype reported after persistent stem cell ablation and after the inhibition of the cell cycle protein CDK1. Moreover, we simulated 5-fluorouracil (5-FU)-induced toxicity at multiple scales starting from DNA and RNA damage, which disrupts the cell cycle, cell signalling, proliferation, differentiation, and migration and leads to loss of barrier integrity. During recovery, our in silico crypt regenerates its structure in a self-organizing, dynamic fashion driven by dedifferentiation and enhanced by negative feedback loops. Thus, the model enables the simulation of xenobiotic-, in particular chemotherapy-, induced mechanisms of intestinal toxicity and epithelial recovery. Overall, we present a systems model able to simulate the disruption of molecular events and its impact across multiple levels of epithelial organization and demonstrate its application to epithelial research and drug development.
Funding Information

This paper was supported by the following grants: http://dx.doi.org/10.13039/100013322 European Federation of Pharmaceutical Industries and Associations, Innovative Medicines Initiative 2 No. 116030, to Louis Gall, Carrie Duckworth, Ferran Jardi, Lieve Lammens, David Mark Pritchard, Carmen Pin; http://dx.doi.org/10.13039/100010661 Horizon 2020 Framework Programme, Innovative Medicines Initiative 2 No. 116030, to Louis Gall, Carrie Duckworth, Ferran Jardi, Lieve Lammens, David Mark Pritchard, Carmen Pin.

Acknowledgements

The authors acknowledge financial support from the TransQST consortium. This project has received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement no. 116030. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation programme and EFPIA.

Data availability

The current manuscript is a computational study; no data have been generated for this manuscript. Modelling code is uploaded as Source code 1.

Appendix 1: Technical description of the intestinal epithelial ABM

The model primarily focuses on describing the spatiotemporal dynamics of single epithelial cells, interacting physically and biochemically in the mouse intestinal crypt, undergoing division cycles or differentiating into mature epithelial cells. Single cells both generate and respond to signals and mechanical pressure in the crypt–villus geometry to generate a self-organizing tissue.
Below we describe the assumptions and hypotheses that underpin the model, regarding (1) geometry; (2) cell cycle proteins and cellular growth; (3) drug perturbation of the cell cycle proteins: Cdk1 inhibition; (4) DNA and RNA synthesis; (5) drug perturbations of RNA and DNA synthesis: 5-FU-induced RNA and DNA damage; (6) mechanical cell interactions and contact inhibition; (7) biochemical signalling; (8) cell fate: proliferation, differentiation, arrest, and apoptosis; (9) ABM simulation of Ki-67 and BrdU staining; (10) ‘What-if’ analysis; and (11) model implementation and parameterization.

Geometry

To recreate the morphology of the crypt, we chose the common idealized ‘test tube’ crypt geometry of a hemisphere attached to a cylinder, which describes the basement membrane that the cells are attached to. The parameters describing the average morphology of the crypt, that is, the height and circumference of the ‘tube’ in mouse jejunum and ileum, are given in Appendix 1—table 1. Cells on the villus are terminally differentiated and can be assumed to migrate on a conveyor belt at constant velocity (Parker et al., 2017). Given these simple dynamics, to save computational power and time, we modelled individual cells on the villus without spatial granularity. Cells that reach the top of the crypt are collected into a villus compartment. Shedding from the villus tip is mimicked by removing the oldest cells when the number of cells exceeds the maximum capacity of the villus, which is given in Appendix 1—table 1. Cells on the villus keep all their properties and still age and undergo apoptosis if required, though in homeostatic conditions cells are usually shed into the lumen before becoming senescent.

Cell cycle proteins and cellular growth

The division cycle of cells is controlled by a network of interacting proteins which include cyclins, cyclin-dependent kinases (CDKs), and a suite of ancillary proteins (Morgan and Morgan, 2007).
The discrete events of the cell cycle, such as DNA replication in S-phase and the various stages of mitosis, are regulated by the activity of this protein network, whose components go through a careful, conserved series of peaks and troughs at the correct pace to complete all processes of the cycle. The dynamics of this protein interaction network is simulated in each cell of the ABM and controls cell division and differentiation. We have used the model of Csikász-Nagy et al., 2006, which recreates the mammalian cell cycle and is available in BioModels (Le Novère and Csikasz-Nagy, 2006). This model is an extension of the pioneering work of Novak and Tyson that helped reveal the complex nonlinear dynamics of the cell cycle proteins (Novak and Tyson, 1993; Novak et al., 2001; Novák and Tyson, 2004). The Csikasz-Nagy model provides multiple necessary features, such as the core cell cycle proteins, a mass variable that can be coupled to the volume of the single cells in our ABM, and sufficient mechanistic detail to enable a detailed description of drug–cycle interactions. The model comprises 14 variables that describe the dynamics of the concentrations of the main cell cycle proteins as oscillations between alternating peaks and troughs. G1-phase is the default opening state, with low levels of cyclins A, B, and E and a high level of Wee1. The level of cyclin D grows exponentially throughout the cycle and is halved between daughter cells after mitosis. S-phase begins with the increase in cyclin E and ends when Wee1 drops to reach its trough. G2-phase is characterized by low Wee1 and high cyclin A, ending with the drop of cyclin A. M-phase ends when cyclin B falls and the cell divides and restarts the cycle in G1. Stem cells have been reported to have a longer division cycle than absorptive progenitor cells (Wright and Alison, 1984; Marshman et al., 2002; Potten et al., 1997).
We hypothesize that this is due to contact inhibition mechanisms caused by increased intercellular forces in the crowded, constrained niche. This implies that the duration of the cycle may significantly vary among single cells. To implement cycles of varying duration in our ABM, we describe below a series of required adjustments in the Csikasz-Nagy model that basically involve changes in the duration of the full cycle, the re-adjustment of the length of the cycle phases, primarily G1 and S-phase, and the modulation of the dynamics of the model mass variable. To change the duration of the cell cycle, , we rescaled the time coordinate: , where h is the original period of the model ( Csikász-Nagy et al., 2006 ) and is determined by the internal pressure of the cell as detailed below in Section 1.6. Without further modifications of the Csikasz-Nagy model ( Csikász-Nagy et al., 2006 ), the duration of all cycle phases would be scaled in proportion with changes in . However, not all phases are proportionally shortened in fast cycling healthy cells ( Csikász-Nagy et al., 2006 ). The shorter cycle duration in absorptive progenitors is likely due to shortening/omission of G1-phase as reported for rapid cycling progenitors ( Wright and Alison, 1984 ; Carroll et al., 2018 ), while the duration of S-phase is less variable ( Wright and Alison, 1984 ) with reported values of 8 hr for mouse ileal epithelium ( Wright and Alison, 1984 ). Regarding G1-phase, the p27 protein has been reported to regulate the duration of G1 by preventing the activation of cyclin E-Cdk2 which induces DNA replication and defines the beginning of S-phase ( Morgan and Morgan, 2007 ). We hypothesized that fast cycling cells have low levels of p27 which results in earlier DNA replication, bringing forward the start of S-phase and shortening the length of G1. 
In support of this hypothesis, it has been experimentally demonstrated that inhibiting p27 has no effect on the proliferation of absorptive progenitors ( Zheng et al., 2008 ). In the Csikasz-Nagy model ( Csikász-Nagy et al., 2006 ), the duration of G1 can be modulated through the parameter , which is the basal production rate of p21/p27 (in the Csikasz-Nagy model, the p21 and p27 proteins are represented by a single variable, here we refer to that model quantity as p21/p27). Additionally, the end of S-phase is associated with the decrease in Wee1 to basal levels due to Cdc14-mediated phosphorylation of Wee1. In the Csikasz-Nagy model ( Csikász-Nagy et al., 2006 ), this reaction is described by a Goldbeter–Koshland function, which includes the parameter to regulate the level of Cdc14 required for the phosphorylation of Wee1. Therefore, we modified these two parameters, and , to ensure that variations of the cycle duration mostly impact on G1 while the length of S-phase remains constant. We assumed that the value of the two parameters scales linearly with the duration of the division cycle, , between a lower and upper bound, which prevent aberrant behaviour of the cell cycle model in the dynamically changing conditions of the crypt. is scaled according to where and denote the average duration of the cycle of fast cycling progenitors and of the slower cycling stem cells, respectively. and are values calibrated to ensure the correct duration of G1 for the short and long cycles, respectively, and can be found in Appendix 1—table 1 . Similarly, we scale using the function: Here, and are the values required to maintain constant duration of S-phase in fast and slow cycling cells and can be found in Appendix 1—table 1 . A further refinement required to modify the length of the cycle in the Csikasz-Nagy model comprises the mass variable. 
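For illustration, the linear scaling of a cell cycle model parameter with cycle duration between calibrated bounds, as described above, can be sketched in Python. The function name and the clamping behaviour outside the fast/slow bounds follow the text; the actual calibrated values live in Appendix 1—table 1 and are not reproduced here.

```python
def scaled_parameter(T, T_fast, T_slow, p_fast, p_slow):
    """Linearly interpolate a cell cycle parameter (e.g. the basal p21/p27
    production rate) between the value calibrated for fast-cycling
    progenitors (cycle duration T_fast) and the value calibrated for
    slow-cycling stem cells (T_slow). Values are clamped at the bounds to
    prevent aberrant cell cycle dynamics for extreme cycle durations."""
    if T <= T_fast:
        return p_fast
    if T >= T_slow:
        return p_slow
    frac = (T - T_fast) / (T_slow - T_fast)
    return p_fast + frac * (p_slow - p_fast)
```

The same interpolation is applied independently to both modified parameters, each with its own calibrated bounds.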
This variable doubles its value over the course of a cycle and drives the progression of the cell cycle by changing the production rates of the cycle proteins. The changing production rates affect the balance of the proteins and the duration of the cell cycle phases, which start and end at particular mass values determined by the abovementioned two rates and other parameters in the model. After the mass doubles, mitosis occurs and the mass is halved to its initial value, returning the model to the original state. From here the mass begins to grow again, repeating the cell cycle. The mass of a cell effectively tracks the cell’s progress through the cell cycle. In our ABM, changes continuously in each cell and modifies and as described above, which in turn changes the mass values of the start/end of the cell cycle phases. Without further changes in the model, this would cause the cells not to progress through the cell cycle correctly, with unbalanced phase durations and division at unwanted mass values, causing erroneous and unrealistic behaviour in the ABM. This can be solved by normalizing the mass in the cell cycle model, chosen such that a cell begins at and always divides at . To do this, we first define a normalized mass variable, assumed to be proportional to the volume of the cell: where is the cell radius that takes values between and . When a proliferative cell is created, it is assigned a desired final size, , where for stem cells and for all other cells. The mean values, 0.5 and 0.35, of the radius of progenitor and stem cells, respectively, were determined for an average, non-proliferative or proliferative progenitor cell to have, without loss of generality, a diameter of 1 while the diameter of an average stem cell is slightly smaller, 0.7. In this way, the model captures the smaller size described for columnar LGR5+ stem cells ( Barker et al., 2008 ), which additionally helps recapitulate the mechanics and cell composition of the niche.
The variance of the radius was determined by our implementation of the cell cycle model in the ABM. In our model, the volume of the cell is equated to the cell’s mass parameter of the Csikasz-Nagy model and, hence, the cell final radius determines the duration of the cell cycle as described above. By simulating the cell cycle model, we observed that large values of the standard deviation resulted in some cells progressing through the cycle too quickly and, therefore, failing to complete the cell cycle correctly. This analysis provided an upper limit to the coefficient of variation (CV) = 0.025 to ensure all cells progress regularly through the cycle during homeostasis. This results in values of the standard deviation of the radius of 0.0125 and 0.00875 for progenitor cells and stem cells, respectively. Of note, a cell radius CV of 0.025 corresponds to a cell volume CV of about 0.075 which is not far from the reported experimental CV for cell volume, about 0.11 ( Bell and Anderson, 1967 ). We then introduce a factor onto the four terms involving the mass variable in the cell cycle model. These terms are the basal production rates of the four cyclins A, B, D, and E, called , , , and , respectively. is given by The values and are values found by calibration of the cell cycle model to guarantee the cell always divides at for the short and long cycle durations. Moreover, the cell mass is assumed to grow exponentially. A proliferative cell always reaches a final value of , corresponding to the radius , during the cycle time, , so that mass must grow as This corresponds to a radial growth rate of As changes dynamically through the cell cycle, the growth rate holds only for the instantaneous conditions the cell is experiencing and changes dynamically through the cell’s lifetime. However, in a healthy crypt, extracellular conditions vary slowly, and the value of and all derived adjustment factors remain relatively unchanged. We assumed that cells divide symmetrically. 
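Since the mass is proportional to the cube of the radius and must double exponentially over the cycle time, the radial growth rate follows directly. A sketch of this exponential radial growth (symbol names are illustrative):

```python
import math

def grow_radius(r, T, dt):
    """Advance the cell radius by a time step dt, assuming the cell mass
    (proportional to r**3) grows exponentially so that it doubles over the
    cycle duration T. The corresponding radial rate is
    dr/dt = r * ln(2) / (3 * T)."""
    return r * math.exp(math.log(2.0) / (3.0 * T) * dt)
```

Because T is re-evaluated continuously from the cell's internal pressure, the rate above holds only for the instantaneous conditions, as noted in the text.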
Each daughter cell has a starting radius of and is assigned a new randomly generated value which determines . If , then we set to prevent values of . Since cells have a variable maximum size uncorrelated to their birth size, that is, , the initial mass value is not necessarily 1. Longer or shorter G1 phases emerge from the model to adjust the cycle duration in cells that begin with or , respectively. Each proliferative daughter cell continues through its own cell cycle and grows to its own . Non-proliferative secretory cells differentiate from stem cells, which are smaller than other cells. To compensate for this, secretory cells grow to reach a radius , generated as in a time equal to . The other type of non-proliferative cells, enterocytes, derive from absorptive progenitors and remain at without increasing size. These definitions of mass, cell radius, and cell growth were chosen to ensure that cells have a consistent radius and guarantee that the cell cycle model correctly proceeds through all phases in each cell. Due to the varying cycle duration and extracellular conditions, this control is essential to the correct functioning of the cell cycle and overall behaviour of the ABM. Drug perturbations of the cell cycle model: CDK1 inhibition We have used the Csikasz-Nagy cell cycle model to implement drug-induced perturbations of the cell cycle proteins, which are common mechanisms of action of oncotherapeutics, in our ABM. For an arbitrary component of the cell cycle model, , we introduce a term dependent on the drug and : where ⊂ means ‘contains the term’ and Drug represents the cell concentration of the active compound/metabolite which is often described by a pharmacokinetics model. , quantifies the effect of the drug on . This function can take several forms such as a mass-action term or a Michaelis–Menten or Hill equation. Multiple terms like this can be added concurrently to the proteins described by the Csikasz-Nagy model.
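The three functional forms mentioned for the drug-effect term can be sketched as follows (function and parameter names are illustrative; the calibrated values are in Appendix 1—table 1):

```python
def mass_action(drug, x, k):
    """Mass-action drug-effect term: proportional to both the drug
    concentration and the targeted cell cycle component x."""
    return k * drug * x

def michaelis_menten(drug, vmax, km):
    """Michaelis-Menten drug-effect term: saturates at vmax, half-maximal
    at drug concentration km."""
    return vmax * drug / (km + drug)

def hill(drug, vmax, km, n):
    """Hill drug-effect term with coefficient n; reduces to
    Michaelis-Menten when n == 1."""
    return vmax * drug ** n / (km ** n + drug ** n)
```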
As an example, we have modelled the effects of Cdk1 inhibition at the single-cell level in our ABM. Cdk1 binding is reported to induce nuclear translocation of cyclins A and B, required to initiate mitosis ( Pesin and Orr-Weaver, 2008 ). Accordingly, we have added a mass-action term onto the rate of change of the CycA/Cdk1/2 and CycB/Cdk1 complexes as follows: where and are used to refer to CycA/Cdk1/2 and CycB/Cdk1 to improve readability of the equation. and are parameters that quantify the drug effect, with values specified in Appendix 1—table 1 , and denotes a theoretical dynamic drug concentration. For the simulation in Figure 4 , we considered a CDK1 inhibitor that was administered every 12 hr for 4 d, with active cytotoxic effects for 6 hr. To model this, is given by the formula where hr. Also, we considered a smaller value for than for to reflect the fact that CycA represents both CycA/Cdk1 and CycA/Cdk2 and only CycA/Cdk1 is inhibited. These perturbations of the cell cycle proteins can cause incorrect progression through the cell cycle, whereupon a cell is permanently arrested. A disorderly restart of the cycle, leading to enlarged cells, is observed when CDK1 inhibition prevents cells at the end of G2 from entering M-phase or induces early reduction of cyclins A and B during M-phase, with cells failing to complete cytokinesis and prematurely restarting G1. Cells in M-phase subjected to greater reductions of cyclins A and B, which completely disrupt the protein network, undergo mitotic death. DNA and RNA synthesis Since one of the most common means of targeting the cell cycle is to exploit the effect of DNA-damaging drugs ( Helleday et al., 2008 ), we added the dynamics of DNA replication during S-phase and RNA synthesis during the cell cycle. Replicating DNA is represented by two variables, and , which denote two DNA double helices formed during S-phase.
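The 12-hourly dosing schedule described above (doses over 4 days, each active for 6 hr) can be sketched as an idealized square wave; the unit amplitude and the exact formula are assumptions, since the published expression is not reproduced here:

```python
def drug_concentration(t):
    """Idealized square-wave exposure for the simulated CDK1 inhibitor:
    doses every 12 hr over a 4-day (96 hr) window, each with active
    cytotoxic effect for the first 6 hr. Returns unit concentration while
    active, 0 otherwise (illustrative amplitude)."""
    if t < 0.0 or t >= 96.0:
        return 0.0
    return 1.0 if (t % 12.0) < 6.0 else 0.0
```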
is an abstraction of the proportion of undamaged DNA, which takes values from 0, representing total DNA disruption, to 1 for the whole undamaged double helix. At the onset of S-phase, the original DNA double helix, , unwinds to start the replication of strands and rapidly generates two complete sets of DNA, and . This is represented in the model by Both and aim to reach : DNA synthesis is assumed to be at a faster rate during S-phase, and outside S-phase DNA synthesis takes place solely for repair at a slower rate. Hence, in healthy cells, these variables obey the following equations and algorithm: The DNA replication rate, , is sufficiently fast to ensure reaches 1 during S-phase in healthy cells. Outside of S-phase, we assumed a twofold slower rate for DNA repair when the cell is not actively replicating its DNA. Values are specified in Appendix 1—table 1 . When the cell divides, the daughter cells are given one DNA double helix each (which are both assigned to in the respective daughter cell) to restart the cycle. RNA levels are represented by a single variable. Similarly, this variable is an abstraction of the proportion of undamaged RNA in the cell, with in a healthy cell and for total RNA disruption. RNA synthesis is assumed to be governed by a simple linear-growth differential equation until its maximum value, , and remains at this value unless damage is induced as follows: with parameter values specified in Appendix 1—table 1 . Along with these equations for DNA and RNA levels, we added DNA and RNA-damage checkpoints to modulate the response of the Csikasz-Nagy cell cycle model to perturbations. We considered both the G1/S and the G2/M checkpoints ( Morgan and Morgan, 2007 ), with cells checking their DNA and RNA levels as they progress from G1 to S-phase, and from G2 to M-phase. If the DNA and/or RNA levels are below the threshold values (see Appendix 1—table 1 ), the cell undergoes apoptosis. 
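The DNA update rule and the checkpoint test described above can be sketched as follows. Function names and the twofold repair/synthesis rate ratio follow the text; the actual rate and threshold values are in Appendix 1—table 1.

```python
def dna_step(dna, in_s_phase, mu, dt):
    """Advance one DNA variable by dt: synthesis at rate mu during S-phase,
    repair at a twofold slower rate outside S-phase; DNA saturates at 1
    (fully intact double helix)."""
    rate = mu if in_s_phase else 0.5 * mu
    return min(1.0, dna + rate * dt)

def passes_checkpoint(dna, rna, dna_threshold, rna_threshold):
    """G1/S and G2/M checkpoint rule: the cell undergoes apoptosis if its
    DNA or RNA integrity is below the respective threshold."""
    return dna >= dna_threshold and rna >= rna_threshold
```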
Checkpoint failures can occur upon drug-induced DNA or RNA damage, as explained below. Drug perturbations of RNA and DNA synthesis: 5-FU-induced RNA and DNA damage Similar to the cell cycle model, drug effects are represented by adding a negative term to these differential equations: where could be a mass-action, Hill, or Michaelis–Menten term quantifying the drug-induced RNA or DNA damage. DNA damage induces increased p21 expression in cells, which prevents progression through the cell cycle and can lead to cell cycle arrest or apoptosis ( Abbas and Dutta, 2009 ). To replicate this, we further modified the p21/p27 term in the Csikasz-Nagy model to respond to the DNA levels of the cell. Recall that Vsi was the production rate of p21/p27 in the model, and we multiplied this by to moderate the production of p21/p27 (see details in ‘Cell cycle proteins and cellular growth’ above). Recall that ; to replicate DNA damage-induced production of p21, we replace with a bounded function dependent on the cell’s DNA levels in G1, and in all other phases and s is a scaling coefficient. In homeostasis, with , this function is equal to and the cell cycle model proceeds as before. With severe DNA damage, the function is approximately equal to , always > that represents the maximum fold increase in the production rate of p21, that is, Parameter values can be found in Appendix 1—table 1 . When DNA levels are reduced by drug-induced injury, this new function increases the production rate of p21/p27 which slows down the production of cyclins and the progression of the cell cycle, recapitulating a reversible cell cycle arrest for low-to-moderate DNA damage ( Shaltiel et al., 2015 ). Cell growth is dependent on the correct translation of mRNA into proteins. We hypothesized that RNA damage reduces a cell’s capability of biosynthesis and leads to slower cellular growth ( Wurtmann and Wolin, 2009 ).
This is modelled by adding an RNA-dependent factor to the growth rate of cells: where RNA, as defined above, takes values between 0 and 1 and t is a scaling coefficient. Parameter values can be found in Appendix 1—table 1 . By linking RNA integrity to cellular growth, we allow RNA damage to induce a form of cell cycle arrest, as previously reported ( Chernova et al., 1995 ; Bellacosa and Moss, 2003 ). The result of these responses to DNA and RNA damage, in combination with the cell cycle checkpoints, allows the cells in our model to exhibit a progression of responses to increasingly severe DNA and RNA damage. Cells with slightly damaged DNA and/or RNA levels grow and proliferate slowly due to impediment of their cell cycle and/or cellular growth. With moderate DNA and RNA damage, a cell enters a reversible cell cycle arrest (characterized by a near-zero growth rate and p21-induced halt of the cell cycle). Upon interruption of the drug-induced insult, these cells will re-enter the cell cycle. In the case of severe DNA and/or RNA damage, a cell will undergo DNA/RNA damage-induced apoptosis caused by failing a cell cycle checkpoint. Additionally, drug-induced perturbations may result in incorrect progression through the cell cycle, which causes the cell to enter a permanent arrested state or die as described above. Note that though RNA damage is known to cause cell cycle arrest and apoptosis ( Bellacosa and Moss, 2003 ), the mechanisms are poorly understood, so we made the conservative decision to check the level of RNA damage at the same checkpoints as DNA damage. As an example, we modelled 5-FU-induced RNA and DNA damage in the intestinal epithelium. We considered the two main downstream metabolites of 5-FU, FdUTP and FUTP, causing DNA and RNA damage, respectively ( Longley et al., 2003 ).
To do this, we implemented in the ABM a previously published model that describes 5-FU distribution post-dosing in mice and a reduced version of the 5-FU metabolic pathway ( Gall et al., 2023 ). Furthermore, we implemented the effect of FdUTP and FUTP on DNA and RNA synthesis, respectively, on each cell of our ABM using a Hill function as follows: Parameter values can be found in Appendix 1—table 1 . The impact of these metabolites on DNA and RNA of each cell of the epithelium resulted in the arrest of the majority of proliferative cells, with a small proportion undergoing apoptosis after failing the G1/S or G2/M checkpoint. Mechanical cell interactions and contact inhibition Intestinal stem cells and early progenitor cells compete for limited niche space and, therefore, the ability to retain or regain stemness. Cell proliferation creates a constant battle for space, inducing forces that drive cell migration away from the hard boundary of the stem cell niche towards the top of the crypt and onto the villus. We assumed intercellular physical forces based on Hertzian contact mechanics with adhesive and frictional forces, similar to those in published reports ( Galle et al., 2005 ; Buske et al., 2011 ). For the sake of simplicity and differently from previous approaches, we did not include the extra repulsive force opposing the reduction in cell volume caused by cell overlapping and did not consider radial expansion of cells to compensate for the loss of volume in compressed cells. In our model, cells experience repulsive, adhesive, and frictional forces. Forces result in movement according to Stokes flow, where viscous forces dominate inertial forces, such that cell velocity is directly proportional to the resultant forces on the cell. For very shallow overlapping distances ( of the cell radius), the adhesive force holds the cells together and replicates the continuity of a biological tissue, but for greater overlap distances, repulsive forces dominate.
Frictional forces help create collective movement by counteracting cell migration in the opposite direction to the general flow of cells. All distances are expressed in arbitrary units (A.U.) defined such that 1 distance unit is equal to the diameter of an average, isolated cell. Forces are then measured in the resulting units. We have assumed cells are deformable and hence can lose their spherical shape when responding to mechanical forces. Regions with high proliferation result in cell diameters, in both the z-axis direction (longitudinal crypt–villus axis) and the x–y plane (crypt transversal circumference), smaller than 1 unit and, hence, in a mismatch between the number of cells and the number of distance units. Contact repulsion Cells are assumed to be elastic spheres with intercellular forces derived from Hertzian contact mechanics. The magnitude of the repulsive force, between cell (with position vector , radius , Young’s modulus , and Poisson ratio ) and cell (with position vector , radius , Young’s modulus , and Poisson ratio ) is described as follows: where is the overlapping distance between cells measured on the line joining the cell centres, with the displacement vector joining the two cell centres. This repulsive force acts on both cells in opposing directions, pushing them away along the unit vector joining the two cells : The reported value for the Young’s modulus of Paneth cells is relatively large ( Pin et al., 2015 ) and results in a relatively large force acting on neighbouring stem cells which helps to confine them in the niche. In addition, the previously published values of the Poisson ratio indicate that cells are marginally compressible ( Geissler and Hecht, 1981 ). Adhesive force All cells in contact experience adhesive forces proportional to the area of contact and the cells’ inherent adhesiveness, parameterized by .
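The Hertzian repulsion just described can be sketched using the standard Hertz contact form for two elastic spheres; the paper's exact expression is not reproduced here, so the effective-modulus and effective-radius combination below is an assumption based on classical contact mechanics:

```python
import math

def hertz_force(d, R1, E1, nu1, R2, E2, nu2):
    """Magnitude of the Hertzian repulsive force between two elastic
    spheres overlapping by distance d (0 when not in contact), given each
    cell's radius R, Young's modulus E, and Poisson ratio nu.
    Standard Hertz form: F = (4/3) * E_eff * sqrt(R_eff) * d**1.5."""
    if d <= 0.0:
        return 0.0
    E_eff = 1.0 / ((1.0 - nu1 ** 2) / E1 + (1.0 - nu2 ** 2) / E2)
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)
    return (4.0 / 3.0) * E_eff * math.sqrt(R_eff) * d ** 1.5
```

The d**1.5 dependence means the force grows super-linearly with overlap, which is what lets stiff Paneth cells confine their softer neighbours.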
The magnitude of adhesive force between cell and is quantified as follows: where is the distance between cell centres and This force is again directed along , pulling the cells together: and its magnitude is derived by assuming the associated energy, , is proportional to the area of contact between cells and , , where , and differentiating with respect to the distance between the cells. Two cells in isolation will be at rest when the repulsive and adhesive forces are equal; however, in our simulations, this rarely happens due to the constant proliferation and growth of surrounding cells. In vivo crypts have a highly compressed niche with tightly packed stem cells wedged between Paneth cells. In our model, the repulsive force is parameterized entirely by observed quantities (the Young’s modulus and Poisson ratio), leaving in the adhesive force as a free parameter. The value of determines intercell separation at rest. This value was chosen to allow overlapping of Paneth cells at rest of 0.15 distance units, which corresponds to 15% of the diameter of an average Paneth cell. This results in for Paneth–Paneth adhesion. Qualitatively, all other cells are less tightly packed, so all other adhesive forces (including Paneth cells with any other cell type) are assumed to be tenfold weaker with , which produces an overlap of approximately 0.075 cell units. These assumptions facilitate the recapitulation of the tighter packed cells in the niche, resulting in increased mechanical pressure (defined in the following sections) which induces proliferation contact inhibition mechanisms. Frictional force Cells that are in contact experience a frictional force proportional to their relative velocity. The force acting upon cell due to friction with cell j is quantified as follows: where is the area of contact between cells and defined above, and is a numerical constant calibrated to enforce orderly cell dynamics. 
This force is comparatively smaller than the other forces but helps the collective motion of cells by opposing cell migration against the common direction. Cell migration Under a force, cells move according to Stokes flow, where viscous forces are assumed to dominate over inertial effects: Therefore, the position vector of the -th cell, , is updated according to where is the resultant of all forces on cell due to cell . The parameter links the forces to cellular motion. The value of this parameter is estimated to recapitulate the transfer velocity in the crypt–villus junction measured in in vivo experiments to be approximately one cell position per hour in mice ( Potten, 1998 ). However, the motion of cells in response to these forces may vary for different cell types. It has been reported that Paneth cells persist in the stem cell niche at the crypt base for relatively long periods of up to 57 d in mice ( Ireland et al., 2005 ; Roth et al., 2012 ) and exhibit elevated -integrin expression anchoring them to the mesenchyme ( Langlands et al., 2016 ). Additionally, Paneth cells are larger and stiffer than the comparatively malleable stem cells, which suggests that they require greater forces to be displaced. In our model, we used μ to replicate this behaviour and recreate drag effects of the basal membrane/mesenchyme. We implemented a value of for Paneth cells 10,000-fold greater than for other cells, effectively making Paneth cells difficult to move by other cells but allowing them to slowly move one another to form an orderly niche over longer timescales. Internal pressure and contact inhibition The forces described above are used to calculate the internal pressure experienced by cells, which varies according to the cell-intrinsic properties and local environment, that is, a stem cell in the crowded niche has higher internal pressure.
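The overdamped (Stokes flow) position update described above, with velocity proportional to the net force and a much larger drag coefficient for Paneth cells, can be sketched as follows (parameter values illustrative):

```python
def update_position(pos, force, mu, dt):
    """Overdamped position update: viscous forces dominate inertia, so
    velocity = force / mu, and the cell moves by velocity * dt.
    Paneth cells would be passed a mu 10,000-fold larger than other
    cells, making them effectively anchored on short timescales."""
    return [p + f / mu * dt for p, f in zip(pos, force)]
```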
Cell pressure is used to recapitulate contact inhibition by modulating the duration of the division cycle which increases when cells are densely squeezed together and decreases if cell density falls to enable, for instance, fast recovery from injury. A cell feels internal stress from the surrounding cells, and this is used to simulate contact inhibition. To do this we use the concept of virial stress outlined in Van Liedekerke et al., 2015 . The stress tensor for cell i , , is defined as follows: where is the vector from the centre of the cell to the plane of contact with cell , always assumed to be on the surface of cell , and ⊗ is the tensor/outer product combining two vectors into a ‘matrix’. Using this stress tensor, we extract the pressure in the conventional manner: As all our forces are normal to the plane of contact, this reduces to This provides a rough, first-order approximation to the pressure experienced at the centre of the cell that is straightforward to compute and essential to implement contact inhibition in proliferative cells. Note that we do not consider the hydrostatic pressure induced by cell compression. On the other hand, physical compression has been reported to lead to YAP inactivation, retarding growth and morphogenesis in the GI epithelium ( Halder et al., 2012 ; Aragona et al., 2013 ; Low et al., 2014 ). We used our estimate of pressure to implement this contact proliferation inhibition mechanism responding to environmental mechanical cues and described the increase in the cell cycle duration, , as pressure, , increases using a scaled logistic function as follows: Here, is the average pressure experienced by cells in the niche; is the average division time of absorptive progenitors; and , where denotes the longer division time of a stem cell in average niche conditions. 
This function captures the variation of the duration of the division cycle from a minimum to a maximum value in highly compressed cells which leads to longer division times in the tightly constrained stem cell niche of the crypt, while the cycle is shorter in the less compressed transit-amplifying zone, in agreement with experimental reports ( Wright and Alison, 1984 ; Bach et al., 2000 ; Schepers et al., 2011 ). Biochemical signalling Next, we detail how the cells interact with one another, communicating the local composition of the crypt to maintain homeostasis through simulated biochemical signalling. To achieve stable crypt cell composition and structure, we have implemented five signalling mechanisms including Wnt, Notch, and BMP pathways which have been demonstrated to be essential for morphogenesis and homeostasis of the intestinal crypt ( Gehart and Clevers, 2019 ; Fevr et al., 2007 ; VanDussen et al., 2012 ; Pellegrinet et al., 2011 ; He et al., 2004 ). We have modelled contact proliferation inhibition mediated by the YAP-Hippo signalling pathway responding to mechanical forces ( Halder et al., 2012 ; Aragona et al., 2013 ; Low et al., 2014 ) as described above and following experimental evidence ( Hao et al., 2012 ; Koo et al., 2012 ; Farin et al., 2016 ), implemented a ZNRF3/RNF43-like mediated feedback mechanism between Paneth and stem cells. These minimal signalling mechanisms were chosen because a full understanding of the protein interaction networks is still a topic of active research. However, even with our conservative assumptions, we implicitly introduce crosstalk between the different signalling pathways. For example, the nature of cell fate decisions leads to interaction between Wnt and Notch levels, and changes in the duration of the cell cycle caused by contact inhibition regulate the ability of a cell to accumulate signalling molecules. 
Wnt signalling The Wnt pathway is the primary pathway associated with stem cell maintenance and differentiation in the intestinal crypt as well as in many other tissues ( Fevr et al., 2007 ; van der Flier and Clevers, 2009 ; Nusse and Clevers, 2017 ). Two sources of Wnt signals have been described in the mouse crypt: Paneth cells ( Sato et al., 2011 ) and specific mesenchymal cells surrounding the stem cell niche at the crypt base ( Stzepourginski et al., 2017 ). We did not consider the dynamics of the canonical Wnt signalling molecular cascade but directly implemented downstream cellular responses to Wnt levels. We modelled Wnt signalling as a short-range field around Paneth cells and Wnt-emitting mesenchymal cells at the bottom of the crypt, acting within a distance WntRange from the surface of these cells (see Appendix 1—table 1 for value). Receptive cells within this range tether Wnt signals to their surface as previously reported ( Farin et al., 2016 ; Clevers and Nusse, 2012 ). This is described by the following equation: The variable ‘ ’ is an abstraction of the total number of Wnt ligands tethered to the surface of the cell. is the rate of Wnt signal tethering by a receptive cell and is the decay rate of Wnt signal tethered molecules. depends on the turnover of Wnt receptors assumed to be regulated by RNF43 and ZNRF3 ligands produced by stem cells, which forms a Wnt-mediated negative feedback loop as described below. describes the maximum number of Wnt signals a cell can have tethered and its value is chosen to be a power of 2 to facilitate dividing Wnt signals in half upon cellular division. is the total amount of Wnt signal sources within range of the cell and is calculated as follows: represents the number of Wnt-emitting mesenchymal cells surrounding the niche, which we assume is equal to the total number of epithelial cells in the niche in homeostatic conditions ( Wright and Alison, 1984 ; Snippert et al., 2010 ). 
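One plausible discretization of the Wnt tethering dynamics described above (tethering from nearby sources saturating at the maximum surface capacity, minus first-order decay) is sketched below; the exact published equation is not reproduced here, so the functional form and parameter names are assumptions:

```python
def wnt_step(w, sources, beta, delta, w_max, dt):
    """Euler step for surface-tethered Wnt on one cell.
    w       -- current tethered Wnt signals
    sources -- Wnt signal sources (Paneth/mesenchymal) within WntRange
    beta    -- tethering rate, delta -- decay rate (hypothetical form)
    w_max   -- maximum tethered signals (a power of 2, so halving at
               division stays integer-friendly)."""
    dw = beta * sources * (1.0 - w / w_max) - delta * w
    return max(0.0, min(w_max, w + dw * dt))

def divide_wnt(w):
    """Tethered Wnt is split equally between the two daughters."""
    return w / 2.0, w / 2.0
```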
Additional Wnt production by Paneth cells is required to support the homeostatic number of stem cells. In the presented modelling scenarios, we assumed a constant exogenous Wnt source, that is, constant , shared by all cells in the niche and enhancing niche recovery after damage. For instance, with a lower number of cells in the niche, the surviving cells will receive stronger mesenchymal Wnt signalling that enhances proliferation and recovery after perturbations. We assumed that surface-tethered signals are equally distributed between daughter cells upon cell division ( Gehart and Clevers, 2019 ; Farin et al., 2016 ), so that cells eventually lose Wnt signals and their capacity to proliferate if not within the range of a Wnt source. These assumptions are partly supported by observed in vivo and in vitro behaviour, where the mesenchymal and Paneth cell-derived Wnt sources are mutually redundant ( Farin et al., 2012 ). ZNRF3/RNF43 signalling In our model, Paneth cells enhance their own production by generating high Wnt local environments ( van Es et al., 2005 ). In addition, due to their high Young’s modulus, Paneth cells create a region of high intercellular forces on neighbouring cells which leads to prolonged division times with greater opportunity for Wnt accumulation. This, in turn, expands the niche region with high Wnt and high cell pressure, promoting further differentiation into stem and Paneth cells. Therefore, without a negative feedback mechanism in our model, these features would result in the expansion of the niche with stem and Paneth cells occupying the entire crypt. Additionally, two recent studies have demonstrated the existence of a negative feedback loop mediated by RNF43 and ZNRF3 ligands produced by stem cells ( Hao et al., 2012 ; Koo et al., 2012 ).
These studies proposed that RNF43 and ZNRF3 inhibit Wnt signalling by promoting the turnover of Wnt receptors such as Frizzled and LRP5 ( de Lau et al., 2011 ), and showed that simultaneous deletion of these two receptors results in the formation of adenomas comprising mostly stem and Paneth cells ( Koo et al., 2012 ). We assumed that ZNRF3/RNF43 (henceforth called ZNRF3 for simplicity) is a diffusing, decaying signal secreted by stem cells. Without explicit knowledge of the chemical and physical properties of ZNRF3 signalling, this process is assumed to immediately reach steady state at the timescale of cellular decisions. Therefore, the ZNRF3 signal strength, , received by a cell at position from a stem cell located at position , is described by the diffusion equation as follows Crank, 1975 : where represents the maximum signal strength immediately around the emitting cell and determines the spatial scale of diffusion, which we assume is equal to the length of a cell in order to maintain high signalling levels primarily in the niche. The total ZNRF3 signalling received by a cell at position is calculated, therefore, as the sum of the signal received from all stem cells: where is the position of the i- th stem cell. The strength of ZNRF3 signalling received by a cell is proportional to the number of stem cells in the immediate vicinity of the cell: in typical, homeostatic conditions, is high in the niche, falling off exponentially as a cell moves towards the villus. The ZNRF3 signalling level detected by a cell, located at position , regulates the decay rate of its surface-tethered Wnt molecules, as follows: where u is a scaling coefficient, and and are constants calibrated to maintain the size of niche at its homeostatic level. 
In particular, is determined by the homeostatic number of stem cells in the niche ( Snippert et al., 2010 ), while K was calibrated to produce a Wnt decay rate high enough to prevent Wnt values ≥64 in cells located at the edge of the niche when the number of stem cells is excessive such that . These considerations prevent the expansion of the niche by preventing cells from differentiating into the Paneth or stem cell fate (which requires ) when a cell is outside the niche. With this implementation of ZNRF3-mediated negative feedback, the Wnt decay rate within the niche is high but is compensated by the abundant Wnt supply from mesenchymal and Paneth sources, while the Wnt decay rate rapidly drops to zero outside the niche. This means that the degradation of Wnt outside the niche has little impact on a healthy crypt and the Wnt gradient in our model is mainly generated by the halving of the surface bound Wnt signals between daughter cells upon division. Growth and proliferation derived forces drive migration of cells towards the villus while the amount of tethered Wnt decreases after each division. This recreates the observed ( Farin et al., 2016 ) decreasing gradient of Wnt signals moving up the crypt ( Figure 1 ), with the highest values in the niche, intermediary values in the transit-amplifying zone, and low levels in the upper crypt region of differentiated enterocytes. The stem cell-mediated negative feedback loop regulating Wnt signalling, together with the differentiation rules described below, ensures the maintenance of the niche size and crypt composition in homeostasis. In addition, it also facilitates crypt recovery as stem cells in low numbers are able to reach greater surface-tethered Wnt levels to pass to their offspring which, in turn, can more readily acquire the required amount of Wnt to become stem cells. 
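A one-dimensional sketch of the ZNRF3 field and its effect on the Wnt decay rate is given below. The exponential point-source form follows the steady-state diffusion description above; the saturating form of `wnt_decay_rate`, the names, and all constants are our guesses, since the exact expression and calibrated values are not recoverable from the text.

```python
import math

def znrf3_at(x, stem_positions, z0=1.0, xi=1.0):
    """Total ZNRF3 signal at position x: the sum of steady-state,
    exponentially decaying contributions from each stem cell. xi is the
    diffusion length (one cell length in the model) and z0 the peak
    per-cell signal; positions are collapsed to 1D for illustration."""
    return sum(z0 * math.exp(-abs(x - xs) / xi) for xs in stem_positions)

def wnt_decay_rate(z, u=0.5, k=2.0):
    """Hedged guess at the Wnt-decay modulation: a saturating dependence on
    local ZNRF3, so decay is high inside the stem-rich niche and falls
    towards zero outside, as the text describes."""
    return u * z / (z + k)
```

With stem cells clustered at the base, the field is high in the niche and falls off exponentially up the crypt, reproducing the qualitative behaviour described above.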
Notch signalling
Active Notch signalling requires direct membrane contact between two cells, one expressing Notch ligands and the other Notch receptors ( Gehart and Clevers, 2019 ; Pellegrinet et al., 2011 ; Baron, 2003 ; Sancho et al., 2015 ). In the intestinal epithelium, Notch ligands presented by secretory cells bind to transmembrane Notch receptors on stem cells to induce a transcriptional cascade that blocks differentiation of stem cells into the secretory lineage, a process known as lateral inhibition that leads to the checkerboard (on/off) pattern of Paneth and stem cells in the niche ( VanDussen et al., 2012 ; Chen et al., 2017 ). With these considerations, Notch signalling, , is implemented in each cell according to the following equation: where is the number of incoming Notch ligands, which we assume equals the number of ligand-expressing cells in contact with the cell. At steady state, a cell’s Notch value corresponds to the number of incoming Notch ligands the cell is receiving: for example, a stem cell receiving Notch from a single neighbouring cell reaches equilibrium with . The factor denotes the rate of Notch accumulation and has a relatively high value to ensure that equilibrium is reached before the fate-commitment point at the end of G1. As described in the cell cycle section, the duration of G1 changes with the length of the overall division cycle: shorter cycles have a shorter G1 phase, shortening the time the cell has to receive Notch signals before deciding whether to differentiate or divide. Additionally, is also the decay rate of the cell’s Notch signal, and this relatively fast rate means that Notch must be constantly supplied for a stem cell to maintain stemness. A reduction in cell density (e.g. by ablation of cells) can introduce gaps in the simulated epithelial tissue.
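The Notch relaxation just described can be sketched as follows; the rate value `alpha` and all names are our stand-ins, and forward-Euler integration stands in for the ABM solver.

```python
def step_notch(n, n_ligand_neighbours, alpha=5.0, dt=0.01):
    """One Euler step of dN/dt = alpha * (L - N): Notch relaxes towards the
    number of contacting ligand-expressing (secretory) neighbours L, with
    alpha acting as both accumulation and decay rate as in the text."""
    return n + alpha * (n_ligand_neighbours - n) * dt

# A stem cell touching three secretory neighbours equilibrates at N = 3;
# removing the neighbours lets N decay again, so stemness requires a
# constant supply of Notch ligands.
n = 0.0
for _ in range(2_000):
    n = step_notch(n, 3)
```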
In real tissues, these gaps would be covered by expansion-flattening of surviving cells to restore epithelial integrity and contact to neighbouring cells. These new contacts would allow cells to exchange Notch ligands. In our model, we do not explicitly consider the expansion of cells to fill gaps in the epithelium; however, we simulate this effect by allowing a cell to pass Notch signals to receiving cells within a larger range (one cell diameter) following a drop in local density. This allows our model to recreate the correct recovery response following ablation of cells. BMP signalling The Wnt gradient in the crypt is opposed by a gradient of BMP generated by mesenchymal telocytes, which are especially abundant at the villus base and provide a BMP reservoir, and by the recently identified trophocytes located just below crypts and secreting the BMP antagonist Gremlin1 ( McCarthy et al., 2020 ). BMP signals inhibit cell proliferation and promote terminal differentiation ( Qi et al., 2017 ). Large levels of BMP at the crypt–villus junction prevent proliferative cells from reaching the villus ( Beumer et al., 2022 ). BMP signalling has been reported to be modulated by matured epithelial cells on the villus via hedgehog signalling ( Büller et al., 2012 ; van den Brink et al., 2004 ) such that a decrease in villus cells decreases BMP signalling in the crypt, which enhances proliferation and expedites villus regeneration. We propose a simple model that assumes that enterocytes, , secrete diffusing signals, which could be interpreted as Indian hedgehog, to regulate BMP secretion by mesenchymal cells. The explicit pathways and associated timescales involved in BMP signalling are unknown; therefore, similar to our implementation of ZNRF3 signalling, this process is assumed to instantaneously reach steady state at the timescale of cellular decisions. 
As before, we assume that BMP is a diffusing, decaying signal in steady state ( Crank, 1975 ) described by where is the position coordinate corresponding to the crypt–villus longitudinal axis; in our model for cells located in the stem cell niche while for crypt cells outside the niche; is the value of at the top of the crypt, which depends on the number of enterocytes on the villus; is the exponential transformation of the diffusion coefficient. To facilitate the use of the model for different species, the coordinate is standardized using , which is the crypt axis position at which the number of mature enterocytes becomes greater than the number of absorptive progenitors. As mentioned above, mesenchymal cells surrounding the niche secrete BMP antagonists ( McCarthy et al., 2020 ), and we assumed that BMP signalling is effectively blocked in the niche such that , which is approximately true for the above formula. describes the relationship between the number of enterocytes and maximum BMP signal intensity using an increasing Hill function: where is the homeostatic number of enterocytes determined by in vivo experiments, is the Hill interaction coefficient, and denotes the level of BMP signals at position . In our model, absorptive progenitors differentiate into enterocytes when , representing that the anti-proliferative BMP signalling received by the cell is sufficient to overcome the proliferative effect of Wnt ( He et al., 2004 ). We achieved a homeostatic crypt cell composition with values of and that allow progenitors cells to divide in a healthy crypt at least three times before differentiating. Differentiation occurs when the Wnt content of a cell, at position , reaches values below when migrating towards the villus. In addition, these equations describe a frequently reported feedback response to villus injury consisting of enhanced proliferation within hypertrophic crypts ( Büller et al., 2012 ; Pont and Yan, 2018 ; Sprangers et al., 2021 ). 
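The BMP profile and its Hill-type dependence on villus enterocyte numbers can be sketched as below. This is an illustrative one-dimensional sketch: `b_top`, `n_homeo`, the Hill coefficient `h`, and the decay length `delta` are assumed values, not the calibrated parameters of Appendix 1—table 1.

```python
import math

def bmp_max(n_enterocytes, n_homeo=300, b_top=128.0, h=4):
    """Increasing Hill function linking the villus enterocyte count to the
    peak BMP level at the crypt mouth. With the illustrative values used
    here, the homeostatic peak (n_enterocytes = n_homeo) is b_top / 2 = 64."""
    return b_top * n_enterocytes**h / (n_enterocytes**h + n_homeo**h)

def bmp_level(y, y_top, n_enterocytes, delta=3.0):
    """BMP decays exponentially from its maximum at the crypt mouth (y_top)
    towards the base, and is taken as zero inside the niche (y <= 0), where
    mesenchymal cells secrete BMP antagonists."""
    if y <= 0:
        return 0.0
    return bmp_max(n_enterocytes) * math.exp(-(y_top - y) / delta)
```

With these stand-in values, a drop in enterocyte numbers lowers `bmp_max` and thus raises the crypt position at which a migrating progenitor first meets a BMP level exceeding its Wnt content, reproducing the expansion of the proliferative compartment after villus injury.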
In our model, when the number of enterocytes on the villus falls below the homeostatic level, the production of BMP signals decreases and makes it possible for absorptive progenitors to divide more times and reach higher positions in the crypt before becoming terminally differentiated. Concurrently, the height of the crypt must increase to provide sufficient space for the extra proliferative cells. We modelled the enlargement of the crypt height responding to villus injury by varying the maximum -coordinate of the crypt, , using a decreasing Hill function as follows: where is the calibrated homeostatic value of , is the maximum fold increase in the height of the crypt, and the Hill interaction coefficient. We do not consider cases in which the number of enterocytes on the villus increases above homeostatic levels, such that if then The standard manner to report the height of cells along the crypt–villus axis is in terms of cell positions, which is related to but not equal to . This is because cell positions are counted from the bottom of the niche (and we defined to be the top of the niche), and that in our model the cells are squeezed together, causing the height of the crypt measured in cell positions to be larger than . Cell fate: Proliferation, differentiation, arrest, and apoptosis In the sections above, we have outlined the dynamics of signalling pathways, cell cycle proteins, and mechanical forces. These processes interact with each other to maintain epithelial homeostasis by precisely tuning cell proliferation, differentiation, and migration within the crypt geometry. An overall picture integrating the rules governing cell fate decision is described in Figure 1 . Wnt levels ≥64 A.U. are required for stemness maintenance. For a stem cell, lateral inhibition is repressed when Notch < 3 A.U., equivalent to less than three secretory cells in the local neighbourhood. If Notch is repressed (<3 A.U.) and Wnt > 64 A.U., stem cells differentiate into Paneth cells. 
Paneth cells generate Wnt signals which enhance the production of stem cells and of Paneth cells themselves. Niche expansion is modulated by the ZNRF3/RNF43-mediated negative feedback mechanism ( Hao et al., 2012 ; Koo et al., 2012 ; Farin et al., 2016 ) that makes Wnt > 64 unobtainable after reaching the homeostatic number of stem cells. Furthermore, the duration of the division cycle is dependent on local forces experienced by the cell. Cells under high mechanical pressure (in the niche) are subjected to YAP-Hippo-regulated contact inhibition and with longer cycles accumulate more Wnt and Notch signals. On the other hand, cells located outside the niche exhibit shorter cycles and cannot effectively accumulate enough Wnt signals to become stem or Paneth cells. Stem cells with decreased levels of Wnt signalling (<64), usually located outside the niche, differentiate into absorptive proliferating progenitors if Notch signalling is active or into secretory progenitors if Notch signals <2 A.U. This lower Notch threshold value is required to maintain the correct balance of absorptive and secretory cells outside the niche in the absence of large numbers of Notch secreting Paneth cells. All cells migrate towards the crypt mouth driven by proliferation forces. During this migration, the Wnt content in absorptive progenitors is halved in each division and, away from Wnt sources, progressively decreases, while BMP signals increase, towards the villus. In our model, differentiation into enterocytes occurs when progenitors encounter a BMP signal level higher than their Wnt signal content. For instance, in the ileal crypt in homeostasis, this occurs approximately at cell position 16 from the crypt base, where progenitors migrating from the stem cell niche reach a reduced content of Wnt signals of about 8 A.U. 
On the other hand, the BMP signalling level has a maximum value of 64 at approximately cell position 23 from the crypt base, where BMP signals are generated by mature enterocytes. These BMP signals diffuse towards the crypt base and, hence, decrease exponentially to reach values of 8 A.U. at approximately position 16, which enables differentiation into enterocytes. Epithelial injuries resulting in a decreased number of enterocytes reduce BMP signal production and its diffusion range which results in the enlargement of the proliferation compartment as cells encounter the required level of BMP signals for differentiation only at higher positions in the crypt. All fate decisions are assumed to be made at the restriction point which in our model is located at the end of G1 ( Blomen and Boonstra, 2007 ). At the restriction point, cells assess their internal Wnt and Notch levels, and if these values fulfil the criteria to differentiate, they enter a quiescent state or G0, otherwise they proceed to S-phase and become irreversibly committed to complete the cell cycle of variable duration depending on local forces. This quiescent state lasts for 4 hr for all differentiating cells, except for absorptive progenitors, which differentiate straightaway into enterocytes. In accordance with Stamataki et al., 2011 , a secretory progenitor requires an additional 4 hr to fully mature into a goblet or enteroendocrine cell. Therefore, quiescent stem cells located above the fourth cell position from the crypt base ( Gehart and Clevers, 2019 ; Potten et al., 1978 ; Sangiorgi and Capecchi, 2008 ) emerge naturally in the model as stem cells migrate outside the niche and pause the cycle to give rise to non-proliferative secretory progenitors, which have been identified with quiescent stem cells ( Buczacki et al., 2013 ; Clevers, 2013a ). Features and behaviours of these cells could be expanded if of interest for the model application. 
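The threshold rules above (64 A.U. for Wnt, 3 and 2 A.U. for Notch, and BMP exceeding the cell's Wnt content) can be collected into a single illustrative fate function. The thresholds come from the text; the function itself and its names are ours, not the model's code.

```python
def cell_fate(wnt, notch, bmp, absorptive_progenitor=False):
    """Rule-based fate choice at the restriction point, paraphrasing the
    thresholds stated in the text (all values in A.U.)."""
    if absorptive_progenitor:
        # Anti-proliferative BMP overcoming the cell's Wnt content
        # triggers terminal differentiation into an enterocyte.
        return "enterocyte" if bmp > wnt else "absorptive_progenitor"
    if wnt >= 64:
        # Lateral inhibition: fewer than 3 secretory neighbours frees a
        # high-Wnt cell to adopt the Paneth fate; otherwise it stays a stem cell.
        return "paneth" if notch < 3 else "stem"
    # Below the stemness threshold: Notch < 2 A.U. sends the cell to the
    # secretory lineage, active Notch to the absorptive lineage.
    return "secretory_progenitor" if notch < 2 else "absorptive_progenitor"
```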
Cell fate decisions are reversible; a stem cell that leaves the niche and differentiates into a progenitor cell can relatively quickly become a stem cell again if it regains enough Wnt signals after being pushed back into the niche. This plasticity extends to all cells: all progenitors and fully differentiated cells can revert to stem cells when exposed to sufficient levels of Wnt and Notch signals, replicating injury recovery mechanisms observed in the crypt ( Hageman et al., 2020 ; Tetteh et al., 2016 ). We have assumed that all cells, except Paneth cells, need to acquire and maintain high levels of Wnt signals (>64) over 4 hr to complete the process. Dedifferentiating cells shrink to their new smaller size during the process if required. Notch signalling mediates the process of Paneth cell dedifferentiation into stem cells to regenerate the niche, as previously reported ( Mei et al., 2020 ; Yu et al., 2018 ). Paneth cells that have not supplied Notch ligands to recipient cells for 12 hr dedifferentiate into stem cells in a process that takes 36 hr to complete, in agreement with published findings ( Yu et al., 2018 ). Additionally, Paneth cells in low Wnt conditions (e.g. a Paneth cell that is forced out of the niche) for 48 hr will also dedifferentiate into a stem cell, which, with its low Wnt content, rapidly becomes a secretory or absorptive progenitor. Separately, injured proliferative cells can experience cell cycle arrest and apoptosis, induced by drug injury or by natural senescence. In arrested and apoptotic proliferative cells, the production rates of the cyclins ( , , , and ) are set to 0 to interrupt the cell cycle. We assumed that cells remain arrested until they are shed from the villus tip or reach the end of their lifespan and become apoptotic. Apoptotic cells shrink and die with a negative linear rate of where is the radius at the onset of apoptosis and is the time for the completion of apoptosis.
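The linear shrinkage during apoptosis admits a one-line sketch; the function and argument names are ours.

```python
def apoptotic_radius(t, r0, t_apoptosis):
    """Radius of an apoptotic cell: shrinks linearly at rate -r0/t_apoptosis
    from the onset radius r0 to zero when apoptosis completes at t_apoptosis."""
    return max(0.0, r0 * (1.0 - t / t_apoptosis))
```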
ABM simulation of Ki-67 and BrdU staining
This section describes the implementation of the Ki-67 and BrdU staining simulations, which are shown in Figures 3 and 5 and discussed in the ‘Results’ section. For the Ki-67 staining simulation, we considered a cell Ki-67 positive if it is going through the S-, G2-, or M-phase of the division cycle. Daughter cells are considered Ki-67 positive, regardless of their fate, during the first 6 hr after cell division. This assumption recapitulates the time reported for the Ki-67 protein to decay below detectable levels after exiting the cycle ( Miller et al., 2018 ) and the detection of Ki-67 in G1 in continuously cycling cells ( Sobecki et al., 2017 ). Similarly, cells are assumed to remain Ki-67 positive for 6–12 hr after drug-induced cell cycle interruption, depending on the phase the cell was in upon interruption, which recapitulates a previously published report ( Miller et al., 2018 ) in which cells exhibited greater Ki-67 levels in later cell cycle phases. In particular, cells arrested during the G1, S, G2, and M phases are Ki-67 positive for 6, 8, 10, and 12 hr after arrest, respectively. For the BrdU staining simulation, we assumed that cells become BrdU positive by effectively incorporating BrdU into their DNA when they are in S-phase, or enter S-phase, during the BrdU exposure window, which is considered to last 2 hr after BrdU administration in agreement with previous experimental reports ( Parker et al., 2017 ). The initial level of BrdU after dosing in each cell is quantified by where is the function that rounds to the nearest integer, is the theoretical maximum level of BrdU a cell can incorporate, and is the remaining BrdU exposure time.
For cells that enter S-phase after BrdU administration, is equal to the remaining BrdU exposure time, while for cells already in S-phase at the time of BrdU dosing, is equal to the exposure window or, alternatively, to the remaining duration of S-phase if this is shorter than the exposure window. Furthermore, we considered that a cell is BrdU positive if its BrdU level is >0 and, if dividing, the two daughter cells are given a BrdU value of = . This consideration recapitulates experimental reports indicating that the BrdU cell content is diluted in each division and it is no longer detected after 4–5 generations ( Wilson et al., 2008 ). The spatial data from the Ki-67 and BrdU staining experiments comprises the proportion of positive cells at each cell position by aggregating spatial counts from 20 to 50 one-dimensional longitudinal strips running from the crypt base to the villus ( Parker et al., 2017 ; Williams et al., 2016 ). Therefore, cell position is reported in a one-dimensional space and measured as the cell count from the base of the crypt to the cell itself. To match these observations, we have implemented an algorithm that slices longitudinally the simulated crypts to generate 100 one-dimensional strips which are aggregated to estimate the proportion of stained cells at each position. Furthermore, we estimated a 95% confidence interval, based on experimental error, around the simulated spatial profiles of Ki-67- and BrdU-positive cells by assuming that the proportion of stained cells follows a beta distribution with parameters and , . These parameters are estimated as follows: where is the simulated proportion and its standard error. We used an estimate of the standard error, e , derived from experimental data. We first studied the relationship between the mean value and the standard deviation of the proportion in three replicated control samples. 
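A sketch of the staining rules and the confidence-band construction is below. The persistence hours and the 2 hr exposure window come from the text; `B_MAX`, the error-model coefficients, and the method-of-moments estimator for the beta parameters are our assumptions for quantities the text does not spell out.

```python
KI67_HOURS_AFTER_ARREST = {"G1": 6, "S": 8, "G2": 10, "M": 12}

def ki67_positive(phase, hours_since_division=None, arrest=None):
    """Ki-67 positivity rules paraphrased from the text; `arrest` is a
    (phase_at_arrest, hours_since_arrest) pair for arrested cells."""
    if arrest is not None:
        phase_at_arrest, hours_since_arrest = arrest
        return hours_since_arrest <= KI67_HOURS_AFTER_ARREST[phase_at_arrest]
    if phase in ("S", "G2", "M"):
        return True
    # Newly divided cells stay positive for ~6 h regardless of fate.
    return hours_since_division is not None and hours_since_division <= 6

B_MAX = 16  # illustrative maximum incorporation; the paper's value may differ

def brdu_initial(hours_in_s_during_exposure, window=2.0, b_max=B_MAX):
    """Initial label scales with the fraction of the 2 h exposure window the
    cell spends in S-phase, rounded to the nearest integer."""
    return round(b_max * min(hours_in_s_during_exposure, window) / window)

def brdu_daughter(b):
    """Each division dilutes the label: daughters inherit (integer) half."""
    return b // 2

# Starting from full incorporation, the label falls to zero (undetectable)
# within a handful of generations, matching the cited dilution behaviour.
b, generations = brdu_initial(2.0), 0
while b > 0:
    b, generations = brdu_daughter(b), generations + 1

def error_model(p, c0=0.005, c1=0.2, c2=-0.2):
    """Quadratic standard-error model: small near p = 0 or 1, largest near
    p = 0.5. Coefficients are illustrative stand-ins for the fitted values."""
    return c0 + c1 * p + c2 * p * p

def beta_params(p, e):
    """Method-of-moments Beta(a, b) with mean p and standard deviation e
    (our assumption for how the parameters are estimated)."""
    nu = p * (1.0 - p) / (e * e) - 1.0
    return p * nu, (1.0 - p) * nu
```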
The experimental data suggest that the error is lower for extreme values of that is, around 0 or 1, and larger for values of around 0.5 ( Appendix 1—figure 3 ). Thus, we described this relationship with a quadratic expression: where , and are the coefficients determined from the replicated control samples and their values are displayed in Appendix 1—figure 3 . What-if analysis We investigated the effect on the simulated crypt of increasing and decreasing the strength of the main signalling pathways, Wnt, BMP, and ZNRF3/RNF43 signalling, and modifying the Notch thresholds. For each alternative parameterization, except when decreasing ZNRF3/RNF43 signalling, the simulation was run for 30 d to ensure stability was reached with the new parameter set and the final 10 d were included in the analysis. When decreasing ZNRF3/RNF43 signalling, we simulated 60 d to demonstrate the expansion of the niche and analysed the final 10 d. The reference parameter set used as baseline was the ileal mouse crypt parameter set reported in Appendix 1—table 1 . In all cases, we only consider modifications of one signalling mechanism at a time. To study alternative Wnt signalling scenarios, we used the WntRange parameter ( Appendix 1—table 1 ) to double and halve the spreading area of Wnt signals emitted by Paneth cells while we maintained the original WntRange value for Wnt-emitting mesenchymal cells at the bottom of the crypt (Section 1.7.1; Appendix 1—figure 4A–F ). When WntRange was doubled, we observed increased number of stem and Paneth cells in a noticeably enlarged niche ( Appendix 1—figure 4C and D ), with cells choosing the stem cell fate instead of differentiating into absorptive progenitors. 
On the other hand, decreasing Wnt signalling, by halving WntRange in Paneth cells but maintaining its homeostatic value in mesenchymal cells, resulted in no apparent changes in niche cell composition ( Appendix 1—figure 4E and F ), which resembled published experimental results of persisting functional stem cells after Paneth cell ablation ( Durand et al., 2012 ). The ZNRF3/RNF43-mediated negative feedback mechanism regulates the size of the niche by modulating Wnt signalling. We simulated increasing and decreasing the strength of the ZNRF3/RNF43 by doubling and halving, respectively, the parameter Z described in Section 1.7.2; Appendix 1—figure 5A–F . Following the increase in the intensity of ZNRF3/RNF43 signalling, we observed a decrease in the number of stem and Paneth cells, together with relatively minor changes in the transit-amplifying region ( Appendix 1—figure 5C and D ). On the other hand, when decreasing ZNRF3/RNF43 signalling levels, the niche expanded, resulting in a crypt dominated by Paneth and stem cells ( Appendix 1—figure 5E and F ) which replicates reported experimental phenotypes ( Koo et al., 2012 ). To modify Notch signalling, we increased and decreased by 1 A.U. the Notch threshold required for lateral inhibition ( Appendix 1—figure 6A–F ). This Notch signalling threshold determines the number of contacting Notch-secreting cells (secretory lineage) required to inhibit the differentiation of stem cells into the secretory lineage. Thus, increasing this Notch threshold enhances the production of secretory cells leading to the increase in Paneth, goblet, and enteroendocrine cells ( Appendix 1—figure 6C and D ). Alternatively, decreasing the Notch threshold enhances differentiation into the absorptive lineage, reducing the number of Paneth and secretory cells ( Appendix 1—figure 6E and F ). 
We modified the range of diffusion of BMP signals by doubling and halving the parameter ( Appendix 1—figure 7A–F ) which denotes the amount of diffusing BMP signals, and hence affects the diffusion range, towards the base of the crypt (Section 1.7.4). When we increased the BMP signalling range, enterocytes differentiated at lower crypt positions, effectively reducing the transit-amplifying zone ( Appendix 1—figure 7A and B ). Decreasing BMP signalling strength by halving resulted in the increase in proliferative absorptive progenitors, which reach higher positions in the crypt ( Appendix 1—figure 7C and D ). The niche was largely unaffected in both cases ( Appendix 1—figure 7E and F ). Model implementation and parameterization The model is implemented using the Julia programming language. The mechanical forces, cellular motion, and biochemical signalling are simulated with a fixed timestep of d, while the proteins of the cell cycle model are simulated with a timestep of 0.00001 d. Parameter values and means used for their identification are detailed in Appendix 1—table 1 .
CC BY
eLife.; 12:e85478
PMCID: PMC10789494; PMID: 38224499
Introduction Tuberculosis (TB), caused by Mycobacterium tuberculosis ( Mtb ) and related species, remains a leading cause of death globally. Around one-quarter of the global population is estimated to show immunological evidence of prior exposure to Mtb ( Houben and Dodd, 2016 ), and in 2019 an estimated 10 million people developed the disease, resulting in 1.4 million deaths ( WHO, 2020 ). This disease burden could be substantially reduced with action to address the social determinants of disease and equitable scale-up of existing interventions. However, tools to prevent, diagnose, and treat TB could be improved if a better understanding of the underpinning pathophysiology could help identify those at greatest risk of the disease. The role of host genetic factors in TB susceptibility has long been of significant interest. Over 100 candidate genes have been studied, but few associations have proven reproducible ( Naranbhai, 2016 ). This failure to replicate may be a result of the modest size of many TB genome-wide association studies (GWAS), variability in phenotyping between studies, the impact of population-specific effects, the challenge of complex population structure in some high-burden settings (e.g., admixed individuals), and, possibly, pathogen variation ( Correa-Macedo et al., 2019 ; Daya et al., 2014a ; Luo et al., 2019 ; Möller and Kinnear, 2020 ; Müller et al., 2021 ; Omae et al., 2017 ; Schurz et al., 2018 ). Seventeen GWAS have been reported but only two loci replicate between studies ( Daya et al., 2014a ; Schurz et al., 2018 ; Chimusa et al., 2014 ; The Wellcome Trust Case Control Consortium, 2007 ; Curtis et al., 2015 ; Mahasirimongkol et al., 2012 ; Qi et al., 2017 ; Thye et al., 2010 ; Thye et al., 2012 ; Quistrebert et al., 2021 ; Sveinbjornsson et al., 2016 ; Hong et al., 2017 ; Li et al., 2021 ; Luo et al., 2019 ; Zheng et al., 2018 ; Grant et al., 2016 ; Png et al., 2012 ). 
The WT1 locus, identified in cohorts from Ghana and Gambia, replicated in South Africa and Russia. The ASAP1 locus, identified in Russia, was replicated through reanalysis of prior studies ( Correa-Macedo et al., 2019 ; Möller and Kinnear, 2020 ). To address these challenges, we established the International Tuberculosis Host Genetics Consortium (ITHGC) to study the host genetics of disease through collaborative and equitable data sharing ( Naranbhai, 2016 ). The ITHGC includes 12 case–control GWAS from nine countries in Europe, Africa, and Asia (a total of 14,153 pulmonary TB cases and 19,536 healthy controls). Including multiple ancestral groups in a multi-ancestry meta-analysis has the advantage of maximizing power and enhancing fine-mapping resolution to identify true globally associated variants that influence TB susceptibility across population groups. Here we present the first analyses of the ITHGC dataset exploring host genetic correlates of TB susceptibility using a multi-ancestry meta-analysis approach, including fine-mapping of human leukocyte antigen (HLA) loci and estimation of genetic heritability.
Methods
Data
This analysis includes 12 of the 17 published and unpublished ( Table 1 , Supplementary file 1 ) GWAS of TB in HIV-negative cohorts reported prior to 2022 ( Schurz et al., 2018 ; Chimusa et al., 2014 ; The Wellcome Trust Case Control Consortium, 2007 ; Curtis et al., 2015 ; Mahasirimongkol et al., 2012 ; Qi et al., 2017 ; Thye et al., 2010 ; Thye et al., 2012 ; Daya et al., 2014b ). For unpublished works, we contacted researchers who had received funding for genetic TB research and acquired data-sharing agreements to obtain summary statistics (or raw data) along with any available metadata. It excludes data from Iceland and Vietnam ( Quistrebert et al., 2021 ) as they declined to share data. It also excludes data from China, Korea, Peru, and Japan ( Luo et al., 2019 ; Hong et al., 2017 ; Li et al., 2021 ; Zheng, 2018 ; Sveinbjornsson et al., 2016 ) as data-sharing agreements could not be finalized in time for this analysis. The Indonesian and Moroccan data were too sparsely genotyped and not suitable for reliable imputation. In addition, the Moroccan data was family-based and thus also not suitable for this meta-analysis, as this would introduce confounding effects from the inclusion of related individuals ( Grant et al., 2016 ; Png et al., 2012 ). Finally, cases and controls are also available within large-scale biobanks, for example, UK Biobank, which could also be leveraged in future iterations of this analysis ( Munafò et al., 2018 ). Included individuals were genotyped on a variety of genotyping arrays ( Table 1 , Supplementary file 1 ); raw genotyping data was available for eight datasets and, for the remainder, association testing summary statistics were obtained to use in the meta-analysis ( Table 1 , Supplementary file 1 ).
Quality control (QC) of raw genotyping data ( Table 1 , Supplementary file 1 ) was done using Plink (v1.9), followed by pre-phasing using SHAPEIT and imputation with IMPUTE2 with the 1000 Genomes phase 3 reference panel ( Chang et al., 2015 ; Delaneau et al., 2013 ; Howie et al., 2009 ; Sudmant et al., 2015 ). QC and imputation were done as described previously ( Schurz et al., 2018 ; Schurz et al., 2019 ); briefly, we used a MAF filter of 0.025 and an individual and SNP missingness filter of 0.1. The Hardy–Weinberg equilibrium threshold was set at a Bonferroni-corrected p-value according to the number of SNPs tested (0.05/number of SNPs), and samples where sex could not be determined from genotyping were also removed. Imputed data was filtered at a quality score of 0.3, prior to the individual and genotype filtration steps. Prior to QC and imputation, allele orientation was corrected using Genotype Harmoniser version 1.4.15, and the genome build of all datasets was checked for consistency (GRCh37) and updated if necessary using the liftOver software from the UCSC genome browser ( Deelen et al., 2014 ; Kent et al., 2002 ). The four datasets with only summary statistics available ( Table 1 , Supplementary file 1 ) were imputed and QC’d during the original investigations, but the marker names and allele orientation were checked for concordance between the summary statistics and the rest of the consortium’s imputed data.
Polygenic heritability analysis
To assess the level of genetic contribution to TB susceptibility, we estimated polygenic heritability for the individual studies for which raw genotyping data was available ( Table 1 , Supplementary file 1 ). Polygenic heritability estimates were calculated using GCTA (v1.93.2), a genomic risk prediction tool ( Yang et al., 2011 ). The genetic relationship matrix was calculated for each autosomal chromosome.
Raw genotype data was pruned for SNPs in LD using a 50-SNP window, sliding by 10 SNPs at a time and removing all variants with LD r² > 0.5. Samples were filtered by removing cryptic relatedness (--grm-cutoff 0.025) and assuming that the causal loci have a similar distribution of allele frequencies as the genotyped SNPs (--grm-adj 0). Principal components were then calculated (--pca 20) to include as covariates prior to estimating heritability. Heritability estimates were transformed onto the liability scale using the GCTA software to account for the difference between the proportion of cases in the data and the population prevalence (Yang et al., 2011). The average heritability estimate was calculated by taking the mean of all estimates, and confidence intervals were estimated based on the standard error across all studies and the number of studies included. Meta-analysis All variants with MAF > 1% that were polymorphic in at least three studies (from at least two different ancestries) were included in the primary analysis. From the GWAS summary statistics of each dataset, variants with infinite confidence intervals were removed prior to the meta-analysis. A multi-ancestry meta-analysis plus separate ancestry-specific analyses for Africa, Asia, and Europe were performed. MR-MEGA (Meta-Regression of Multi-Ethnic Genetic Association, v0.20), a meta-analysis tool that maximizes power and enhances fine-mapping when combining data across different ethnicities, was used for the multi-ancestry meta-analysis (Mägi et al., 2017). To account for the expected heterogeneity in allelic effects between populations, MR-MEGA implements a multi-ancestry meta-regression that includes covariates representing genetic ancestry, obtained from multidimensional scaling of mean pairwise genome-wide allele frequency differences.
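The liability-scale transformation applied by GCTA, and the averaging of per-study estimates described above, can be sketched as follows. The transformation is the standard Lee et al. (2011) formula that GCTA applies via its --prevalence option; the pooling of standard errors across independent studies is an assumption about how the confidence interval was formed.

```python
import math
from statistics import NormalDist

def h2_liability(h2_obs, K, P):
    """Transform an observed-scale heritability estimate to the liability
    scale. K is the population prevalence, P the proportion of cases in
    the sample (Lee et al. 2011, as implemented in GCTA)."""
    nd = NormalDist()
    t = nd.inv_cdf(1 - K)      # liability threshold for prevalence K
    z = nd.pdf(t)              # standard normal density at the threshold
    return h2_obs * K ** 2 * (1 - K) ** 2 / (z ** 2 * P * (1 - P))

def mean_and_ci(estimates, ses):
    """Mean heritability across studies, with a normal-approximation 95% CI
    from the per-study standard errors (a sketch; the exact averaging
    procedure used in the study is assumed)."""
    n = len(estimates)
    mean = sum(estimates) / n
    se = math.sqrt(sum(s ** 2 for s in ses)) / n  # SE of a mean of independent estimates
    return mean, (mean - 1.96 * se, mean + 1.96 * se)
```

For example, an observed-scale estimate of 0.20 in a 50%-case sample from a population with 1% prevalence corresponds to roughly 0.11 on the liability scale.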
Genomic control correction (GCC) was implemented during the MR-MEGA analysis for the individual input data (if lambda was >1.05) and for the output statistics, and the first two PCs, calculated from the genome-wide allele frequency differences, were included as covariates in the regression. QQ-plots of p-values and associated lambda values were used to assess the quality of results prior to downstream investigation. For the ancestry-specific analyses, the studies were grouped by the major ancestral groups (Table 1, Supplementary file 1) and all variants with a MAF > 1% that were observed in at least two studies were included in the meta-analysis. We performed traditional fixed-effects meta-analyses in GWAMA (v2.2.2), implementing GCC, and assessed the results using QQ-plots (Mägi and Morris, 2010). The genome-wide significance threshold for all association testing was set at p-value=5 × 10–8 (Panagiotou et al., 2012). HLA imputation To fine-map HLA alleles over the HLA locus, we imputed HLA class I and II variants for all eight studies for which raw data was available (Table 1 and Supplementary file 1). HLA imputation for the HLA class I genes A, B, and C as well as the HLA class II genes DPB1, DRB1, DQB1, and DQA1 was done using the R package HIBAG (version 1.5), implemented in the R free software environment (version 4.0.5) using the predict() command for imputation (R Development Core Team, 2013; Zheng, 2018; Zheng et al., 2014). The reference datasets for HLA imputation are both genotyping-panel and population-specific, and HIBAG has a database of reference data for many genotyping arrays. Each reference panel is also available for either Asian, European, or African populations or a mixture of the three (https://hibag.s3.amazonaws.com/hlares_index.html#estimates).
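The lambda inflation factor used to trigger GCC (lambda > 1.05) and the correction itself can be sketched as follows. This is the standard genomic-control calculation for 1-df tests, shown here only to make the thresholding concrete; it is not the MR-MEGA/GWAMA internals.

```python
import numpy as np
from scipy.stats import chi2

def gc_lambda(pvals):
    """Genomic inflation factor: median observed 1-df chi-square statistic
    divided by its expected median under the null (~0.4549)."""
    x2 = chi2.isf(np.asarray(pvals), df=1)   # p-values -> 1-df chi-square
    return np.median(x2) / chi2.ppf(0.5, df=1)

def gc_correct(pvals, lam):
    """Genomic control: divide the test statistics by lambda (only when
    lambda > 1) and convert back to p-values."""
    x2 = chi2.isf(np.asarray(pvals), df=1) / max(lam, 1.0)
    return chi2.sf(x2, df=1)
```

A well-calibrated study (uniform null p-values) gives lambda close to 1, so the lambda = 1.00 reported for the meta-analysis QQ-plot indicates no residual inflation.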
For each dataset included for imputation, the reference panel chosen matched the genotyping array used for the data, and the reference population was chosen to match the data as closely as possible. Asian and European reference panels were used for the Asian and European populations, African references were used for the Gambia and Ghana datasets, and mixed reference panels were used for the admixed RSA population. Following imputation, the HIBAG hlaAssocTest command was used to implement an additive association test for the HLA alleles across the different genes, limited to alleles at MAF > 2.5%. Analyses were adjusted for the first four PCs, with and without the rs28383206 genotype in the model. Association testing results for the eight included studies were then combined in a fixed-effects meta-analysis using Metasoft software (Han and Eskin, 2011). Ancestry-specific meta-analyses grouped according to the major population groups (Table 1, Supplementary file 1) were also done using the same method. Estimation of infection pressure To generate a covariate capturing the likely cumulative exposure to Mtb for included controls, the results of Houben and Dodd, 2016 were adapted to produce a distance matrix to feed into the meta-analysis. Their approach fits a Gaussian process model of infection risk history to local data. To represent uncertainty in derived results, a sample of 200 estimated histories of the annual risk of TB infection in each country was used to calculate the expected fraction of control participants ever infected with Mtb, assuming that controls were uniformly aged between 35 and 44 y in 2010, which approximates the period during which controls were recruited for most of the studies. The true age of the controls was not known for all of the datasets, but as quite a substantial skew in the age distribution would be required to affect the results, we believe this choice is justified.
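The core quantity behind the infection-pressure covariate, the expected fraction of controls ever infected given an annual risk of infection (ARI) history, can be sketched as below. The survival-style calculation is an assumption about how "ever infected" follows from yearly risks; the published country estimates come from the Houben and Dodd model, not from this sketch.

```python
def fraction_ever_infected(annual_risks):
    """Probability of at least one Mtb infection over a sequence of
    exposure years, given a per-year annual risk of infection (ARI)
    history: 1 minus the probability of escaping infection every year.
    Input values here are illustrative, not the published estimates."""
    p_never = 1.0
    for ari in annual_risks:
        p_never *= (1.0 - ari)   # escape infection in this year
    return 1.0 - p_never
```

Averaging this quantity over the 200 sampled ARI histories per country, for the assumed 35–44 y age band, yields the covariate entered into the meta-regression.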
This was done by including estimates for the potential lifetime infections for each source population as a covariate in the MR-MEGA multi-ancestry meta regression. To determine the impact of the covariate, a chi-square difference test was implemented, on an SNP-SNP basis, on the residual and association testing statistics of two meta-analysis output statistics, one including and the other excluding the potential lifetime infections covariate ( Satorra and Bentler, 2001 ). The aim was to determine whether inclusion of potential lifetime infections in the regression explained some of the residual heterogeneity. Concordance of direction of effect To determine the degree to which direction of effect is shared for SNPs between the ancestry-specific meta-analysis, we followed the methodology of Mahajan et al., 2014 . First, we identified all variants present in all 12 included datasets. Among these SNPs, we then identified an independent subset of variants in the European ancestry-specific meta-analysis showing nominal evidence of association (p-value≤0.001) and separated by at least 500 kb. The identified SNPs were then extracted from the Asian and African ancestry-specific meta-analysis results to calculate the number of SNPs that had the same direction of effect as in the European analysis. To determine whether significant excess in concordance of effect direction was present, a one-sided binomial test was implemented with the expected concordance set at 50%. This analysis was then repeated for other p-value thresholds (0.001<p≤0.01; 0.01<p≤0.5; and 0.5<p≤1), and also using the African and Asian meta-analysis results as reference.
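The two tests described above can be sketched in a few lines. The chi-square difference test compares nested model fits per SNP; the binomial test checks for excess concordance of effect direction over the 50% expected by chance. Both are generic statistical procedures, shown here under the assumption that the difference in chi-square statistics is itself chi-square distributed with the difference in degrees of freedom.

```python
from scipy.stats import chi2, binomtest

def chisq_difference_p(stat_with, df_with, stat_without, df_without):
    """Chi-square difference test between the meta-regression fits with and
    without the infection-pressure covariate, applied per SNP."""
    return chi2.sf(abs(stat_without - stat_with), abs(df_without - df_with))

def concordance_p(n_same_direction, n_total):
    """One-sided binomial test for an excess of SNPs sharing direction of
    effect between two ancestry-specific meta-analyses (null: 50%)."""
    return binomtest(n_same_direction, n_total, 0.5, alternative="greater").pvalue
```

For instance, observing 60 of 100 independent SNPs with the same direction of effect gives a one-sided p-value of roughly 0.03, nominal evidence of shared effects.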
Results Study overview In total, 12 GWAS from three major ancestral groups (European, African, and Asian) were included in this study (Table 1; a more detailed table outlining the selection of cases and controls is provided in Supplementary file 1a). All individual datasets were imputed and aligned to the same reference allele before association testing, using an additive genetic model, to obtain odds ratios (OR) and p-values for use in the meta-analysis. For each individual study for which we had raw genotyping data, the polygenic heritability was estimated and HLA alleles were imputed for fine-mapping of the HLA region. The summary statistics from the individual GWAS of each dataset were used to conduct a combined, multi-ancestry meta-analysis using MR-MEGA and ancestry-specific (European, African, and Asian) fixed-effects (FE) meta-analyses using GWAMA. Finally, the impact of infection pressure on the multi-ancestry meta-regression was assessed, and the concordance in direction of effect for the reference allele between studies was investigated. Polygenic heritability estimates suggest a genetic contribution to TB disease susceptibility Twin studies estimate the narrow-sense heritability of susceptibility to TB at up to 80% (Diehl and Von, 1936; Kallmann and Reisner, 1943; Comstock, 1978), but there are few modern estimates. Using raw (unimputed) genotyping data, and assuming a population prevalence of disease in each study population equivalent to the reported WHO prevalence rate for that country (WHO, 2020), we estimated the polygenic heritability of susceptibility to TB in 10 contributing studies; estimates ranged from 5 to 36% (average of 26.3%, Supplementary file 1b).
Comparisons of heritability estimates between studies from different geographical locations do not take into consideration the differences in environmental pressures between the included studies, and as such these estimates of heritability are only interpretable if the distribution of nongenetic determinants of TB is held constant (Pearce, 2011). Furthermore, variations in phenotype definition can have an impact on heritability estimates (Supplementary file 1a). This is supported by previous research by McHenry et al., 2021a, in which significant differences in polygenic heritability estimates were identified between subjects with latent TB infection (LTBI), active TB, and subjects classified as resistors. As this study includes data with varying methods of classifying TB cases and healthy controls (Supplementary file 1a), there is potential for a degree of heterogeneity and misclassification (between cases and controls) that can have an impact on the heritability estimates. Recent history has seen the near elimination of TB in several countries, associated with economic development and public health action. However, while improvement of socioeconomic standing and environment has a stronger impact than host genetics, these crude estimates of polygenic heritability do indicate that TB susceptibility is, in part, heritable. These results require future, more rigorous investigations to narrow down the level of heritable risk and pinpoint the genomic loci involved by accounting for population stratification to obtain more accurate heritability estimates. Multi-ancestry meta-analysis identifies susceptibility loci for TB For the primary multi-ancestry meta-analysis, MR-MEGA was used as it allows for differences in allelic effects of variants on disease risk between GWAS.
Principal components (PCs), derived from a matrix of similarities in allele frequencies between GWAS, were plotted and revealed distinct separation between the three main ancestral groups included in the study (Figure 4). To account for this, the first two PCs were included as covariates in MR-MEGA as they sufficiently accounted for the allele frequency differences between the study populations, as assessed via a QQ-plot and associated lambda inflation value (Figure 1—figure supplement 1, lambda = 1.00). In total, 26,620,804 variants with a minor allele frequency (MAF) > 1% and present in at least three studies were included in the analysis, of which 3,184,478 were present in all 12 datasets. A significant association peak on chromosome 6 was identified in the HLA class II region (Figure 1). One variant (rs28383206, OR = 0.89, CI = 0.84–0.94, p-value=8.26 × 10–9) within this peak was associated with susceptibility to TB at genome-wide significance (p<5.0e–8, Figures 1—3, Table 2). Both the residual heterogeneity (p-value=0.012) and the ancestry-correlated heterogeneity (p-value=5.28e–6) are significant (p-value<0.05) for the associated variant. However, the evidence of ancestry-correlated heterogeneity is much stronger than that for residual heterogeneity, indicating that genetic ancestry contributes more to differences in effect sizes between GWAS than does study design (e.g., phenotyping differences and potential case–control misclassification). The association peak encompasses many HLA class II genes, including HLA-DRB1/5 (major histocompatibility complex, class II, DR beta 1/5), HLA-DQA1 (major histocompatibility complex, class II, DQ alpha 1), and HLA-DQB3 (major histocompatibility complex, class II, DQ beta 3; Figures 1 and 2).
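A reported odds ratio and 95% CI imply a standard error and Wald p-value on the log-odds scale, which is a useful consistency check when reading summary statistics like those above. Note that MR-MEGA association p-values come from the meta-regression, not from a single Wald test, so an exact match to the reported p-value is not expected; the sketch below is generic and the numeric example is synthetic.

```python
import math
from statistics import NormalDist

def p_from_or_ci(or_, lo, hi):
    """Back out the Wald p-value implied by an odds ratio and its 95% CI:
    SE = (log(hi) - log(lo)) / (2 * 1.96), z = log(OR) / SE."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.959964)
    z = math.log(or_) / se
    return 2 * NormalDist().cdf(-abs(z))
```

Because published CIs are rounded, the recovered p-value is only approximate; large discrepancies usually just mean the test statistic was not a simple per-SNP Wald test.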
While not reaching genome-wide significance, the HLA class I locus is also indirectly tagged through the association with rs2621322 in the TAP2 (transporter 2, ATP binding cassette subfamily B member) gene, which encodes a transporter protein that restores surface expression of MHC class I molecules and has previously been implicated in TB susceptibility (Thu et al., 2016). The HLA-A, DQA1, DQB1, DRB1, and TAP2 genes have previously been linked to TB susceptibility through TB candidate gene and GWAS analyses (Thu et al., 2016; Kinnear et al., 2017; Stein et al., 2017; Sveinbjornsson et al., 2016; Zhang et al., 2021). The HLA class II locus encodes several proteins crucial in antigen presentation, including HLA-DR, HLA-DQ, and HLA-DP, which are widely implicated in susceptibility to infection and autoimmunity (Kelly and Trowsdale, 2019; Shiina et al., 2009). HLA-II Given the strong association peak in the HLA class II locus (Figures 1 and 2), we imputed HLA class II alleles to fine-map this association. HLA alleles were imputed using the HIBAG R package, which utilizes both genotyping-array and population-specific reference panels to obtain the most accurate imputations for each individual dataset. Association testing was then conducted using an additive genetic model for each individual dataset before meta-analyzing the results (Source data 1, sheets 11–15). Notwithstanding inconsistency across populations, the strongest signal in the combined global analysis is at DQA1*02:01, revealing a protective effect (OR = 0.88, 95% CI = 0.82–0.93, p-value=1.3e–5, Figure 3B). The signal remains apparent in the six populations with the lead SNP at MAF > 2.5% and individual-level data available (p-value=0.0003). After conditioning on the lead SNP (rs28383206) in this subset, there is no residual significant association at DQA1*02:01 (p-value=0.44, Figure 3—figure supplement 1), suggesting that the classical allele is tagging the rs28383206 association.
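The fixed-effects combination of per-study HLA association results described above (done with Metasoft) is, at its core, an inverse-variance-weighted average of per-study log-odds estimates. A minimal sketch, not the Metasoft implementation itself:

```python
import math
from statistics import NormalDist

def fixed_effects_meta(betas, ses):
    """Inverse-variance-weighted fixed-effects meta-analysis of per-study
    effect estimates (e.g., log-odds for an HLA allele). Returns the
    pooled estimate, its standard error, and a two-sided Wald p-value."""
    w = [1.0 / s ** 2 for s in ses]                      # inverse-variance weights
    beta = sum(wi * b for wi, b in zip(w, betas)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    p = 2 * NormalDist().cdf(-abs(beta / se))
    return beta, se, p
```

Pooling studies this way assumes a common true effect; the between-population inconsistency noted for DQA1*02:01 is exactly the situation in which this assumption is strained.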
This observation is consistent with previous observations from HLA analyses in Icelandic (DQA1*02:01: OR = 0.82, p-value=7.39e–4) and Han Chinese populations (DQA1*02:01: OR = 0.82, p-value=7.39e–4), but showed the opposite direction of effect in another Chinese population (DQA1*02:01: OR = 1.28, p-value=0.0193, Figure 3B; Sveinbjornsson et al., 2016; Li et al., 2021; Zheng et al., 2018). The significant HLA associations overlap with the association peak observed in the multi-ancestry meta-analysis (Figure 2) but show more consistency in the direction of effects between the input studies compared to the lead SNPs identified in the association peak. This suggests that the rs28383206 association in the meta-analysis is tagging an HLA allele, where the different linkage disequilibrium (LD) patterns of the included ancestral populations result in the differences in effect sizes between populations at the rs28383206 association. This variation in significant associations is, in part, attributable to the observed variation in HLA allele frequencies across the included studies and may also reflect differential tagging of at least one unknown causal variant across populations (Source data 1, sheets 16–22). The variable role of classical HLA alleles in different populations could be partially due to the unique infectious pressures each geographical region faces, and could also explain why different strains of Mtb are more or less prevalent in different regions, as strains adapted to the HLA profile of the population within each region. Sequencing efforts of global mycobacterial isolates find hyperconservation of class II epitopes, suggesting pathogen advantage achieved through limiting HLA-II recognition and highlighting the potentially complex interplay between pathogen and host evolution in modifying class II presentation in TB infection (Comas et al., 2010).
Previous work has shown evidence of interaction between genetic variants of the host and specific strains of Mtb in Ghanaian, Ugandan, South African, and Asian populations (Möller and Kinnear, 2020; Müller et al., 2021; Correa-Macedo et al., 2019; Salie et al., 2014; Luo et al., 2015; Wampande et al., 2019; Micheni et al., 2021; McHenry et al., 2021b; McHenry et al., 2020). These interactions provide further evidence that Mtb may have undergone substantial genetic evolution in concert with host migration and the evolution of different populations (Comas et al., 2013; Coscolla and Gagneux, 2014). Some studies suggest that HLA-II epitopes may have undergone regional mutations that modify HLA-II binding, and we speculate that the heterogeneity observed in HLA-II associations between regions may, at least in part, be accounted for by the different pressures exerted by varying strains of Mtb (Copin et al., 2016). Impact of infection pressure on meta-regression To further understand the heterogeneity across populations, we attempted to account for variation in levels of prior exposure that could serve to mask host effects, given that not all controls will have been exposed to Mtb. In low-transmission settings, more susceptible but unexposed individuals would be included as controls who, had they been exposed to Mtb, might have progressed to TB disease. Overall, including each cohort's estimated prevalence of prior exposure had a significant impact on the residual heterogeneity and association statistics of 5% of the variants included in the meta-analysis (419,460/8,355,367), which at a significance level of p-value<0.05 is what would be expected purely by chance.
Separating the results into bins according to p-values revealed that the bins where the covariate had the biggest impact were for p-values in the range of 1e–3 to 1e–5 (Figure 1—figure supplement 2), while the significant and suggestive associations reported in this study did not show any significant changes in residual heterogeneity. While the proportion of variants significantly impacted when correcting for infection pressure is low, and the impact is largest for variants with larger p-values, there was still an overall reduction in the chi-square value for the residual heterogeneity (mean chi-square value reduced by 10). This suggests that accounting for potential lifetime infections explains some of the observed residual heterogeneity but is most likely not the main driving force behind these residuals. When considering the impact of force of infection, it is important to consider not only the proportion of controls ever exposed but also the impact of recurrent exposure. There is some evidence to suggest that genetic barriers to progression to TB may be overcome if the infectious dose is high (Fox et al., 1929). Repeated exposure may be observed where TB prevalence is high, as in South Africa, and could contribute to the overall lower effect sizes observed in the GWAS enrolling RSA participants. Inclusion of potential lifetime infections in meta-regression could help adjust for these effects and prove useful not only for TB but for meta-analyses of infectious diseases in general, and should be further explored. Other suggestive loci that did not reach significance There were four loci with suggestive associations and strong peaks on the Manhattan plot (Figure 1) that did not reach significance but should still be considered potential variants of interest (Supplementary file 1c).
One chr9 peak (rs4576509, p-value=7.40e–7) was intergenic (Figure 1—figure supplement 3), while the second (rs6477824, p-value=2.99e–7) is located in the 5′-UTR region of the zinc finger protein 483 (ZNF483) gene (Figure 1—figure supplement 3), previously associated with age at menarche (Demerath et al., 2013; Elks et al., 2010). The chromosome 11 peak (rs12362545, p-value=1.24e–6) is located in the PPFIA binding protein 2 (PPFIBP2) gene (Figure 1—figure supplement 4), which plays a role in axon guidance and neuronal synapse development and has previously been implicated in cancer development (Colas et al., 2011; Wu et al., 2018). The final peak (rs35787595, p-value=5.41e–6), on chromosome 16 (Figure 1—figure supplement 5), is located in the craniofacial development protein 1 (CFDP1) gene region, which is involved in chromatin organization (Messina et al., 2017). These genes have not previously been linked to TB susceptibility, any potential role is unclear, and further validation of these variants is therefore needed before any conclusions about their impact on TB susceptibility can be drawn. Ancestry-specific meta-analysis Concordance in the direction of effects of the risk allele between the ancestry-specific meta-analyses was examined to determine whether significant enrichment (above the expected 50%) exists at different p-value thresholds. Significant enrichment in the concordance of direction of effect was only observed when using the European ancestry results as reference compared to the African meta-analysis results, for SNPs with p-values>0.001 and <0.01 (p-value=0.0061, Supplementary file 1d). The lack of enrichment between the ancestries suggests significant ancestry-specific associations, which could be further compounded by differences in local infection pressures.
Due to the lack of concordance and the separation of the ancestral populations in the principal component analysis (PCA) plot (Figure 4), ancestry-specific meta-analyses were done. The PCA plot (Figure 4) for the 12 studies (based on mean pairwise genome-wide allele frequency differences calculated by MR-MEGA) illustrates distinct separation between the three major population groups (Asia, Europe, and Africa). The separation observed between the African studies (Gambia/Ghana and RSA) is due to the high level of admixture in the RSA population. The RSA population is a five-way admixed South African population with genetic contributions from Bantu-speaking African, KhoeSan, European, and South and South East Asian populations, which explains the observed shift in the PCA plot (Daya et al., 2013; Figure 4). QQ-plots for the ancestry-specific analyses show no significant inflation or deflation. After removing associations without any clear peaks on the Manhattan plots (associations driven by a single study), we found no significant associations in the ancestry-specific analyses. However, suggestive peaks that did not reach genome-wide significance were identified in the European and Asian ancestry-specific analyses (Figure 4—figure supplements 1 and 2, Supplementary file 1e). Potential causes for the lack of associations and suggestive peaks in the African analysis (Figure 4—figure supplement 3) are the increased genetic diversity within Africa, the inclusion of admixed samples (RSA), and the smaller sample size compared to the other ancestry-specific meta-analyses. While power can be increased through the inclusion of greater genetic diversity, between-subpopulation differences in allele frequency can introduce confounding. Confounding by genetic background can result in both spurious associations and the masking of true associations, and may explain why results observed elsewhere do not replicate in admixed samples.
Removing the admixed data and analyzing only the Gambian and Ghanaian datasets also did not produce any significant results, although, clearly, the sample size was smaller. For the European analysis ( Figure 4—figure supplement 1 ), suggestive peaks were identified on chromosomes 6 (rs28383206, p-value=7.06e –08 ), 8 (rs3935174, p-value=1.00e –06 ), and 11 (rs12362545, p-value=1.06e –07 , Supplementary file 1e ), while the Asian ( Figure 4—figure supplement 2 ) analysis identified suggestive peaks on chromosome 6 (rs146049519, p-value = 1.06e –06 ) and 8 (rs62495207, p-value=5.10e –06 , Supplementary file 1e ). The suggestive peaks on chromosomes 6 and 11 in the European subgroup analysis overlap with the suggestive peaks of the multi-ancestry meta-analysis ( Figure 1 , Figure 4—figure supplement 4 , Supplementary file 1e ), but the suggestive peak on chromosome 8 is unique to this population ( Figure 4—figure supplement 1 , Supplementary file 1e ). The strongest signal for this peak (rs3935174, OR = 0.87, p-value=1.00e –6 ) is located in the ArfGAP with SH3 domain, ankyrin repeat, and PH domain 1 ( ASAP1 ) region, which encodes an ADP-ribosylation factor (ARF) GTPase-activating protein and is potentially involved in the regulation of membrane trafficking and cytoskeleton remodeling ( Brown et al., 1998 ). Variants in ASAP1 (rs4733781 and rs10956514) have previously been linked to TB susceptibility in a TB-GWAS analysis of the same Russian population included here ( Curtis et al., 2015 ). While these ASAP1 variants were present in all 12 studies and had consistent direction of effects, they presented with a strong signal in the European ancestry-specific analysis only (African and Asian p-values all ≥ 0.1). These differences in association were not driven by allele frequency differences as they are similar between the included study populations. 
A possible explanation for the association being observed only in the European meta-analysis is that the association is driven by the Russian dataset. rs4733781 has a strong signal in the Russian dataset (p-value=2.96e –7 ), but very weak signals in all other populations included in the analysis (p-value>0.01) and is in LD with rs3935174 (r2 = 0.6935 and D’ = 0.8791) identified in our analysis. rs4733781 also did not replicate in a previous GWAS from Iceland ( Sveinbjornsson et al., 2016 ), further suggesting that this association is not specific to European populations, but rather driven by the large Russian dataset included in this study. The suggestive peak on chromosome 8 in the Asian subgroup analysis lies in an intergenic region ( Figure 4—figure supplement 2 , Supplementary file 1e ) and the link to TB susceptibility is unclear. Finally, the suggestive region on chromosome 6 overlaps with the significant peak from the multi-ancestry analysis ( Figure 1 and Figure 4—figure supplement 2 ) and is located in the major histocompatibility complex, class II, DR beta 1 ( HLA-DRB1 ), as discussed above ( Figure 4—figure supplement 2 , Supplementary file 1e ). Prior associations To determine whether associations from previously published TB-GWAS, TB candidate SNPs, and SNPs within candidate gene studies replicate in this meta-analysis, we extracted all significant and suggestive associations from prior analyses and compared these to our multi-ancestry and ancestry-specific meta-analysis results ( Luo et al., 2019 ; Schurz et al., 2018 ; Chimusa et al., 2014 ; The Wellcome Trust Case Control Consortium, 2007 ; Curtis et al., 2015 ; Mahasirimongkol et al., 2012 ; Qi et al., 2017 ; Thye et al., 2010 ; Thye et al., 2012 ; Quistrebert et al., 2021 ; Hong et al., 2017 ; Zheng et al., 2018 ; Grant et al., 2016 ; Png et al., 2012 ; Daya et al., 2014b ). 
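The LD statistics quoted above for rs4733781 and rs3935174 (r² and D') follow from standard definitions given haplotype and allele frequencies. A minimal sketch of those definitions (input frequencies below are illustrative, not estimated from the study data):

```python
def ld_stats(pAB, pA, pB):
    """D, D' and r^2 between two biallelic loci, from the frequency of the
    AB haplotype (pAB) and the allele frequencies pA and pB."""
    D = pAB - pA * pB                       # raw disequilibrium coefficient
    if D >= 0:
        Dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        Dmax = min(pA * pB, (1 - pA) * (1 - pB))
    Dprime = abs(D) / Dmax if Dmax > 0 else 0.0
    r2 = D ** 2 / (pA * (1 - pA) * pB * (1 - pB))
    return D, Dprime, r2
```

D' can be high while r² is moderate when the two loci differ in allele frequency, which is why both measures are usually reported together, as in the text.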
In total, 44 SNPs and 36 genes were identified from the GWAS catalog, of which 33 SNPs and all candidate genes were present in our data (Source data 1, sheet 2). We also extracted the association statistics for a further 90 previously identified candidate genes from our multi-ancestry and population-specific meta-analysis results (Source data 1, sheet 2; Naranbhai, 2016). Using a Bonferroni-corrected p-value of 0.0015 for the number of SNPs tested (33) as the significance threshold for replication, two candidate SNPs (rs4733781: p-value=3.22e–5; rs10956514: p-value=0.000118; Source data 1, sheets 3 and 4) replicated in the multi-ancestry meta-analysis, both located in the ASAP1 gene region (Curtis et al., 2015; Chen et al., 2019; Wang et al., 2018). However, as discussed in the previous section, these associations are driven by the Russian dataset, which is the same data used by Curtis et al., 2015, where these associations were originally discovered. As only the Russian population included in our analysis presents with a strong signal for these variants, there is no independent evidence for these candidate SNPs, as they did not replicate in any other population. For the Asian ancestry-specific analysis, the replicated variant was rs41553512, located in the HLA-DRB5 gene (p-value=3.53e–5). HLA-DRB5 is located within the HLA class II region identified in the multi-ancestry meta-analysis (Figure 1) and was previously identified by Qi et al., 2017 in a Han Chinese population. The African ancestry-specific analysis did not replicate previous associations, with the lowest p-value at rs6786408 in the FOXP1 gene (p-value=0.023). While this variant was previously identified in a North African cohort, the fact that it does not replicate here could be because of the genetic diversity within Africa, and specifically the variability introduced by the five-way admixed South African population.
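The replication criterion above is a simple Bonferroni threshold over the 33 testable candidate SNPs (0.05/33 ≈ 0.0015). A sketch, using p-values quoted in the text:

```python
def replicated(pvals_by_snp, n_tests=33, alpha=0.05):
    """Bonferroni-corrected replication check: a prior candidate SNP
    replicates if its meta-analysis p-value falls below alpha / n_tests
    (0.05 / 33 ~= 0.0015 in the analysis described above)."""
    thr = alpha / n_tests
    return {snp: p < thr for snp, p in pvals_by_snp.items()}
```

Applied to the quoted values, the two ASAP1 SNPs pass the threshold while rs6786408 (p = 0.023) does not.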
Discussion This large-scale, multi-ethnic meta-analysis of genetic susceptibility to TB, involving 14,153 cases and 19,536 controls, identified one risk locus achieving genome-wide significance, and further investigation of this region revealed significant classical HLA allele associations. This association is noteworthy given that we show association for the same allele in other studies (Kinnear et al., 2017; Stein et al., 2017). Based on the significant association, rs28383206, in the HLA region identified in this multi-ancestry analysis (Figure 3A), HLA-specific imputation and association testing were done to fine-map the region and identify potential HLA alleles driving this association. HLA DQA1*02:01 had the strongest signal in the meta-analysis across the eight included studies (Figure 3B), but this signal disappeared when conditioning on the significant SNP (rs28383206). HLA DQA1*02:01 has previously been identified in an Icelandic and two Chinese populations, but the direction of effect was not consistent (Sveinbjornsson et al., 2016; Li et al., 2021; Zheng, 2018). Despite these inconsistencies, the association between Mtb and HLA class II should be explored in more detail in future studies. A study investigating the outcomes of Mtb exposure in individuals of African ancestry identified protective effects of HLA class II alleles in individuals resistant to TB, highlighting the importance of HLA class II in susceptibility to TB (Dawkins et al., 2022). HLA class II is a key determinant of the immune response in TB, and Mtb has mechanisms to directly interfere with MHC class II antigen presentation (Sia and Rengarajan, 2019). This is supported by studies in mice, where mice in which the MHC class II genes were deleted died quickly when exposed to Mtb, and died faster than mice in which MHC class I genes were deleted (Sia and Rengarajan, 2019).
The p-values of residual heterogeneity in genetic effects between the studies in the multi-ancestry meta-analysis show no significant inflation between the studies. This suggests that the differences in study characteristics (phenotype definition, infection pressure, Mtb strain) are not the main contributor to the lack of significant associations. However, they certainly have an impact, which is further compounded by ancestry-correlated heterogeneity and other factors (e.g., socioeconomic standing). The ancestry-correlated heterogeneity p-values are generally lower than those for residual heterogeneity, suggesting that genetic ancestry has a stronger impact on the differences in effect sizes between the studies. This is supported by the fact that previous TB genetic association studies have identified significant effects of ancestry on TB susceptibility (Chimusa et al., 2014; Daya et al., 2014b). However, the effects of genetic ancestry can be confounded by other factors not accounted for in this analysis, such as differences in socioeconomic factors (including differences in housing, employment, poverty, and access to healthcare), phenotype definitions, and differences in infection pressure between the included study populations (Hargreaves et al., 2011; Duarte et al., 2018; Lönnroth et al., 2009). Specifically, the lack of consistency and specificity in TB diagnosis between the included studies introduces heterogeneity and the potential for misclassification of cases and controls, which can reduce the power to detect significant associations (Supplementary file 1a). While this is a limitation of this study, the fact that the residual heterogeneity is outweighed by the ancestry-correlated heterogeneity suggests that the phenotype definitions are not the main driver behind the lack of significant associations.
For the ancestry-specific analyses, fewer studies mean less between-study heterogeneity to account for, but the reduced sample size was not sufficient to detect any ancestry-specific genome-wide associations. This is particularly evident for the African ancestry-specific meta-analysis, where the large degree of heterogeneity (possibly a result of the high genetic diversity within Africa), in combination with differences in socioeconomic factors compared to the other populations included in this study, resulted in no observable suggestive association peaks ( Campbell and Tishkoff, 2008 ; Peprah et al., 2015 ). Furthermore, the suggestive associations ( Supplementary file 1c and e ) reported in this study should be interpreted with care, and further validation is required before any conclusions can be drawn about their impact on TB susceptibility. Polygenic heritability estimates revealed genetic contributions to TB susceptibility for all studies, but the level of this contribution varied greatly (5–36%), suggesting that other factors contribute both to the lack of significant associations detected in this meta-analysis and to the variation observed in the polygenic heritability estimates. These factors likely include environmental, socioeconomic, and varying levels of infection pressure, as well as genetic ancestry-specific effects between the included study populations. An individual from South Africa will face a much higher force of infection than individuals in Europe, and assuming that environmental circumstances are equal will significantly skew these crude heritability estimates ( Pearce, 2011 ). 
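Heritability estimated on the observed case-control scale is commonly transformed to the liability scale using the population prevalence K and the sample case fraction P (the standard Lee et al. transformation). Whether the heritability estimates reported here were computed exactly this way is an assumption; the sketch illustrates the standard formula only:

```python
import math
from statistics import NormalDist

def liability_scale_h2(h2_obs, prevalence, case_fraction):
    """Convert observed-scale (0/1 case-control) heritability to the liability scale.

    h2_obs:        heritability estimated on the observed scale
    prevalence:    population prevalence K of the disease
    case_fraction: proportion P of cases in the study sample
    """
    nd = NormalDist()
    t = nd.inv_cdf(1.0 - prevalence)  # liability threshold for prevalence K
    z = nd.pdf(t)                     # standard normal density at the threshold
    k, p = prevalence, case_fraction
    return h2_obs * (k * (1 - k)) ** 2 / (z ** 2 * p * (1 - p))
```

Because the transformation depends on K, assuming the wrong prevalence (e.g., ignoring differences in infection pressure between settings) directly distorts the liability-scale estimate, which is the concern raised in the paragraph above.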
This argument is supported by the fact that increasing disease prevalence (higher infection pressure) increased the level of genetic contribution to TB susceptibility up to a certain point, presumably accounted for by increasingly informative control samples, after which further increasing the infection pressure did not further impact genetic susceptibility. To determine the impact that force of infection has on the level of genetic contribution to TB susceptibility, we modeled values for the proportion of people ever infected with Mtb to include in the multi-ancestry meta-analysis and correct for the different force of infection faced by individuals in each country. Inclusion of this covariate, however, only resulted in a significant difference for 5% of the analyzed variants, which is what would be expected by chance alone, and as such we cannot conclude that a significant portion of the observed residual heterogeneity is explained by this. Limited metadata forced us to make several assumptions about the ages of study participants and the dates on which they were enrolled. With more precise metadata, or Mtb infection test results in controls, the potential impact of lifetime infection could be better quantified and may contribute to elucidating genetic TB susceptibility. Multi-ancestry meta-analyses of other infectious diseases could also potentially benefit from the inclusion of force-of-infection covariates. It would also be important to determine whether there is a level of exposure beyond which host genetic barriers to infection are overcome ( Simmons et al., 2018 ). A single significant association was identified in this multi-ancestry meta-analysis, a modest yield compared to other meta-analyses of similar size. Factors contributing to this include the difficulty of analyzing multi-ancestry data, the outdated arrays and lack of suitable reference panels for the included study populations, and heterogeneity in case and control definitions between the studies. 
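The "proportion ever infected" covariate can be approximated with a simple catalytic model: under a constant annual risk of infection λ (the force of infection), the probability of having been infected at least once by age a is 1 − e^(−λa). This is a sketch of that standard model, not the study's exact procedure, and the ARI values below are illustrative:

```python
import math

def prop_ever_infected(ari, age):
    """Catalytic model: P(ever infected by `age`) under constant annual risk `ari`."""
    return 1.0 - math.exp(-ari * age)

# Contrast a high-burden with a low-burden setting at age 30 (illustrative ARIs):
high = prop_ever_infected(0.02, 30)    # roughly 45% ever infected
low = prop_ever_infected(0.001, 30)    # roughly 3% ever infected
```

The gap between `high` and `low` illustrates why controls drawn from high-burden settings are far more informative (most have been exposed yet remained disease-free) than controls from low-burden settings.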
The issue of heterogeneity in definitions is especially pronounced for this study, as it included unpublished data with limited accompanying information that does not indicate how cases were confirmed or how controls were collected. The complexity of TB and the generally small genetic effects suggest that larger sample sizes or alternative methods of investigation are needed. Utilizing GWAS arrays that better capture diverse populations, in combination with imputation using larger and more diverse reference panels, would allow for larger and more consistent datasets for future meta-analyses. Remapping specific regions of interest, such as the HLA region, ASAP1, or the TLR genes, using long-read sequencing would be invaluable. Increased amounts of genetic data will also allow for more accurate TB heritability analysis and permit analysis of polygenic risk scores and exploration of host–pathogen interactions. In conclusion, this large-scale multi-ancestry TB GWAS meta-analysis revealed significant associations and a shared genetic TB susceptibility architecture across multiple populations from different genetic backgrounds. The analysis shows the value of collaboration and data sharing to solve difficult problems and elucidate what determines susceptibility to complex diseases such as TB. We hope that this publication will encourage others to make their data available for future large-scale meta-analyses.
The heritability of susceptibility to tuberculosis (TB) disease has been well recognized. Over 100 genes have been studied as candidates for TB susceptibility, and several variants were identified by genome-wide association studies (GWAS), but few replicate. We established the International Tuberculosis Host Genetics Consortium to perform a multi-ancestry meta-analysis of GWAS, including 14,153 cases and 19,536 controls of African, Asian, and European ancestry. Our analyses demonstrate a substantial degree of heritability (pooled polygenic h² = 26.3%, 95% CI 23.7–29.0%) for susceptibility to TB that is shared across ancestries, highlighting an important host genetic influence on disease. We identified one global host genetic correlate for TB at genome-wide significance (p < 5 × 10⁻⁸) in the human leukocyte antigen (HLA)-II region (rs28383206, p-value = 5.2 × 10⁻⁹) but failed to replicate variants previously associated with TB susceptibility. These data demonstrate the complex shared genetic architecture of susceptibility to TB and the importance of large-scale GWAS analysis across multiple ancestries experiencing different levels of infection pressure.
Funding Information This paper was supported by the following grants: http://dx.doi.org/10.13039/501100000272 National Institute for Health Research Academic Clinical Lectureship to James J Gilchrist. http://dx.doi.org/10.13039/501100012041 Versus Arthritis 21754 to Andrew P Morris. http://dx.doi.org/10.13039/501100000265 Medical Research Council MR/P022081/1 to Peter J Dodd. http://dx.doi.org/10.13039/501100000272 National Institute for Health Research NIHR Clinical Lecturer to Tom A Yates. http://dx.doi.org/10.13039/501100000272 National Institute for Health Research CL-2020-21-001 to Tom Parks. http://dx.doi.org/10.13039/100010269 Wellcome 10.35802/222098 to Tom Parks. Acknowledgements Computation used the Oxford Biomedical Research Computing (BMRC) facility, a joint development between the Wellcome Centre for Human Genetics and the Big Data Institute supported by Health Data Research UK and the NIHR Oxford Biomedical Research Centre. Financial support was provided by the Wellcome Trust Core Award Grant Number 203141/Z/16/Z. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. This work was partly supported by a Grant in-Aid for Scientific Research (B) (KAKENHI 21406006) from Japan Society for the Promotion of Science (JSPS). The clinical information and samples in Thailand, in this part, were supported by JSPS KAKENHI 17256005 and later by research grant from the Ministry of Health, Labor and Welfare (MHLW) H21-aids-12. We would like to thank all the subjects and the members of the Rotary Club of Osaka-Midosuji District 2660 Rotary International in Japan who donated their DNA for this work. We thank all members of BioBank Japan, Institute of Medical Science, The University of Tokyo, and of RIKEN Center for Genomic Medicine for their contribution to the completion of our study. 
This work was conducted as a part of the BioBank Japan Project that was supported by the Ministry of Education, Culture, Sports, Science and Technology of the Japanese government. As for Thai samples, we thank all of the staff and collaborators of the TB/HIV Research Project, Thailand, a research project between the Research Institute of Tuberculosis, the Japan Anti-tuberculosis Association, and the Thai Ministry of Public Health for collecting clinical data and DNA samples. We thank the German Consortium 'TB or not TB Network' ( https://www.tbornottb.de/ ), which was responsible for collecting the German TB samples. We acknowledge the support of the DSI-NRF Centre of Excellence for Biomedical Tuberculosis Research, South African Medical Research Council Centre for Tuberculosis Research, Division of Molecular Biology and Human Genetics, Faculty of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa. This research was funded in whole, or in part, by the Wellcome Trust. For the purpose of open access, the author has applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. JJG is funded by an NIHR Academic Clinical Lectureship. APM acknowledges support from Versus Arthritis (grant reference 21754). PJD was supported by a fellowship from the UK Medical Research Council (MR/P022081/1); this UK-funded award is part of the EDCTP2 program supported by the European Union. ME was supported by an NHMRC fellowship (552496). The research was supported by the NHMRC grant 1025166. AvL and RvC are supported by the National Institute of Allergy and Infectious Diseases at NIH [R01 AI136921]. TAY is an NIHR Clinical Lecturer supported by the National Institute for Health Research. TP acknowledges funding from the National Institute for Health Research (CL-2020-21-001) and the Wellcome Trust (222098/Z/20/Z). 
The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research, or the Department of Health and Social Care. AM and RM are funded by the EU project no. 2014-2020.4.01.15-0012 'Gentransmed'. BA is supported by the 'Scientific Programme Indonesia Netherlands' (SPIN) under the Royal Academy of Arts and Sciences (KNAW), the Netherlands. Additional information Additional files Data availability Summary statistics of all meta-analyses will be made available on Dryad ( https://doi.org/10.5061/dryad.6wwpzgn2s ). The summary statistics and raw data (where available) of the individual data files cannot be made available, but enquiries or requests for these data can be made through the corresponding authors or the authors directly responsible for the data, listed in Table 1 . As the ITHGC consortium has strict data transfer and sharing agreements with the original authors/owners of the data, we cannot ethically share the source data files in any way, be it anonymized, de-identified, or in any other form. All data not restricted by these data transfer and ethical agreements have been either uploaded to the online repository ( https://doi.org/10.5061/dryad.6wwpzgn2s ) or submitted along with this document. Interested researchers who want to apply for access to the original raw and individual GWAS datasets, or to any other currently restricted data, can contact the corresponding author of this manuscript to be put in touch with the original data owners/authors, or can contact the original data owners/authors directly via the corresponding authors listed in Table 1 . Once the original authors/owners of the data have been contacted, discussions can be had to share the data using the appropriate and ethically approved methods, which could include data transfer agreements or similar application processes. 
The following previously published dataset was used: Schurz H, Naranbhai V, Yates TA, Gilchrist J, Parks T, Dodd P, Möller M, Hoal EG, Morris A, Hill AV. 2022. Multi-ancestry meta-analysis of host genetic susceptibility to tuberculosis identifies shared genetic architecture. Dryad Digital Repository. 10.5061/dryad.6wwpzgn2s
CC BY
eLife.; 13:e84394
PMC10789496
38226077
Introduction and background Growth disturbances in osteomyelitis Osteomyelitis is a serious and potentially debilitating bone infection characterized by the inflammation of the bone and its marrow components. While its devastating effects on bone health and overall well-being have been widely recognized, the impact of osteomyelitis on skeletal growth, particularly in pediatric patients, is an aspect that has received increasing attention in recent years [ 1 , 2 ]. This introduction and background section aims to provide an overview of the significance of understanding and managing growth disturbances in osteomyelitis. Osteomyelitis poses a significant challenge to healthcare professionals, characterized by the complex interplay of host defenses, bacterial pathogens, and the bone itself. The infection typically originates from hematogenous spread, direct inoculation, or contiguous spread from adjacent tissues, leading to bone necrosis and the formation of sequestra. Its pathophysiology, diagnosis, and treatment modalities have been the focus of extensive research, resulting in improvements in the management of the infection [ 3 ]. However, it is the secondary consequences of osteomyelitis, specifically its impact on skeletal growth, that have only recently gained the recognition they deserve. Pediatric patients, in particular, are highly susceptible to these growth disturbances due to the vulnerability of their growth plates [ 4 ]. The growth plate, also known as the physis, is a cartilaginous structure located at the ends of long bones. It plays a pivotal role in longitudinal bone growth by allowing the bone to elongate during the growth phase. This critical structure is particularly susceptible to the pathological processes associated with osteomyelitis, making pediatric patients especially prone to skeletal growth impairments [ 5 , 6 ]. The consequences of growth disturbances in osteomyelitis are profound. 
In cases where the growth plate is affected, patients may experience limb length discrepancies [ 7 ], angular deformities, and functional impairments, all of which can have a lasting impact on their quality of life [ 8 ]. Additionally, the economic burden associated with the long-term care and rehabilitation of individuals affected by osteomyelitis-related growth disturbances is substantial. Recognizing the importance of understanding and managing growth disturbances in osteomyelitis is crucial for improving patient outcomes. While the diagnosis and treatment of the underlying infection remain paramount, addressing the secondary complications related to growth plate involvement is equally essential. Early recognition, accurate diagnosis, and appropriate interventions can mitigate the long-term sequelae and ensure that patients can lead healthier, more productive lives. Therefore, this review article aims to shed light on the existing knowledge, key findings, and recommendations regarding growth disturbances in osteomyelitis, emphasizing the need for a comprehensive approach that encompasses both infection control and skeletal health.
Conclusions In conclusion, the management of growth disturbances in osteomyelitis demands a multifaceted approach involving timely diagnosis, antibiotic therapy, surgical interventions, physical therapy, and long-term follow-up. Recognizing the importance of addressing both the infection and its skeletal consequences is essential for optimizing patient outcomes and ensuring a better quality of life, particularly in pediatric cases. Clinicians should remain alert to growth plate involvement in children with osteomyelitis so that treatment can be initiated promptly and the long-term effects of growth plate damage minimized.
Osteomyelitis, a severe bone infection, poses a multifaceted challenge to healthcare professionals. While its pathophysiology and treatment have been extensively studied, the impact of osteomyelitis on skeletal growth, particularly in pediatric patients, is an area that warrants attention. This abstract highlights the significance of understanding and managing growth disturbances in osteomyelitis, providing key findings and recommendations for clinicians. Understanding growth disturbance in osteomyelitis is essential because it can lead to lifelong consequences for pediatric patients. The infection may affect the growth plate, leading to limb length discrepancies, angular deformities, and functional impairments. These complications not only diminish the quality of life but also pose a substantial economic burden on the healthcare system. Therefore, early recognition and intervention are crucial. Key findings indicate that the risk of growth disturbances in osteomyelitis is particularly high in pediatric patients due to the vulnerability of the growth plate. Timely diagnosis, appropriate management, and targeted interventions can mitigate the long-term sequelae of growth disturbances. These include utilizing advanced imaging techniques to assess the extent of growth plate involvement, optimizing antibiotic therapy, and employing surgical techniques like epiphysiodesis, guided growth, or corrective osteotomies. Additionally, fostering a multidisciplinary approach that involves orthopedic surgeons, infectious disease specialists, and pediatric endocrinologists is vital to achieving successful outcomes. Recommendations for managing growth disturbance in osteomyelitis encompass early detection, meticulous monitoring, and a tailored treatment plan. Healthcare providers should remain vigilant for signs of growth plate involvement in osteomyelitis patients, especially in the pediatric population. 
A thorough evaluation, including advanced imaging and clinical assessment, is essential for accurate diagnosis. Close collaboration between specialists to address the infection and its skeletal consequences is crucial. Furthermore, patient and family education plays a pivotal role in fostering compliance with the treatment regimen. In conclusion, understanding and managing growth disturbances in osteomyelitis is paramount, particularly in pediatric patients. The implications of growth plate involvement are significant, and timely intervention is essential to prevent lifelong consequences. By implementing a comprehensive approach that combines accurate diagnosis, multidisciplinary collaboration, and patient education, healthcare professionals can enhance the quality of life and well-being of those affected by this challenging condition.
Review Pathophysiology Osteomyelitis is a bone infection with a complex pathophysiology that can lead to growth disturbances, particularly in the pediatric population [ 9 ]. Understanding the pathophysiological mechanisms underlying these growth impairments is essential for effective diagnosis and management. This section explores the pathophysiological aspects of growth disturbances in osteomyelitis. Osteomyelitis often begins with the introduction of pathogenic microorganisms into the bone. This can occur through various routes, including hematogenous spread, direct inoculation due to trauma or surgery, or contiguous spread from adjacent soft tissues. In pediatric patients, hematogenous spread is more common, with bacteria reaching the bone through the bloodstream. The bacteria most frequently responsible for osteomyelitis include Staphylococcus aureus , Streptococcus species, and Haemophilus influenzae [ 9 ]. Once the pathogenic microorganisms establish themselves within the bone, an intense inflammatory response is triggered. This response involves the activation of immune cells, such as neutrophils and macrophages, and the release of proinflammatory cytokines and chemokines. The inflammatory process can cause local tissue damage, leading to bone necrosis and the formation of sequestra [ 10 ]. The infection and associated inflammation compromise the integrity of the bone. The release of enzymes and toxins by invading bacteria can result in bone destruction. In the context of growth disturbances, the most critical aspect is the involvement of the growth plate, also known as the epiphyseal plate or physis [ 11 ]. The growth plate is a specialized cartilaginous structure located at the ends of long bones, where longitudinal growth occurs. The bacterial infection can affect the growth plate directly or indirectly through the release of toxins, inflammatory mediators, and vascular compromise [ 11 ]. 
The involvement of the growth plate is a key factor in the pathophysiology of growth disturbances in osteomyelitis. The growth plate's unique characteristics, such as its rich blood supply and active cell turnover, make it particularly susceptible to bacterial invasion [ 12 ]. Infection of the growth plate can lead to its destruction and premature closure, disrupting the normal process of bone growth. This can result in limb length discrepancies, angular deformities, and functional impairments in pediatric patients [ 13 ]. In osteomyelitis, infection and inflammation can lead to vascular compromise within the affected bone. This compromise affects the blood supply to the growth plate, further exacerbating the damage. Insufficient blood flow can lead to ischemic changes, contributing to growth plate destruction and the formation of avascular areas within the bone [ 14 ]. The immune system's response to osteomyelitis includes the recruitment of immune cells and the formation of pus within the affected bone. This pus, along with dead tissue and debris, can create an environment conducive to bacterial growth and perpetuate the infection. In pediatric patients, the immune response can be heightened, as their immune systems are actively engaged in growth and development [ 14 ]. Understanding the pathophysiology of growth disturbances in osteomyelitis highlights the need for early diagnosis and appropriate management to prevent long-term sequelae. Accurate diagnosis and treatment should not only focus on eradicating the infection but also on addressing the secondary consequences, particularly the growth plate involvement. A comprehensive approach that combines infection control and measures to promote skeletal health is essential to ensuring optimal outcomes for patients affected by osteomyelitis-related growth disturbances [ 15 ]. 
Diagnosis Diagnosing growth disturbances in osteomyelitis can be a complex process, as it involves recognizing both the underlying bone infection and its impact on skeletal development [ 16 ]. Timely and accurate diagnosis is crucial for implementing appropriate management strategies to mitigate long-term sequelae. This section discusses the diagnostic modalities and considerations for identifying growth disturbances in osteomyelitis. Clinical evaluation is the first step in the diagnostic process. Healthcare providers should maintain a high index of suspicion for osteomyelitis in pediatric patients, particularly when there is a history of predisposing factors such as recent trauma, surgery, or systemic illnesses [ 16 ]. Common clinical findings may include localized pain, swelling, erythema, warmth over the affected bone, and limited range of motion. Furthermore, pediatric patients may exhibit signs of growth disturbances, such as limb length discrepancies or angular deformities [ 16 ]. Imaging plays a pivotal role in diagnosing osteomyelitis and assessing its impact on skeletal growth. Useful imaging modalities include plain X-rays, MRI, and CT, complemented by laboratory markers (ESR and CRP) and biopsy with culture. Conventional X-rays are often the initial imaging study, as they can reveal bony changes such as periosteal reactions, lytic or sclerotic lesions, and soft tissue swelling. In pediatric patients, growth plate involvement, epiphyseal irregularities, or asymmetry in bone length may be evident [ 15 ]. MRI is highly sensitive in detecting osteomyelitis and its complications. It provides detailed images of bone, soft tissue, and the growth plate. The presence of abscesses, sequestra, and growth plate abnormalities can be identified through MRI [ 16 ]. CT scans are valuable for assessing bone involvement, sequestra, and cortical bone defects. They are particularly useful for surgical planning in cases where bone deformities and growth plate disturbances require intervention [ 16 ]. 
Laboratory tests can aid in the diagnosis of osteomyelitis, although they are not specific to growth disturbances. Elevated inflammatory markers, such as erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP), are often observed in cases of osteomyelitis. Blood cultures may also be performed to identify the causative pathogen [ 17 ]. When imaging and clinical findings strongly suggest osteomyelitis, a bone biopsy may be necessary to confirm the diagnosis and guide antibiotic therapy. Biopsy specimens are sent for culture and sensitivity testing to identify the infecting microorganism and determine its susceptibility to antibiotics [ 18 ]. In cases where growth disturbances are suspected, a specific assessment of the growth plate may be necessary. This can be achieved through advanced imaging techniques, such as MRI, which can provide detailed information on growth plate integrity and any pathological changes [ 16 ]. Pediatric patients with osteomyelitis and growth disturbances require a multidisciplinary approach to diagnosis and management. Collaboration between orthopedic surgeons, infectious disease specialists, and pediatric endocrinologists is crucial for comprehensively evaluating the patient's condition and formulating an appropriate treatment plan. Early diagnosis and recognition of growth plate involvement can facilitate targeted interventions, which may include surgical procedures [ 18 ] to correct deformities or address growth plate damage, as well as antibiotic therapy tailored to the specific pathogen involved [ 19 ]. Overall, a thorough and multidisciplinary diagnostic approach is fundamental to ensuring optimal outcomes in cases of growth disturbances in osteomyelitis. Discussion The management of growth disturbances in osteomyelitis presents a unique set of challenges, particularly in pediatric patients. The complexity lies in the need for a multifaceted approach that addresses both the underlying infection and the skeletal consequences. 
Surgical interventions, including debridement, epiphysiodesis, guided growth, and corrective osteotomies, have proven effective in correcting deformities and limb length discrepancies [ 19 ]. However, the selection of the most appropriate surgical approach should be tailored to the individual patient's needs and specific growth plate involvement. The collaboration of a multidisciplinary team, comprising orthopedic surgeons, infectious disease specialists, pediatric endocrinologists, and physical therapists, is essential to provide comprehensive care [ 20 ]. Long-term follow-up and monitoring play a crucial role in assessing treatment efficacy, identifying recurrent infections, and managing potential complications. Patient and family education should remain a focal point in ensuring adherence to treatment regimens and understanding the potential lifelong impact of growth disturbances in osteomyelitis.
CC BY
Cureus.; 15(12):e50631
PMC10789501
38109137
Results After analyzing the reports of the 200 coronary angiographies, it was determined that 70.5% corresponded to men and 29.5% to women, with a mean age of 64.22 years (standard deviation, SD: 11.22). The anatomical origin of the anterior interventricular artery was normal in 99.05% of the sample, with a single variation, in a woman; in her, the artery originated from an independent ostium located in the left aortic sinus, and the main trunk of the left coronary artery was absent ( Figure 1 ). The most frequent arterial course was subepicardial, in 98% of cases, with the exception of four (2%), three men and one woman, in whom a muscular bridge over segment 13, or intermediate segment, of the anterior interventricular artery was identified and reported. Right coronary dominance with a normal origin of the anterior interventricular artery was found in 86% of cases. This type of dominance was the most frequent in both sexes; however, the second most frequent dominance in men was left, whereas in women it was the balanced type. The case of the anterior interventricular artery with anomalous origin showed left dominance. Arterial patency was normal in 57% of the total sample; alterations of the arterial lumen were observed in 43% of cases, more frequently in men. The most affected arterial segment was segment 13, or intermediate, in men and segment 12, or proximal, in women. Involvement of a single arterial segment was the most frequent, followed by two-segment involvement ( Table 1 ). The most frequent causes of impaired patency were atherosclerotic plaque, clot, or thrombus which, in some cases, were treated with salicylate analogues, such as Agrastat, with angioplasty, and with implantation of conventional or drug-eluting stents. 
The most severe cases were referred for presentation to a cardiac surgery board. Twenty-five percent of the individuals, 40 men and 10 women, presented precordial pain. Echocardiographic alterations were reported in 40%, 5% presented cardiac ischemia, and 59% showed electrocardiographic tracing alterations. Four cases presented a myocardial bridge along the course of the anterior interventricular artery: two with right dominance, one with left dominance, and another with balanced dominance. The two cases with right dominance presented precordial pain; one of them showed occlusion of two coronary segments, with a diagnosis of acute myocardial infarction. When the frequency of each type of coronary dominance was compared with the patency of the anterior interventricular artery, it was found that, proportionally, codominance or balanced circulation was the type most frequently associated with obstructive lesions, which were more frequent in women; right dominance was the least associated with obstruction ( Table 2 ). In most cases, precordial pain was concomitant with an obstructive lesion of the anterior interventricular artery, in segment 13 in men and in segment 12 in women. According to coronary irrigation, precordial pain occurred mainly in cases with right dominance ( Table 3 ).
All authors contributed substantially to the design, the analysis and interpretation of the data, and the drafting and critical revision of the content. Conflicts of interest: The authors declare that they have no conflicts of interest of any kind. 
La permeabilidad coronaria se valora con la escala TIMI ( Thrombolysis in Myocardial Infarction ); puntajes de 0 y 1 indican una lesión oclusiva asociada con cardiopatía isquémica. La dominancia coronaria más frecuente, según diversas técnicas, es la derecha, seguida de la izquierda en hombres y de una circulación balanceada en mujeres. Abstract Introduction. The anterior interventricular artery originates from the left coronary artery and irrigates the anterior surface of the ventricles, apex, and interventricular septum, making it the second most relevant artery of the heart. Objective. To describe the anatomical and clinical aspects of the anterior interventricular artery through angiography. Materials and methods. A descriptive study was conducted using 200 angiographic reports of Colombian individuals. The anterior interventricular artery's origin, course, patency, and coronary dominance were evaluated. Data related to chest pain, acute myocardial infarction, dyslipidemia, and electrocardiographic abnormalities were included. Statistical tests could not be performed due to this artery's low prevalence of anatomical variations. Results. One anterior interventricular artery was found to have originated from the left coronary sinus without a myocardial bridge, with no alteration in permeability, and with left dominance. The frequency of bridges was 2%, and the most frequent dominance was right in 86; permeability alterations occurred in 43% mainly affecting S13. Twenty-five per cent presented chest pain; 40%, echocardiographic alterations; 5%, ischemic heart disease, and 59%, electrocardiographic alterations. Conclusions. Variations of origin of the anterior interventricular artery have a low prevalence according to reports from Chile, Colombia, and Spain. anterior interventricular artery myocardial bridges were scarce compared to other studies, suggesting better specificity of computed tomography angiography or direct dissection for these findings. 
The assessment of coronary permeability is graded with the thrombolysis in myocardial infarction scale; values 0 and 1 indicate occlusive lesion associated with ischemic heart disease. According to various techniques, the most frequent coronary dominance the right, followed by the left in men and balanced circulation in women. Palabras clave: Key words:
Detailed knowledge of the anatomical aspects and clinical considerations of the anterior interventricular artery is fundamental for the diagnosis and treatment of coronary artery disease, which is the third leading cause of morbidity and mortality worldwide and the first in Colombia 1 . The anterior interventricular artery is one of the main arteries supplying the heart; it originates from the left coronary artery, runs through the subepicardial space along the anterior interventricular sulcus, and is divided into segments S12, S13 and S14 2 . It gives off branches that supply the anterior wall of the left ventricle, the anterior two thirds of the interventricular septum, and the cardiac apex 3 . Variations in the origin of the anterior interventricular artery range between 0.6 and 1.3%; they include a direct origin from the left or right aortic sinus, from a common trunk with the right coronary artery, and from the pulmonary trunk. They are rarely asymptomatic, since they are associated with ischemic heart disease and even sudden death 4 - 7 . The patency of the anterior interventricular artery can be assessed by an angiographic technique. Narrowing of the arterial lumen is associated with precordial pain; if the obstruction is 70% or greater, intervention is required because of the risk of left ventricular myocardial infarction and of atrioventricular block associated with injury to the atrioventricular bundle, also known as the bundle of His 8 . The information available in descriptive anatomy textbooks on the anterior interventricular artery, or left anterior descending artery, which is used in the training of undergraduate and graduate students in medicine and other health sciences, is usually limited to a description of the normal anatomical origin and of a single branching pattern 9 , 10 .
This article describes in greater detail variables such as the course, the patency, and the relationship of this artery with the type of coronary dominance, which may contribute to a more complete knowledge of one of the most clinically relevant arteries of the heart. The structural and functional compromise of the anterior interventricular artery, which is associated with coronary artery disease and its high rate of morbidity and mortality, requires detailed and precise knowledge grounded in evidence-based medicine 4 . Therefore, the information described here, based on descriptive research, provides structural and clinical data that can be taken into account when providing health services to people with heart disease associated with arterial lesions. This information is crucial for the team of health professionals responsible for the assessment, diagnosis, treatment, and rehabilitation of people with coronary artery disease. This study also benefits morphologists and physiologists, because it provides more detailed data on the anatomy and lumen of the anterior interventricular artery, regarding not only its origin but also its course and the patency of each of the three segments assessed and treated in hemodynamics, for which there are no reports in other studies of the Colombian population. The objective of this article was to describe the anatomical aspects of the anterior interventricular artery using an angiographic technique; these aspects include origin, course, dominance, and patency. In addition, some clinical manifestations are described, such as ischemia, precordial pain, and coronary artery disease, derived from structural conditions due to altered patency and from its relationship with the three types of dominance, compared by sex.
Materials and methods A descriptive cross-sectional study was conducted with 284 reports of coronary angiographies performed in 2022 at a high-complexity clinic in southwestern Colombia. Reports of procedures performed on people born outside Colombia, with a history of coronary revascularization surgery, or with incomplete coronary catheterization were excluded, leaving a sample of 200 angiographic reports. From each angiographic report, nominal qualitative variables were obtained: sex, personal health history (including precordial pain and ischemic heart disease), coronary dominance, and descriptive data of the interventricular artery, such as origin, course, and patency of its segments S12, S13 and S14. The common anatomical origin of the left circumflex artery ( ramus circumflexus ) and the anterior interventricular artery (left anterior descending artery) from the trunk of the left coronary artery was considered normal. Coronary patency was evaluated using the TIMI ( Thrombolysis in Myocardial Infarction ) flow scale, and coronary dominance was classified as right, left, or balanced, following Schlesinger's criteria. The data were entered into a systematized Microsoft Excel database for tabulation and calculation of percentages. Angiographic images with anatomical variations of the anterior interventricular artery were analyzed with Siemens ACOM PC Lite software. This study was approved by the institutional human research ethics review committee of the Facultad de Salud, Universidad del Valle, and the research was classified as risk-free as defined in Resolution 8430 of 1993 of the Colombian Ministry of Health. The information in the angiographic reports was handled in accordance with Resolution 1995 of 1999 of the Colombian Ministry of Health.
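The tabulation step described above (digitising each report and deriving percentages per category) can be sketched in a few lines of Python. The record values below are invented for illustration only; the study used Microsoft Excel for this step.

```python
from collections import Counter

# Hypothetical dominance value extracted from each angiographic report.
reports = ["right", "right", "left", "balanced", "right"]

def percentages(values):
    """Percentage of each category, rounded to one decimal place."""
    counts = Counter(values)
    total = len(values)
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

print(percentages(reports))  # e.g. right dominance as a share of all reports
```

The same computation over the study's 200 reports is what yields figures such as right dominance in 86% of cases.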
Discussion Anatomical variations in the origin of the coronary arteries have a low prevalence in the world population. In Chile, Ugalde et al . conducted a prospective study of 10,000 patients undergoing coronary angiography and reported anomalies in the origin of the coronary arteries in 1.29%; in 0.8% of cases these involved the anterior interventricular artery originating from the right aortic sinus, with no other type of variation for this vessel 11 . In Spain, Sarria et al . carried out a descriptive study with 1,180 CT angiographies and reported variations in coronary arterial origin in 2.2%; 15% of these corresponded to absence of the left coronary trunk, that is, an independent origin of the anterior interventricular and circumflex arteries from the left aortic sinus, which is consistent with the only variation of origin reported in the present study 12 . Kosar et al . evaluated 700 coronary CT angiography reports, with a variation-of-origin rate of 3.9%; in 0.4% of these cases, the anterior interventricular artery originated from an independent ostium in the left aortic sinus 13 . These studies showed findings similar to those of the present investigation, which found a very low prevalence of variation in the anatomical origin of the coronary arteries, involving an anterior interventricular artery with an independent origin in the left aortic sinus. The normal course of the anterior interventricular artery, like that of the other branches of the coronary arteries, is subepicardial, that is, between the epicardium and the myocardium, where the arteries are surrounded by adipose connective tissue known as subepicardial fat.
The most frequent variations of the coronary arterial course are of the intramyocardial or myocardial bridge type, in which the artery penetrates the thickness of the cardiac striated muscle and crosses it in one of its segments or even along its entire course 14 . The angiographic reports in the present study did not indicate the depth of the myocardial bridge; however, the only artery affected was the anterior interventricular, as suggested by Pérez 15 . In Colombia, Ballesteros et al . conducted a descriptive observational study of 154 hearts, which were dissected after resin injection of their coronary arteries. They found 92 myocardial bridges in different arterial segments of 62 hearts. The artery with the greatest number of bridges was the anterior interventricular, in its segments 12 and 13 16 . In Brazil, Sousa et al . dissected the anterior interventricular artery of 30 human hearts and reported myocardial bridges in 46.7%, mainly affecting segments 13 and 12 17 . Comparing the muscular bridge findings of these two reports with those of the present study reveals an important difference. It is likely that identifying myocardial bridges by angiographic procedures is more difficult than by direct dissection, especially in the case of partially tunneled or thin bridges. In a descriptive study using computed tomography angiography in 393 patients, De Agustín et al . found 86 bridges in 82 patients, involving the anterior interventricular artery in 87.2% of cases; the myocardial bridges were associated with cardiomyopathy 18 . Anatomical studies describe right coronary dominance as the most frequent (85%), consistent with the 86% found in the reports of the present study.
This also agrees with a direct descriptive study in a Colombian mestizo population, which found right dominance in 83.7% of cases, followed by balanced circulation in 9.2% 19 . According to Ballesteros et al ., 56% of myocardial bridges occurred in cases of balanced circulation and 39.3% in cases of right dominance; the anterior interventricular artery was the most affected 16 . These findings coincide with those of the present study in that the anterior interventricular artery shows the greatest number of myocardial bridges; however, the dominance most associated with the finding of bridges was the right. The influence of left coronary dominance on precordial pain associated with ischemia of the anterior surface of the left ventricle and the cardiac apex, both supplied by the anterior interventricular artery, has clinical value for the long-term prognosis of patients with this type of heart disease 20 . Blood flow in the anterior interventricular artery is vital for the supply of oxygen and nutrients to the atrioventricular bundle, or bundle of His, and to the myocardium of the apex and the anterior surface of the left ventricle; therefore, alteration of its patency is associated with ischemic heart disease, myocardial infarction, and altered electrical conduction in the ventricles. The TIMI scale is used in interventional cardiology to evaluate the patency of the coronary arteries and determine the blood flow reaching the cardiac muscle. Its score ranges from 0 to 3: 0 is absence of flow; 1, minimal or barely perceptible flow; 2, slow but sustained flow; and 3, normal flow 21 .
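The TIMI grading just described maps directly to a small lookup. The sketch below is illustrative only (the function name and return shape are our own); the grade meanings and the occlusive-lesion cutoff (grades 0 and 1) follow the text.

```python
# TIMI flow-grade meanings as described in the article.
TIMI_DESCRIPTIONS = {
    0: "absence of flow",
    1: "minimal, barely perceptible flow",
    2: "slow but sustained flow",
    3: "normal flow",
}

def classify_timi(grade: int) -> dict:
    """Return the description of a TIMI grade and whether it indicates
    an occlusive lesion (grades 0-1, per the article)."""
    if grade not in TIMI_DESCRIPTIONS:
        raise ValueError(f"TIMI grade must be 0-3, got {grade}")
    return {
        "grade": grade,
        "description": TIMI_DESCRIPTIONS[grade],
        "occlusive_lesion": grade <= 1,
    }

print(classify_timi(1))  # grades 0 and 1 flag an occlusive lesion
```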
In the review by Dattoli et al ., angiography demonstrated altered coronary patency involving the anterior interventricular artery in 71.5% of cases; this lesion was associated with acute myocardial infarction in 6 to 12% of patients, and such findings are increasingly frequent in people under 45 years of age with a history of smoking and dyslipidemia 22 . Some of the indications for coronary angiography are electrocardiographic ST changes, precordial pain, and an abnormal stress test. However, not all angiographic procedures in these patients reveal altered coronary patency. People with precordial pain and myocardial ischemia but without obstruction of the coronary arteries can be assigned to other groups: one with myocardial infarction, termed MINOCA ( Myocardial Infarction with Non-Obstructive Coronary Arteries ), and another with angina and ischemia, called INOCA ( Ischemia with Non-Obstructive Coronary Arteries ) 23 . These associations are important; in the present study, 16% of people with electrocardiographic abnormalities did not present any obstructive lesion in the anterior interventricular artery. Pérez-Riera et al . described the case of a 73-year-old woman with a history of dyslipidemia, prediabetes, and arterial hypertension, with signs of sinus tachycardia and ventricular extrasystole on the electrocardiogram; her coronary CT angiography demonstrated an obstructive lesion of segment S12 of the anterior interventricular artery. A drug-eluting stent was implanted and the patient improved. This is a clear indicator of the importance of the anterior interventricular artery in the cardiac conduction system, given the blood supply it provides to the atrioventricular bundle 8 . Montero-Cabezas et al .
presented two cases of obstructive disease of the anterior interventricular artery preceded by ST-segment depression on the electrocardiogram, although the indication for interventional procedures, and even for urgent revascularization surgery, is associated with ST-segment elevation in people with precordial pain and coronary occlusion. Therefore, the electrocardiographic tracing is not always associated with altered coronary patency 4 . Anatomical variations in the origin of the anterior interventricular artery were infrequent, occurring in 0.5% of cases in the present study. According to current knowledge, they rank fourth among coronary variations: first is a coronary artery originating from the contralateral sinus; second, a single coronary artery; and third, a right coronary artery originating from the anterior interventricular artery. The normal course of the coronary arteries is subepicardial, that is, between the epicardium and the myocardium. Muscular bridges, however, are considered the main variation of the coronary course; in the case of the anterior interventricular artery, they represented 2%, whereas other studies report this type of course variant in up to 47% of cases. Right coronary dominance showed the greatest association with precordial pain. Dominance of irrigation did not influence the presence of muscular bridges, but it did correlate with altered patency in women with balanced circulation; right dominance was the least frequently associated with obstruction of the anterior interventricular artery. The findings of the present study did not differ from those reported elsewhere in the world regarding the prevalence of normal and variant anatomical origin, coronary dominance, and altered patency.
A novel aspect of this study is the description of each of the three segments of the anterior interventricular artery, which are ultimately the segments assessed clinically in hemodynamics and interventional cardiology for decision-making in patients with ischemic heart disease.
Acknowledgments To Freddy Moreno Gómez, for his methodological advice.
Biomedica. 2023 Dec 1; 43(4):483-491
PMC10789504
38226097
Introduction and background Opioid use disorder (OUD) is a significant cause of morbidity and mortality, globally affecting over 16 million people and responsible for more than 100,000 deaths per year [ 1 - 3 ]. Key reasons for this include overprescription and accessibility of opioid analgesia, with other biopsychosocial aetiological factors also at play [ 4 , 5 ]. Often referred to as the opioid crisis or opioid epidemic, OUD remains a significant public health issue [ 3 ]. Opioid replacement therapy (ORT) using opioid receptor agonists, such as methadone and buprenorphine, remains the gold standard approach for managing withdrawal symptoms and cravings associated with OUD. Current guidelines stipulate the prescription of ORT based on clinical features indicating opioid abuse, such as somatic withdrawal symptoms of diaphoresis, tachycardia, and anxiety, and behavioural symptoms such as drug-seeking practices and tunnel vision [ 3 , 5 ]. ORT has shown significant positive outcomes in managing OUD, such as reducing risky behaviour, crime rates, illicit opioid use, and all-cause mortality [ 6 - 8 ]. Despite these benefits, significant limitations associated with current ORT still exist, including the development of dependence and addiction to opioid agonists, risk of overdose, lack of equitable accessibility, and the potential for drug diversion [ 7 , 9 - 11 ]. These limitations stem from the inherent propensity of these drugs to elicit dependence and addiction, prompting the need to investigate non-opioid options in managing OUD [ 12 , 13 ]. The use of cannabidiol (CBD) has recently garnered clinical interest in tackling the opioid crisis. CBD is a potential non-opioid therapeutic for managing OUD due to its ability to interact with various neurochemical pathways associated with reducing addiction and withdrawal syndromes [ 14 ].
Additionally, the proposed analgesic effects of CBD have been suggested to reduce prescription opioid use, which is a significant factor towards opioid accessibility, misuse, and eventual OUD [ 4 , 15 ]. This review aims to provide insight into the potential role of CBD in ameliorating the opioid crisis and its role in the management of opioid dependence disorders. Herein, we will explore the current understanding of OUD and the factors that contribute to this condition and review the present landscape of ORT. The therapeutic potential of CBD in this arena along with its limitations and direction of future research will be further discussed.
Conclusions Opioid medications, both licit and illicit, have led to significant morbidity and mortality worldwide. Approaches have aimed to improve these outcomes for those with opioid dependence and OUD through ORT. Although shown to be effective, ORTs have significant limitations, one of which is the propensity to induce dependence and addiction. This highlights an unmet need for non-addictive, non-opioid alternatives in managing OUDs. Preliminary and preclinical evidence has found that CBD may target pathways implicated in addiction and dependence; however, the clinical application of this is challenged by a lack of high-quality, randomised clinical trials as well as government policy regarding medical marijuana products. Additionally, at this point in time, there is a poor understanding of best-practice considerations for CBD implementation, including the efficacious dosages for CBD-based OUD therapy, and a lack of clearly defined tolerability and side-effect profiles. Our review calls for further robust research before CBD can be considered in informing evidence-based policies and frameworks related to ORT.
Opioid use disorder (OUD) is a significant cause of morbidity and mortality worldwide and is linked to a complex interplay of biopsychosocial factors as well as the increasing overprescription and availability of opioid medications. Current OUD management relies on the controlled provision of opioid medications, such as methadone or buprenorphine, known as opioid replacement therapy. There is variable evidence regarding the long-term efficacy of these medications in improving the management of OUD, thereby necessitating an exploration into innovative approaches to complement, or even take the place of, existing treatment paradigms. Cannabidiol (CBD), a non-psychoactive compound derived from the cannabis plant, has garnered attention for its diverse pharmacological properties, including anti-inflammatory, analgesic, and anxiolytic effects. Preliminary studies suggest that CBD may target opioid withdrawal pathways, making it a potential therapeutic option for OUD. This narrative review synthesises current literature surrounding OUD and offers a nuanced review of the current and future role of CBD in managing this condition. In doing so, we highlight the potential avenues to explore with respect to CBD research for the guidance and development of further research opportunities, framework and policy development, and clinical considerations before medicinal CBD can be integrated into evidence-based clinical guidelines.
Review Understanding opioid use disorder Opioids are a large drug class derived from opium alkaloids found in the resin of opium poppy seeds ( Papaver somniferum ) [ 16 ]. Opioids can be naturally derived (morphine, codeine), semi-synthetic or synthetic (oxycodone, hydrocodone), or illegally manufactured (heroin) [ 3 ]. In practice, opioids are commonly prescribed as strong analgesics for the management of both acute and chronic pain [ 16 , 17 ]. This occurs through the ability of opioids to bind a variety of opioid receptors (delta, kappa, mu) found in the peripheral and central nervous system. Activation of these receptors triggers opioid signalling pathways that elicit downstream analgesic effects for effective pain management [ 18 ]. Despite these positive therapeutic effects and their role in modern medicine, it is important to recognise that opioids are simultaneously substances of abuse with a high potential for addiction and misuse that can easily transition into an OUD [ 18 ]. OUD describes the repeated, hazardous patterns of opioid misuse that result in the development of tolerance, dependence, and addiction. Its significant worldwide mortality rate arises as a result of respiratory depression from narcosis; studies have highlighted that the risk of mortality for individuals with OUD is between six and 20 times higher than that of the general population [ 1 , 3 ]. Opioid dependence results in morbidity that arises from the physiological and psychological symptoms of withdrawal. Physiological symptoms may include myalgia, arthralgia, nausea, vomiting, diarrhoea, and insomnia [ 19 ].
Psychological symptoms include compulsive opioid use habits, preoccupation with drug-seeking behaviour, development of drug-related cues and reward salience, an inability to control intake, and emotional and mental instability without opioid use, such as anxiety [ 20 - 23 ]. These put a strain on interpersonal relationships, contribute to mounting medical expenses, decrease employability, and place the individual at an increased risk of crime and incarceration [ 3 , 24 ]. Neurophysiological Factors Chronic opioid use gives rise to neurophysiological maladaptations, resulting in the development of opioid dependence and addiction. These changes are mediated by the central nervous system effects of the mu-opioid receptor [ 18 ]. The activated mu-opioid receptor acts centrally to increase dopamine neurotransmitter release in areas of the brain including the ventral tegmental area, nucleus accumbens, and dorsal striatum, eliciting a reward phenomenon [ 5 , 8 , 25 ]. Opioid misuse therefore leads to intense feelings of euphoria, fuelling the motivation for ongoing opioid use that subsequently progresses to dependence syndromes [ 8 ]. Chronic opioid abuse leads to long-lasting neuroplastic adaptations from repeated activation of dopamine activity, resulting in goal-directed behaviour and habit formation [ 23 , 26 ]. These neuroplastic adaptations are responsible for the development of opioid dependence and addiction, resulting in intense withdrawal symptoms with cessation [ 5 , 25 ]. Genetic factors influencing the mu-opioid receptor pathway can also increase the likelihood of developing OUD [ 3 ].
Additionally, chronic opioid use results in neurocircuitry changes, such as decreased receptor sensitivity and expression, impaired coupling of receptor activation, dysfunctional intracellular signalling activation, and adaptations in cell signalling pathways, further contributing to the development of opioid tolerance, dependence, and addiction [ 17 , 18 ]. Although conventional understanding and current gold-standard treatments for OUD focus on changes at the mu-opioid receptor level, it is important to note that other key neurophysiological signalling processes are implicated in OUD development. One mechanism is dysregulation of the endocannabinoid system through potentiated dopamine signalling with opioid abuse, which can exacerbate OUD [ 27 ]. Specifically, current studies have posited that disrupted mesolimbic dopaminergic pathways increase CB1R activity, which consequently potentiates a dopaminergic response in the ventral tegmental area, nucleus accumbens, and dorsal striatum to produce the rewarding euphoric effects important in the development of OUD [ 23 , 27 ]. Furthermore, it is interesting to note that CB1R and mu-opioid receptors are colocalised, which may result in reciprocal interactions that further potentiate dopaminergic effects and complicate OUD [ 23 ]. Moreover, serotoninergic pathways are disrupted with opioid abuse. Interestingly, differing neurophysiological effects have been documented depending on the nature of opioid pharmacokinetics. Acute opioid use has been demonstrated to result in surges of serotonin (5-HT) release within specific regions of the brain; however, chronic or sustained opioid use contrarily results in a reduced or absent 5-HT response. This impairment of physiological serotoninergic signalling is hypothesised to impair central pain and reward regulation, which can further drive the progression of opioid abuse into OUD [ 28 , 29 ].
Psychosocial Factors There are several psychological and social factors that increase one’s susceptibility to OUD and the eventual development of opiate dependence and addiction. Mental health disorders and unstable emotional environments increase the likelihood that an individual will misuse opioids [ 3 ]. Individuals with a personal history of depression, anxiety, relationship strain, sexual or physical abuse or trauma, and comorbid psychiatric disorders are linked with opioid abuse as a form of self-medication to relieve oneself from aversive events [ 1 , 3 ]. This repeated behaviour eventually forms a learned association between stressful situations and opioid misuse, developing subconscious links between opioid use, its euphoric effects, and relief from aversive life stressors [ 30 ]. Opioid misuse is further compounded by the overprescription and oversupply of prescription-only opioids, with a vast majority of opioid misuse stemming from what was initially a non-medical indication [ 18 ]. In Australia, approximately 11% of individuals aged 14 or over were reported to have used opioids for illicit or non-medical purposes, with the majority of exposure arising from pharmaceutical opioids (9.7%), secondary to excessive accessibility from over-prescribing [ 4 , 31 ]. Concerningly, a lack of education about appropriate opioid stewardship practices is a key determinant preceding OUD, and oftentimes inappropriate continuation of prescription opioids after completion of pain management regimes can result in persistent opioid misuse and progression into OUD [ 1 , 5 ]. OUD is a chronic, relapsing disease due to the addictive properties of opioids and the burdensome nature of withdrawal symptoms [ 1 ]. One study found that approximately 60% of abstainers relapsed within three months, with 75-85% relapsing within 12 months [ 17 ].
A general framework of OUD and relapse can be conceptualised as an ‘addiction cycle’, consisting of a negative withdrawal symptom experience, followed by a phase of preoccupation, anticipation, and craving, finally culminating in binging and intoxication [ 20 ]. Relapse of abstinence can be triggered by various factors, including uncontrollable withdrawal symptoms, incentive salience from opioid overprescription, exposure to opioids, and relief from aversive life stress [ 30 , 32 ]. Although vulnerability to relapse is lifelong, maintaining abstinence for at least five years has been shown to substantially reduce the likelihood of relapse [ 1 ]. Current ways of managing OUD include psychosocial treatment (including cognitive behavioural therapy), ORT, or medically supervised opioid cessation (detoxification) with symptomatic management [ 3 ]. The current landscape of opioid replacement therapy Although there are several current strategies for managing OUD, ORT is the most effective and is widely considered the gold standard [ 33 ]. ORT has been shown to reduce opioid-related mortality by up to 70% [ 34 - 36 ]. Two formulations are available as ORT in Australia: buprenorphine and methadone [ 37 ]. Both of these drugs are selective, long-acting mu-opioid receptor agonists with differing pharmacodynamic properties: methadone is a full agonist, whilst buprenorphine is a partial agonist with a low dissociation rate from the mu-opioid receptor. Both bind to the mu-opioid receptor with higher affinity than other opioids, ultimately resulting in long-lasting, protective effects with minimal induction of euphoria [ 37 , 38 ]. The main principle of ORT is to replace non-medical opioid use and manage withdrawal symptoms with long-acting mu-opioid receptor agonists that elicit anti-craving effects without inducing euphoria [ 39 ].
Mechanistically, both methadone and buprenorphine competitively bind to mu-opioid receptors with a higher affinity compared to other opioids, including heroin [ 38 ]. This elicits three main effects: withdrawal symptoms are managed and suppressed due to the occupation of mu-opioid receptors; the long-acting effect of these drugs minimises opioid-induced euphoria; and, by competitively binding to the mu-opioid receptors, any concomitant use of non-medical or illicit opioids has no euphoric effect [ 22 , 37 , 38 ]. Some physiological side effects of ORT, including gastrointestinal symptoms, can be managed concomitantly with anti-emetics or non-steroidal anti-inflammatory medications to provide symptomatic relief [ 19 ]. Eventually, ORTs are progressively tapered to minimise opioid craving and withdrawal symptoms whilst overcoming opioid dependence [ 8 ]. Current ORTs have been shown to significantly reduce risky behaviour, crime rates, illicit opioid use, and all-cause mortality since their introduction [ 6 - 8 ]. Despite this clinical success, there are significant limitations with current ORTs, which typically revolve around dependence and development of addiction, relapse, misuse, drug diversion, and poor treatment availability and adherence [ 10 , 12 , 37 , 40 ]. Propensity to Cause Dependence and Addiction As methadone and buprenorphine are opioid agonists, a potential side-effect is the development of iatrogenic dependence and addiction towards ORT. This poses an ethical debate on the safety and efficacy of using potentially addictive, abusable medications to treat a patient afflicted by a similarly problematic drug [ 13 , 40 ]. As a result, treatment with buprenorphine has increasingly come into favour over methadone, attributable to its pharmacodynamic difference of partial agonism, which decreases the tendency to elicit euphoria, dependence, and addiction [ 12 , 22 ].
Furthermore, buprenorphine has commonly been co-formulated with the opioid receptor antagonist naloxone to reduce euphoric effects and prevent drug diversion [ 8 ]. However, despite this safety mechanism, both ORTs can still induce dependence and addiction, cause withdrawal symptoms during the tapering phase, promote relapse, and exacerbate OUD [ 10 ]. Problematically, these withdrawal symptoms can persist for months [ 26 ]. Risks During Relapse Due to the onset of withdrawal symptoms upon ORT tapering, there remains a high risk of relapse. Studies have highlighted that within the first month of ORT discontinuation, the mean relapse rate is approximately 50% [ 9 ]. Of note, relapse is especially risky after a course of ORT: the decreased use of opioids during long-term treatment leads to a reduction in physiological opioid tolerance and an increase in opioid sensitivity, and consequently exacerbated negative effects, including respiratory depression, cardiovascular effects, and increased vulnerability to overdose [ 41 , 42 ]. Unsurprisingly, there is an eight-fold increased risk of mortality within the first month immediately after discontinuation of ORT [ 42 ]. Misuse of Opioid Replacement Therapy and Drug Diversion Although euphoric effects are mitigated by the long-acting pharmacology of ORTs, there remains a plausible possibility of misuse, particularly for the full mu-opioid receptor agonist methadone [ 8 , 12 ]. Reasons for misuse are varied and can begin with inappropriate and prolonged continuation of ORT due to fear of relapse, or stem from medication abuse for euphoric effects [ 9 ]. Certain patient groups with tolerance to opioids, such as those with chronic pain and complex care needs, may require greater opioid doses to achieve adequate levels of analgesia. 
These populations are important to recognise, as judicious opioid stewardship in this group fundamentally differs from that in the opioid-naïve population. Failure to recognise this need can lead to stigmatisation of such populations and poor pain management. Anecdotally, many individuals who were dissatisfied with their prescribed opioid dose have been described as seeking out opioids in greater amounts, often illicitly or by obtaining prescriptions from multiple independent prescribers, termed ‘doctor shopping’. An extension of misuse is drug diversion, which is a major public health concern. Drug diversion involves the distribution of methadone or buprenorphine into the black market for illicit recreational purposes, with population-level studies highlighting that diverted ORTs (methadone in particular) contribute significantly to opioid-related deaths [ 7 , 13 ]. Treatment Availability and Retention A variety of social, economic, and psychological factors influence ORT availability and retention rates. Studies have estimated that up to 15% of individuals with OUD are on ORT [ 37 , 43 , 44 ]. Although it is widely used for dependence and addiction, there remains an existing social stigma, preconceived negative views, and a general lack of awareness that ORT is a medically prescribed, evidence-based method of managing OUD [ 37 , 40 , 44 ]. By extension, some countries (such as Russia) and state systems (such as correctional facilities) discourage or prohibit the use of ORTs [ 40 ]. Secondly, the costs of ORTs are high, limiting accessibility [ 12 , 26 ]. Finally, strict regulations govern the prescription and supply of ORT. In Australia, Victorian prescribers must have completed pharmacotherapy training and obtained a permit from the Department of Health for each patient prescribed ORT. Pharmacies, too, are bound by rules that enforce supervised dosing and tight restrictions on takeaway doses. 
Strict regulations around ORT are in the interest of minimising drug diversion but can represent a barrier to entry, with potential ORT patients unwilling to attend their chosen pharmacy up to seven days a week to receive treatment, due to either inconvenience or stigma. As a result, patients are often involuntarily discontinued from ORT at the discretion of the clinician because of an inability to follow rigorous treatment programs [ 9 ]. Hence, despite clinical successes, there is an unmet need for non-addictive, non-opioid therapies in managing OUD [ 26 ]. Cannabidiol Cannabis, otherwise known as marijuana or Cannabis sativa , is a commonly used recreational drug that has been legalised in some countries for recreational and medical indications, including as an analgesic and anxiolytic [ 33 , 45 ]. Constitutively, the main active components of cannabis are cannabinoids, which primarily consist of the non-psychoactive CBD and the psychotomimetic Δ9-tetrahydrocannabinol (THC) [ 40 ]. There has been growing interest in the use of medicinal cannabis-related products for the management of OUD. As the legal use of medicinal cannabis becomes more prevalent worldwide, studies have recently highlighted promising associations between medicinal cannabis use and decreasing opioid abuse, demonstrating its potential in managing OUD. Firstly, jurisdictions that permit the legal use of medicinal cannabis, with adequate availability in dispensaries, have shown lower opioid prescriptions, decreased non-medical opioid abuse, and reduced mortality from opioid overdose, as shown by several studies investigating opioid-related deaths across many US states between 1999 and 2010 [ 46 - 48 ]. Secondly, many individuals reported that using medicinal cannabis to self-manage OUD improved symptoms of opioid withdrawal, anxiety, and gastrointestinal upset [ 49 - 51 ]. 
Finally, in a large survey (n = 2897), 97% of participants reported that the use of medicinal cannabis-related products for chronic pain management resulted in a reduction of opioid use [ 52 ]. One study found that prescription opioid use decreased by 40-60% with medicinal cannabis use, with patients reporting greater satisfaction with, and preference for, medicinal cannabis-related products compared to prescription opioids [ 53 ]. Although these studies are observational and rely on self-reporting and surveys, increasing the risk of bias, the results highlight the significant potential of medicinal cannabis-related products in ameliorating the opioid crisis. Of all the constituents of cannabis, CBD shows the greatest promise for the treatment of OUD [ 54 ]. The key advantage of CBD over whole cannabis and THC is its non-psychoactive nature, which eliminates the risk of dependence and addiction [ 55 ]. This is particularly important as individuals affected by OUD are more likely to develop cannabis use disorder [ 19 ]. Furthermore, a non-psychoactive agent with existing widespread social acceptance may be more appealing to patients and minimise the psychosocial barriers to entry [ 56 ]. The safety profile of CBD is favourable, typically eliciting only mild side effects such as diarrhoea and moderate sedation [ 57 ]. Additionally, CBD can be safely co-administered with opioids in the event of possible concomitant opioid relapse during treatment [ 44 , 58 ]. CBD has two main avenues of interest in addressing OUD. First, it has been investigated for its direct potential as a non-opioid alternative for managing opioid addiction. Second, given the growing evidence for, and greater leniency towards, medicinal cannabis as an effective analgesic, CBD has a potential role in pain management that may reduce prescription opioid use and thereby indirectly reduce the incidence of OUD [ 59 , 60 ]. 
Role in Opioid Dependence and Addiction CBD has been posited to influence various signalling pathways within the body to manage opioid dependence and addiction. First, it is a non-competitive negative allosteric modulator of cannabinoid receptor type 1 (CB1R), part of the endocannabinoid system [ 19 , 23 ]. During opioid abuse, CB1R activation disrupts dopaminergic signalling in the endocannabinoid system and may further potentiate dopamine release, which is hypothesised to drive the neurological maladaptations underlying OUD [ 23 ]. These include the development of emotional associations with substance-related cues, reward salience, and compulsive substance use habits that eventually result in dependence and addiction [ 23 ]. As CBD negatively modulates CB1R signalling, it can potentially attenuate dependence and addiction [ 23 , 27 ]. Secondly, CBD is an agonist of the serotonin 1A (5-HT1A) receptor, part of the serotonergic pathway. Serotonergic pathways have been shown to be disrupted in substance use disorders, and this disruption is associated with impulsivity, addiction, and relapse [ 28 ]. The effects of CBD on the serotonergic system have been shown to reduce craving and relapse, elicit anxiolytic effects, and improve stress management through complex pathways [ 61 ]. Finally, CBD is an allosteric modulator of the mu-opioid receptor, part of the opioid signalling pathway [ 40 ]. Interestingly, CB1R receptors are colocalised with mu-opioid receptors; hence, there is an overlap between the endocannabinoid and opioid signalling systems [ 19 ]. CBD action on the mu-opioid receptor has been shown to attenuate opioid withdrawal symptoms [ 24 ]. Overall, CBD may exhibit multiple mechanisms for managing opioid dependence and addiction without psychoactive properties, which is conducive to its suitability as a non-opioid therapeutic. 
Currently, multiple clinical studies have shown that CBD treatment significantly reduces craving, anxiety, and attention towards environmental cues that trigger drug-seeking behaviour in patients with OUD [ 57 , 62 , 63 ]. However, it is important to note that there is a paucity of high-powered randomised controlled trials and evidence for the use of pure CBD in managing opioid withdrawal symptoms, especially since the current literature on medicinal cannabis is conflicting [ 64 , 65 ]. Whilst it is an exciting new agent in the management of OUD, further research will be required to fully realise the potential of CBD. Reduction of Prescription Opioid Use CBD has been postulated to be able to replace, or act as an adjunct to, opioids as an analgesic; if so, it has the potential to decrease opioid use, thereby indirectly reducing OUD. Despite medicinal cannabis being an established analgesic, pain management with pure CBD has only been reported anecdotally and is currently understudied [ 26 , 45 , 66 ]. It is also important to note that no pure CBD medications are approved for pain management [ 26 ]. It is posited that CBD elicits analgesic effects through actions on the endocannabinoid, inflammatory, and nociceptive systems [ 67 ]. Firstly, CBD has been shown to activate the transient receptor potential cation channel subfamily V member 1 (TRPV1) and transient receptor potential cation channel subfamily A member 1 (TRPA1) receptors, which subsequently decreases inflammation and reduces the secretion of pro-inflammatory molecules [ 67 ]. These anti-inflammatory effects are posited to induce analgesia [ 55 , 68 ]. Secondly, CBD action as a positive allosteric modulator of mu-opioid receptors may result in endocannabinoid-opioid system interactions that induce analgesic effects [ 24 ]. Current clinical studies have shown conflicting evidence for the effects of cannabinoids in pain management. 
In one study, topical CBD oil was shown to significantly reduce peripheral neuropathic pain [ 69 ]. However, other studies demonstrate that CBD has no significant effect on pain management [ 70 , 71 ]. Additionally, CBD may have potential as an adjunct to prescription opioids due to endocannabinoid-opioid interactions, which may reduce the amount of opioid required for pain management [ 53 ]. In particular, one study demonstrated that combination treatment using CBD and prescription opioids resulted in hyper-additive pain relief, with overall reduced opioid use and side effects [ 72 ]. This adjunctive role of CBD in pain management offers a novel, non-opioid therapeutic option that may reduce prescription opioid use and thereby help ameliorate the opioid crisis. Clinical Considerations When considering the clinical applications of CBD in OUD, the clinical aspects of dosing and adverse effects remain poorly characterised. The underlying reason is the regulation of these substances, in both research and therapeutic applications, which limits the jurisdictions in which clinical trials can take place as well as the patients who can access these medications. With respect to dosing, there is at present no best-practice recommended dosing of CBD for OUD. Studies have used varying dosages for other indications. For example, in chronic pain, a dose of 20.8 mg of CBD per day has shown efficacy in improving neuropathy. This is, however, confounded by the self-titration of CBD products and their co-formulation with THC, which makes interpreting the dose of CBD, as well as its effect, quite difficult [ 73 ]. In anxiety disorder, clinical trials have found that a dose of 600 mg of CBD monotherapy taken 90 minutes prior to public speaking could significantly improve anxiety levels compared to control [ 73 ]. 
Jurisdictions in which CBD has been approved for medicinal use, such as Canada, have seen the successful introduction of products like Sativex® spray (THC 2.7 mg: CBD 2.5 mg), which is approved for neuropathic pain and multiple sclerosis with good efficacy [ 67 ]. With respect to OUD, a more recent study by Suzuki et al. evaluated the effects of single-dose CBD at 600 mg on cue-induced cravings in patients with OUD already on methadone or buprenorphine [ 63 ]. In this study, CBD was shown to reduce responses to drug-related cues and cravings. Despite these initial outcomes, OUD as an indication has yet to be thoroughly explored, and the doses of CBD required for efficacious ORT remain poorly characterised; dose-response studies and more rigorous, higher-powered randomised controlled clinical trials are needed to better evaluate this effect and to guide OUD policies and frameworks when considering CBD integration. Similarly, the adverse effect profile of CBD remains under-explored. Notably, a significant gap remains in research characterising the long-term effects of CBD, mainly because CBD has only recently been approved for clinical therapeutic and research use; a lag time is therefore expected before robust research with adequate follow-up length develops. Nonetheless, there is preliminary evidence of adverse effects associated with cannabinoids (inclusive of THC and CBD), including gastrointestinal side effects (nausea, vomiting, and in severe form, cannabis-induced hyperemesis syndrome), cognitive impairment and drowsiness, and risk of mental health disorders (including mania, anxiety, and psychosis, among others) [ 70 , 73 ]. With respect to the latter, paradoxically, CBD and medicinal cannabis products have also been reported to alleviate such symptoms and conditions. 
Other studies, however, have shown CBD to be well tolerated, with no incidence of adverse effects [ 63 ]. Dose-dependent relationships may explain this discrepancy; however, these are similarly poorly understood. Overall, the prevalence and incidence of these adverse events remain unclear, underscoring the need for further research to better characterise the tolerability and safety profile of CBD prior to widespread clinical application. Current limitations and future directions Despite the significant potential of medicinal cannabis-related products (specifically CBD) in managing OUD, this remains an understudied field with limited clinical evidence. This is attributable to the long-standing classification of cannabis, whose illegal status provides a major barrier to cannabis research and clinical trials [ 56 , 73 ]. However, with increasing decriminalisation and medicinal repurposing of cannabis worldwide, there will be greater accessibility and resources to conduct primary research on the role of CBD [ 54 ]. This nonetheless remains a challenge, with restrictive regulations in many jurisdictions. Specifically, cannabis, and by extension medicinal cannabis-based products, remains listed as a Schedule I drug in the US, which continues to impede research efforts and slow the progress of drug development [ 74 ]. Fortunately, in 2022, the US passed the “Medical Marijuana and Cannabidiol Research Expansion Act” to streamline research in this space, a step towards important milestones for the introduction of new drugs, including the generation of robust research meeting the requirements of the US Food and Drug Administration (FDA). 
Similar challenges have been experienced in other jurisdictions, with bodies such as the Therapeutic Goods Administration (TGA) of Australia restricting the prescription of medicinal cannabis-based products to authorised prescribers and issuing only limited licences for the cultivation, production, and manufacture of cannabis-based medicinal products. Moreover, the products on the market have inconsistent quality-control standards, which hinders the development of evidence-based best-practice guidelines for clinical applications. In this way, jurisdictions worldwide would benefit from more robust research that addresses questions of dosing and safe stock handling and supports the development of formalised training and safety guidelines. From the current scope of the literature, there are many promising results from observational studies and clinical trials indicating the potential role of CBD in managing opioid dependence and addiction [ 57 , 62 , 63 ]. Future studies using high-powered randomised controlled trials (with controlled routes of administration, evaluation of dose, and greater sample sizes with more rigorous and standardised measures) will be beneficial in generating evidence for the potential of CBD in managing OUD [ 14 , 45 , 73 ]. Future studies investigating CBD in pain reduction and as an opioid adjunct treatment for pain management would further elucidate the potential of CBD as a non-opioid analgesic, which may reduce opioid use and downstream OUD.
CC BY
Cureus.; 15(12):e50634
PMC10789508
38226181
1. Introduction Enterococci are natural inhabitants of the intestine, oral cavity, and female genital tract of humans and animals; however, they can cause opportunistic infections if relocated to sterile sites [ 1 , 2 ]. In a hospital environment, after gastrointestinal colonization, Enterococci can lead to various infections that include bloodstream infections, infective endocarditis, intra-abdominal/pelvic abscess, urinary tract infection, and surgical wound infection in critically ill patients [ 2 ]. Many of these infections originate from the intestinal flora of colonized individuals. VRE is subject to various selection pressures that favour proliferation and rapid expansion of its resistant population [ 3 ]. Enterococci can grow under a wide range of temperatures and pH, are resistant to dry conditions, and can grow at high salt concentrations. As a result, Enterococci can persist in a hospital environment for a long period of time and spread easily among admitted patients and hospital staff [ 4 ]. A major problem with Enterococci is that they are very resistant to antibiotics and have the ability to survive in harsh environments in the community and persist in hospital settings [ 3 ]. Because of this, they have become important etiological agents of community-acquired as well as healthcare-associated infections (HAIs), particularly in patients with prolonged hospital stays, severe underlying disease, or previous broad-spectrum antibiotic therapy [ 5 , 6 ]. According to the World Health Organization (WHO) report in 2017, vancomycin-resistant Enterococci (VRE) are among the most resistant bacteria on its “Global Priority List of Antibiotic-Resistant Bacteria” [ 6 , 7 ]. In the same manner, the Centers for Disease Control and Prevention (CDC) has classified Enterococci among bacteria with a threat level of serious [ 8 , 9 ]. 
During the last decade, a dramatic increase in the occurrence of vancomycin-resistant enterococci (VRE) has been noted in hospitals within the United Kingdom and the United States, and they have currently become the cause of one-third and one-fifth of all healthcare-associated infections in the United States and some European countries, respectively [ 10 , 11 ]. Although the presence of VRE has been studied in many developed regions of the world, there is a lack of comprehensive data indicating the burden of VRE in Africa; the few available studies report prevalences of 74.8% in South Africa, followed by 37.2% in Egypt, 9.8% in Uganda, 8.2% in Morocco, and 7.9% in Ethiopia [ 12 ]. Enterococci are the second most commonly reported cause of surgical wound infection and nosocomial urinary tract infection (UTI) and the third most frequently reported cause of bacteremia. In particular, Enterococcus faecalis and Enterococcus faecium have become causes of international concern [ 6 ]. At present, a serious public health issue is the presence of multidrug-resistant bacteria, including VRE, as components of the normal gastrointestinal microflora, coupled with the limited availability of drugs to treat VRE infections. The clinical significance of multidrug-resistant bacteria in the GIT has also been documented by different researchers: in one study, ESBL-positive strains of Klebsiella pneumoniae and Enterobacter cloacae isolated from rectal swabs were found to harbour a genetic determinant of vancomycin resistance identical to that of co-colonizing VRE strains, illustrating its transfer to other Gram-positive and Gram-negative bacteria causing bacterial infections [ 8 ]. 
Asymptomatic VRE gut colonization precedes infection in susceptible hosts, such as patients who are exposed to multiple and prolonged courses of antimicrobial agents, human immunodeficiency virus (HIV)-infected individuals, the severely ill, those hospitalized for long lengths of stay, those living in a long-term care facility, those located in close proximity to another colonized or infected patient, or those hospitalized in a room previously occupied by a patient colonized with VRE. Colonization is often acquired by vulnerable hosts in an environment with an increased rate of patient colonization with VRE [ 10 , 13 ]. The colonization rate of VRE has been reported in Europe, Asia, Australia, South America, and some African countries. However, there are insufficient data available on the prevalence and risk factors of VRE in developing countries like Ethiopia. VRE also poses a therapeutic challenge to physicians due to the ease of acquiring vancomycin-resistance genes and the presence of different selection pressures for VRE proliferation and rapid expansion of resistant populations [ 10 ]. Several studies have documented that Enterococcal infections are most commonly caused by the patient's own commensal flora [ 12 ]. Therefore, this study was conducted with the aim of determining the gastrointestinal colonization rate of vancomycin-resistant Enterococci among hospitalized patients at Hawassa University Comprehensive Specialized Hospital in Southern Ethiopia.
2. Methods 2.1. Study Design and Area A prospective hospital-based cross-sectional study was conducted from April 1 to June 30, 2021, at Hawassa University Comprehensive Specialized Hospital (HUCSH), which is located in Hawassa city, Sidama Regional State, Ethiopia, 275 km from Addis Ababa, the capital city of Ethiopia. According to projections of the Central Statistics Authority of Ethiopia, the population of Hawassa was estimated to be 436,992 in 2012 E.C. [ 14 ]. Hawassa University Comprehensive Specialized Hospital is the only hospital in the region with more than 500 beds, serving about 18 million people in the nearby regions of Oromia, SNNPR, and Somali. The hospital provides different outpatient and inpatient health care services such as HIV/AIDS care and treatment, oncology services, a chronic disease management clinic, surgery, gynecology and obstetrics, internal medicine, pediatrics, ophthalmology, psychiatry, radiology, and pathology services. 2.2. Study Population All patients hospitalized in the medical ward, surgical ward, pediatric ward, and adult intensive care unit for >48 hours were considered the study population. Patients who gave consent (or, for children, whose parents/guardians gave consent and who gave assent) were included in the study, whereas patients who were unable to provide specimens were excluded. The sample size was estimated using the single population proportion formula, assuming a prevalence of 5% for vancomycin-resistant Enterococci as reported from Jimma, Ethiopia [ 12 ], a confidence interval of 95%, and a 3% margin of error. After adding 10% for the nonresponse rate, the final sample size was 223. To determine the proportionate sample size for the wards, the average number of hospitalized patients in the three months before the study (January to March 2021) was considered. 
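The sample-size arithmetic described above can be reproduced with a short script. This is a minimal sketch only: it applies the single population proportion formula n = z²·p(1−p)/d² with p = 0.05, d = 0.03, z = 1.96, and a 10% nonresponse allowance, as stated in the text; the rounding convention is an assumption chosen to match the reported figure.

```python
import math

def single_population_proportion_n(p, d, z=1.96, nonresponse=0.10):
    """Single population proportion sample size plus a nonresponse allowance."""
    n = (z ** 2) * p * (1 - p) / (d ** 2)      # base sample size
    n = math.ceil(n)                           # round up to whole participants
    return math.floor(n * (1 + nonresponse))   # add 10% for nonresponse

# p = 5% assumed prevalence, d = 3% margin of error, 95% CI (z = 1.96)
n = single_population_proportion_n(p=0.05, d=0.03)
print(n)  # 223
```

The base calculation gives 202.75, rounded up to 203; adding 10% yields the final sample size of 223 reported in the study.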
Accordingly, the total number of patients in the medical ward, pediatric ward, surgical ward, and adult intensive care unit was 182, 66, 12, 94, and 10, respectively. The total patient flow for the last three months was 2007. The sample size was proportionally allocated to each ward based on the number of hospitalized patients in that ward over the last three months. 2.3. Data Collection An interviewer-administered structured questionnaire was used to document each patient's demographic and clinical details, which included age, sex, place of residence, occupation, educational level, marital status, medical history, clinical diagnosis, prior hospital/ICU admission, date of present admission to hospital and ICU, history of antibiotic usage, consistency of stool, hand-washing habit, admission ward, length of hospital stay, reason for admission, and use of medical devices. 2.4. Isolation and Identification of Enterococci During the study period, 223 patients who were admitted, stayed for >48 hrs, and from whom a fecal specimen of about 5 mg was collected in a sterile plastic container were included in the study. The specimens were labeled with a unique number, date, and time of specimen collection and transferred to the microbiology laboratory of HUCSH within 30 minutes. In case of delay, the specimen was placed in Cary-Blair semisolid medium (Oxoid Ltd, Basingstoke, Hampshire, England). Finally, the stool specimen was inoculated onto Enterococcus-selective medium, bile esculin azide agar containing 6 mg/L vancomycin (Hardy Diagnostics, Santa Maria), and incubated at 37°C. After 24 hours of incubation, the plates were inspected; bacteria with black/brown colonies were presumptively identified as Enterococcus and confirmed to the species level based on the Facklam and Collins standard biochemical tests [ 15 ]. 
Colonies of bacteria that were Gram-positive cocci and catalase-negative were further subcultured in Brain Heart Infusion broth containing 6.5% NaCl and incubated at 45°C for 24 hours; turbidity under these conditions confirms the identity of Enterococci [ 16 ]. Mannitol fermentation (mannitol salt agar containing 7.5% NaCl) and ampicillin susceptibility were used for the differentiation of E. faecalis from E. faecium . Accordingly, E. faecalis can grow on mannitol salt agar and ferment mannitol, whereas E. faecium is unable to ferment mannitol and is resistant to ampicillin [ 13 ]. 2.5. Methods of Detection of Vancomycin-Resistant Enterococci Pure colonies of Enterococci isolated from culture were picked with a plastic swab and mixed in physiological saline until the turbidity matched the 0.5 McFarland standard. Using a sterile cotton swab dipped into the prepared suspension, the inoculum was gently swabbed onto the surface of Mueller–Hinton agar (MHA); after 3 to 5 minutes, a 30 μ g vancomycin disc was placed on the surface of the MHA, incubated aerobically at 37°C for 24 hours, and inspected for a zone of inhibition. The diameter of the zone of inhibition was measured using a ruler: a diameter ≤14 mm was considered resistant, 15-16 mm intermediate, and ≥17 mm susceptible. The minimum inhibitory concentration (MIC) of vancomycin was determined for all the Enterococcal isolates grown on BEA as per the Clinical and Laboratory Standards Institute guidelines version 21 [ 17 ]. A suspension adjusted to the same McFarland standard was prepared, a vancomycin E-test strip (AB Biodisk, Solna, Sweden) was placed on the surface of the inoculated Mueller–Hinton agar and incubated aerobically at 37°C for 24 hours, and the MIC was read from the inhibition zone. 
The MIC values were interpreted based on the breakpoints recommended by CLSI: MIC ≤ 4 μ g/ml was considered susceptible, 8–16 μ g/ml intermediate, and ≥32 μ g/ml resistant [ 17 ]. 2.6. Antimicrobial Susceptibility Testing The antimicrobial susceptibility of the isolates to other antibiotics, namely, penicillin (P) (10 IU), ampicillin (AMP) (10 μ g), gentamicin (GEN) (10 μ g), erythromycin (ERY) (15 μ g), tetracycline (TE) (30 μ g), chloramphenicol (CL) (30 μ g), linezolid (30 μ g), and ciprofloxacin (CIP) (5 μ g), was also determined on Mueller–Hinton agar (MHA) (OXOID, UK) by the Kirby–Bauer disk diffusion technique as described in the Clinical and Laboratory Standards Institute guideline 2021 [ 17 ]. 2.7. Operational Definitions 2.7.1. E. faecalis A strain of Enterococcus that is susceptible to ampicillin and ferments mannitol on mannitol salt agar; E. faecium strains are resistant to ampicillin and unable to ferment mannitol [ 13 ]. 2.7.2. Multidrug Resistance (MDR) Nonsusceptibility of a bacterium to at least one antimicrobial agent in three or more antimicrobial categories [ 18 ]. 2.8. Data Quality Assurance The questionnaire was pretested on a population representing 5% of the sample size at Adare General Hospital a week before the actual data collection. The quality of reagents and equipment was checked, and all reagents were used according to manufacturer instructions. The data were collected by trained data collectors. Reference strains S. aureus ATCC 25923 and E. faecalis ATCC 51299 were used to check the performance of culture media. For each new batch of culture media, sterility was checked by incubating 5% of the batch at 35–37°C for 24 hours. 2.9. Data Processing and Analysis Data entry and analysis were performed using SPSS version 25. 
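The interpretive criteria described above (disk diffusion zone diameters for the 30 μg vancomycin disc and E-test MIC breakpoints) can be expressed as two small helpers. This is a minimal sketch of the thresholds stated in the text, not part of the study's actual workflow; measurements falling between the stated category bounds are assigned to the nearer lower category here, an assumption for non-integer inputs.

```python
def interpret_vanc_zone(diameter_mm: float) -> str:
    """Vancomycin (30 ug disc) zone-diameter interpretation for Enterococci, as stated above."""
    if diameter_mm <= 14:
        return "resistant"
    elif diameter_mm <= 16:   # 15-16 mm
        return "intermediate"
    return "susceptible"      # >= 17 mm

def interpret_vanc_mic(mic_ug_ml: float) -> str:
    """Vancomycin MIC (ug/ml) interpretation per the CLSI breakpoints stated above."""
    if mic_ug_ml <= 4:
        return "susceptible"
    elif mic_ug_ml <= 16:     # 8-16 ug/ml
        return "intermediate"
    return "resistant"        # >= 32 ug/ml

print(interpret_vanc_zone(13), interpret_vanc_mic(32))  # resistant resistant
```

An isolate flagged resistant by disk diffusion (zone ≤14 mm) would then be confirmed by E-test, mirroring the two-step detection used in the study.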
Summary statistics were computed using frequencies and proportions for categorical data such as the sociodemographic and clinical characteristics of participants. Crude odds ratios (COR) and adjusted odds ratios (AOR) with 95% confidence intervals were computed using bivariable and multivariable binary logistic regression. Variables with p < 0.25 in the bivariable analysis were selected for further analysis by multivariable binary logistic regression. Statistical significance was set at p < 0.05. 2.10. Ethical Considerations This research was reviewed and approved by the Institutional Review Board (IRB) of the Hawassa University College of Medicine and Health Sciences (Reference number: IRB/149/13). Permission was granted by Hawassa University Comprehensive Specialized Hospital. Participation was voluntary, confidentiality was ensured for each participant, and informed consent was secured before the start of each interview and the collection of the stool sample.
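As a worked illustration of the crude odds ratio mentioned above, a 2x2 exposure-outcome table yields COR = (a·d)/(b·c), with a Wald 95% CI of exp(ln COR ± 1.96·√(1/a + 1/b + 1/c + 1/d)). The counts below are hypothetical, purely to show the arithmetic; they are not taken from the study's tables, and the study's reported CORs/AORs come from logistic regression in SPSS rather than this hand calculation.

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """Crude OR and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    cor = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lo = math.exp(math.log(cor) - z * se)
    hi = math.exp(math.log(cor) + z * se)
    return cor, (lo, hi)

# Hypothetical counts: 6/20 VRE-positive among vancomycin-exposed participants,
# 9/203 among unexposed participants.
cor, ci = crude_odds_ratio(a=6, b=14, c=9, d=194)
print(round(cor, 2), tuple(round(x, 2) for x in ci))
```

For a single exposure with no other covariates, a binary logistic regression would return the same crude OR; adjusted ORs differ because other candidate variables enter the model simultaneously.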
3. Results 3.1. Sociodemographic Characteristics of Participants A total of 223 hospitalized inpatients were enrolled, with a 100% response rate; 123 (55.2%) were male, and the median age of the study participants was 30 years with a range of 1–80 years. Most participants (40.8%) were in the age range of 15–30 years. More than 60% of study participants were urban dwellers ( Table 1 ). 3.2. Clinical Characteristics of the Study Participants Regarding duration of hospital stay, 213 (95.5%) participants had stayed for <2 weeks, with an average stay before sample collection of 6.5 days (standard deviation ±3.84). The majority of admissions (51.6%) were recorded in the surgical ward, and 10.8% of participants were admitted to the ICU. Among these, 76.7% underwent an invasive procedure such as surgery or urinary catheterization ( Table 2 ). 3.3. Vancomycin-Resistant Enterococci Colonization Rate From the total of 223 stool specimens, 141 Enterococci were isolated, a colonization rate of 63.2%. Among the isolates, 65 (46.1%) were E. faecalis and 76 (53.9%) were E. faecium. The proportion of isolates across the hospital environments studied was highest in the surgical ward, followed by the medical ward. Of the 141 Enterococci isolates, 26 (18.4%) were vancomycin-resistant by the disk diffusion method. However, on confirmation with E-test strips, 15 of these 26 isolates were vancomycin-resistant. All 15 VRE strains were E. faecium. The colonization rate of VRE based on the E-test was therefore 15/223 (6.7%; 95% CI: 4.0, 10.6). The highest proportion of VRE was detected in adults admitted to the ICU (22.2%), followed by the medical ward (11.6%) ( Figure 1 ). The proportion of VRE among males aged >50 years was 8.1% ( Table 3 ).
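The VRE carriage estimate of 15/223 (6.7%; 95% CI: 4.0, 10.6) can be approximated as follows. This Python sketch is illustrative only: the paper does not state which interval method was used, so a Wilson score interval is shown, which gives bounds close to (but not identical with) the reported ones.

```python
# Illustrative only: Wilson score interval for the VRE carriage proportion
# (15 carriers among 223 patients). The reported bounds (4.0, 10.6) may come
# from a different method (e.g., an exact interval), so small differences
# are expected.
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(15, 223)
print(f"{15/223:.1%} (95% CI: {lo:.1%}, {hi:.1%})")
```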
The proportions of VRE (by disk diffusion) in the medical ward, surgical ward, pediatric ward, and adult intensive care unit were 8 (5.7%), 14 (9.9%), 0, and 4 (2.8%), respectively. Based on E-test strips, the proportions of VRE in the same wards were 5 (6.2%), 8 (7%), 0, and 2 (16.7%), respectively ( Table 4 ). 3.4. Factors Associated with VRE Colonization Several variables were assessed for association with VRE among hospitalized patients using both bivariable and multivariable logistic regression models. Sociodemographic parameters such as sex, residence, and educational status and clinical parameters such as length of hospitalization, ICU admission, history of treatment outside the hospital, and previous history of treatment with vancomycin were candidate variables for multivariable analysis. In the multivariable analysis, study participants who had no formal education (AOR = 4.26, 95% CI: 1.01, 18.06) were more likely to be colonized with vancomycin-resistant Enterococci (VRE) than those who had formal education. Participants who stayed in hospital for longer than two weeks (AOR = 4.10, 95% CI: 1.08, 15.57) were about 4 times more likely to be colonized with VRE than their counterparts. Study participants who had a previous history of treatment with vancomycin (AOR = 4.77, 95% CI: 1.26, 18.09) were also about 5 times more likely to be colonized with VRE than those with no such history ( Table 5 ). 3.5. Antimicrobial Resistance Profile of Enterococci Among the 141 Enterococci isolated, 56.7%, 53.9%, 63.1%, and 70.2% were resistant to penicillin, ampicillin, erythromycin, and tetracycline, respectively, while 98.6% and 87.9% were susceptible to linezolid and chloramphenicol, respectively. Among E. faecalis isolates, 69.7% were resistant to erythromycin, 88.2% to tetracycline, and 69.7% to ciprofloxacin ( Table 6 ). All VRE E. faecium isolates were resistant to erythromycin; only one was susceptible to penicillin. However, 86.7% were susceptible to linezolid and chloramphenicol ( Table 7 ). 3.6. Multidrug-Resistance Profile of Enterococci Overall, 77 (54.6%) of the Enterococci isolates were multidrug-resistant (MDR). Of these, most, 23 (29.9%), were resistant to antibiotics belonging to five different classes, whereas 21 (27.3%) were resistant to antibiotics from four classes and 12 (15.6%) from six ( Table 8 ).
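As an illustration of the crude odds ratio arithmetic underlying associations like those in Table 5 (the study itself fitted logistic regression models in SPSS), the following Python sketch computes a COR with a Woolf (log-based) 95% confidence interval from a 2×2 table. The counts are invented for demonstration and are not the study's data.

```python
# Illustrative only: crude odds ratio (COR) with a Woolf (log) 95% CI from a
# 2x2 exposure-vs-colonization table. Hypothetical counts, not study data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed with/without VRE; c/d: unexposed with/without VRE."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(8, 40, 7, 168)  # hypothetical 2x2 counts
print(f"COR = {or_:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```

An adjusted odds ratio (AOR) additionally conditions on the other covariates via multivariable logistic regression, which this single-table calculation does not do.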
4. Discussion Enterococci are commensals of the gastrointestinal tract and are often multidrug-resistant; they may transfer antibiotic-resistance genes to other potentially pathogenic bacteria such as S. aureus. They can themselves cause disease in some circumstances, particularly in a hospital environment where patients with several underlying conditions reside. If VRE causes disease in colonized or other hospitalized patients, it is difficult to manage, as there are few treatment options [ 19 , 20 ]. In the current study, 6.7% of hospitalized patients at HUCSH were colonized with VRE. Similar VRE colonization rates were reported from other parts of Ethiopia: Gondar (6.2%) [ 16 ], West Amhara (7.7%) [ 21 ], Addis Ababa (6.7%) [ 22 ], and Northwest Ethiopia (7.8%) [ 23 ]. The overall prevalence of Enterococci colonization in the current study was 63.2%, which was higher than in studies from Jimma, Ethiopia (23.08%) [ 24 ], and Dessie, Ethiopia (37.33%) [ 10 ]. However, this finding was lower than in other studies reported from Ethiopia (76–89%) [ 12 , 24 ]. The majority of isolates, 76 (53.9%), were E. faecium, followed by 65 (46.1%) E. faecalis. All vancomycin resistance was found in E. faecium, which has emerged as a major nosocomial pathogen in the last two decades [ 25 ]. In some studies conducted in other countries, such as Turkey (1.55%) [ 26 ] and Nigeria (4.07%) [ 27 ], the prevalence of VRE was lower than our finding. This difference might be due to the laboratory methods used (disk diffusion vs. MIC), variation in the study participants, effectiveness of infection prevention, antibiotic use, and source of specimen. Long duration of hospital stay and use of antibiotics are among the frequently reported risk factors for VRE colonization and infection. The gradual increase and clonal expansion of VRE might also have contributed to a higher prevalence [ 28 , 29 ].
In addition, variation in the prevalence of VRE could be due to socioeconomic variation, prolonged exposure to antibiotics, and antibiotic use habits. Participants with no formal education were about 4 times more likely to be colonized with VRE than their counterparts. This finding was consistent with studies conducted in Nigeria [ 27 ], Ethiopia (925), and China [ 28 ]. Participants without formal education may have a tendency to use antibiotics without an appropriate prescription and to share antibiotics, both of which could lead to the development of antibiotic resistance. Hospital stay longer than two weeks during admission was another factor associated with VRE: such participants were about 4 times more likely to be colonized. This result is in line with a study conducted among hospitalized patients in Jimma, Ethiopia [ 12 ]. Participants who had a previous history of treatment with vancomycin were also more likely to be colonized with VRE than their counterparts. Consistent findings were reported from Brazil [ 29 ] and Germany [ 30 ]. Another study, conducted in Gondar, Ethiopia, also reported that use of antibiotics contributes to the emergence of VRE [ 16 ]. About 57% of the Enterococci isolated in this study were resistant to penicillin, which is higher than the resistance rates reported in India (22.8%) [ 31 ], Jimma (22.7%) [ 32 ], and Dessie (34.8%) [ 17 ]. Of the Enterococci isolated, 53.9% showed resistance to ampicillin, which is comparable with a report from Gaza of 53.2% (535). The level of resistance to erythromycin in this study was 63.1%, higher than that reported from Brazil (32.6%). A lower proportion of resistance to tetracycline was also reported from Brazil (17.3%) compared with the 70.2% resistance found in this study [ 29 ]. The higher drug resistance profile in our study might be due to variations in sample size and in the study participants, who were hospitalized patients exposed to different antibiotics.
Higher prevalences of penicillin-resistant Enterococci than in our study (56.7%) were reported from Gaza (71.3%), India (69.6%) [ 31 ], South India (89%) [ 33 ], Arbaminch (69.9%) [ 32 ], and Addis Ababa (80%) [ 12 ]. Reports from Iran (80%) [ 34 ] and Southern India (86%) [ 33 ] also showed higher ampicillin resistance among Enterococci than ours. In the current study, only 6.4% of Enterococci were resistant to chloramphenicol, in contrast to a report from India (96%) [ 31 ]. Of the Enterococci isolated in this study, 70.2% were resistant to tetracycline, which is high compared with reports from other parts of Ethiopia (34%) [ 19 ] and Iran (18%) [ 34 ], although even higher resistance was reported from Gaza (80.9%). Moreover, most E. faecium isolates in the present study were resistant to erythromycin, tetracycline, and ciprofloxacin compared with E. faecalis. Almost all vancomycin-resistant Enterococci were resistant to ampicillin, a signal that they were acquired from a hospital environment. A study in the United States similarly indicated that VRE was reported only from health care settings [ 35 ]. About 54.6% of the Enterococci isolates were multidrug-resistant, which is lower than the finding reported from Iraq (85.7%) [ 35 ]. In contrast, a lower MDR rate was reported from Dessie, Ethiopia (29.5%) [ 17 ]. The discrepancy between these findings might be due to variation in the geographical distribution of strains, trends and frequency of antibiotic prescription, community antibiotic usage practices, and the definition of MDR used. 4.1. Limitations of the Study The isolated Enterococci were not characterized at the genetic level using molecular methods because of resource and budget constraints. Although the study was conducted in a large hospital serving several regions (Sidama, SNNPR, Oromia, and Somali), generalization of the findings to other hospital settings should be made with caution.
5. Conclusions In our study, a high rate of gut colonization with vancomycin-resistant Enterococci was found. Previous exposure to antibiotics and hospital stay of more than two weeks were significant factors for vancomycin-resistant Enterococci gut colonization. The study also showed that the isolated Enterococci had variable degrees of resistance to commonly prescribed antibiotics. Most of the isolated Enterococci were resistant to one or more commonly prescribed antibiotics, contributing to the worldwide problem of multidrug resistance. Therefore, periodic surveillance of antimicrobial resistance patterns, adherence to rational use of antibiotics, and implementation of infection prevention protocols may reduce colonization by VRE.
Academic Editor: Joseph Falkinham Background Vancomycin-resistant Enterococci (VRE) are a global health problem and responsible for healthcare-associated infections (HAIs) in patients with prolonged hospital stay, severe underlying disease, and previous broad-spectrum antibiotic therapy. These bacteria can exhibit cross-resistance and transfer drug-resistance genes to other potentially pathogenic bacteria. Therefore, this study aimed to determine the gastrointestinal colonization rate of VRE, its antimicrobial susceptibility profile, and associated factors among hospitalized patients. Methods A prospective cross-sectional study was conducted using stool samples from 223 patients admitted to different wards at Hawassa University Comprehensive Specialized Hospital from April 1 to June 30, 2021. Patients admitted to the hospital for more than 48 hours for various medical conditions were included. Sociodemographic and clinical characteristics were collected using a structured questionnaire. Fecal specimens were cultured on Enterococci-selective media. Enterococcus species were identified by their growth and mannitol fermentation properties. Vancomycin resistance was screened using both the Kirby–Bauer disk diffusion method and a vancomycin E-test strip. Data were entered and analyzed using SPSS version 25. Descriptive statistics and logistic regression were used to determine the frequency of, and factors associated with, VRE colonization. A p value of <0.05 was considered statistically significant. Results A total of 223 fecal specimens were collected and processed, and 141 (63.2%) of them were positive for Enterococci. The predominant species was E. faecium, 76 (53.9%), followed by E. faecalis, 65 (46.1%). In this study, the gastrointestinal colonization rate of VRE was 15 (6.7%), and all VRE isolates were E. faecium.
Study participants who had no formal education (AOR = 4.26, 95% CI: 1.01, 18.06), those hospitalized for >2 weeks (AOR = 4.10, 95% CI: 1.08, 15.57), and those who had a history of treatment with vancomycin (AOR = 4.77, 95% CI: 1.26, 18.09) were more likely to be colonized with vancomycin-resistant Enterococci. More than 95% of Enterococci isolates were susceptible to linezolid, whereas 70.2%, 63.1%, 56.7%, and 53.9% were resistant to tetracycline, erythromycin, penicillin, and ampicillin, respectively. Among the total Enterococci isolated, 77 (54.6%) were multidrug-resistant. Conclusions In our study, a high proportion of vancomycin-resistant Enterococci was found. Previous exposure to antibiotics and length of hospital stay were significant factors for VRE gut colonization. The isolated Enterococci showed variable degrees of resistance to commonly prescribed antibiotics, contributing to the worldwide problem of multidrug resistance. Therefore, periodic surveillance of antimicrobial resistance patterns, adherence to rational use of antibiotics, and implementation of infection prevention protocols may reduce colonization by VRE.
Acknowledgments We would like to thank the staff of the microbiology unit for giving us the opportunity to conduct this research in fulfilment of a master's degree at the Hawassa University College of Medicine and Health Science, School of Medical Laboratory, and for their help during the laboratory work. We also thank the nurses who were involved in sample and data collection. We acknowledge all study participants for their willingness to take part in the study during this thesis work. Abbreviations VRE: Vancomycin-resistant Enterococci; HUCSH: Hawassa University Comprehensive Specialized Hospital. Data Availability All data generated for this study are included in the manuscript, and the data are available at the Hawassa University Research and Technology Transfer Directorate database https://www.hu.edu.et/index.php/administration/vice-president-offices/research-and-technology-transfer . Ethical Approval Ethical clearance was obtained from the Institutional Review Board (IRB) of the College of Medicine and Health Sciences, Hawassa University (Reference number: IRB/149/13). Before data collection, written informed consent was obtained. Assent was obtained from minors. Disclosure This thesis project was conducted as partial fulfillment of the master's degree in diagnostic and public health microbiology at the Hawassa University College of Medicine and Health Science, School of Medical Laboratory. The whole project work is deposited at the Hawassa University repository web page and as a hard copy in the School of Medical Laboratory [ 36 ]. Conflicts of Interest The authors declare that they have no conflicts of interest. Authors' Contributions TTM conducted proposal development, data collection, data analysis, and write-up of the study. MM conducted proposal development and supervision during data collection. MMA performed proposal development and review, supervision during data collection, and manuscript preparation.
All the authors have read and approved the final manuscript.
CC BY
no
2024-01-16 23:47:20
Int J Microbiol. 2024 Jan 8; 2024:6430026
oa_package/b2/12/PMC10789508.tar.gz
PMC10789509
38225940
1. Introduction Certain problems such as alveolar bone resorption, sharp ridges, or a friable nature of supporting mucosa result in an uneven distribution of stresses to denture-supporting tissues, leading to tissue injury, sore spots, patient discomfort, and altered fit of prosthesis [ 1 , 2 ]. To overcome these problems, soft denture lining materials can be used. Soft denture lining materials are made up of resilient materials that are applied over the denture-bearing surface of the prosthesis. These materials serve as a cushion, absorbing the masticatory load and its traumatic effects on the tissues supporting the dentures [ 3 , 4 ]. Resilient denture liners are made of silicone elastomers or plasticized acrylic resins. They may be used for both short- and long-term purposes, and they could be heat- or auto-polymerized [ 5 , 6 ]. In order to provide denture wearers with the greatest possible benefits, the ideal soft lining material should have a variety of characteristics, such as low water solubility, resistance to microbial growth, good resiliency, biocompatibility, and dimensional stability [ 7 , 8 ]. Despite this, there are a variety of challenges with soft lining materials, including their failure to bond to denture bases, loss of resilience, color changes, poor tear strength, porosity, and subsequent plaque accumulation with dominant Candida albicans ( C. albicans ) colonization [ 9 ]. This is one of the major issues that affect the soft lining material's long-term efficacy [ 10 ]. C. albicans is a common yeast found in the oral cavity that can adhere to dentures, host cells, bacteria, and other candida cells. This adhesion leads to the creation of biofilm, which makes the organisms resistant to antimicrobial and antifungal medicines [ 11 – 13 ]. Denture-induced stomatitis is the most common clinical illness caused by C. albicans , a fungus that is also responsible for causing certain other oral infections within the oral cavity [ 10 ]. 
It is frequently accompanied by poor oral hygiene, poor diet, smoking, trauma from wearing dentures continuously, decreased salivary flow, or inadequate denture base material or quality [ 10 ]. Denture stomatitis can be challenging to manage because of its complex aetiology. There have been many different treatment methods suggested such as maintaining oral and denture hygiene, removing dentures at night time, using topical or systemic antifungal medications, diet modification, and relining or replacing prosthetics [ 14 , 15 ]. Antifungal therapy for denture stomatitis may offer symptomatic relief, but the recolonization of fungi within the oral cavity cannot be prevented, which ultimately causes its recurrence. Moreover, it is also accompanied by adverse consequences such as drug-resistant fungus and the toxicity of currently available medications. Additionally, elderly people, who make up a greater proportion of the denture-wearer population, are unable to take the recommended dosage of the medication due to poor motor skills [ 14 – 16 ]. There is a global interest in medicinal plant extracts since herbal therapy is thought to be a very reliable and safe alternative to antimicrobial medications with few to no side effects. In order to prevent the colonization of candida, care has been taken to modify the soft denture base lining material through the addition of herbal and natural agents such as chitosan, aloe vera, and mint oil [ 10 , 17 ]. Aloe vera is considered as the most popular plant species used, both medically and economically [ 18 ]. More than 200 different biologically active compounds, including amino acids, anthraquinones, enzymes, hormones, minerals, salicylic acid, saponins, steroids, carbohydrates, and vitamins, have been identified in the plant based on its chemistry. 
Its biological functions include the ability to heal wounds, acting as an antibacterial agent, reducing inflammation, acting as an antioxidant, acting as an allergen suppressant, and moisturizing the skin [ 19 – 22 ]. In addition, aloe vera has been proven to have potent antibacterial properties that are useful in the treatment of gingival diseases. It also reduces soft tissue edema, which in turn decreases gingival bleeding. Along with its potent antibacterial properties, which are beneficial in treating periodontal pockets where standard cleaning is challenging, it has antifungal characteristics that may aid in the treatment of denture stomatitis. Its antiviral qualities are also documented to aid in the treatment of cold sores (herpes simplex) and shingles (herpes zoster) [ 22 ]. On the other hand, chitosan, a natural polymer derived from crustacean outer shells, is suited for use in several medicinal applications due to its antifungal and antibacterial properties. It has been suggested as a bioadhesive to the oral mucosa since it is nontoxic [ 23 ]. Its oligomers interfere with the growth of fungal cells by interacting with their growth-promoting enzymes and diffusing into hyphae [ 24 , 25 ]. Considering the effectiveness of both natural agents against candida growth, this study aimed to compare the antifungal effectiveness of chitosan and aloe vera powders, incorporated in heat-polymerized soft denture lining material, on the mean reduction in fungal growth, since, to the authors' knowledge, no previous study has compared these two agents on this parameter.
2. Materials and Methods This in vitro experimental study was conducted at the Prosthodontics Laboratory of the Institute of Dentistry, LUMHS, Jamshoro, Pakistan, and antifungal activity was tested at Medical Research Centre, LUMHS, Jamshoro, Pakistan, with a total sample size of 60, divided into three groups: group 1 (chitosan incorporation), group 2 (aloe vera incorporation), and group 3 (control) by using a nonprobability convenience sampling technique. Only standard dimension, freshly fabricated and sterilized heat-cured soft denture lining plates (6 × 6 × 2 mm), and freshly incubated C. albicans species were included in this study. 2.1. Mould Preparation After taking approval from the Ethical Review Committee of LUMHS, Pakistan, a total of 60 multiple specimens of modelling wax (Kemdent Tenatex Eco, UK) were fabricated in dimensions of 6 × 6 × 2 mm and placed in freshly mixed dental stone (ISI HI-TECH; according to manufacturer's instructions W/P ratio: 32 mL/100 g) within the lower portion of the dental flask as shown in Figure 1 . When the dental stone was completely set, a separating media was applied to the dental stone and allowed to dry. The upper part of the flask was completely filled with the dental stone and kept until set. The flask was opened afterwards and wax patterns were removed leaving a space for the soft liner pattern [ 22 ]. 2.2. Specimen Preparation 2.2.1. Proportioning, Mixing, and Fabrication of Soft Liner/Chitosan Powder Specimen As recommended by the manufacturer, a mixing ratio of 1.2 g per 1 mL was utilized with the soft liner (GC Tokyo, Japan). In order to obtain an exact P/L ratio, the weight of the chitosan powder (Sigma Company; 2% of the total wt. of powder component) was removed from the total weight of the soft liner powder during the mixing process [ 3 ]. A clean glass jar with a lid was used for mixing. 
When the soft lining material reached the dough stage, it was taken out by the hands, placed on the flask that had been previously prepared, and covered with a polyethylene sheet. To ensure that the soft lining material was distributed evenly inside the mould and to remove any excess material, the upper portion of the flask was placed on it, and it was subjected to a hydraulic pressure of 100 kg/cm 2 for 5 minutes. The flask was removed from the press and opened and the extra material and polyethylene were scraped away by using a wax knife. The packed flask was once more kept under pressure for 5 minutes, after which it underwent a 90-minute cure at 70°C following the manufacturer's instructions, followed by a 30-minute temperature increase to 100°C. After the curing cycle was complete, the flask was removed from the water bath and left to cool for 30 minutes at room temperature before keeping them under tap water for a further 15 minutes. Specimens were retrieved and excess material was cut down with sharp scalpel blades followed by finishing with fine-grit silicone polishing burs and fine-grit sandpaper [ 26 , 27 ]. 2.2.2. Proportioning, Mixing, and Fabrication of Soft Liner/Aloe Vera Powder Specimen As recommended by the manufacturer, a mixing ratio of 1.2 g per 1 mL was utilized with the soft liner (GC Tokyo, Japan). In order to obtain an exact P/L ratio, the weight of the aloe vera powder (aloe vera powder prepared after drying fresh aloe vera leaves plucked from a plant in sunlight after 15 days) was removed from the total weight of the soft liner powder during the mixing process [ 3 ]. A clean glass jar with a lid was used for mixing. When the soft lining material reached the dough stage, it was taken out by the hands, placed on the flask that had been previously prepared, and covered with a polyethylene sheet. 
To ensure that the soft lining material was distributed evenly inside the mould and to remove any excess material, the upper portion of the flask was placed on it, and it was subjected to a hydraulic pressure of 100 kg/cm 2 for 5 minutes. The flask was taken out of the press and opened, and the extra material and polyethylene were scraped away by using a wax knife. The packed flask was once more kept under pressure for a further five minutes, after which it underwent a 90-minute cure at 70°C following the manufacturer's instructions, followed by a 30-minute temperature increase to 100°C. After the curing cycle was complete, the flask was taken out of the water bath and left to cool for 30 minutes at room temperature before the specimens were kept under tap water for a further 15 minutes. Specimens were retrieved and excess material was cut down with sharp scalpel blades, followed by finishing with fine-grit silicone polishing burs and fine-grit sandpaper [ 26 , 27 ]. 2.3. Isolation of Candida In order to create a candidal suspension of 10 CFU/mL matching a 0.5 McFarland standard, yeast was obtained from a microbiology lab and diluted in 0.9% NaCl. Sabouraud dextrose agar was prepared, autoclaved for 15 minutes at 121°C and 15 psi, and stored in accordance with the manufacturer's instructions. All soft lining samples were placed into tubes containing 100 μL of the candidal suspension and 9.9 mL of the previously prepared Sabouraud dextrose agar. The tubes were then incubated at 37°C for 24 hours. All samples were taken out after incubation and rinsed five times in autoclaved deionized water to eliminate any loosely attached cells. The adhering cells were fixed with 80% methanol for 30 seconds and then stained with crystal violet for 1 minute, as shown in Figure 2 [ 28 ].
For each sample, the adhering candida cells were counted on three standard fields by using an inverted light microscope, and the mean of those fields was recorded as shown in Figure 3 . Filamentous forms were not counted in order to standardize the measurement of adhering cells, whereas budding daughter cells were counted as separate yeast [ 29 ]. 2.4. Statistical Analysis Data were analyzed using the Statistical Package for the Social Sciences version 20.0 (IBM Corp, Armonk, NY, USA). Aloe vera- and chitosan-infused denture soft liners' mean values were compared. By using the one-way ANOVA test, a comparison of the means of groups was made. A P value of ≤0.05 was considered significant.
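The group comparison described above used one-way ANOVA in SPSS. As a hedged illustration of the same computation, the pure-Python sketch below derives the F statistic for three groups; the per-specimen candida counts are invented for demonstration and are not the study's data.

```python
# Illustrative only: F statistic of a one-way ANOVA, as used to compare the
# three groups (control, chitosan, aloe vera). Counts below are invented.

def one_way_anova_f(groups):
    """Return the F statistic for a list of samples (lists of numbers)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

control  = [78, 80, 79, 81, 77]   # hypothetical adhered-cell counts
chitosan = [42, 40, 41, 43, 39]
aloe     = [16, 15, 17, 16, 14]
print(f"F = {one_way_anova_f([control, chitosan, aloe]):.1f}")
```

A large F, with the corresponding p value (obtained from the F distribution, e.g., via a statistics package) below 0.05, would indicate that at least one group mean differs.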
3. Results By examining the stained specimens for evaluating the adherence ability of C. albicans to soft liners for each group under the inverted light microscope, the mean values obtained for the control, chitosan, and aloe vera groups were 79.1, 41.15, and 16.05. Of all the three groups, it was found that the aloe vera powder had a significant efficacy against candida growth as compared to the chitosan and control groups ( P value = 0.001) as shown in Table 1 .
4. Discussion The viscoelastic characteristics of soft liners enable them to act as a cushion between the denture and the edentulous ridge by evenly distributing the occlusal forces over the denture-bearing area, but an increased risk of candidal infection has been noted with the use of soft lining materials [ 22 ]. Natural antimicrobial compounds have been recommended as an alternative to synthetic, systemic, or local antibiotics due to their antibacterial and antifungal properties, as well as their availability and affordability [ 30 , 31 ]. In numerous studies, the addition of various antifungal drugs and nanoparticles to soft denture liner materials has been attempted to prevent the colonization of fungus [ 10 , 32 ]. Growing interest has been observed in chitosan modification and application in the biomedical field because of its biocompatibility, biodegradability, nontoxic characteristics, and antibacterial activity [ 33 , 34 ]. On the other hand, aloe vera is renowned for its significant therapeutic benefits and is one of the most beneficial plants in nature for health. This could be a result of the presence of a novel protein with a molecular weight of 14 kDa that has antifungal and anti-inflammatory capabilities and acts through its protease-inhibitory activity against trypsin [ 19 ]. In the current study, chitosan and aloe vera powders were added to soft liners in an effort to utilize their antifungal characteristics. According to the findings of this study, the number of C. albicans cells adhering to the surface of the soft lining material containing chitosan and aloe vera powders was significantly decreased when compared to the control specimen group. Saeed et al. incorporated two different types of chitosan into tissue conditioners, i.e., a tissue conditioner modified by chitosan and one modified by chitosan oligosaccharide, and compared them with a control group.
They found that, compared to the control group, the experimental groups showed a greater reduction in the number of colony-forming units of C. albicans. Additionally, the tissue conditioner modified by chitosan oligosaccharide, once immersed in saliva, exhibited improved inhibition until the third day as compared to the tissue conditioner modified by chitosan [ 35 ]. Mohammaed and Fatalla investigated the antifungal effects of 1.5 wt% and 2 wt% chitosan nanoparticles in heat-cured acrylic-based soft lining material and found a highly significant decrease in the number of candida cells adhering to the soft liner after incorporating 1.5 wt% and 2 wt% chitosan compared to samples of the control group [ 22 ]. According to Abdulwahhab and Jassim, adding aloe vera powder (3% and 10%) to heat-cured acrylic soft lining material powder causes a statistically significant decrease in the C. albicans cell count in comparison to the control group; additionally, improvements in shear bond strength and tear strength were also noted [ 3 ]. A recent study by Nair et al. showed that aloe vera had the least antifungal activity when compared to the conventional denture cleansers neem and triphala [ 36 ]. In the current study, when comparing the mean reduction in candida count between the three groups, aloe vera was shown to be more effective than chitosan and the control group. We were unable to compare our findings with previous research, since no studies have explored the effectiveness of chitosan powder versus aloe vera powder in denture soft lining against the adherence of candida. Further research is required to determine the effect of chitosan and aloe vera powders on the mechanical properties of denture soft liners, since both natural substances have been reported to be effective against the growth of candida.
Additionally, when naturally dried aloe vera powder particles were mixed with the soft lining material in the current investigation, a mild brownish discolouration of the material was observed. Therefore, additional research is needed to assess the impact of commercially available and fresh aloe vera gels in order to address material discolouration. 4.1. Limitations Our primary aim was not only to evaluate the antifungal efficacy of natural substances but also to examine the influence of these substances on the mechanical properties of soft denture liners. Unfortunately, due to certain access restrictions, we were unable to utilize the resources and equipment available at a different institution to investigate the mechanical effects. Accordingly, it is recommended that another study be undertaken to assess the influence of incorporating these materials on the mechanical properties of soft liners. Furthermore, it is essential to explore the antifungal properties of these materials when utilizing various resin types for the fabrication of complete dentures. A notable limitation of this study was the scarcity of data available in the literature, which prevented us from conducting a comparative analysis with other studies. Therefore, it is recommended that similar investigations be undertaken in other regions. Additionally, further research is needed to evaluate the antifungal efficacy of dried aloe vera powder in comparison to the clear gel found in aloe vera leaves. Such findings would support our recommendation of aloe vera as a reliable antifungal agent.
5. Conclusion From the results of this research, we can conclude that both chitosan and aloe vera powders can be regarded as effective antifungal materials, and their incorporation within soft denture lining material can provide antifungal activity against candida microorganisms. Furthermore, aloe vera powder was found to be more effective than chitosan. The findings of this study support the need to further evaluate the effects of these natural agents on the mechanical properties of denture soft liner materials.
Academic Editor: Vikram Dalal Background Soft denture lining materials act as a cushion between the denture base and tissues. Alongside their many advantages, their main problem is candida growth due to their rubbery and porous texture. Many interventions have been attempted to halt the growth of candida within soft lining materials, such as antifungal therapy and strict oral and denture hygiene, but these interventions carry consequences such as recurrence, drug resistance, and toxicity. Since natural agents such as aloe vera and chitosan have been proven to have antibacterial and antifungal properties with minimal adverse effects, this study aimed to evaluate the effectiveness of chitosan and aloe vera powders incorporated within denture soft lining materials against candida adherence. Methodology. A total of 60 soft-lining material samples were prepared and equally divided into three groups, viz., group 1 (chitosan incorporation), group 2 (aloe vera incorporation), and group 3 (control). Candida was obtained from the microbiology lab to form a candidal suspension, diluted in 0.9% NaCl to match the McFarland standard bacteriologic solution. Samples were incubated at 37°C for 24 hours in test tubes containing 100 mL of the candidal suspension and 9.9 mL of the previously prepared Sabouraud dextrose agar. Crystal violet stain was used to stain the adhering cells after fixing them with 80% methanol. For each sample, the adhering candida cells were counted on three standard fields by using an inverted light microscope, and the mean of those fields was recorded. Results The mean value for samples containing aloe vera was 41.15, while the mean values for samples containing chitosan and the control group were 16.05 and 79.1, respectively. Of all the three groups, aloe vera powder had a significant efficacy against candida growth as compared to the chitosan and control groups ( P value = 0.001).
Conclusion Both herbal agents were effective against candida growth; in comparison, aloe vera was more effective than chitosan.
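The counting and averaging procedure described in the methodology (three standard fields counted per sample, the per-sample mean recorded, and group means then compared) reduces to simple arithmetic. A minimal sketch in Python, using hypothetical field counts rather than the study's data:

```python
# Sketch of the cell-counting arithmetic described in the methodology:
# three standard microscope fields are counted per sample, the per-sample
# mean is recorded, and per-group means are compared. The field counts
# below are hypothetical, not the study's data.

def sample_mean(field_counts):
    """Mean adherent-cell count over the three standard fields of one sample."""
    assert len(field_counts) == 3, "protocol counts exactly three fields"
    return sum(field_counts) / 3

def group_mean(samples):
    """Mean of the per-sample means for one group (20 samples per group in the study)."""
    return sum(sample_mean(s) for s in samples) / len(samples)

print(sample_mean([14, 18, 16]))                  # 16.0
print(group_mean([[14, 18, 16], [40, 42, 44]]))   # 29.0
```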
Data Availability The dataset used in the current study will be made available on request from Dr. Muhammad Rizwan Memon, [email protected] , [email protected] . Conflicts of Interest The authors declare that they have no conflicts of interest.
CC BY
no
2024-01-16 23:47:20
Scientifica (Cairo). 2024 Jan 8; 2024:9918914
oa_package/e5/f8/PMC10789509.tar.gz
PMC10789510
0
1. Introduction Very long-chain acyl-coenzyme A dehydrogenase (VLCAD) deficiency, or VLCADD, is an autosomal recessive disorder of impaired fatty acid metabolism. VLCAD catalyzes the breakdown of 14 to 20 carbon long-chain fatty acids during beta-oxidation [ 1 ]. In times of increased metabolic demand or fasting, VLCADD patients face deficient energy supply and accumulation of toxic fatty acid metabolites that can damage multiple organ systems [ 2 ]. Stress, cold exposure, pain, and illness are mechanisms by which the body requires an increased energy supply and are commonly encountered by a patient perioperatively. We describe the anesthetic management of a 13-year-old patient with VLCADD who underwent a posterior spinal fusion for scoliosis. Due to the need for intraoperative motor and somatosensory neuromonitoring and the patient's contraindication to propofol, unique anesthetic considerations were needed; total intravenous anesthesia (TIVA) and inhaled anesthetics were used sequentially for the management of this patient. This manuscript adheres to the CARE guidelines and is patient privacy compliant per our institution's guidelines with a written Health Insurance Portability and Accountability Act (HIPAA) authorization form completed. 1.1. Case Description A 13-year-old female (weight 56 kg, BMI 22) with past medical history of VLCADD, mild intermittent asthma, no past surgical history, and ASA physical status class III underwent a posterior spinal fusion and segmental spinal instrumentation of T3-L3, along with Ponte osteotomies at T5-L2 for severe adolescent idiopathic scoliosis. As a newborn, she had recurrent episodes of hypoglycemia requiring prolonged neonatal ICU admission. Her diagnosis was made with a positive newborn screen followed by VLCAD gene sequencing showing 2 deleterious mutations (C1748G (S583W) and C1894T (R632C)). Preoperatively, cardiovascular review of systems was negative.
Creatine kinase (CK) and renal function tests were within normal limits. During surgery, in addition to standard American Society of Anesthesiologists' (ASA) monitoring, invasive blood pressure monitoring was achieved with radial artery cannulation. A raw 4-channel EEG was used for neuromonitoring. The patient was premedicated with 4 mg of midazolam and induction was achieved using 20 mcg/kg fentanyl, 3 mg/kg ketamine, 1.5 mg/kg lidocaine, and 0.5 mg/kg rocuronium before intubation. A high dose of fentanyl was used to provide a strong analgesic to prevent movement intraoperatively; a neuromuscular blocking agent could not be used due to the need for motor-evoked potential (MEP) monitoring. Although ketamine is also an analgesic, it does not last long enough in a single dose at induction to provide the high level of analgesia needed for the duration of this surgery. Infusions of ketamine at 1.0 mg/kg/hr, lidocaine at 1.5 mg/kg/hr, midazolam at 3.0 mg/hr, and remifentanil at 0.15 mcg/kg/hr were administered for maintenance of anesthesia and analgesia. The rocuronium did not need reversal as the patient had 3 out of 4 MEPs before the procedure began. Boluses of anesthetics were avoided to preserve the neuromonitoring signals at a steady state, since the latency and amplitude of signals risk being increased and decreased, respectively. Per our enhanced recovery after surgery (ERAS) protocol, the patient ingested a clear carbohydrate-rich supplement 3 hours before the procedure (Nestle BOOST Breeze®, 237 mL, 54 g total carbohydrates). Intraoperatively, a background infusion of dextrose 5% in lactated Ringer's solution was run at 110 mL/hr to prevent hypoglycemia, while glucose and CK levels were monitored closely to detect potential rhabdomyolysis early ( Table 1 ). The patient's temperature was maintained within a range of 36–38°C using warmed fluid, increased ambient temperature, and forced air warmers set to 43°C (±3°C).
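The weight-based induction doses and infusion rates above are straightforward per-kilogram arithmetic. A hypothetical helper for this 56 kg patient (the doses are those reported in the case; the functions themselves are illustrative, not part of the report):

```python
# Weight-based dose arithmetic for the 56 kg patient described above.
# Doses are those reported in the case; the helper functions are illustrative.

WEIGHT_KG = 56

def bolus_dose(per_kg, weight_kg=WEIGHT_KG):
    """Absolute bolus dose from a per-kg dose (units follow the input)."""
    return per_kg * weight_kg

def infusion_rate(per_kg_per_hr, weight_kg=WEIGHT_KG):
    """Absolute hourly infusion rate from a per-kg hourly rate."""
    return per_kg_per_hr * weight_kg

print(bolus_dose(20))      # fentanyl 20 mcg/kg    -> 1120 mcg
print(bolus_dose(3))       # ketamine 3 mg/kg      -> 168 mg
print(bolus_dose(1.5))     # lidocaine 1.5 mg/kg   -> 84.0 mg
print(bolus_dose(0.5))     # rocuronium 0.5 mg/kg  -> 28.0 mg
print(infusion_rate(1.0))  # ketamine 1.0 mg/kg/hr -> 56.0 mg/hr
```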
Because neuromonitoring is not definitive for determining neurological damage, a physical exam is necessary at the end of the surgery. After successful derotation of the spine and confirmation of somatosensory evoked potential (SSEP) and MEP signals, the surgeons began closing the incision. At that time, the infusions of ketamine and midazolam were stopped, and inhaled anesthetics were started for a more predictable wakeup in preparation for the planned physical exam after wakeup and extubation. Total surgery time was 3 h 16 min and total anesthesia time was 4 h 18 min. Estimated blood loss was 1200 mL and total fluids given were 3.5 L of crystalloid and 500 mL of 5% albumin. Urine output was 600 mL. Intraoperative fluid management remains at the discretion of the attending anesthesiologist; however, goal-directed management is strongly encouraged and, as in this case, generally adopted. Several variables inform fluid management including blood loss, urine output, blood pressure, insensible loss from the wound, and the patient's physiologic response to prone positioning. Due to decreased venous return from pooling in the viscera and lower extremities, prone positioning necessitates adequate preload to maintain blood pressure; this is most successful with judicious administration of balanced crystalloids and/or colloids. Replacement with 3 mL of crystalloid per 1 mL of blood loss was targeted. Cell saver was used for the procedure and allogeneic blood transfusion was not given. Postoperatively, the patient was able to move all extremities on physical exam. Dexmedetomidine was administered for immediate postanesthesia care unit (PACU) analgesia. The patient stayed in the PACU for 2 hours before being transferred to the postoperative floor, where dexmedetomidine was discontinued and multimodal analgesia of acetaminophen, gabapentin, ketorolac, and opioids was used. The patient was discharged on postoperative day 4.
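The stated 3:1 crystalloid-to-blood-loss replacement target can be checked against the reported numbers. A small sketch (the ratio and estimated blood loss are from the case; the helper is illustrative):

```python
# Crystalloid replacement target: 3 mL of crystalloid per 1 mL of blood loss,
# as targeted in the case above. Input values are those reported in the case.

RATIO = 3  # mL crystalloid per mL blood loss

def crystalloid_target_ml(estimated_blood_loss_ml, ratio=RATIO):
    """Crystalloid volume targeted for a given estimated blood loss."""
    return estimated_blood_loss_ml * ratio

target = crystalloid_target_ml(1200)  # EBL was 1200 mL
print(target)  # 3600 mL, consistent with the ~3.5 L crystalloid given
```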
Figures 1(a) and 1(b) show x-ray imaging of the successful procedure.
2. Discussion VLCADD is a rare disorder with an estimated incidence of 1 : 30,000–400,000 live births [ 3 ]. Due to deficient or defective VLCAD enzyme, these patients are unable to break down fatty acids to utilize as a secondary energy source when glucose levels are low, and thus are prone to hypoglycemia [ 4 , 5 ]. The clinical severity of symptoms varies greatly among patients: some present with multisystemic disease and organ failure, while others only exhibit mild clinical symptoms with vigorous exercise or illness [ 2 ]. During periods of unmet energy demands, catabolic processes such as rhabdomyolysis, cardiac dysfunction, or arrhythmias can occur, manifesting clinically as symptoms such as muscle pain, cramps, and weakness [ 1 , 2 ]. Additionally, when long-chain fatty acids remain unmetabolized, cytotoxic long-chain acyl-carnitines accumulate in various organ systems, which can cause cardiomyopathy, skeletal myopathy, and organ lipidosis [ 6 , 7 ]. Because of these potentially life-threatening complications, patients with VLCADD require dietary modifications including scheduling regularly spaced meals, avoiding long-chain fatty acids, and supplementing with medium-chain fatty acids [ 8 ]. By maintaining a constant glucose supply, the need for beta-oxidation as a source of energy is prevented. Given these characteristics, our main goal during this procedure was to avoid extraneous stressors that would increase metabolic demand and cause decompensation in the patient. This was done while simultaneously considering the need for adequate intraoperative neuromonitoring conditions needed for a safe scoliosis surgery. Importantly, motor-evoked potential monitoring is crucial for the early recognition of potential or impending neurological damage in these pediatric patients. Spinal cord ischemia or direct cord injury during the procedure can have devastating consequences for the patient. 
Although propofol is typically used as part of the TIVA technique during surgeries requiring neuromonitoring, it is contraindicated for VLCADD patients due to its emulsion consisting of mainly long-chain fatty acids that the patient would not be able to properly metabolize [ 1 , 4 ]. Small induction dosages of propofol may be tolerated in some VLCADD patients depending on the degree of the enzyme deficiency or in the presence of sufficient glucose supply. Conversely, other reports recommend avoiding propofol entirely given its potential for severe complications in VLCADD patients [ 4 ]. TIVA technique with propofol was excluded given the dose-dependent increase in sequelae from propofol administration. An alternative anesthetic plan of fentanyl, ketamine, and lidocaine was used for induction, and ketamine, lidocaine, midazolam, and remifentanil were infused intraoperatively for anesthetic maintenance. Narcotic target-controlled infusions must reach a threshold concentration to prevent intraoperative patient movement. In lieu of waiting for target concentration of remifentanil, and to ensure adequate analgesia, fentanyl boluses were administered at induction. To avoid elevations in CK that can be caused by depolarizing neuromuscular blocking agents, rocuronium, a nondepolarizing agent, was used to optimize our intubation before the start of the procedure [ 4 ]. Notably, low-dose dexmedetomidine was not utilized as part of the TIVA due to the drug's effect on suppressing MEPs. Mahmoud et al., as well as a more recent retrospective case-control study by Holt et al., showed clinically and statistically significant attenuation of MEP amplitudes using dexmedetomidine during TIVA for posterior spinal fusion surgeries [ 9 , 10 ]. Dexmedetomidine was, however, used postoperatively because of its advantageous effects as an analgesic and sedative during the recovery period from a major surgery.
Our anesthetic management was unique since TIVA was used for the majority of the surgery, while inhaled anesthetics were subsequently used near the end of the procedure after stopping infusions. This method was advantageous in this situation given the need to carefully time the patient's awakening and consider the depth of the patient's anesthesia so an adequate physical exam could be performed after wakeup. Additionally, a potential intraoperative Stagnara test also necessitates a rapid wakeup. Due to the rapid metabolism of remifentanil by plasma esterases and the ability to reverse the effects of both fentanyl and midazolam intraoperatively via titration of intravenous naloxone and flumazenil, respectively, conditions for an adequate Stagnara test can be generated if needed. While not exact, raw EEG interpretation by an experienced neurologist is invaluable in determining the depth of anesthesia. The neuromonitoring for this case consisted of a neuromonitoring technician present intraoperatively to follow a raw 4-channel EEG, as well as a neurologist remotely monitoring the case to comment on the depth of anesthesia. Alpha and delta waves are dominant on EEG under typical anesthesia, with increasing delta and theta waveforms emerging with deeper anesthesia [ 11 ]. Titrating our infusions down according to the EEG interpretation communicated by the neuromonitoring specialist allowed us to maintain an ideal depth of anesthesia and prevent an obtunding effect from ketamine and midazolam. While ketamine can enhance EEG waveforms, we saw no significant changes in waveforms during the case that were concerning for inadequate anesthetic depth as read by the neuromonitoring team. Finally, we started inhaled anesthetics just before closure for a more predictable wakeup. Volatile anesthetics could not be used for the majority of the case due to their inhibitory effect on the anterior horn cells of the spinal cord, which precludes neuromonitoring.
Despite previous conflicting recommendations due to concerns for rhabdomyolysis with volatile anesthetic use in VLCADD patients [ 12 ], recent literature reports and the review by Redshaw and Stewart have demonstrated that inhaled anesthetics are safe to use in VLCADD patients and were safely used at the end of our case as well [ 4 , 5 , 13 , 14 ]. The few existing reports on managing patients with VLCADD recommend strict intraoperative monitoring of glucose and serum CK levels along with continuous glucose infusions to prevent hypoglycemia [ 12 , 14 ]. Utilizing this strategy, glucose levels ranged from 140 to 200 mg/dL intraoperatively with no significant elevations of CK levels beyond what is expected after a major spine surgery [ 15 ]. Had signs of rhabdomyolysis been evident, we would have decreased surgical stress by increasing our anesthetics, transfusing blood products, or increasing energy supply through glucose administration. Finally, pain management, body temperature control, and blood loss were other important considerations. Because pain activates a sympathetic response in the body, it was necessary to provide adequate analgesia with a multimodal approach without obtunding the patient's sensory responses needed for neuromonitoring [ 4 ]. Hypothermia was avoided since shivering generates heat through involuntary muscle contractions, increasing skeletal muscle energy demand [ 5 ]. Furthermore, the patient's prone positioning during the case and the nature of higher intraoperative blood loss in scoliosis cases required consideration of the risks and benefits of permissive hypotension to decrease blood loss. Given the patient's young age and prior functional status, the risk of excessive bleeding was agreed to be more detrimental than permissive hypotension. Periods of systolic blood pressure below 90 mmHg were achieved with the assistance of a nicardipine infusion, and tranexamic acid (TXA) was also used to help prevent excessive bleeding.
It has been shown that high doses of TXA are safe in the adolescent age group [ 16 ]. We bolused 50 mg/kg over 30 minutes, followed by a 25 mg/kg/hr infusion. This aided in our fluid management as well by helping limit blood loss. Overall, this case demonstrates a distinct and carefully devised anesthetic plan for a patient with unique medical and surgical considerations. The strengths of this case include the sequential use of inhaled anesthetics after TIVA to allow for a rapid wakeup and immediate postoperative physical exam. Additionally, while there have been several reported cases of VLCADD management perioperatively (summarized in Table 2 ), to our knowledge, intraoperative neuromonitoring in the setting of VLCADD has not been reported. Limitations to our approach include the rarity of VLCADD and the nondefinitive nature of EEG monitoring in signaling the depth of anesthesia to fine-tune our infusions. With communication among anesthesia, surgery, and neuromonitoring teams before and during the operation, the patient successfully underwent a major surgery without complications.
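The TXA regimen described above (50 mg/kg bolus over 30 minutes followed by a 25 mg/kg/hr infusion) is simple per-weight arithmetic. A sketch for the 56 kg patient; since the exact infusion duration is not reported, it is left as a free parameter:

```python
# TXA dosing arithmetic for the 56 kg patient: 50 mg/kg bolus over 30 minutes,
# then a 25 mg/kg/hr infusion, as reported in the case. The infusion duration
# is not stated in the report, so it is a parameter here.

WEIGHT_KG = 56

def txa_bolus_mg(weight_kg=WEIGHT_KG):
    """Loading dose: 50 mg/kg."""
    return 50 * weight_kg

def txa_total_mg(infusion_hours, weight_kg=WEIGHT_KG):
    """Bolus plus infusion at 25 mg/kg/hr for the given duration."""
    return txa_bolus_mg(weight_kg) + 25 * weight_kg * infusion_hours

print(txa_bolus_mg())     # 2800 mg loading dose
print(txa_total_mg(3.0))  # 2800 + 4200 = 7000.0 mg for a hypothetical 3 h infusion
```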
Academic Editor: Renato Santiago Gomez Patients with very long-chain acyl-CoA dehydrogenase deficiency (VLCADD) are prone to hypoglycemia and clinical decompensation when metabolic demands of the body are not met. We present a pediatric patient with VLCADD who underwent a posterior spinal fusion for scoliosis requiring intraoperative neurophysiology monitoring. Challenges included minimization of perioperative metabolic stressors and careful selection of anesthetic agents since propofol-based total intravenous anesthesia (TIVA) was contraindicated due to its high fatty acid content. This case is unique due to the sequential use of inhaled anesthetics after TIVA to allow for a rapid wakeup and immediate postoperative physical exam. Additionally, intraoperative neuromonitoring in the setting of VLCADD has not been reported in the literature. With communication among anesthesia, surgery, and neuromonitoring teams before and during the operation, the patient successfully underwent a major surgery without complications. This trial is registered with NCT03808077 .
Acknowledgments The authors thank the MetroHealth Department of Anesthesiology manuscript review committee including Dr. Augusto Torres MD and Dr. Hesham Elsharkawy MD (MetroHealth Hospital of Case Western Reserve University School of Medicine, Cleveland, OH, USA) for review and comments of the manuscript during manuscript preparation. This project was supported in part by the Clinical and Translational Science Collaborative (CTSC) of Cleveland which is funded by the National Institutes of Health (NIH), National Center for Advancing Translational Science (NCATS), Clinical and Translational Science Award (CTSA) under grant UL1TR002548. Data Availability The protected health information used to support the findings of this study are restricted by the MetroHealth System in order to protect patient privacy. Data are available from Dr. Samuel DeJoy MD, [email protected] , for researchers who meet the criteria for access to confidential data. Consent This manuscript is patient privacy compliant per our institution's guidelines with a written Health Insurance Portability and Accountability Act (HIPAA) authorization form completed. Disclosure The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. Conflicts of Interest Author LET is a grant recipient through the Merck Investigator Studies Program (MISP) to fund a clinical trial at MSKCC ( NCT03808077 ). LET serves in a consultancy and advisory role for Merck & Co. Pharmaceutical Company. LET serves in a consultancy and advisory role for GE Healthcare. LET receives a stipend for his role as examiner with The American Board of Anesthesiology. LET serves as an expert testimony witness and receives compensation for this role. The remaining authors declare that they have no conflicts of interest regarding the publication of this article.
CC BY
no
2024-01-16 23:47:20
Case Rep Anesthesiol. 2024 Jan 8; 2024:1050279
oa_package/64/a6/PMC10789510.tar.gz
PMC10789511
38226365
1. Introduction Using implants for edentulous patients has given hope to obtaining a prosthesis that is adequately retained, stable, and comfortable as well [ 1 ]. Broadly, overdenture attachment systems are divided into five main categories: ball, locator, bar, magnet type, and telescopic attachments. The selection of attachment type depends on various factors such as interarch space, angulation between the implants, and the patient's economic condition. The highest required space (from the tooth incisal edge to the mucosa) is related to the bar attachment (13–14 mm), and the lowest is related to the magnet attachment (8.5 mm). The angle between the implants can be corrected better in the bar or locator attachment than in the telescopic or ball attachment [ 2 ]. Implant overdenture bars are traditionally fabricated using the lost-wax technique and conventional casting method, which is time-consuming and labour-intensive. Alternatively, the overdenture bar framework can be fabricated using the CAD/CAM method. Occasionally, the fabrication of one-piece cast implant frameworks may encounter issues like potential misfits and porosities [ 3 ]. CAD/CAM has proved to have higher precision and accuracy [ 4 ]. The improved accuracy can be attributed to multiple factors. Firstly, it benefits from the use of fewer fabrication steps, each of which carries its own inherent margin of inaccuracy. CAD/CAM fabrication eliminates impression, cast pouring, investing, and alloy casting stages. Additionally, the accuracy of the scanner and milling machine used in this process may contribute to the overall precision, surpassing that of traditional laboratory techniques [ 5 ]. A number of studies have shown the accuracy and fit of these CAD/CAM frameworks to be superior to those of one-piece cast frameworks [ 5 – 8 ]. CAD/CAM implant frameworks offer potential cost savings compared to one-piece cast frameworks due to the use of titanium alloy instead of noble alloy.
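The attachment-selection criteria quoted above (required interarch space per attachment type, tolerance for implant angulation) can be sketched as a hypothetical decision helper. The space thresholds are the ones stated in the text; everything else (function name, the exact decision logic) is illustrative, not a clinical rule:

```python
# Hypothetical attachment-selection sketch based on the criteria quoted above:
# bar attachments need the most interarch space (13-14 mm, incisal edge to
# mucosa), magnets the least (8.5 mm), and bar/locator systems correct
# inter-implant angulation better than telescopic or ball attachments.

MIN_SPACE_MM = {"bar": 13.0, "magnet": 8.5}  # thresholds quoted in the text

def candidate_attachments(interarch_space_mm, angulation_problem):
    """Return attachment types compatible with the given clinical situation."""
    options = []
    if interarch_space_mm >= MIN_SPACE_MM["bar"]:
        options.append("bar")
    if angulation_problem:
        options.append("locator")  # corrects inter-implant angulation well
    else:
        options.extend(["ball", "telescopic"])
    if interarch_space_mm >= MIN_SPACE_MM["magnet"]:
        options.append("magnet")
    return options

print(candidate_attachments(14.0, angulation_problem=True))
# ['bar', 'locator', 'magnet']
```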
They are also lighter in weight, and the locator or ball attachments are securely screwed into a milled screw base, resulting in a consistent insertion axis and reduced wear. Additionally, locators or ball attachments can be replaced individually without replacing the entire framework. In contrast, the conventional casting method may introduce errors during attachment placement. Redoing the CAD/CAM framework is easier as the same design file can be used without the need for a new impression [ 9 ]. In 2010, Bueno et al. stated that the implant-supported milled bar overdenture is a very interesting option in the treatment of patients with a moderately to severely resorbed maxilla. It offers both the advantages of removable prostheses and the stability and retention of a fixed prosthesis [ 10 ]. In order to achieve better conformity of the prosthetic restoration to the patient's gingiva and better esthetics, the patient's gingival form should be reconstructed; to do this, a soft material is used to reproduce the patient's gingival form in the laboratory [ 11 ]. The gingival mask is a highly precise copy of the peri-implant gingival tissue, which aids in more accurate designing of the prosthetic restoration, superior oral hygiene, and improvement of the periodontal condition. Also, the gingival mask allows the observation of precise seating of the suprastructure on the implant analog and plays a fundamental role in the fabrication of a suprastructure with an ideal fit [ 12 , 13 ]. Several materials are used as gingival masks, such as polyether impression material and silicone material [ 14 ]. Two methods are commonly used for the fabrication of the gingival mask. Direct method: an implant impression is made, the gingival mask is placed at the respective site, and the gypsum cast is poured [ 12 , 13 ]. Indirect method: an implant impression is made, the gypsum cast is poured, and the gingival mask space is created by trimming the cast.
A silicone index is tightened on the cast by a screw-retained abutment, and the gingival mask is injected into the site through the silicone index holes (according to manufacturer instructions). In a number of articles, different methods have been used to make a new gingival mask on the patient's previous cast after forming the emergence profile with a temporary restoration in patients with a fixed implant restoration treatment plan [ 15 , 16 ]. In 2015, Esguerra [ 15 ] suggested the pickup technique of provisional restorations for the fabrication of a special gingival mask in full-arch fixed implant restorations. In this method, the impression copings are connected to each other intraorally by using acrylic resin, and then analogs are tightened over them. Type IV dental stone is poured into the base former, and analogs are mounted in dental stone by half of their length. The provisional restoration is tightened intraorally and a pickup impression is made and transferred to the cast, and the space between the cast and impression is filled with gingival mask [ 15 ]. In 2018, Tse and Marchack [ 16 ] used the vacuum sheet and gingival mask injection technique to transfer the emergence profile form from a provisional restoration to the final restoration. In this method, an impression is made intraorally, and the master cast is poured. Next, a provisional restoration is fabricated for the patient, and the emergence profile is corrected by composite resin. A pickup impression is made from the provisional restoration intraorally, the cast is poured, and the vacuum sheet is formed over it. The provisional restoration along with the vacuum sheet is transferred to the master cast. Some holes are created in the vacuum sheet for the injection of the gingival mask, and the gingival mask is subsequently injected at the site [ 16 ]. Fabrication of fixed or removable implant-supported full-arch restorations is a complex procedure that requires several clinical and laboratory sessions.
In this process, the dental clinician and technician often frequently need to remove the gingival mask from the cast for precise fabrication of the suprastructure and observation of its perfect fit over the cast, which increases the risk of losing the gingival mask. Currently, no alternative is available for this method. Losing a gingival mask would necessitate repeating the entire process. In our literature search, no study was found that solves this problem without repeating the entire process. Herein, a simple, precise technique is described step by step to solve the problem of losing the gingival mask in a patient with a CAD/CAM milled bar and ball attachment treatment plan for a maxillary and a mandibular overdenture, without the need to repeat the entire clinical and laboratory procedures.
3. Discussion A temporary prosthesis in completely edentulous patients is a diagnostic tool that provides acceptable function and aesthetics for patients until the final prosthesis is ready. It can be changed many times until the prosthesis is satisfactory for the patient. When a satisfactory result is obtained, the final prostheses should be copies of the temporary prostheses [ 17 ]. In a number of articles, combined methods involving temporary restorations and scanning are used to determine the emergence profile in fixed restorations [ 18 – 20 ]. It should be noted that the use of these methods is complicated in edentulous patients due to the difficulty of scanning the edentulous jaw and the absence of temporary restorations in overdentures. The advantages of the proposed technique include enabling refabrication of the gingival mask in a short time, without requiring additional treatment sessions or reimpressions of the entire arch. The shortcomings of this technique include the possibility of void formation or incomplete recording of peri-implant tissue by the light-body impression material, probably due to the lack of support of the light-body impression material. This problem occurred in the mandibular impression of the patient described here and may lead to the construction of a suprastructure with insufficient gingival compatibility. This problem is more important in patients with a fixed restoration treatment plan because hygiene is more difficult to maintain. To overcome this problem, it is recommended to use the two-phase one-stage (putty+wash) impression technique (light-body impression material is injected at the site, and high-consistency putty impression material is packed over it). This technique can be used for both fixed and removable implant-supported restorations. It does not necessarily require a resin pattern, and an impression jig can be used instead.
During one year of follow-up of this patient, no signs of inflammation or peri-implantitis were found. In order to determine the accuracy of this method in a measurable way, it is suggested to compare the scan of the gingival mask obtained from this method with the scan of the original gingival mask in a future study.
4. Conclusion The gingival mask plays an important role in the fabrication of an optimal restoration. The aim of this study was to provide a simple and low-cost solution to reconstruct the missing gingival mask without repeating the previous steps. This technique has two main stages. In the first stage, the resin pattern or impression jig is secured on the implants in the patient's mouth, and the impression material is injected around them. In the second stage, the resin pattern and impression material are secured on the cast and the gingival mask is injected. This technique has the following advantages: (1) using the previous cast of the patient and not requiring repetition of previous steps; (2) saving time and cost for refabrication of the gingival mask; (3) applicability in both fixed and removable implant-supported restorations. The technique was effective in our patient and solved the problem, but due to the lack of articles in this field, it is suggested that more studies with longer follow-up times be conducted.
Academic Editor: Mine Dündar Gingival mask is a copy of the peri-implant tissue, which plays an important role in the fabrication of an optimal restoration. Losing the gingival mask is a clinical problem that complicates the process of restoration fabrication. Herein, a simple, precise technique is described step by step to solve this problem in a patient with a CAD/CAM milled bar and ball attachment treatment plan for maxillary and mandibular implant-supported overdentures, without the need to repeat the entire clinical and laboratory procedures.
2. Case Report The definitions of abbreviations and techniques are given in Table 1 . The patient's informed consent was obtained for the publication of this case report. The patient was a 58-year-old man referred to the Faculty of Dentistry at Tehran University of Medical Sciences with a complaint of edentulism. He had four implants in each jaw, in the areas of teeth 12, 14, 22, 24, 32, 34, 42, and 44. The distance between adjacent implants was about 1 cm, except for the two anterior maxillary implants, which were 2 cm apart. An implant-supported overdenture was planned. After the initial arrangement of the teeth, a milled bar and ball attachment system was selected because of the large interarch space and the unfavourable angulation between the implants (especially the two anterior maxillary implants). At the stage of trying the milled bar and ball resin patterns in the mouth, it was found that the gingival masks of both casts had been lost during transfer of the casts from the laboratory to the clinic. To solve this problem without adding treatment sessions for the patient, the following steps were performed in order: (1) the screw-retained resin pattern was tightened intraorally to 10 N·cm ( Figure 1 ); (2) light-body addition silicone impression material (Betasil Vario Light, Müller-Omicron GmbH & Co. KG, Lindlar, Germany) was injected around the resin pattern to a distance of approximately 1 cm; (3) after the impression material had set, the resin pattern screws were loosened and the resin pattern, together with the attached impression material, was removed from the oral cavity ( Figure 2 ); (4) one layer of separator (petroleum jelly) was applied to the internal surface of the impression material, and the resin pattern was screwed onto the cast at 10 N·cm; (5) a round diamond bur was used to create one hole in the buccal and one hole in the lingual surface of the impression material of the gingival mask.
(6) The hole in the buccal surface was used to inject the automix injectable gingival mask material (GI Mask, Coltene/Whaledent Inc., Altstätten, Switzerland), while the hole in the lingual surface allowed air to escape and prevented air entrapment at the site ( Figure 3 ); injection was continued until the material flowed out through the lingual hole. (7) After the gingival mask material had set, the resin pattern, together with the attached impression material, was removed from the cast ( Figure 4 ). The remaining procedures for fabrication of the milled bar and ball attachment were then completed ( Figure 5 ). The attachment was tightened intraorally on the implants, and the relationship of the abutments with the surrounding soft tissue was clinically evaluated; sufficient space between the bar attachment and the soft tissue was also confirmed to allow access for hygiene.
Data Availability The data used to support the findings of this study were supplied by Somayeh Zeighami under license; data will be made available on request. Requests for access to these data should be made to Somayeh Zeighami. Conflicts of Interest The authors declare that they have no conflicts of interest.
Case Rep Dent. 2024 Jan 8; 2024:4166767
PMC10789512
1. Introduction With over 265 million players worldwide, soccer has a large cultural and economic impact [ 1 ]. Although the game is traditionally played on natural grass, the economic benefits of artificial grass have driven the adoption of these playing surfaces around the world. Lower maintenance costs and increased pitch availability mean that artificial grass surfaces can save clubs approximately $2,255 AUD annually ($3,074 AUD in 2023 dollars, based on the consumer price index) as well as 515 man-hours in maintenance and upkeep [ 2 ]. As shown in Figure 1 , fourth-generation artificial grass consists of tufted fibre, a rubber performance infill, a supporting sand infill, and a shock absorption layer. Previous generations of artificial grass lacked these infill layers, and players were 2.5–4.5 times as likely to incur injuries on first- and second-generation artificial turf as on natural grass [ 3 , 4 ]. While later generations of artificial grass pose no direct injury risk, players nevertheless hold preconceptions about playing on artificial grass, which could affect their lower limb biomechanics [ 5 ]. Potthast [ 6 ] found that, alongside behavioural changes induced by the surface type, there were also biomechanical differences, particularly in ankle eversion, when players performed an instep kick on natural versus artificial grass. Such changes in player biomechanics could place athletes at risk of lower limb injury. The knee and ankle are the two primary injury locations for soccer players, accounting for approximately 40% of injuries [ 7 ]. A tear of the anterior cruciate ligament (ACL) in the knee has numerous negative consequences for an individual, including significant economic costs, a long rehabilitation period, and an increased risk of osteoarthritis later in life [ 8 ]. This high injury burden coincides with the reported 70%–80% of ACL injuries occurring in non-contact situations [ 9 ].
In a systematic video analysis of 39 non-contact ACL injuries in professional players, Waldén et al. [ 10 ] found that pressing actions (often involving a sidestep cut to tackle an opponent), followed by regaining balance after kicking or landing after heading the ball, were the most common causes of ACL injury. A 16-year analysis of collegiate-level soccer players noted that female players experienced 0.28 ACL injuries per 1,000 hr (training and games) compared with 0.09 ACL injuries per 1,000 hr in men's soccer [ 11 ]. The cause of this disparity between the sexes is disputed in the literature: some studies have shown that the intercondylar notch in the knee is smaller in females than in males, whereas others highlight the influence of the menstrual cycle or of biomechanical differences between the sexes, notably the development of knee valgus [ 12 , 13 ]. Although there are significant differences between male and female injury rates, limited literature examines their potential causes, particularly the biomechanics of male and female soccer players over a broad range of movements in a game-specific environment with minimal outside interference. Previous literature has focussed on single movements of soccer players, such as changes of direction [ 14 – 17 ], kicking [ 18 – 21 ], and jumping [ 22 – 24 ]. These studies used motion capture software to analyse the position and kinematics of the players' lower limbs, with the systems set up in an indoor environment; Thomas et al. [ 15 ] opted for an indoor artificial grass patch rather than the traditional laboratory floor used by other researchers. Testing methods also differ significantly between studies, with Landry et al. [ 14 ], Condello et al. [ 16 ], Thomas et al. [ 15 ], and Pollard et al.
[ 17 ] variously using 3D kinetic and electromyographic analysis, force-plate analysis, and motion-capture analysis, and delivering varying results. While some studies found kinematic differences between the sexes at the hip [ 14 ] and others at the knee [ 15 ], some found no differences between male and female players [ 17 ], highlighting the influence of testing methods. Similar trends were noted in analyses of kicking kinematics: some studies reported significant differences at the hip joint [ 20 ], aligning with the change-of-direction findings of Landry et al. [ 14 ], while others found no significant sex differences in the position of the standing leg [ 21 ]. Interestingly, the majority of the literature examines differences during an instep kick, a technique often used for shooting or longer passes; however, Althoff and Hennig [ 25 ] suggest that female players increasingly rely on shorter passes, using the side-foot kicking motion rather than the instep. While the present literature examines biomechanical differences in a reliable and repeatable manner, ACL injuries are not limited to one movement, as evidenced by Waldén et al. [ 10 ], and players appear to behave differently on different surfaces [ 5 ]. Therefore, the aim of the present study was to characterise, through 3D motion capture, the biomechanics of male and female soccer players over a variety of movements performed on a game-specific artificial grass surface. Given the link between knee valgus and ACL injury risk, particularly for female players, the hypothesis of this research was that knee valgus would be the most influential biomechanical characteristic among female soccer players.
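The exposure-normalised injury rates quoted above (0.28 and 0.09 per 1,000 hr) follow from a simple normalisation of injury counts by athlete-hours. A small illustrative sketch, using made-up counts that are not the underlying data of reference [ 11 ]:

```python
# Illustrative sketch of exposure-normalised injury incidence; the
# injury and exposure counts below are hypothetical, chosen only to
# reproduce the magnitudes quoted in the text.
def rate_per_1000h(injuries: int, exposure_hours: float) -> float:
    """ACL injuries per 1,000 athlete-hours of exposure."""
    return 1000.0 * injuries / exposure_hours

# Hypothetical counts: 14 injuries over 50,000 athlete-hours,
# and 9 injuries over 100,000 athlete-hours.
female_rate = rate_per_1000h(injuries=14, exposure_hours=50_000)
male_rate = rate_per_1000h(injuries=9, exposure_hours=100_000)
```

Normalising by exposure rather than comparing raw counts is what makes rates from squads with different training volumes comparable.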
2. Materials and Methods 2.1. Participants Ten amateur soccer players (five male and five female) were recruited for this study to perform eight distinct movements. Participants were grouped by their self-reported sex as either "male" or "female." Each player was required to be within the age range of 18–25 years, to have a minimum of 5 years of playing experience at club level, and to have no previous ACL injury in either leg. The five male players (1.80 ± 0.06 m height, 76 ± 8 kg mass, 23 ± 2 years of age, 12 ± 3 years playing experience) and the five female players (1.62 ± 0.11 m height, 62 ± 10 kg mass, 21 ± 2 years of age, 12 ± 5 years playing experience) provided informed consent to participate in this study, which was approved by The University of Adelaide Human Research Ethics Committee. 2.2. Tasks Prior to testing, players were screened with a questionnaire detailing injury history and were given an overview of the tasks and the surface type. On this basis, players were instructed to wear their own choice of soccer boots suited to the playing surface; wearing their own footwear avoided any unfamiliarity with boot feel. Players completed their own 15-min warm-up before performing each of the following actions four times: straight-line run; run and stop; run and single-leg 180° turn; 45° plant-and-cut; two-leg vertical jump; vertical jump followed by a 45° take-off (jump exit); and kicking with the instep and the side foot. Straight-line running, sudden stops, and changes of direction were categorised as defensive "pressing" actions, while landing from jumps and kicking were labelled as potential ACL injury mechanisms [ 10 ]. Prior to each test, players were told which movement they were to perform and, where applicable, informed of the area within the capture volume where the movement must take place.
Players were given a visual demonstration of a "successful" trial for each movement; however, no further instruction was given on specific technical aspects, to ensure the results were as true to the players' normal actions as possible. For applicable actions, such as running and change-of-direction movements, approach speed was monitored as the average horizontal speed of the left and right anterior superior iliac spine markers in the motion capture software. If a participant's speed was particularly low (approximately 1 m/s) compared with previous trials, they were questioned about possible fatigue and encouraged to take a break; trials performed above this lower threshold were included in the study. No pre-determined rest periods were prescribed between movements; however, participants were asked about their mental and physical wellbeing before each action commenced. Motion capture results were checked after each test, and trials in which a marker was not tracked throughout the entire movement were repeated (up to a maximum of six total trials per movement). As the participants were amateur players, significant kinematic differences between the dominant and non-dominant foot were assumed, particularly for kicking tasks; participants were therefore instructed to perform all actions, including change-of-direction movements, with their dominant foot. 2.3. Data Collection Kinematic data were recorded using a 12-camera VICON motion capture system (VICON, Oxford, UK) operating at 100 Hz, as recommended for outdoor motion capture by the motion capture company LOGEMAS (LOGEMAS, Queensland, Australia). Sixteen 14 mm reflective markers were attached to each participant's lower limbs per the VICON Plug-in Gait marker set, with marker clusters replacing individual markers on the shank and thigh segments.
Players wore skin-tight clothing, and double-sided tape was used to adhere the markers to bony landmarks. Rigid sports tape was used to secure the marker clusters to each participant's outer thigh and shin segments, and to secure a marker to the distal end of the first phalanx of the foot; this location was chosen so as not to interfere with the instep or side-foot kicking motion. The remaining foot markers were attached to the medial and lateral malleoli and the calcaneus using double-sided tape and, where necessary, rigid tape. The Tekscan F-Scan in-sole pressure system (Tekscan, Massachusetts, USA) was used to measure the vertical ground reaction force for each movement; these force data were used to identify the initial point of ground contact and not for kinetic analysis. The artificial grass testing surface, shown in Figure 2 , consisted of a Max S yarn, a styrene-ethylene-butylene-styrene performance infill, a sand supporting infill, and a ShockPad drainage and shock absorption layer. The motion capture volume was approximately 8 m × 4 m × 2 m, with ample approach room for the players. Ankle inversion/eversion and plantarflexion/dorsiflexion angles were recorded; internal/external rotation, varus/valgus rotation, and flexion/extension were calculated at the knee; and internal/external rotation and abduction/adduction were collected at the hip. The linear velocity of the player's hip centre, in the direction of motion, was recorded for each running-based trial. While movements at the ankle have minimal direct impact on ACL strain, they strongly influence the stability of the lower limb, with Baez et al. [ 26 ] finding a correlation between an individual's ankle joint function and the biomechanics at the knee.
The biomechanical values gathered at the knee can place the ACL under direct strain (knee valgus and internal rotation) or indirect strain by placing the athlete in the "position of no return" described by Ireland [ 27 ], in which the femur becomes internally rotated and the tibia externally rotated, driving the knee into extreme valgus and placing additional strain on the ACL. Hip adduction and internal rotation also lead athletes into this position [ 27 ]. Key joint angles, such as knee rotation, knee valgus, and hip rotation, were therefore identified as key components with respect to ACL injury and needed to be recorded and processed. Each value was recorded at ground contact, defined as the instant at which the vertical acceleration of the foot was at its minimum; this point corresponded with the initial vertical ground reaction force measured by the in-sole pressure sensors. Each instantaneous value was used in the statistical analysis, while the average values for each sex are displayed in Tables S1 – S6 . This single instant was chosen because it resembles the foot fixation often implicated in non-contact lower limb injuries; the foot fixation, coupled with the initial loading, makes this instant the most likely moment for a player to sustain an ACL injury. Joint angles recorded at this moment could be either positive or negative relative to the neutral position. Prior to data collection, participants performed a static T-pose in the centre of the capture volume, which served both as a calibration for the motion capture system and as the basis for each subject's neutral position. 2.4. Data Processing Data were processed in VICON Nexus using a low-pass Butterworth filter with a 10 Hz cut-off frequency, as used by similar studies [ 28 , 29 ], and then analysed using VICON ProCalc to determine joint angles and 3D kinematics.
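The smoothing step described above can be reproduced outside the VICON toolchain. The following is a minimal sketch of a zero-phase 10 Hz low-pass Butterworth filter at the study's 100 Hz capture rate; SciPy and the synthetic signal are used purely for illustration and are not part of the Nexus pipeline:

```python
# Zero-phase 10 Hz low-pass Butterworth filter at a 100 Hz capture rate
# (illustrative sketch; the study applied its filter inside VICON Nexus).
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0      # capture rate (Hz), as used in this study
CUTOFF = 10.0   # low-pass cut-off (Hz)

def lowpass(signal, order=4):
    """Filter along the frame axis with zero phase lag."""
    # The cut-off is specified as a fraction of the Nyquist frequency (FS / 2).
    b, a = butter(order, CUTOFF / (FS / 2), btype="low")
    # filtfilt runs the filter forwards and backwards, cancelling phase shift,
    # so joint-angle timing is not delayed by the smoothing.
    return filtfilt(b, a, signal, axis=0)

# Synthetic 1 Hz "joint angle" trace with added wide-band noise.
rng = np.random.default_rng(42)
t = np.arange(0, 4, 1 / FS)
clean = 20.0 * np.sin(2 * np.pi * 1.0 * t)     # degrees
noisy = clean + 2.0 * rng.normal(size=t.size)  # measurement noise
smoothed = lowpass(noisy)
```

The forward-backward (`filtfilt`) pass matters for event detection: a single-pass filter would shift the smoothed signal in time relative to the ground-contact instant identified from the pressure sensors.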
Kinematic values at the ankle, hip, and knee were calculated using the definitions provided by VICON, based on the analyses of Kadaba et al. [ 30 ] and Davis et al. [ 31 ]. 2.5. Statistical Analysis The testing method yielded eleven data points across two sexes for eight different movements. To reduce the number of variables within the data set, a principal component analysis (PCA) was used. PCA reduces the dimensionality of the data while retaining its variation [ 32 ]; by analysing how much of the variance is attributable to each variable, the results can be explained in terms of a few principal components (PCs) rather than the entire variable list. A PCA was performed for each movement for both the male and female data to observe whether particular variables consistently contributed most to the variance for each sex and movement. The PCA was performed in SPSS Statistics (IBM, New York, USA), and the data set was evaluated using the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity. A KMO value close to 1 indicates that PCs will approximate the variance in the data well, while a KMO below 0.5 indicates that data reduction will not yield an accurate representation of the results. Movements with a KMO below 0.5 were therefore not reduced using PCA; movements with a KMO above 0.5 were rotated using a direct Oblimin rotation, chosen because some degree of correlation between lower limb biomechanical variables is likely, as suggested by Costello and Osborne [ 33 ]. Landry et al. [ 14 ] adopted a comparable approach, albeit with an orthogonal rotation for their PCA. The subject-to-item ratio, described by Costello and Osborne [ 33 ] as the number of trials per variable tested, is a metric for assessing the impact of sample size.
In the literature, subject-to-item ratios are commonly 10 : 1 or less; in this analysis the ratio was 3.3 : 1, which lies within the 2 : 1 to 5 : 1 range used by 25.8% of studies [ 33 ]. The scree plot, shown in Figure 3 , plots each component against its associated eigenvalue. The eigenvalue is derived from the transformation of each variable along the principal component, with a higher eigenvalue denoting a greater influence on the variance of the data. The number of components analysed for each test was selected by visual inspection of the scree plot for a key inflection point, as seen in Figure 3 ; the number of components was selected at this inflection point. Where a clear inflection point could not be identified, the number of variables with eigenvalues greater than 1 was retained, as suggested by Ringnér [ 32 ]. Based on the location of the inflection point in Figure 3 , two PCs were chosen for that particular movement. Once the number of PCs had been selected, a component matrix was constructed, as shown in Table 1 . This matrix details the PCs and the corresponding loading of each biomechanical variable, with a value closer to 1 signifying a greater influence on the variance. From Table 1 , it can be deduced that the internal/external rotation of the knee had the largest influence on Component 1, while knee varus/valgus had the largest influence on Component 2 for this movement. This methodology was used to deduce which biomechanical variable had the most influence over a broad range of movements for both male and female players.
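The selection procedure described above (eigenvalues of the correlation matrix, the eigenvalue-greater-than-1 rule, and a component matrix of loadings) can be sketched in a few lines. This is a hypothetical, unrotated NumPy illustration on fabricated data; it does not reproduce the SPSS analysis, its Oblimin rotation, or the study's actual joint-angle matrix:

```python
# Unrotated PCA on a fabricated "joint angle" matrix, illustrating the
# eigenvalue-based component selection and loadings described above.
import numpy as np

rng = np.random.default_rng(0)

# Fabricated stand-in data: 40 trials x 5 angles, with the first two
# columns sharing variance (analogous to correlated lower-limb angles).
shared = rng.normal(size=(40, 1))
data = np.hstack([shared + 0.1 * rng.normal(size=(40, 1)),
                  shared + 0.1 * rng.normal(size=(40, 1)),
                  rng.normal(size=(40, 3))])

# PCA via eigendecomposition of the correlation matrix.
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                 # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_pcs = int(np.sum(eigvals > 1.0))                # eigenvalue-greater-than-1 rule
loadings = eigvecs * np.sqrt(eigvals)             # component-matrix entries
```

Each column of `loadings` corresponds to one PC, and entries close to ±1 mark the variables that dominate that component, which is how the knee-rotation and varus/valgus influences in Table 1 are read off.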
3. Results Tables S1 – S6 in the supplementary material detail key metrics and joint angles at the ground contact point for both male and female players over a variety of actions. Angles are measured relative to a stationary T-pose for each athlete and are listed as the mean and standard deviation over the four trials. Tables S1 and S2 detail ankle joint angles for males and females, respectively: a positive value for ankle rotation about the frontal-plane axis denotes inversion and a negative value eversion; similarly, positive angles represent plantarflexion and negative angles dorsiflexion. Approach speeds were not controlled by the researcher but were used as a guideline to ensure each trial was performed consistently by each athlete. Knee joint angles for male and female players are reported in Tables S3 and S4 , respectively: a positive value denotes knee flexion and a negative value knee extension (relative to the athlete's standing T-pose); for rotations about the longitudinal axis, a positive value denotes internal rotation and a negative value external rotation; knee varus is represented by a positive joint angle and knee valgus by a negative value. Hip rotation about the longitudinal axis, as well as hip abduction and adduction, are shown in Tables S5 and S6 , where positive values denote internal hip rotation and hip abduction, respectively.
4. Discussion The aim of the present study was to characterise the biomechanics of male and female soccer players, through motion capture analysis, over a variety of movements on an artificial grass playing surface. Studies have suggested that changes in joint angles during certain movements could increase the risk of ACL injury in female soccer players [ 12 , 13 ]. Joint angles of interest, particularly in the "position of no return," include internal rotation and adduction of the hip and an extended knee under valgus with external tibial rotation [ 27 ]. This informed the hypothesis that female players would exhibit higher levels of knee valgus and hip adduction over a broad range of movements. As shown in Table S4 , female players landed with a high level of knee flexion at ground contact for the majority of movements, aside from the plant-and-cut movement, where the knee underwent a degree of extension. This extension removes the body's ability to minimise the load through the lower limb by progressively flexing the knee joint throughout the motion, which is notable because the plant-and-cut is one mechanism the previous literature identifies as high risk for ACL injury in females [ 27 , 34 ]. Similarly, for the plant-and-cut and stop-turn movements, female players demonstrated large values of internal hip rotation, which Ireland [ 27 ] highlighted as a potential factor in ACL injury. In terms of internal/external and varus/valgus knee rotation, female players tended to exhibit safer biomechanical angles for most movements, apart from straight-line running, in which they exhibited both knee valgus and internal knee rotation, a combination that can place the lower limb in the "position of no return" and at heightened risk of ACL injury.
For both the side-foot and instep kicks, as detailed in Table S4 , females showed larger knee valgus than males, which can place added strain on the ACL. For the side-foot kick, females also experienced external knee rotation of similar magnitude to the measurements of Kellis et al. [ 35 ]; this external rotation can place direct strain on the ACL owing to the femoral position relative to a stationary tibia. At the ankle, female players performed actions with a dorsiflexed foot, indicating a tendency to land on the heel when striking the ground. One exception was the jump-exit movement, in which female players landed with the foot plantarflexed before accelerating at a 45° angle; this plantarflexion could indicate a forward shift in body weight, which, coupled with the level of hip abduction in the same movement, could be a potential indicator of injury risk. From the PCA, two joint angles were found to have the largest impact on the variance between joint angles: internal/external rotation of the knee about the longitudinal axis, and abduction/adduction of the hip. Knee rotation is of particular importance because it relates directly to non-contact ACL injury, and the knee is one of the most injured joints among soccer players [ 7 ]. The influence of hip abduction/adduction could reflect several anatomical and muscular differences between male and female athletes. This result, however, contradicts the hypothesis that knee valgus is the dominant factor in female player biomechanics, supporting the work of Nilstad et al. [ 36 ] and Krosshaug et al. [ 29 ], who suggest that knee valgus may not play an influential role in predicting ACL injury risk.
While there is an abundance of literature on the biomechanics of male soccer players, studies are often limited in the number and nature of the movements performed and in the settings in which they are performed, which limits how accurately the results represent game-like scenarios. Both males and females followed the same biomechanical pattern at the hip joint across the different movements. As shown in Table S5 , the male hip remained in a near-neutral position at ground contact, except in movements involving an angled acceleration, namely the run-turn, plant-cut, and jump-exit movements, in which the hip was internally rotated and abducted at ground contact. For the run-turn and plant-cut movements, this was coupled with internal knee rotation, highlighting a potential point of foot fixation as the hip rotates relative to the shank and potentially placing the athlete in the "position of no return" [ 27 ]. The magnitudes of the hip rotation and abduction angles, particularly during the plant-cut movement, are comparable to those obtained by Dos'Santos et al. [ 37 ]. For the instep kick, male players exhibited a degree of knee valgus as well as internal rotation, both of which have links to ACL strain; while these values are similar in magnitude to the measurements of Kellis et al. [ 35 ], they are smaller than those obtained in the plant-cut or run-stop motions, both of which have been identified as high-risk movements [ 13 ]. Male players tended to exhibit some knee flexion across all tested movements, a factor Ireland [ 27 ] identifies as beneficial for reducing ACL injury risk; however, the magnitude was often less than in their female counterparts, indicating a tendency to perform movements in a more neutral position.
In terms of ankle positioning, male players performed movements with an approximately neutral ankle in the frontal plane, with minimal inversion or eversion across all movements. For jumping movements, players exhibited plantarflexion, indicating an inclination to land on the toes rather than the heel; this could imply that body weight was shifted forward, away from the frontal plane, posing a potential injury risk [ 27 ]. In contrast, for movements with an incoming velocity, the foot was more dorsiflexed, indicating a heel strike and a weight shift behind the frontal plane. For male players, the PCA showed that internal/external rotation of the knee, coupled with internal/external rotation of the hip, had the largest influence on the variance of the remaining joint angles. While the influence of knee rotation was consistent between the sexes, the change from hip abduction to hip rotation could be partly explained by the lower activation of the abductor muscles in male athletes compared with female athletes, as demonstrated by Lewis et al. [ 38 ]. This highlights the need for coaches and strength-training personnel to take an individualised approach to the differences in movement between male and female players. These rotations at the knee and hip occur about the longitudinal axis of the body, and movements about this axis have been explored with regard to the shoe-surface interaction of players to avoid fixation-related injuries. To help mitigate the increased injury risk early in the season, as discussed by Orchard [ 39 ], coaches should avoid intense, rotation-based movements until female players are adequately prepared and warmed up.
As the axis of rotation for the key movements at the hip and knee is the same as that of the shoe-surface interaction, this suggests a potential link between lower limb movement, foot fixation, and injury risk, highlighting the importance of rotational traction in lower limb injury risk and of selecting the right boots for the surface conditions [ 40 – 42 ]. As evidenced by the standard deviations of the results and the exclusion of eight movements from the PCA, the sample size was a limiting factor: while variation within each player's four trials was relatively minor, the variation between players of the same sex was, for some manoeuvres, significant. Despite these limitations, the methodology was supported by the approach speeds and actions reported in the literature. Although no direct instruction was given to participants regarding approach speed, it was recorded for comparison against the speeds of movement reported in the literature. The straight-line running speeds, while well below the top speeds typical of professional sport, are comparable to the speeds at which athletes operate for around 30% of a game, with high-speed running accounting for only 10% of game actions [ 43 ]. Plant-and-cut approach speeds in this research were consistent with tests conducted by Besier et al. [ 44 ], Rovan et al. [ 45 ], and Suzuki et al. [ 46 ]. To help understand the lower limb movements of both male and female soccer players, a 3D motion capture analysis was performed on an artificial grass surface for a variety of movement types. Previous studies have shown that surface type, as well as player sex, can affect lower limb biomechanics and, thus, the potential risk of ACL injury. It was found that, while specific joint angles depended on the action being performed, certain movements followed similar trends in the data.
For example, movements involving an acceleration phase (plant-cut, jump-exit, and run-turn) showed greater hip movement for both male and female players. Similarly, male players landed on a plantarflexed foot in jumping movements, while exhibiting some degree of dorsiflexion in running-based movements. These trends, when considered holistically across the ankle, knee, and hip, could provide insight into the potential risk of ACL injury and possibly help explain the disparity between male and female ACL injury rates.
Academic Editor: I-Lin Wang

Soccer is played by a variety of individuals with varying abilities. The complicated lower limb movements involved within the game often lead to knee and ankle injuries, with anterior cruciate ligament injuries being the most severe with regard to rehabilitation time and ongoing health risks. This research explores the biomechanical kinematics of male and female soccer players on synthetic grass to determine whether trends in lower limb biomechanics over a variety of movements could explain injury risk. Both male and female players ( n = 10) aged between 19 and 24 years performed running-based and stationary-start movements. Biomechanical measurements at the hip, knee, and ankle were recorded. Observations showed that specific differences in joint angles were largely dependent on the movements performed; however, for male players, on average, across all movements, 84.6% and 72.6% of the variation in joint angles could be explained by internal/external rotation at the hip and knee, respectively. For female players, internal/external knee rotation, as well as hip abduction and adduction, accounted for 83.6% and 80.2% of the variation in joint angles, respectively, across all the tested movements. This highlights the importance of hip mechanics and knee alignment for players when performing a variety of movements.
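The percent-of-variance figures quoted above are characteristic of a principal component analysis of the joint-angle data. A minimal sketch of how such percentages can be computed from a trials-by-angles matrix follows; the function name and the synthetic 40 × 3 matrix are illustrative assumptions, not the study's dataset or exact procedure.

```python
import numpy as np

def variance_explained(angles: np.ndarray) -> np.ndarray:
    """Percent of total variance explained by each principal component
    of a (trials x joint-angles) matrix."""
    cov = np.cov(angles, rowvar=False)       # np.cov centers the data itself
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, largest first
    return 100.0 * eigvals / eigvals.sum()

# Synthetic example: 40 trials x 3 hypothetical joint angles, with most
# of the variation deliberately placed on the first angle.
rng = np.random.default_rng(0)
angles = rng.normal(size=(40, 3)) * np.array([5.0, 1.0, 0.5])
pct = variance_explained(angles)
print(pct.round(1))  # first component dominates
```

With real motion-capture output, each column would be one measured joint angle (e.g. hip internal/external rotation) and each row one trial, and the leading components would correspond to the dominant angles reported in the abstract.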
Acknowledgments Open access publishing was facilitated by The University of Adelaide as part of the Wiley—The University of Adelaide agreement via the Council of Australian University Librarians. Data Availability The participants of this study did not give written consent for their data to be shared publicly, so due to the sensitive nature of the research, supporting data are not available. Conflicts of Interest The authors report there are no competing interests to declare.
CC BY
no
2024-01-16 23:47:21
Appl Bionics Biomech. 2024 Jan 8; 2024:9588416
oa_package/76/dc/PMC10789512.tar.gz
PMC10789513
0
1. Introduction Desmoid fibromatosis (DF) is a rare soft tissue neoplasm characterized by the proliferation of monoclonal fibroblasts. While histologically benign, DF tumors can exhibit unpredictable behavior, as they invade surrounding tissues. Commonly found in various anatomical sites, such as the limbs, abdominal wall, and mesentery, these tumors can lead to functional impairment, pain, and potential life-threatening complications if they compress vital structures. The incidence of DF is relatively low, ranging from 2 to 5 cases per million per year. It is more frequently observed in females, with a peak incidence during reproductive years [ 1 ]. The impact of pregnancy on desmoid tumors has been a topic of interest, but due to the rarity of the condition, the literature on its course, effects on future pregnancies, and management guidelines remains limited. This case report presents a unique instance of a multigravida with an abdominal lump that exhibited rapid growth during pregnancy, necessitating surgical resection. The case is noteworthy for its presentation during pregnancy, the exceptionally fast growth rate of the tumor, and the requirement for surgical intervention due to pain. We also review the existing literature on this topic to shed light on the management and outcomes of DF tumors during pregnancy. By sharing this case and reviewing the literature, we aim to contribute to the understanding of DF in pregnancy and offer insights into its appropriate management.
4. Discussion The etiology of desmoid tumors is poorly understood. The origin is often sporadic; however, an increased incidence in patients with Gardner's syndrome and familial adenomatous polyposis (FAP) has been reported [ 2 , 3 ]. Desmoid tumors associated with pregnancy were first described in 1832 by MacFarlane in a postpartum woman who had an abdominal wall desmoid, which was surgically excised [ 4 ]. Due to the rarity of the condition, there are no specific guidelines on the diagnosis and management of desmoid tumors in pregnancy. The diagnosis of DF during pregnancy requires a high index of suspicion. In most cases reported, it was erroneously presumed to be uterine fibroids [ 5 ]. Multiple studies were conducted to identify the role of female sex hormones in the development of these tumors. Expression of estrogen receptors on desmoid tumors, with regression noted after menopause or following therapy with antiestrogen drugs, has been reported [ 6 ]. Various genetic, hormonal, and physical factors are considered to play an essential role in the etiopathogenesis of desmoid tumors. Mutations involving APC and CTNNB1 genes, which are part of the Wnt signaling pathway, were identified as significant genomic mutations leading to the monoclonal proliferation of fibroblasts [ 2 , 7 ]. The resolution of disease with antiestrogen drugs and menopause strengthens the theory on the influence of estrogen on the development of DF [ 8 ]. Desmoid fibromatosis, although rare, tends to occur more commonly in specific contexts. These contexts include familial adenomatous polyposis (FAP), a hereditary condition characterized by the development of numerous polyps in the colon and rectum. Individuals with FAP have an increased risk of developing desmoid fibromatosis, often in the abdominal region [ 9 ]. Furthermore, Gardner syndrome, a variant of FAP, is also associated with desmoid fibromatosis [ 10 ]. 
Another context where desmoid fibromatosis is more prevalent is a history of trauma or previous surgery [ 11 ]. While these contexts are more frequently associated with desmoid fibromatosis, it is important to note that it can still develop in individuals without these specific risk factors, as in our case, which developed without any syndrome or previous trauma in the region. A review of the literature done in 2012 by Robinson et al. defined pregnancy-associated desmoid tumors as tumors discovered during pregnancy or within three years after delivery. Out of the 50 cases in the study, the most common site was the abdominal muscles, particularly the right rectus muscle [ 12 ]. There are rare reports of desmoid tumors in extra-abdominal sites such as the vulva, larynx, neck, and popliteal fossa [ 13 , 14 ]. Furthermore, a recent study involving 382 patients with desmoid-type fibromatosis found that the prevalence of pain was only 36%, thereby highlighting the relatively uncommon nature of this symptom in the condition [ 15 ]. Although the radiologic characteristics of the mass suggested a benign process, its rapid growth resembled the behavior of a malignancy such as malignant melanoma, which is among the most rapidly growing neoplasms and one of the most commonly diagnosed during pregnancy. However, it is estimated that melanoma develops in preexisting nevi in two-thirds of cases [ 16 ]. In addition, a systematic review published in 2020 that analyzed cases of malignant melanoma during pregnancy found that 91.6% of cases were cutaneous melanoma [ 17 ]. The absence of skin lesions or alterations in this patient, together with the benign radiologic characteristics of the mass, made a diagnosis of malignant melanoma less probable. During pregnancy, a mass in the abdominal wall can also be confused with various other conditions, including uterine fibroids, abdominal hernia, and lipoma. 
Uterine fibroids are benign tumors that develop in the uterus and can increase in size during pregnancy, leading to noticeable protrusions in the abdominal wall [ 18 , 19 ]. Abdominal hernias occur when organs or tissues protrude through abdominal wall openings, and the increased abdominal pressure during pregnancy can contribute to their development or exacerbation [ 20 ]. These hernias can resemble desmoid fibromatosis in terms of abdominal bulging. Lipomas, on the other hand, are benign tumors composed of fatty cells. Hormonal changes during pregnancy can cause lipomas to grow or become more noticeable, leading to palpable masses in the abdominal wall, which may be mistaken for desmoid fibromatosis [ 21 , 22 ]. Proper differentiation between these conditions is crucial for an accurate diagnosis and appropriate treatment planning during pregnancy. The management of desmoid tumors should be individualized. Observation with periodic imaging is the preferred approach in asymptomatic patients or when the tumor is away from vital structures. Active management includes surgical excision, radiotherapy, or medical management with antiestrogen drugs such as tamoxifen or toremifene [ 6 , 8 ]. The locally aggressive nature of the tumor and its unpredictable course make local control of the disease a priority, and surgical resection is recommended in most cases. The management of desmoid tumors during pregnancy is based on patient symptoms and, when surgical excision is considered, on pregnancy factors such as gestational age. Typically, excision is postponed until after delivery [ 23 , 24 ], as active surveillance is widely recognized as the primary and established strategy for managing primary or recurrent sporadic or familial desmoid tumors [ 3 ]. There have been only a few reported cases where excision was carried out during pregnancy. Durkin et al. 
reported a case where an extra-abdominal desmoid tumor was diagnosed during the first trimester with a core-needle biopsy of the lesion. At 20-week gestation, the patient underwent local surgical excision with clear margins and repair of the abdominal wall with mesh. The patient had an uneventful pregnancy course and delivered vaginally at term without complications [ 25 ]. Our patient noticed an abdominal lump before pregnancy. Since it had grown during pregnancy, active management was warranted to relieve the patient's pain associated with the tumor. Due to the risk of recurrence of the tumor, diligent follow-up is necessary. In a multi-institutional study conducted among four sarcoma centers, out of the 92 cases of DF in women, 48% had pregnancy-related DF, and 13% had a postsurgical relapse. The study concluded that even though subsequent pregnancies are associated with an increased risk of relapse, they can be safely managed [ 26 ]. In a case described by Ooi and Ngo, a patient developed a painful mass over her cesarean wound. After excision, histopathology revealed a desmoid-type fibromatosis with positive margins. An abdominal wall mesh repair was utilized. Years later, when the patient desired pregnancy, she underwent complete excision of the residual tumor before conceiving to avoid the risk of progression during pregnancy. She subsequently had an uncomplicated pregnancy and delivery [ 27 ]. This case suggests that excision of a DF tumor and abdominal wall repair is not a contraindication to subsequent pregnancy. In our case, the patient underwent resection without mesh repair and had positive residual margins after the resection. There has been no clinical or radiological evidence of recurrence at the six-month follow-up. The strengths of this study include its account of the behavior of a rare neoplasm during pregnancy. 
In addition, unlike the usually recommended management, in this case the surgical resection could not wait until delivery and needed to be performed during pregnancy. Hence, besides increasing awareness of the diagnosis of desmoid fibromatosis, this case provides evidence that surgical management can be done safely, if necessary, during pregnancy. Our study has limitations: a short follow-up period and its design as a single case report. The patient included may not be representative of the broader population with desmoid fibromatosis; hence, the inclusion of only one case limits the generalizability of the findings to a larger population. Additionally, the short follow-up period restricts our ability to assess the long-term effects of the surgical intervention and the potential recurrence of the tumor. Since the recurrence rate of desmoid fibromatosis can be as high as 80%, monitoring the patient for a more extended period could have revealed recurrence of the tumor. Furthermore, the lack of a standardized protocol for the surgical management of desmoid fibromatosis during pregnancy may introduce variability in the treatment approach, making it challenging to replicate the study's results in other settings. Moreover, it is known that patients can respond differently to surgery and its inherent procedures, such as anesthesia, so it would be important to evaluate the risks and benefits on a larger scale to develop a specific guideline for the management of desmoid fibromatosis during pregnancy. A high index of suspicion is needed for the diagnosis of desmoid fibromatosis in any patient who develops an abdominal mass separate from the uterus during pregnancy. Treatment is individualized, depending on the symptoms of the patient. This case supplements the preexisting data on the effect of pregnancy on such tumors. However, surgical management of DF during pregnancy has not yet been well explored in the literature. 
Therefore, our case provides additional evidence that while the typical approach to managing desmoid fibromatosis during pregnancy is expectant, surgical resection should be considered based on the patient's symptoms.
Academic Editor: Manvinder Singh

Desmoid fibromatosis (DF) is a rare and locally aggressive neoplasm. We present a case of a 28-year-old previously healthy multigravida who noticed a lump in her abdomen near the umbilicus two months before becoming pregnant. It underwent rapid growth during pregnancy, causing pain and discomfort. Targeted ultrasound of the area showed an irregular mass measuring 0.9 × 1.7 × 1.4 cm. The origin of the mass was unclear, suggesting a connection with the intra-abdominal contents. An MRI done three weeks later revealed a subcutaneous ovoid mass measuring 3.0 × 2.3 × 3.0 cm, which was significantly larger. Due to pain and rapid growth, surgical resection was done at 25 weeks of pregnancy. Histopathological examination revealed a desmoid tumor. The patient had an uneventful recovery and term vaginal delivery without complications. Hence, our case serves as evidence that DF tumors can be surgically managed during pregnancy with minimal to no complications.
2. Case Description This is the case of a 28-year-old female, gravida 2, para 1, who presented to our office at seven-week gestation for routine antenatal care. She previously had one uncomplicated pregnancy and vaginal delivery. Her medical history was unremarkable, and she had no personal or family history of tumors. The patient had noticed a lump in her abdomen near the umbilicus two months before becoming pregnant. During pregnancy, it showed rapid growth. Ultrasound and subsequent MRI were performed, revealing benign characteristics and no evidence of local or distant extension (Figures 1 and 2 ). The uterus was palpable and appropriate for gestational age, and fetal heart tones were within the normal range. Maternal-fetal medicine and general surgery specialists recommended expectant management during pregnancy and surgical excision after delivery. However, due to the continued growth and pain, the patient opted for surgical resection at 25 weeks of pregnancy. The mass measured approximately 3.5 cm and was resected under local anesthesia and IV sedation. Histopathological evaluation of the resected mass revealed bland spindle cells in long fascicles with compressed blood vessels showing perivascular edema and focal extravasated red blood cells. Immunohistochemical stains were positive for beta-catenin, smooth muscle actin (SMA), and desmin. The diagnosis was consistent with desmoid-type fibromatosis, and the tumor had contact with the deep, medial, and focal superficial margins. The patient had an uneventful recovery from the surgery and delivered vaginally at full term, giving birth to a healthy female infant with normal Apgar scores. Follow-up MRI seven months after the resection did not show any residual mass or recurrence. The management plan includes continuing with clinical and radiological surveillance. 3. 
Mass Characteristics Prior to the resection, the mass measured approximately 3.5 cm and was located subcutaneously, just below and lateral to the left of the umbilicus. Physical examination revealed a small, round mass with a hard texture and tenderness upon palpation, with no overlying erythema or induration. Targeted ultrasound of the area revealed an irregular hypoechogenic mass measuring 0.9 × 1.7 × 1.4 cm. The origin of the mass was unclear; however, a small neck deeper to the mass was noted, indicating a possible connection with the intra-abdominal contents. During the subsequent MRI three weeks later, however, the mass was found to have grown significantly, measuring 3.0 × 2.3 × 3.0 cm. The rapid growth of the mass during pregnancy is an important finding. The characteristics of the mass suggested a benign process, such as ectopic endometriosis or a desmoid tumor, with no evidence of local or distant extension. 5. Patient Perspective The patient was contacted to provide her perspective on the treatment received. She stated that she initially felt scared about undergoing surgical treatment during pregnancy, mainly because the specialists had advised against it. However, as the pain caused by the mass became increasingly intense and due to its rapid growth, she started to worry that the mass could affect her uterus and the growth of her child. Consequently, she made the decision to proceed with surgical resection. Her recovery from the surgery was smooth, and her pain subsided. She describes it as the best decision she could have made because it allowed her to enjoy her pregnancy without concerns or pain related to the mass.
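The growth reported above can be roughly quantified by treating the mass as an ellipsoid, V = (π/6)·a·b·c. The ellipsoid approximation is an assumption we introduce for illustration, not a calculation from the report; it uses the two sets of measurements given in the case description.

```python
import math

def ellipsoid_volume_cm3(a: float, b: float, c: float) -> float:
    """Volume of an ellipsoid from its three diameters (cm): V = pi/6 * a*b*c."""
    return math.pi / 6.0 * a * b * c

v_ultrasound = ellipsoid_volume_cm3(0.9, 1.7, 1.4)  # ~1.1 cm^3 at targeted ultrasound
v_mri = ellipsoid_volume_cm3(3.0, 2.3, 3.0)         # ~10.8 cm^3 at MRI three weeks later
print(round(v_mri / v_ultrasound, 1))  # 9.7
```

By this estimate, the volume increased roughly ten-fold over about three weeks, which makes the qualitative description of "rapid growth" concrete.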
Data Availability This manuscript is a case report; hence, all the information regarding the case (chart, imaging exam results, and pathology reports) contains the patient's identifying data and cannot be made publicly available. If necessary, we will be happy to provide any data without identifying information upon request. Additional Points Key Messages . (i) Pregnant patients who develop an abdominal mass separate from the uterus should be evaluated for desmoid fibromatosis (DF). (ii) Surgical resection of DF tumors can be safely performed during pregnancy, especially in symptomatic patients. (iii) Early diagnosis and timely surgical intervention in pregnant individuals with symptomatic DF tumors can lead to successful management and favorable outcomes for both the mother and the fetus. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this article.
CC BY
no
2024-01-16 23:47:21
Case Rep Obstet Gynecol. 2024 Jan 8; 2024:5881260
oa_package/ae/f2/PMC10789513.tar.gz
PMC10789514
38225984
1. Introduction Total ketone bodies comprise 3 molecules: acetoacetate (AcAc), β -hydroxybutyrate ( β -OHB), and acetone. Under starvation conditions, especially glucose deficiency, ketone bodies provide the brain with an important alternative metabolic fuel source. β -OHB plays a key role in sparing glucose utilization and reducing proteolysis [ 1 ]. The accumulation of ketone bodies can occur in pathological states, which sometimes leads to metabolic acidosis with a high anion gap. The disorders related to ketone body accumulation are mainly classified by 3 distinct mechanisms: diabetic ketoacidosis (DKA), alcoholic ketoacidosis (AKA), and starvation ketoacidosis (SKA) [ 2 ]. Although these pathologies are similar in presenting with metabolic acidosis and increased total ketone body concentrations, they differ in the findings used for diagnosis, such as a clear history of diabetes mellitus, alcohol abuse, or starvation [ 3 , 4 ]. DKA is one of the most serious acute metabolic complications of diabetes mellitus (DM), and the main mechanism of DKA is a lack of insulin in the body [ 5 ]. The criteria for diagnosing DKA include plasma glucose >250 mg/dL; pH <7.30; serum bicarbonate (HCO3-) level <18 mEq/L; the presence of ketonuria and/or ketonemia; and an anion gap >10 mEq/L [ 6 ]. However, since metabolic markers associated with DKA can be affected by various degrees of respiratory compensation or other metabolic processes, they are relatively nonspecific for DKA [ 7 ]. AKA was first reported by Dillon et al. in 1940 [ 8 ]. AKA is a relatively common syndrome in chronic alcohol abusers and binge drinkers; it is accompanied by metabolic acidosis and an elevated anion gap, together with increased ketone body production triggered by chronic excessive alcohol consumption, withdrawal, poor food intake, and dehydration [ 9 , 10 ]. 
The clinical features of AKA are very similar to those of DKA, except for a history of chronic alcohol abuse and the lack of a specific clinical presentation [ 11 , 12 ]. In most cases, the diagnosis of DKA or AKA is made by taking a medical history of diabetes mellitus or alcohol abuse. However, in emergencies, patients with either DKA or AKA often suffer from disturbance of consciousness, and it is difficult to communicate with them. Additionally, DM patients who abuse alcohol can also develop AKA. Although there is no report showing a direct association between DM and AKA, there are a few reports that the two conditions can coexist [ 11 , 12 ]. Although many cases of AKA occur under poor conditions, individuals with AKA do not usually have an actual loss of consciousness despite the severity of acidosis and marked ketonemia [ 13 ]. When altered mental status and loss of consciousness occur, they are typically attributable to other underlying factors such as hypoglycemia or severe infection [ 12 ]. This study aimed to examine which factors are useful for the diagnosis and distinction of simple DKA and simple AKA in patients with ketoacidosis in emergencies. In this report, we present the characteristics of DKA and AKA to illustrate the importance of using a systematic approach to reach the final diagnosis.
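The DKA criteria quoted in the introduction can be expressed as a simple check, with the anion gap computed as Na − (Cl + HCO3). The sketch below is illustrative only; the function names and the example values are our assumptions, not patient data or a tool from this study.

```python
def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Serum anion gap in mEq/L: Na - (Cl + HCO3)."""
    return na - (cl + hco3)

def meets_dka_criteria(glucose_mg_dl, ph, hco3, na, cl, ketones_present):
    """Checks the diagnostic criteria for DKA quoted in the text:
    glucose > 250 mg/dL, pH < 7.30, HCO3 < 18 mEq/L,
    ketonuria and/or ketonemia, and anion gap > 10 mEq/L."""
    return (glucose_mg_dl > 250
            and ph < 7.30
            and hco3 < 18
            and ketones_present
            and anion_gap(na, cl, hco3) > 10)

# Illustrative values only:
print(meets_dka_criteria(glucose_mg_dl=600, ph=7.05, hco3=8,
                         na=131, cl=93, ketones_present=True))  # True
```

A ketoacidotic patient with normal or low glucose would fail the glucose criterion, which is exactly why the distinction between DKA and AKA hinges on more than the acid-base picture.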
2. Materials and Methods 2.1. Study Population The patients with ketoacidosis who were hospitalized in Kawasaki Medical School General Medical Center from April 2015 to March 2021 were included in this study. We recruited 49 subjects with ketoacidosis who were brought to the emergency room. First, we excluded patients with hyperosmolar hyperglycemic status (HHS) and with overlapping DKA and HHS, because the pathology of their ketoacidosis differs ( n = 23). We excluded patients with cancer and/or using steroid drugs for the treatment of some diseases ( n = 2). Almost all patients in this study were brought to the emergency room in a coma and hospitalized. To provide a cleaner comparison of the biochemical analyses, we also excluded AKA patients complicated with diabetes ( n = 2) and DKA patients who were heavy drinkers ( n = 1). Figure 1 shows the flowchart of subject selection and exclusion in this study. This study protocol was approved by the Research Ethics Committee (REC) of Kawasaki Medical School and Hospital (protocol code 5770-00). Since this study was retrospective, instead of obtaining informed consent from each patient, we provided public information about this study via the hospital homepage. 2.2. Materials and Methods Total ketone bodies, AcAc and β -OHB, were measured with an enzyme cycling method (FUJIFILM Wako Pure Chemical Co., Japan; JCA-ZS050, JEOL Ltd., Japan). pH, base excess (BE), and lactate were measured with a blood gas analyzer (ABL-800 FLEX, Radiometer Medical ApS., Denmark). 
Chemistry examination (ChE, total protein, albumin, total cholesterol, triglyceride, γ -glutamyl transpeptidase ( γ -GTP), aspartate aminotransferase (AST), alanine aminotransferase (ALT), creatinine, urea nitrogen, sodium, potassium, chloride, and C-reactive protein (CRP)) was performed with an enzyme method and an ion-selective electrode method (Shino-Test Co., Japan; Sekisui Medical Co., Ltd., Japan; KAINOS Laboratories, Inc., Japan; and JCA-ZS050, JEOL Ltd., Japan). Plasma glucose, HbA1c, and blood osmotic pressure were measured with an amperometry method, a high-performance liquid chromatography (HPLC) method, and a cryoscopic method, respectively (ADAMS Glucose GA-1172, ADAMS A1c HA-8182, and OSMO STATION OM-6060, ARKRAY, Inc., Japan). 2.3. Statistical Analysis All analyses were performed using JMP version 9 (SAS Institute Inc.). The Wilcoxon rank sum test was used to compare the various clinical parameters between DKA and AKA.
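The group comparison used in this study (Wilcoxon rank sum test between DKA and AKA values) can be sketched in a few lines. The implementation below uses the normal approximation and synthetic, tie-free values; it is illustrative only and is not the JMP procedure actually used.

```python
import math

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank sum test via the normal approximation
    (adequate for small samples without ties)."""
    n, m = len(x), len(y)
    ranks = {v: r + 1 for r, v in enumerate(sorted(x + y))}  # assumes no ties
    w = sum(ranks[v] for v in x)                # rank sum of the first group
    mean = n * (n + m + 1) / 2.0                # null mean of W
    sd = math.sqrt(n * m * (n + m + 1) / 12.0)  # null standard deviation of W
    z = (w - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value

# Illustrative values only, loosely mimicking the lactate contrast in the text
dka_like = [1.8, 2.1, 2.4, 2.6, 2.9, 3.1]
aka_like = [6.5, 8.0, 9.7, 12.4, 15.1]
p = ranksum_p(dka_like, aka_like)
print(p < 0.05)  # True: the two groups clearly differ in rank
```

Because the test is rank-based, it makes no normality assumption about the skewed biochemical values compared in Tables 1 and 2.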
3. Results 3.1. Characteristics of the Study Patients The clinical characteristics of the patients in this study are shown in Table 1 . The clinical parameters in DKA and AKA patients were as follows: total ketone bodies (reference range, 0.0–130.0 μ mol/L), 11029.2 ± 1957.4 μ mol/L vs. 5737.8 ± 1964.0 μ mol/L; AcAc (reference range, 0.0–55.0 μ mol/L), 2716.4 ± 493.2 μ mol/L vs. 1073.1 ± 258.3 μ mol/L ( p < 0.05); β -OHB (reference range, 0.0–85.0 μ mol/L), 8310.7 ± 1542.5 μ mol/L vs. 4837.8 ± 1762.1 μ mol/L; β -OHB/AcAc, 3.11 ± 0.43 vs. 4.31 ± 0.84; pH (reference range, 7.360–7.460), 7.00 ± 0.04 vs. 7.20 ± 0.03 ( p < 0.05); and BE (reference range, -2.5–2.5 mEq/L), −26.6 ± 2.4 mEq/L vs. −13.5 ± 1.6 mEq/L ( p < 0.05). These data suggest the presence of marked elevation of ketone bodies and acidosis in both groups. Lactate levels were markedly higher in AKA compared to DKA: lactate (reference range, 0.63–2.44 mEq/L), 2.61 ± 0.42 mEq/L vs. 12.01 ± 2.80 mEq/L ( p < 0.0005) in DKA and AKA, respectively. We also divided the male and female data to better understand sex-related effects on the development of ketoacidosis. The clinical characteristics of the patients in this study are shown in Table 2 . The clinical parameters with sex-related effects in DKA patients were as follows (male vs. female): total ketone bodies, 10311.5 ± 2602.4 μ mol/L vs. 11348.2 ± 2670.5 μ mol/L; AcAc, 2509.4 ± 495.2 μ mol/L vs. 2808.5 ± 694.7 μ mol/L; β -OHB, 7795.4 ± 2130.1 μ mol/L vs. 8539.7 ± 2092.3 μ mol/L; β -OHB/AcAc, 2.96 ± 0.32 vs. 3.18 ± 0.62; pH, 7.11 ± 0.06 vs. 6.94 ± 0.05 ( p < 0.05); base excess (BE), −21.9 ± 3.7 mEq/L vs. −29.2 ± 2.8 mEq/L ( p < 0.05); and lactate, 2.61 ± 0.42 mEq/L vs. 2.70 ± 0.57 mEq/L in male and female, respectively. Similarly, the clinical parameters with sex-related effects in AKA patients were as follows (male vs. female): total ketone bodies, 2842.1 ± 537.8 μ mol/L vs. 
8633.5 ± 3257.6 μ mol/L; AcAc, 421.5 ± 249.6 μ mol/L vs. 1378.5 ± 205.7 μ mol/L ( p < 0.05); β -OHB, 2420.6 ± 579.5 μ mol/L vs. 7254.9 ± 3057.3 μ mol/L; β -OHB/AcAc, 3.44 ± 0.74 vs. 4.89 ± 1.32; pH, 7.23 ± 0.05 vs. 7.18 ± 0.03; base excess (BE), −11.0 ± 2.3 mEq/L vs. −15.3 ± 1.8 mEq/L; and lactate, 13.95 ± 3.91 mEq/L vs. 8.14 ± 1.86 mEq/L in male and female, respectively. We then compared the clinical parameters in DKA and AKA patients separately for males and females. The clinical parameters of males in DKA and AKA patients were as follows: total ketone bodies, 10311.5 ± 2602.4 μ mol/L vs. 2842.1 ± 537.8 μ mol/L ( p < 0.05); AcAc, 2509.4 ± 495.2 μ mol/L vs. 421.5 ± 249.6 μ mol/L ( p < 0.05); β -OHB, 7795.4 ± 2130.1 μ mol/L vs. 2420.6 ± 579.5 μ mol/L ( p < 0.05); β -OHB/AcAc, 2.96 ± 0.32 vs. 3.44 ± 0.74; pH, 7.11 ± 0.06 vs. 7.23 ± 0.05; and base excess (BE), −21.9 ± 3.7 mEq/L vs. −11.0 ± 2.3 mEq/L. These data suggest the presence of marked elevation of ketone bodies and acidosis in males in both groups and that ketone body levels were markedly higher in DKA compared to AKA. Lactate levels were markedly higher in AKA compared to DKA: lactate, 2.61 ± 0.42 mEq/L vs. 13.95 ± 3.91 mEq/L ( p < 0.05) in DKA and AKA, respectively. In addition, the clinical parameters of females in DKA and AKA patients were as follows: total ketone bodies, 11348.2 ± 2670.5 μ mol/L vs. 8633.5 ± 3257.6 μ mol/L; AcAc, 2808.5 ± 694.7 μ mol/L vs. 1378.5 ± 205.7 μ mol/L; β -OHB, 8539.7 ± 2092.3 μ mol/L vs. 7254.9 ± 3057.3 μ mol/L; β -OHB/AcAc, 3.18 ± 0.62 vs. 4.89 ± 1.32; pH, 6.94 ± 0.05 vs. 7.18 ± 0.03 ( p < 0.005); and base excess (BE), −29.2 ± 2.8 mEq/L vs. −15.3 ± 1.8 mEq/L. These data suggest the presence of marked elevation of ketone bodies and acidosis in females in both groups and that acidosis was markedly more severe in DKA compared to AKA. Lactate levels were markedly higher in AKA compared to DKA: lactate, 2.70 ± 0.57 mEq/L vs. 
8.14 ± 1.86 mEq/L ( p < 0.005) in DKA and AKA, respectively. 3.2. Low Body Weight in Patients with Alcoholic Ketoacidosis There was a very close association between AKA and various factors such as low body weight ( Figure 2 ). The parameters in DKA and AKA patients were as follows: body weight, 61.1 ± 3.7 kg vs. 46.0 ± 2.2 kg ( p < 0.05) ( Figure 2(a) ); body mass index (BMI), 23.1 ± 1.1 kg/m 2 vs. 18.0 ± 0.7 kg/m 2 ( p < 0.05) ( Figure 2(b) ); and cholinesterase (ChE) (reference range, 240–486 U/L), 359.4 ± 32.8 U/L vs. 211.1 ± 18.5 U/L ( p < 0.005) ( Figure 2(c) ). There was no difference in total protein ( Figure 2(d) ) and albumin levels ( Figure 2(e) ) between DKA and AKA. These data indicate that in AKA patients, carbohydrate and protein stores were markedly depleted due to chronic alcohol misuse. 3.3. Metabolic Markers in Patients with Alcoholic Ketoacidosis and Diabetic Ketoacidosis DKA patients had severe hyperglycemia and poorly controlled DM. Almost all AKA patients were under hypoglycemic conditions and were brought to the emergency room in a coma ( Figure 3 ). The parameters in DKA and AKA patients were as follows: plasma glucose levels, 896.9 ± 96.7 mg/dL vs. 51.0 ± 20.3 mg/dL ( p < 0.0005) ( Figure 3(a) ); hemoglobin A1c (HbA1c) (reference range, 4.9–6.0%), 13.2 ± 0.9% vs. 5.1 ± 0.2% ( p < 0.0005) ( Figure 3(b) ); and serum osmolality (reference range, 275–290 mOsm/kg), 340.0 ± 9.3 mOsm/kg vs. 302.3 ± 3.3 mOsm/kg ( p < 0.005) ( Figure 3(c) ). Lipid markers were also lower in AKA compared to DKA ( Figure 3 ): total cholesterol (reference range, 142–248 mg/dL), 262.5 ± 28.3 mg/dL vs. 168.4 ± 12.3 mg/dL ( p < 0.05) ( Figure 3(d) ), and triglyceride (reference range, 30–149 mg/dL), 238.3 ± 45.3 mg/dL vs. 99.8 ± 12.9 mg/dL ( p < 0.05) ( Figure 3(e) ). These data indicate that glycemic control was quite poor in DKA patients. On the other hand, hypoglycemia and low levels of lipid-associated data in AKA patients may reflect low body weight. 3.4. 
Liver Dysfunction due to Heavy Drinking in Patients with Alcoholic Ketoacidosis AKA patients had liver dysfunction due to heavy drinking ( Figure 4 ). The parameters related to liver function in DKA and AKA patients were as follows: γ -glutamyl transpeptidase ( γ -GTP) (reference range, 9–32 U/L), 27.9 ± 3.9 U/L vs. 121.0 ± 28.9 U/L ( p < 0.005) ( Figure 4(a) ); aspartate aminotransferase (AST) (reference range, 13–30 U/L), 31.3 ± 4.6 U/L vs. 117.0 ± 15.2 U/L ( p < 0.0005) ( Figure 4(b) ); and alanine aminotransferase (ALT) (reference range, 7–23 U/L), 25.4 ± 6.1 U/L vs. 55.3 ± 14.3 U/L ( p < 0.005) ( Figure 4(c) ). Liver dysfunction in AKA patients may reflect alcoholic liver damage. 3.5. Other Biochemical Markers in Patients with Alcoholic Ketoacidosis It seems that both DKA and AKA cause dehydration, which is associated with the diuretic effects of hyperglycemia or alcohol, although there is no conclusive evidence on this point. Creatinine levels were not different between DKA and AKA. Urea nitrogen (BUN) levels were lower in AKA compared to DKA ( Figure 5 ). The parameters related to kidney function in DKA and AKA patients were as follows: creatinine (reference range, 0.46–0.79 mg/dL), 1.88 ± 0.38 mg/dL vs. 1.23 ± 0.36 mg/dL ( Figure 5(a) ), and BUN (reference range, 8–20 mg/dL), 50.1 ± 9.6 mg/dL vs. 20.9 ± 3.0 mg/dL ( p < 0.05) ( Figure 5(b) ). Electrolytes in DKA and AKA patients were as follows: sodium (reference range, 138–145 mmol/L), 130.9 ± 2.5 mmol/L vs. 141.9 ± 1.2 mmol/L ( p < 0.005) ( Figure 5(c) ); potassium (reference range, 3.6–4.8 mmol/L), 5.21 ± 0.39 mmol/L vs. 4.17 ± 0.58 mmol/L ( Figure 5(d) ); and chloride (reference range, 101–108 mmol/L), 92.9 ± 2.7 mmol/L vs. 95.9 ± 3.1 mmol/L ( Figure 5(e) ). White blood cell and neutrophil levels were markedly elevated compared with the normal range in both groups, but there was no significant difference between DKA and AKA (Figures 5(f) and 5(g) ). 
Hemoglobin, hematocrit, and C-reactive protein (CRP) levels were not different between DKA and AKA. In addition, amylase and pancreatic amylase levels were not different between DKA and AKA. The changes in BUN and sodium suggest that dehydration and electrolyte abnormalities contribute, at least in part, to the pathophysiology, although the underlying pathophysiology differs between DKA and AKA.
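As a cross-check on the dehydration argument, the measured osmolalities in Figure 3(c) can be compared with the standard calculated serum osmolality (2 × Na + glucose/18 + BUN/2.8). The following sketch applies that textbook formula to the group means quoted in the Results; this arithmetic is our illustration, not an analysis from the original study:

```python
def calc_osmolality(na_mmol_l, glucose_mg_dl, bun_mg_dl):
    """Calculated serum osmolality (mOsm/kg) from the standard formula:
    2*Na + glucose/18 + BUN/2.8 (18 and 2.8 convert mg/dL to mmol/L)."""
    return 2 * na_mmol_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8

# Group means quoted in the Results (Figures 3 and 5)
dka_calc = calc_osmolality(130.9, 896.9, 50.1)  # ~329.5 mOsm/kg
aka_calc = calc_osmolality(141.9, 51.0, 20.9)   # ~294.1 mOsm/kg

# Osmolal gap = measured minus calculated (measured means from Figure 3(c))
dka_gap = 340.0 - dka_calc
aka_gap = 302.3 - aka_calc
```

The residual gap in AKA (roughly 8 mOsm/kg with these means) would be consistent with an unmeasured osmole such as ethanol, although the excerpt above does not report ethanol levels.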
4. Discussion The differential diagnosis of conditions producing a high anion gap is difficult and complicated, especially when they are accompanied by various metabolic disorders, including ketoacidosis (DKA, AKA, and SKA), renal failure, lactic acidosis, and various ingestions (methanol, ethylene glycol, salicylates, etc.). When dealing with ketoacidosis, taking a careful medical history is important for identifying insulin deficiency, alcohol abuse, or starvation as the etiology. However, some patients show the effects of both, such as DKA patients who are heavy drinkers and AKA patients undergoing diabetes treatment. Both DKA and AKA can result in impaired consciousness. AKA is sometimes the reason for the investigation, admission, and sudden unexplained death of alcohol-dependent patients, as has been reported worldwide, especially in Europe and the USA [12, 14]. In Japan, by contrast, AKA is very rarely identified as acidosis caused by alcohol. This may be due to the difference in the number of emergency department visits by alcoholics between Japan and the US, as well as the fact that the concept of AKA is not widely accepted among clinicians in Japan [15]. In this study, 5 of the 7 AKA patients were transported to the emergency room because of hypoglycemia. In Japan, many typical AKA patients appear to present with hypoglycemia, which causes disturbance of consciousness requiring transport to the emergency room. Such hypoglycemia was compounded by low body weight due to heavy drinking. The disturbance of consciousness in AKA therefore resolves smoothly as the hypoglycemia is corrected. Accordingly, when an AKA patient presents with hyperglycemia, the coexistence of another condition such as DKA must be considered.
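The anion-gap arithmetic underlying this differential is simple; a minimal sketch follows. The commonly quoted normal range of roughly 8–12 mEq/L (without potassium) is a textbook convention, and the bicarbonate value used below is hypothetical, since bicarbonate is not reported in the excerpt above:

```python
def anion_gap(na_mmol_l, cl_mmol_l, hco3_mmol_l):
    """Serum anion gap (mEq/L) = Na - (Cl + HCO3), the common
    potassium-free form; ~8-12 mEq/L is often quoted as normal."""
    return na_mmol_l - (cl_mmol_l + hco3_mmol_l)

# Illustrative severe-ketoacidosis case: DKA group means for Na and Cl
# (Figure 5) with a hypothetical HCO3 of 8 mmol/L -> markedly elevated gap.
gap = anion_gap(130.9, 92.9, 8.0)
```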
In this study, we focused on a simple comparison of DKA and AKA cases, excluding those complicated by other pathology. Particular diagnostic care is needed when both conditions are involved, because the cause of impaired consciousness differs between hyperglycemia with dehydration and hypoglycemia, and because the initial treatments are quite different. One important point is the ratio of β-OHB to AcAc (β-OHB/AcAc ratio) in ketoacidosis. There have been several previous reports on β-OHB levels, AcAc levels, and the β-OHB/AcAc ratio in DKA and AKA. In general, AKA is often characterized by a high β-OHB/AcAc ratio with a high anion gap and elevated lactate. The β-OHB/AcAc ratio has been reported to be significantly higher in AKA than in DKA [16], averaging about 5-7 in AKA and about 2-3 in DKA [12, 16]. In the present study, there was no significant difference in β-OHB levels or the β-OHB/AcAc ratio between DKA and AKA, although AcAc was significantly lower in AKA. Since pH and BE differed significantly between DKA and AKA, acidosis was more severe in DKA (Table 1). This is likely driven by the large amount of ketone bodies, even though total ketone body levels, including AcAc and β-OHB, did not differ significantly between DKA and AKA. Additionally, it is known that the β-OHB/AcAc ratio changes markedly during acute DKA, from a normal ratio of 1 : 1 to as high as 15 : 1 [17, 18]. Once insulin therapy is started, β-OHB levels usually fall rapidly before AcAc levels do, and the ratio returns to normal as β-OHB is metabolized to acetoacetate.
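The β-OHB/AcAc ratio discussed here is a plain quotient; the sketch below also encodes the literature averages cited above (about 2-3 in DKA versus 5-7 in AKA) as rough, purely illustrative thresholds, not a validated diagnostic rule:

```python
def bohb_acac_ratio(bohb, acac):
    """Beta-hydroxybutyrate to acetoacetate ratio (same units for both)."""
    if acac <= 0:
        raise ValueError("AcAc concentration must be positive")
    return bohb / acac

def crude_pattern(ratio):
    """Illustrative pattern only, based on literature averages [12, 16];
    the present study found no significant ratio difference."""
    if ratio >= 5:
        return "AKA-like"
    if ratio <= 3:
        return "DKA-like"
    return "indeterminate"
```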
Moreover, AKA patients have been reported to have higher blood lactate concentrations and a higher lactate-to-pyruvate ratio than DKA patients [12, 19], although elevated lactate has also been reported in DKA [20]. In the present study as well, lactate was significantly elevated in AKA compared with DKA. Features of AKA such as a high β-OHB/AcAc ratio and elevated lactate may help distinguish the pathology of AKA from that of DKA. As shown in Figure 6(a), the combination of insulin deficiency and increased counterregulatory hormones in DKA leads to the release of free fatty acids into the circulation from adipose tissue and to hepatic fatty acid oxidation to ketone bodies. These alterations ultimately produce severe ketonemia and severe metabolic acidosis [21]. On the other hand, as shown in Figure 6(b), the acidosis in AKA appears to result from the accumulation in plasma of lactate and ketone bodies, including β-OHB and AcAc, induced by an increased ratio of reduced to oxidized nicotinamide adenine dinucleotide (NADH/NAD) [22]. The increased NADH/NAD ratio is thought to be pivotal in the development of both ketoacidosis and lactic acidosis in AKA. Lactate production and the associated degree of acidosis may thus reflect differences in pathophysiology, namely hyperglycemia in DKA versus an elevated NADH/NAD ratio in AKA, and may be a useful factor in differentiation. Because metabolic markers associated with ketoacidosis can be affected by varying degrees of respiratory compensation and by other metabolic processes, they are relatively nonspecific and lack a specific clinical presentation; it was therefore very difficult to determine the pathophysiology from a single examination in the emergency setting. Nevertheless, in this study there were significant differences in various clinical parameters between DKA and AKA.
Although the sample size was small, several differences were observed when the data were divided by sex, including a significant difference in physical stature within both the DKA and AKA groups. The data suggest marked elevation of ketone bodies in males in the DKA group and in females in the AKA group. Despite these sex differences, the significant differences in ketone bodies, acidosis, and lactate levels between DKA and AKA persisted. The most obvious difference in the present study is the low body weight in AKA compared with DKA. Even SKA can lead to a high anion gap metabolic acidosis. In addition, starvation increases levels of cortisol and growth hormone [22]. Chronic alcohol misusers have markedly depleted carbohydrate and protein stores. Although many chronic alcohol misusers may still receive some caloric intake from ethanol, other sources of dietary intake may be chronically reduced, resulting in starvation and depleted hepatic glycogen stores [23]. Gluconeogenesis is suppressed by the increased NADH/NAD ratio [24]. Patients with extremely low body weight may therefore be more likely to develop AKA, and hypoglycemia may be a more common cause of impaired consciousness in such patients. In the present study, we showed that AKA was closely associated with factors such as low body weight. Liver dysfunction likely reflects chronic alcoholism [24], and this study also showed significant liver dysfunction in AKA. On the other hand, as expected, DKA patients suffered from DM, and their glycemic control was poor in many cases. Dehydration may occur in both DKA and AKA.
However, creatinine levels were not different between DKA and AKA, although BUN levels were lower in AKA than in DKA. Although dehydration is thought to be a factor in the development of AKA [9], there were no obvious findings of dehydration in this study. Under DKA conditions, however, osmotic diuresis due to hyperglycemia and dehydration can occur in most circumstances [6]. Dehydration under DKA conditions complicated by hyperglycemia likely led to the elevated BUN in this study, even though we excluded patients with hyperglycemic hyperosmolar state (HHS), in whom this pathology predominates. Since dehydration with hyperglycemia has been reported to elevate BUN, the elevated BUN in the present study may likewise reflect dehydration. In addition, electrolyte abnormalities in AKA patients have also been described, although not systematically [12, 19]. Concurrent disease processes, including extracellular fluid depletion, alcohol withdrawal, and severe liver disease, have been reported to result in various mineral balance abnormalities and mixed acid-base disturbances in AKA patients [9]. In this study, however, we found normal sodium levels in AKA and significantly decreased sodium levels in DKA. In general, hyperglycemia increases plasma osmolality and mobilizes water from within cells, resulting in a decreased serum sodium level. Isolated dehydration causes hypernatremia, not hyponatremia. Hyperglycemic crises cause both volume depletion and dehydration through osmotic diuresis, in which the loss of water is relatively greater than the loss of sodium and potassium. The hyponatremia in hyperglycemia is the result of cellular dehydration. Volume depletion in AKA can be associated with either hyponatremia or hypernatremia when the rates of water and monovalent cation losses differ.
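The translocational hyponatremia described here is usually quantified with the widely used Katz correction, which adds roughly 1.6 mmol/L of sodium per 100 mg/dL of glucose above 100 mg/dL. Applying it to the group means quoted in the Results is our own illustration, not an analysis from the original study:

```python
def corrected_sodium(measured_na, glucose_mg_dl, factor=1.6):
    """Glucose-corrected serum sodium (mmol/L). The classic Katz factor
    is 1.6 per 100 mg/dL of glucose above 100; 2.4 is also in use."""
    return measured_na + factor * max(glucose_mg_dl - 100.0, 0.0) / 100.0

# DKA group means (Figures 3(a) and 5(c)): corrected Na ~143.7 mmol/L,
# close to the AKA mean of 141.9, consistent with dilutional hyponatremia.
dka_corrected = corrected_sodium(130.9, 896.9)
```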
Finally, when the blood level of urea is examined, both the rate of production and the rate of excretion of urea should be addressed [25]. The lower BUN in AKA may also reflect the fact that the typical AKA patient is underweight and undernourished, with clear reductions in blood glucose, lipids, and other solutes, resulting in a lower serum osmolality. There are several strengths in this study. First, this report compared simple DKA with simple AKA, eliminating other factors as much as possible. In metabolic disorders, many factors interact and affect each other, complicating the pathophysiology. Therefore, despite the importance of comparing DKA and AKA in the absence of other effects, there have been few such reports. Second, awareness of AKA among Japanese clinicians is low, and this study is especially relevant to the initial response in emergency settings. Third, although diabetes and alcohol create distinct circumstances, DKA and AKA ultimately converge on the similar condition of ketoacidosis, and evaluation of each underlying pathology, such as serum albumin, diabetes, liver dysfunction, and dehydration, proved important. There are also limitations. First, the sample size was small and the study was performed in Japanese subjects; a relatively small sample size limits the generalizability and reliability of the findings, and the results are not necessarily applicable to Caucasians. Second, because of the small sample size, the available parameters, such as free fatty acids, amylase, and pancreatic amylase levels, were limited for comprehending the complicated DKA and AKA conditions. Third, the study design is retrospective, relying on historical data, which can introduce biases and does not provide evidence as strong as prospective studies.
As shown in this study, metabolic conditions such as DKA and AKA are complicated in their pathogenesis, and SGLT2 inhibitors, which are often used for type 2 DM, are also associated with DKA. With SGLT2 inhibitors, increased urinary glucose excretion lowers blood glucose and thereby reduces insulin secretion; less glucose is available for utilization in the body, and ketone body production increases. In DKA triggered by SGLT2 inhibitors, blood glucose levels may not be significantly elevated because of the increased urinary glucose excretion (“euglycemic DKA”) [26]. Although there were no SGLT2 inhibitor users in this study, the pathogenesis of metabolic acidosis may be more complicated and require more attention in diabetic patients using SGLT2 inhibitors. In addition, this study focuses on DKA and AKA, with limited discussion of other potential conditions that can lead to a high anion gap metabolic acidosis.
5. Conclusion When dealing with ketoacidosis in Japanese subjects with DKA or AKA, taking a medical history to identify insulin deficiency, alcohol abuse, or starvation as the etiology is more important for the differential diagnosis than the degree of acidosis or the β-OHB/AcAc ratio. The data in this study clearly show that it is important to precisely comprehend the pathology of dehydration and alcohol metabolism, which would lead to appropriate treatment for DKA and AKA. At the same time, since the pathogenesis of metabolic conditions such as DKA and AKA is quite complex, more detailed investigation will be necessary in the future.
Academic Editor: Mayank Choubey This study aimed to examine which factors are useful for the diagnosis and distinction of ketoacidosis. We recruited 21 patients with diabetic ketoacidosis (DKA) or alcoholic ketoacidosis (AKA) hospitalized in Kawasaki Medical School General Medical Center from April 2015 to March 2021. Almost all patients in this study were brought to the emergency room in a coma and hospitalized. All patients underwent blood gas analysis and laboratory tests. We evaluated the differences in diagnostic markers in the emergency setting between DKA and AKA. Compared to AKA patients, DKA patients had statistically higher values of serum acetoacetic acid and lower values of serum lactate, arterial blood pH, and base excess. In contrast, total ketone bodies, β-hydroxybutyric acid, and the β-hydroxybutyric acid/acetoacetic acid ratio in serum did not differ between the two patient groups. Evaluation of each pathology, such as low body weight, diabetes, liver dysfunction, and dehydration, was shown to be important. For the differential diagnosis, it is important to take a medical history addressing insulin deficiency, alcohol abuse, or starvation as the etiology in Japanese subjects with DKA or AKA. Moreover, it is important to precisely comprehend the pathology of dehydration and alcohol metabolism, which would lead to appropriate treatment for DKA and AKA.
Data Availability The data underlying this article cannot be shared publicly due to privacy concerns. The data could be available after appropriate ethical committee approval and should be handled in compliance with relevant data protection and privacy regulations. Conflicts of Interest HK has received honoraria for lectures and received scholarship grants from Sanofi, Novo Nordisk, Lilly, Boehringer Ingelheim, MSD, Takeda, Ono Pharma, Daiichi Sankyo, Sumitomo Pharma, Mitsubishi Tanabe Pharma, Pfizer, Kissei Pharma, AstraZeneca, Astellas, Novartis, Kowa, Chugai, and Taisho Pharma. KK has been an advisor to, received honoraria for lectures from, and received scholarship grants from Novo Nordisk Pharma, Sanwa Kagaku Kenkyusho, Takeda, Taisho Pharmaceutical Co., Ltd., MSD, Kowa, Sumitomo Pharma, Novartis, Mitsubishi Tanabe Pharma, AstraZeneca, Nippon Boehringer Ingelheim Co., Ltd., Chugai, Daiichi Sankyo, and Sanofi.
J Diabetes Res. 2024 Jan 8; 2024:8889415
PMC10789515
38225974
1. Introduction Cow urine has been used in traditional Indian medicine, known as Ayurveda, for thousands of years. Cow urine distillate (CUD) is a concentrate of cow urine used in Ayurvedic medicine as a treatment for various ailments. It is considered to have detoxifying and purifying effects and is used as a component in herbal formulations [1]. While some proponents of Ayurveda believe in its benefits, it is not widely accepted or recognized as a medicine in modern, conventional therapy [2]. Antimicrobial agents, such as antibiotics and antiviral drugs, are critical for the treatment of infectious diseases. However, the overuse and misuse of these agents have led to the emergence of antibiotic-resistant bacteria, also known as “superbugs.” These superbugs are a major public health concern as they can cause serious infections that are difficult to treat, leading to prolonged illness and increased healthcare costs [3]. One of the major problems with traditional antimicrobial agents is that they are often synthetic compounds that can have toxic side effects and can also drive the development of antibiotic-resistant strains of bacteria. In addition, many traditional antimicrobial agents are expensive and not accessible to everyone, especially in developing countries [4, 5]. CUD could be an alternative to traditional antimicrobial agents because it is a natural product that is readily available and inexpensive. There have been some studies on the antimicrobial and antioxidant properties of cow urine [6–8]. Traditionally, especially in Indian traditional medicine, it is strongly believed that cow urine (CU) can cure bacterial and viral diseases along with fever, anaemia, epilepsy, abdominal pain, constipation, and wounds [9, 10]. This folk remedy is still practiced in rural areas of India and Nepal.
In addition, CUD is believed to have anti-inflammatory and analgesic properties, and it may be useful in the treatment of certain types of cancer and diabetes [11, 12]. In South Asian countries, CU is believed to cure cancer [13]. Characterizing the molecular-level interactions between cow urine constituents and bacterial proteins can provide significant evidence for the bactericidal activity of CUD. However, more research is needed to understand the full potential of cow urine as an antimicrobial agent and to determine the safety and efficacy of using it for this purpose. The aim of this study is to investigate the antibacterial activity of CUD through both in vitro and in silico (molecular docking) methods in order to provide evidence-based results for its potential use as an antibacterial agent.
2. Materials and Methods 2.1. Ethical Approval Ethical approval for the research was obtained from the Internal Review Committee (IRC) at Sunsari Technical College (IRC No. ST15RE115). 2.2. Collection of Cow Urine and CUD Preparation Ten milking cows of the same breed from the same farm were selected by a randomized sampling technique, and 20 ml of urine was collected from each in a sterile container. The samples were stored in a refrigerator (4°C) until further use. A simple distillation process at 100°C was used to collect the CUD, and the distillate was stored in a sterile glass flask in the refrigerator at 4°C [14]. After distillation, the residue was evaporated to obtain a crude mass, from which 5%, 10%, and 15% concentrations were prepared by serial dilution with distilled water. 2.3. Test Microorganisms Fresh pathogenic bacterial species were authenticated, collected, subcultured, and preserved. The bacterial strains were Staphylococcus aureus (Gram-positive) and Escherichia coli , Salmonella typhi , and Klebsiella species (Gram-negative) [15]. 2.4. Antibacterial Screening Sterilization of Petri plates: the Petri dishes were washed, wrapped in aluminum foil, and sterilized in an autoclave at 121°C for 15 minutes at 15 lbs pressure. 2.4.1. Preparation and Sterilization of Media and Nutrient Broth About 19 grams of Mueller–Hinton Agar (MHA) were placed in a 1000 ml conical flask, and 500 ml of distilled water was added gradually; 1.3 grams of nutrient broth were placed in a 250 ml conical flask with 100 ml of distilled water, and both were heated on a heating plate to dissolve. A cotton plug was then placed in the mouth of each conical flask and covered with aluminum foil. The flasks were autoclaved for sterilization at 121°C and 15 lbs pressure for 15 minutes [16].
2.4.2. Subculture of Pure Bacterial Strain Once the media had cooled, the nutrient broth was poured into four sterilized test tubes, and pure bacteria were added to each test tube with a sterilized inoculating loop. 2.4.3. Pouring of MHA into Petri Plates Once the media had cooled, it was poured into sterilized Petri plates and allowed to solidify under aseptic conditions. After solidification, the Petri plates were wrapped with aluminum foil and stored in the refrigerator at 4°C. 2.4.4. Preparation of Filter Paper Discs Filter paper discs of 6 mm diameter were prepared from Whatman No. 1 filter paper using a paper punch. The discs were placed in Petri plates and sterilized in a hot air oven at 161°C for 2 hours [17]. 2.5. Assay of Antibacterial Activity The antibacterial assay was carried out by the agar disc diffusion method, and antibacterial susceptibility testing was performed in three steps [18]. 2.5.1. Inoculation of Bacteria in the Media Under aseptic conditions in laminar airflow, the test organism was inoculated onto the sterile Mueller–Hinton Agar by distributing it uniformly across the Petri plate with a spreader. Each plate was then labeled into five regions: one for the positive control (ciprofloxacin 5 mcg), one for the negative control (dimethyl sulfoxide, DMSO), and three for the different concentrations (5, 10, and 15%) of the sample. 2.5.2. Incorporation of Samples into the Lawn Media The prepared sterilized filter paper discs were impregnated with the different concentrations of cow urine distillate and with the negative control DMSO separately. The impregnated discs were then placed in their regions on the inoculated agar plates, and the standard antibiotic (ciprofloxacin 5 mcg) was also placed. 2.5.3.
Incubation of the Plates The plates with cow urine discs and standard antibiotics were incubated at 35°C for 24 hours. The diameter of the zone of inhibition indicates the antibacterial activity against the test organisms. Each assay was carried out three times, and the results were recorded. 2.6. Minimum Inhibitory Concentration (MIC) To determine the minimum inhibitory concentration (MIC), the broth dilution method was employed. Initially, 10 ml of Mueller–Hinton broth was added to sterilized test tubes and sterilized at 121°C for 15 minutes. A turbidity standard was prepared using the McFarland scale. The test microorganism was introduced into sterilized test tubes containing 10 ml of normal saline and incubated for 6 hours at 37°C. The test microorganisms were then diluted until the turbidity matched that of the McFarland standard, corresponding to a concentration of approximately 1.5 × 10⁸ cfu/ml. The compound was then serially diluted in sterilized broth to achieve concentrations of 200, 100, 50, 25, and 12.5 μg/ml. The MIC is the lowest concentration of the compound that resulted in no turbidity in the test tube after the broths had been incubated at 37°C for 24 hours [19–21]. 2.7. Molecular Docking Process The 3-dimensional (3D) structures of organic and metabolic compounds present in fresh cow urine were identified from the literature and downloaded from the PubChem database in “sdf” file format. Torsion angle assignment and geometry/energy minimization (MMFF94) were performed using the MGL tool. The “sdf” files were converted into “pdbqt” format using Open Babel software [22, 23]. The crystallographic structure of the DNA gyrase B ATP-binding domain of Escherichia coli (PDB ID: 4KFG, resolution: 1.60 Å) [24] was retrieved from the Protein Data Bank (PDB) in “pdb” file format (Figure 1).
The choice of DNA gyrase (PDB ID: 4KFG) for the molecular docking investigations is driven by its essential function in bacterial DNA processing, which makes it an attractive target for antibiotic development. Using the three-dimensional structure of the enzyme helps forecast its interactions with the individual chemical components of CUD, facilitating the identification of novel antibacterial agents in the battle against antibiotic resistance [25]. Moreover, 4KFG has good resolution, and its conformation was validated with a Ramachandran plot (Supplementary Material, Figure S1). The protein crystal structure was prepared by removing water molecules and the co-crystallized native ligand, and polar hydrogens were added in Discovery Studio Visualizer 2021 [26, 27]. AutoDock Vina v.1.2.0 ( https://vina.scripps.edu/ ) was used for the molecular docking studies [28, 29]. Blind docking was performed with ciprofloxacin as the reference. For the grid setup, the spacing was set to 1 Å and the grid dimension to 50 points along each of the X, Y, and Z axes; the grid center was set to 14.299, 18.687, and −12.407 for the X-, Y-, and Z-axes, respectively. To validate the docking interactions, redocking was employed and the root mean square deviation (RMSD) was calculated using PyMOL 2.5.4 ( https://pymol.org/ ). An RMSD value of less than 2 Å was considered to indicate a well-fitted pose [30, 31]. 2.8. Main Components of CUD Research articles were retrieved from Scopus-indexed journals using “GC-MS (gas chromatography-mass spectroscopy) of cow urine” as the search query. Around 10 relevant articles were selected, and the main common chemical components reported in them were analysed [32–34]. Seven common chemical components were chosen and submitted for the docking studies.
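The two-fold broth dilution series of Section 2.6 and its MIC read-out can be sketched in a few lines; the tube readings below are hypothetical, chosen only to match the dilution scheme described above:

```python
def twofold_series(start_ug_ml, n_tubes):
    """Concentrations of a two-fold serial dilution starting at start_ug_ml."""
    return [start_ug_ml / 2 ** i for i in range(n_tubes)]

def read_mic(turbidity_by_conc):
    """MIC = lowest concentration with no visible turbidity after
    incubation; returns None if every tube shows growth."""
    clear = [c for c, turbid in turbidity_by_conc.items() if not turbid]
    return min(clear) if clear else None

concs = twofold_series(200, 5)  # 200, 100, 50, 25, 12.5 ug/ml, as in 2.6
# Hypothetical readings in which even the lowest tube stays clear,
# giving an MIC of 12.5 ug/ml:
mic = read_mic({200: False, 100: False, 50: False, 25: False, 12.5: False})
```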
3. Results 3.1. Organoleptic Characteristics The physical appearance and organoleptic characteristics of the CUD were analysed and are reported (Table 1). The organoleptic characteristics provide evidence of the acceptability of CUD for conventional use; the organoleptic test was conducted as a preliminary analysis to identify the physical state of CUD. 3.2. Antibacterial Activity The antibacterial activity of CUD against four different strains, i.e., Staphylococcus aureus (Gram-positive) and Escherichia coli , Salmonella typhi , and Klebsiella species (Gram-negative), was studied with DMSO as the negative control and ciprofloxacin as the standard drug. The results are shown in Table 2 and Figure 2. 3.3. Minimum Inhibitory Concentration Table 3 shows the MIC of CUD against the different bacterial species. CUD showed good activity against Staphylococcus aureus and E. coli (Figure 3). The minimum inhibitory concentration (MIC) of CUD was tested against the different bacteria; CUD showed antimicrobial activity against all tested bacteria, with MIC values ranging from 12.5 to 50 μg/ml, which are higher than that of the reference compound. 3.4. Molecular Docking Result Analysis The DNA gyrase enzyme is essential to bacterial DNA replication and transcription, controlling vital steps in these processes. For the docking study, we used the DNA gyrase B ATP-binding domain protein of Escherichia coli . The protein (4KFG) has a native ligand (DOO) with chemical fragments (aromatic and heterocyclic) similar to the cow urine components (Supplementary Materials, Figure 2S). This resemblance makes 4KFG a suitable target molecule for the docking studies. The binding energies and the amino acids responsible for H-bonds were analysed and are listed in Table 4. Most of the ligand molecules showed good binding interactions with the protein.
Among the seven ligand molecules, 2-hydroxycinnamic acid showed the best binding energy of ΔG = −6.9 kcal/mol (Figure 4), with two conventional H-bond interactions. The amino acids Val71 and Asp73, with respective bond distances of 1.93 Å and 3.90 Å, were responsible for the H-bond interactions. The binding of 2-hydroxycinnamic acid was somewhat weaker than that of the reference ciprofloxacin (ΔG = −7.4 kcal/mol, Figure 5), but it formed more hydrogen bond interactions than the reference. Similarly, ferulic acid (Figure 6) showed a binding energy of ΔG = −6.8 kcal/mol, slightly weaker than the reference ciprofloxacin. The same amino acids as for 2-hydroxycinnamic acid (Val71 and Asp73) participated in the interaction of ferulic acid with the protein, although with different bond distances (2.06 Å and 3.08 Å). The largest number of conventional hydrogen bond interactions was shown by gallic acid (Asp45, Glu42, and Ser108), with a binding energy of ΔG = −6.4 kcal/mol. Phenol displayed the weakest binding (ΔG = −5.2 kcal/mol) with only one hydrogen bond. The calculated RMSD values of all the ligand molecules ranged from 1.05 to 1.87 Å, indicating stable ligand–protein complexes. 3.5. Main Components of CUD Different spectral studies have revealed a variety of aromatic and heterocyclic components in CUD that may contribute to the antibacterial activity. The prominent components of CUD include gallic acid, ferulic acid, cinnamic acid, allantoin, and 1-heneicosanol (Table 5).
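The redocking RMSD used for validation (values of 1.05 to 1.87 Å reported above) is, in its simplest no-superposition form, the root mean square of per-atom displacements. A minimal sketch assuming a fixed atom correspondence follows; note that PyMOL's default RMSD tools do more, including optional fitting and atom matching:

```python
import math

def rmsd(coords_a, coords_b):
    """No-fit RMSD (Angstrom) between two conformations given as
    equal-length lists of (x, y, z) tuples with matched atom order."""
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must have equal length")
    total = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b)
    )
    return math.sqrt(total / len(coords_a))
```

A redocked pose would then be accepted when `rmsd(docked, crystal) < 2.0`, matching the validation criterion stated in the methods.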
4. Discussion This study was conducted to evaluate the antibacterial activity of CUD by in vitro and in silico approaches. The antibacterial activity of CUD against Gram-positive Staphylococcus aureus and Gram-negative Escherichia coli , Salmonella typhi , and Klebsiella species was evaluated at concentrations of 5%, 10%, and 15%. The 15% concentration of CUD displayed the highest antibacterial activity. The greatest activity was observed against Salmonella typhi and Staphylococcus aureus , with inhibition zone diameters of 20.8 ± 0.6 mm and 18.6 ± 0.42 mm, respectively, at the 15% concentration, compared to the standard antibiotic ciprofloxacin. Our results partly differ from previous studies. Sathasivam et al. found smaller zones of inhibition for Salmonella typhi (10.4 ± 1.2 mm) [39], while Majhi and Bardvalli reported results similar to ours [40]. Poornima et al. found that CUD had better antibacterial activity against Gram-positive bacteria, which is consistent with our findings [41]. In our study, the zone of inhibition for E. coli was 13 ± 0.8 mm at the 15% concentration of CUD, nearly 50% less than that of the standard drug ciprofloxacin (24 ± 1.0 mm in diameter). Jarald et al. found that crude cow urine had better antibacterial activity than CUD [42]; CUD may lose some active components during the distillation process, which may reduce the potency of cow urine against bacterial growth. Moreover, Ahuja et al. reported findings similar to ours, with a 14 mm zone of inhibition for E. coli [43]. These previous studies support the findings of our research. The MIC of CUD against Staphylococcus aureus and E. coli is 12.5 μg/ml, while for Klebsiella pneumoniae and Salmonella typhi it is 25 and 50 μg/ml, respectively. The MIC of the reference compound (ciprofloxacin) is 6.25 μg/ml.
These data suggest that CUD has antimicrobial activity against these bacteria, but its potency is lower than that of the reference compound. However, MIC values are only one measure of antimicrobial activity, and further studies are required to fully understand the effectiveness and safety of CUD as an antimicrobial agent. Molecular docking models the interaction between a ligand and a protein, predicting how a drug molecule attaches to the binding site of its receptor [44]. To explore the antibacterial mechanism of CUD, we conducted in silico docking studies of CUD-active compounds with bacterial DNA gyrase, an enzyme essential for DNA replication and transcription. CUD has antibacterial activity, although the exact mechanism of action is not fully understood. Possible contributors to CUD's antibacterial activity include compounds such as urea, ammonia, osmolytes, and organic acids, which can denature proteins, disrupt cell membranes, cause dehydration, and exert antimicrobial effects [45, 46]. Our findings revealed that 2-hydroxycinnamic acid (ΔG = −6.9 kcal/mol, Table 4) and ferulic acid (ΔG = −6.8 kcal/mol, Table 4) displayed the best docking scores with the targeted protein, implying that CUD might act through this mechanism against the tested bacterial strains. The antibacterial activity of both compounds was associated with their hydrogen bonding interactions with the amino acids Val71 and Asp73. According to reports, cow urine's antibacterial properties are attributed to the presence of 2-hydroxycinnamic acid, ferulic acid, gallic acid, cinnamic acid, phenol (carbolic acid), and allantoin. The peptides and their derivatives in cow urine increase bacterial cell surface hydrophobicity, resulting in an impressive bactericidal effect. Cow urine is also known to boost the phagocytic activity of macrophages [47].
It was also claimed that cow urine has the ability to prevent the development of antibacterial resistance by blocking the R-factor, which is a component of the plasmid genome in bacteria [ 48 ]. Nautiyal and Dubey found that CUD has antimicrobial activity against certain bacteria and fungi, and it is traditionally used as a disinfectant [ 35 ]. It is believed that the urea in cow urine can denature bacterial proteins by breaking down their secondary and tertiary structures. This can disrupt the function of the protein and potentially inhibit the growth of the bacteria [ 49 , 50 ]. Cow urine contains a compound called c-di-GMP (cyclic dimeric guanosine monophosphate), which is known to play a role in bacterial biofilm formation [ 51 ]. One study found that the c-di-GMP present in cow urine can inhibit the production of a bacterial protein called SdiA, which is involved in biofilm formation. This suggests that cow urine may have the potential to inhibit the formation of bacterial biofilms [ 52 ]. CUD also contains osmolytes that can cause the dehydration and death of bacterial cells [ 53 ]. Moreover, the docking scores of the phenolic compounds (ΔG = −6.4 to −6.9 kcal/mol, Table 4 ) were found to be significant, indicating their potential role in the bactericidal activity of cow urine. Cinnamic and ferulic acids were also identified as important components that interacted strongly with DNA gyrase through hydrogen bonding. These results suggest a possible mechanism of action for cow urine. The strong interaction between cow urine constituents and the DNA gyrase protein suggests they may offer an effective approach to the problem of antibacterial resistance.
6. Conclusion In this study, it was found that CUD possesses significant antibacterial properties, which supports the claims of traditional practitioners. From the ZOI results, the 15% concentration of CUD showed significant antibacterial activity. It was also found that CUD, with an MIC value as low as 12.5 μg/ml, prominently inhibits the growth of bacteria. Molecular docking studies also clearly explain the molecular interaction of the CUD constituents with the DNA gyrase protein. Ferulic acid and 2-hydroxycinnamic acid (constituents of CUD) showed binding energies of −6.8 and −6.9 kcal/mol, respectively. However, an integrated approach combining isolation, characterization, and in vivo evaluation is necessary to substantiate these potential benefits.
Academic Editor: Rajeev K. Singla
Cow urine distillate (CUD) is a traditional Indian medicine used to treat various diseases, including bacterial infections. However, there is limited evidence to support its use as a medicine, and its safety and efficacy have not been thoroughly studied. In this study, we evaluated the antibacterial activity of CUD against five bacterial strains using in vitro and in silico approaches. In vitro experiments showed that CUD has significant antibacterial activity against all tested strains, with a zone of inhibition (ZOI) ranging from 13 to 24 mm and minimum inhibitory concentration (MIC) values ranging from 12.5 to 50 μg/ml. The results indicated that the 15% concentration of CUD displayed the highest antibacterial activity against Staphylococcus aureus and Salmonella typhi . To further investigate the antibacterial mechanism of CUD, we performed in silico docking studies of the active compounds of CUD with bacterial proteins involved in protein synthesis. Our results showed that 2-hydroxycinnamic acid (ΔG = −6.9 kcal/mol) and ferulic acid (ΔG = −6.8 kcal/mol) exhibited the best docking scores with the targeted protein (DNA gyrase, PDB ID: 4KFG). The hydrogen bonding interaction with the amino acids Val71 and Asp73 was found to be crucial for their antibacterial activity.
5. Limitation Due to resource limitations, this research lacks spectral analysis and isolation of the individual components of CUD. The research does not claim that the individual components of CUD contribute in vitro in the same way that the computational studies have indicated. This remains a subject for future work.
Acknowledgments The authors express their profound gratitude to the Department of Administration, Sunsari Technical College, Nepal, for their invaluable contributions in conducting the analytical studies. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that they have no conflicts of interest. Authors' Contributions LokRaj Pant conceptualized the study, proposed the methodology, wrote the original draft, and performed data analysis. Shankar Thapa conceptualized the study, performed molecular docking, and wrote the original draft. Bibek Dahal performed data analysis, wrote, reviewed, and edited the article, and performed supervision. Ravindra Khadka reviewed and edited the article. Mahalakshmi Suresha Biradar performed molecular docking and reviewed and edited the article. Supplementary Materials
Evid Based Complement Alternat Med. 2024 Jan 8; 2024:1904763
1. Introduction The presence of bacteria in the urine can range from asymptomatic to a serious kidney infection with sepsis [ 1 ]. In community medicine, urinary tract infections (UTIs) are the second most frequent infection [ 2 ]. UTI has emerged as a significant and urgent public health issue. An estimated 150 million cases of UTI, which carry a significant risk of morbidity and mortality, are detected each year [ 3 ]. Males and females of all ages are affected by UTIs. Females are more prone to experiencing UTIs than males, probably due to anatomical variations, hormonal influences, and behavioral factors [ 4 ]. UTI cases range from 23.1% to 37.4% among Nepalese patients visiting general hospitals. Bacteria are the prevalent cause, accounting for more than 95% of UTI instances. Escherichia coli is the most frequent bacterium, causing more than 80% of UTIs [ 5 ]. It is well recognized that UTIs can result in permanent kidney scarring as well as short-term morbidities such as fever, dysuria, and lower abdominal pain (LAP) [ 6 ]. Uropathogenic E. coli (UPEC) is considered the source of 70% to 95% of community-acquired UTIs and 50% of all nosocomial infections [ 7 ]. E. coli predominates in a high proportion of UTI cases, followed by Proteus species , Staphylococcus saprophyticus , Klebsiella species , and other members of the Enterobacteriaceae family [ 8 ]. A significant public health concern is the development of antibiotic resistance in the treatment of UTIs. Counterfeit and spurious pharmaceuticals of uncertain quality are widely available, notably in underdeveloped nations where hunger, illiteracy, and poor hygiene habits are highly prevalent [ 9 ]. The most researched class A beta-lactamases in Gram-negative bacteria are the TEM and CTX-M beta-lactamases, which are mainly plasmid-borne [ 10 ]. They are thought to be the most typical beta-lactam-resistance mechanism in Gram-negative bacilli and are spreading quickly throughout the world [ 11 ].
Based on the antibiotic resistance profile of the urinary pathogens from the most recent surveillance data, treatment for UTI cases is frequently initiated empirically. Thus, the purpose of the current investigation was to ascertain the prevalence of the bla CTX-M and bla TEM genes in cefotaxime-resistant E. coli isolated from urine samples of UTI-diagnosed patients who had visited Grande International Hospital in Kathmandu, Nepal.
2. Methodology 2.1. Study Design, Site, and Criteria This hospital-based descriptive cross-sectional study was carried out at Grande International Hospital, and molecular assays were conducted at the Department of Microbiology, National College, Kathmandu, Nepal, from November 2021 to May 2022. 2.1.1. Inclusion Criteria Samples from patients of all age groups referred by clinicians for evaluation of urinary tract infection were accepted, and the isolated Escherichia coli isolates were included in our study. 2.1.2. Exclusion Criteria Isolates other than Escherichia coli were excluded. 2.2. Sample Size and Sampling Technique For routine culture and antibiotic susceptibility testing, 1050 urine samples referred by clinicians were processed, and altogether, 165 Escherichia coli isolates were selected using a convenience sampling method. 2.3. Sample Collection and Transportation Patients were instructed to fill a sterile, dry, wide-necked, leak-proof container with 10–20 ml of clean-catch midstream urine. The container was correctly labelled with the sample code, name, date, and collection time, and it was immediately transported to the microbiology laboratory. For delayed processing, the urine samples were preserved in 10% boric acid [ 12 ]. 2.4. Laboratory Processing of the Specimen 2.4.1. Urine Culture The urine samples were inoculated on the surface of cystine lactose electrolyte deficient (CLED) agar plates using a standard calibrated loop (∼4 mm diameter). The semiquantitative bacteriuria count was determined to identify significant UTI. A loopful of urine was streaked on the surface of the culture medium, which was then incubated for 24 hours at 35°C under aerobic conditions. The total colony count was used to calculate the colony-forming units (CFU) per milliliter of urine. A bacterial count of >10⁵ CFU/ml was reported as significant.
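The semiquantitative count converts colonies on the plate to CFU/ml via the volume delivered by the calibrated loop. A minimal sketch, assuming a loop that delivers about 0.01 ml (an illustrative calibration; the text states only that a ∼4 mm calibrated loop was used):

```python
# Semiquantitative bacteriuria count: CFU/ml = colonies counted / loop volume.
# The 0.01 ml loop volume is an assumption for illustration; the study states
# only that a ~4 mm calibrated loop was used.
LOOP_VOLUME_ML = 0.01
SIGNIFICANT_THRESHOLD = 1e5  # counts > 10^5 CFU/ml reported as significant

def cfu_per_ml(colonies, loop_volume_ml=LOOP_VOLUME_ML):
    return colonies / loop_volume_ml

def is_significant(colonies):
    return cfu_per_ml(colonies) > SIGNIFICANT_THRESHOLD

print(cfu_per_ml(1200))      # 120000.0 CFU/ml
print(is_significant(1200))  # True
```

With a smaller 0.001 ml loop the multiplier would be 1000 instead of 100, so the loop calibration directly sets the detection limit of the count.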
If the specimen was found to be contaminated, a repeat sample was requested [ 13 ]. 2.5. Identification of the Isolates According to Bergey's Manual of Systematic Bacteriology, standard microbiological methods such as study of colony morphology, Gram staining, and other biochemical tests (catalase test, oxidase test, Triple Sugar Iron (TSI) test, Sulphide Indole Motility (SIM) test, urease test, citrate utilization test, methyl red test, Voges–Proskauer test, sorbitol test, and so on) were used to identify E. coli [ 14 ]. 2.6. Antibiotic Susceptibility Test Isolated organisms were subjected to antibiotic susceptibility testing (AST) following CLSI guideline recommendations. The antibiotics used were ampicillin (10 μg), amikacin (30 μg), cotrimoxazole (25 μg), ciprofloxacin (5 μg), nitrofurantoin (300 μg), norfloxacin (10 μg), ceftazidime (30 μg), gentamicin (30 μg), cefotaxime (30 μg), and meropenem (10 μg). The Kirby–Bauer disk diffusion method was used to conduct the in vitro susceptibility test. In this approach, a broth culture of the test organism (adjusted to the 0.5 McFarland standard; inoculum density ≈1.5 × 10⁸ organisms/ml) was uniformly spread across the surface of Mueller–Hinton agar. The medium was covered with the appropriate antibiotic disks, and the plates were then incubated for 18 hours at 37°C. Following incubation, the zone of inhibition was measured in mm using a measuring scale, and the zones were compared against established interpretive criteria based on CLSI guideline recommendations to determine whether the isolates were susceptible, intermediate, or resistant. E. coli ATCC 25922 was used to standardize the drug susceptibility test and to check antibiotic disk quality control [ 15 ]. 2.7.
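Interpreting a measured zone diameter against CLSI breakpoints is a threshold comparison. A minimal sketch, with placeholder breakpoints (real CLSI breakpoints are antibiotic- and organism-specific and must be taken from the current standard, not from this example):

```python
# Interpret a Kirby-Bauer zone diameter against S/I/R breakpoints.
# The breakpoint numbers below are illustrative placeholders, not CLSI values.
def interpret_zone(diameter_mm, susceptible_at, resistant_at):
    """Zone >= susceptible_at -> 'S'; zone <= resistant_at -> 'R'; else 'I'."""
    if diameter_mm >= susceptible_at:
        return "S"
    if diameter_mm <= resistant_at:
        return "R"
    return "I"

# Hypothetical breakpoints: susceptible >= 21 mm, resistant <= 17 mm.
print(interpret_zone(24, 21, 17))  # S
print(interpret_zone(19, 21, 17))  # I
print(interpret_zone(15, 21, 17))  # R
```

The gap between the two breakpoints is what defines the intermediate category, which is why both thresholds must come from the same edition of the standard.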
Screening of Multidrug Resistance (MDR), Extensive Drug Resistance (XDR), and Pandrug Resistance (PDR) In this study, isolates were classified as MDR if they were resistant to at least three classes of antibacterial agents, whereas isolates resistant to at least one agent in all but one or two antimicrobial categories (i.e., remaining susceptible to only one or two antibacterial groups) were classified as XDR. Finally, isolates resistant to all commercially available antibacterial agents were classified as PDR [ 16 ]. 2.8. Phenotypic Detection of ESBL Production A cefotaxime (CTX, 30 μg) disk was used for the initial screening test for ESBL production. According to CLSI 2017, isolates were further tested for ESBL production only if the zone of inhibition around cefotaxime had a diameter of less than 25 mm [ 13 ]. A combination disk test utilizing cefotaxime (30 μg) and cefotaxime/clavulanic acid (30/10 μg) disks was performed on the E. coli isolates that were not susceptible to cefotaxime. The zones of inhibition for the cefotaxime and cefotaxime/clavulanic acid disks were measured. Isolates were defined as ESBL producers when the zone diameter increased by ≥5 mm in the presence of clavulanic acid compared to the cefotaxime disk alone [ 16 ]. 2.9. Conservation of the Isolates For further molecular analysis, the E. coli isolates were stored in tryptic soy broth with 20% glycerol at −70°C. 2.10. DNA Extraction of E. coli Isolates DNA was extracted from cefotaxime (CTX)-resistant E. coli isolates using boiling lysis. For this, the preserved bacteria were subcultured on nutrient agar (NA) and incubated for 24 hrs at 37°C. Luria–Bertani (LB) broth was inoculated with an isolated colony from the NA and incubated at 37°C. The extracted DNA was suspended in 50 μL of TE buffer and stored in a deep freezer (−20°C) for preservation [ 17 , 18 ]. 2.11.
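The MDR/XDR/PDR screening rules and the combined disk ESBL criterion reduce to counting and threshold checks. A minimal sketch, assuming resistance is tallied per antibacterial class (the profile numbers and the 7-class panel are illustrative, not the study's panel):

```python
# Classify an isolate's resistance profile using the MDR/XDR/PDR screening
# rules described in the text, and call ESBL from the combined disk test.
# The class counts used in the examples are illustrative.
def classify(resistant_classes, total_classes):
    """resistant_classes: number of antibacterial classes with resistance."""
    susceptible = total_classes - resistant_classes
    if susceptible == 0:
        return "PDR"        # resistant to all available agents
    if susceptible <= 2:
        return "XDR"        # susceptible to only one or two groups
    if resistant_classes >= 3:
        return "MDR"        # resistant to at least three classes
    return "non-MDR"

def is_esbl(ctx_zone_mm, ctx_clav_zone_mm):
    """ESBL positive if clavulanate expands the cefotaxime zone by >= 5 mm."""
    return ctx_clav_zone_mm - ctx_zone_mm >= 5

print(classify(4, 7))   # MDR
print(classify(6, 7))   # XDR
print(is_esbl(14, 21))  # True
```

Note that the checks are ordered from most to least resistant, so a PDR profile is never misreported as XDR or MDR.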
DNA Amplification and Detection To identify the presence of ESBL genes, conventional PCR was employed. A master mix including 200 μM of dNTPs (dATP, dCTP, dGTP, and dTTP), 120 nM of each primer (forward and reverse), 2.5 U of Taq polymerase in 1× PCR buffer, 25 mM of MgCl2, and 3 μL of DNA template was made up to a 21 μL reaction volume for PCR amplification. The primers used were, for the bla TEM gene, F.P: 5′-GAGACAATAACCCTGGTAAAT-3′ and R.P: 5′-AGAAGTAAGTTGGCAGCAGTG-3′, and for the bla CTX-M gene, F.P: 5′-GAAGGTCATCAAGAAGGTGCG-3′ and R.P: 5′-GCATTGCCACGCTTTTCATAG-3′ [ 19 ]. For both bla TEM and bla CTX-M genes, amplification conditions were initial denaturation at 94°C for 5 min; 35 cycles of 95°C for 1 minute, 56°C for 45 sec, and 72°C for 1 minute; and a final extension at 72°C for 7 min [ 20 , 21 ]. To detect the amplified genes, 10 μL of each reaction was subjected to gel electrophoresis on a 2% agarose gel containing ethidium bromide (5 μg/mL) for 1 h at 100 V in 0.5× TBE buffer. A UV transilluminator was then used to visualize the amplified DNA bands. The bla TEM amplicon size was 459 bp, whereas the bla CTX-M amplicon size was 560 bp [ 20 – 22 ]. Known positive bacterial strains for the CTX-M and TEM genes were run separately as positive controls for the PCR amplification process, and sterile water was used as a negative control. 2.12. Diagnostic Comparison of the Phenotypic Method with the Molecular Method The following formulas were used to compute the sensitivity (SE), specificity (SP), positive predictive value (PPV), negative predictive value (NPV), and accuracy (Ac) for the evaluation of phenotypic detection against the PCR method: SE = TP/(TP + FN), SP = TN/(TN + FP), PPV = TP/(TP + FP), NPV = TN/(TN + FN), and Ac = (TP + TN)/(TP + TN + FP + FN), where TP is the True Positive, TN is the True Negative, FP is the False Positive, and FN is the False Negative [ 23 ]. 2.13. Data Processing and Statistical Analysis All the raw experimental data were entered in an MS Excel sheet.
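The diagnostic formulas map directly to code. The confusion-matrix counts used below (TP = 25, FP = 23, FN = 11, TN = 17) are back-calculated from the percentages reported in the Results section and are shown for illustration, not as the study's raw tally:

```python
# Diagnostic performance of the phenotypic (combined disk) method against PCR.
# SE = TP/(TP+FN), SP = TN/(TN+FP), PPV = TP/(TP+FP),
# NPV = TN/(TN+FN), Ac = (TP+TN)/total.
def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Counts back-calculated from the reported percentages (illustrative).
metrics = diagnostics(tp=25, fp=23, fn=11, tn=17)
for name, value in metrics.items():
    print(f"{name}: {100 * value:.2f}%")
# sensitivity: 69.44%, specificity: 42.50%, ppv: 52.08%,
# npv: 60.71%, accuracy: 55.26%
```

These counts reproduce the sensitivity, specificity, PPV, NPV, and accuracy values quoted in the Results, which is a useful internal consistency check on the reported percentages.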
The data analysis was carried out using SPSS version 20 (Statistical Package for the Social Sciences). All numerical data were presented as simple descriptive statistics. The comparison of antimicrobial resistance data between beta-lactamase and non-beta-lactamase producers, as well as the comparison of drug-resistance patterns (MDR and XDR) between ESBL producers and nonproducers, was performed using the chi-square or Fisher exact test. A p value less than or equal to 0.05 was considered statistically significant.
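The 2×2 chi-square comparisons described here can also be computed with the standard library alone; for one degree of freedom the p-value follows from the complementary error function. A minimal sketch (the example table is illustrative, not study data):

```python
import math

# Pearson chi-square test for a 2x2 contingency table [[a, b], [c, d]],
# e.g. resistant/susceptible counts in ESBL producers vs. nonproducers.
# The counts below are illustrative, not data from this study.
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    stat = sum((o - e) ** 2 / e
               for o, e in zip([a, b, c, d], expected))
    # For 1 degree of freedom: P(chi2 > x) = erfc(sqrt(x / 2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

stat, p = chi_square_2x2(40, 10, 45, 70)   # illustrative 2x2 table
print(f"chi2 = {stat:.2f}, p = {p:.2e}, significant = {p <= 0.05}")
```

For tables with small expected cell counts, the Fisher exact test used by the authors is the appropriate fallback.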
3. Results 3.1. Distribution of Bacteria in Urine Samples Among the 1050 urine specimens processed, 335 (31.9%) showed bacterial growth in culture media. Of these 335 positive cultures, 165 (49.2%) yielded Escherichia coli , whereas 170 (50.7%) yielded bacteria other than E. coli ; the remaining 715 (68%) of the 1050 samples were found to be sterile. 3.2. Age-Wise and Gender-Wise Distribution of Cases Out of 1050 samples, 400 (38%) were received from male patients, while 650 (61.9%) were from female patients. Among female participants across all age groups, the highest number belonged to the age group of ≥60 years (240; 22.9%), whereas the lowest number fell under <10 years (4; 2.3%) ( Figure 1 ). 3.3. Antibiotic Susceptibility Pattern of E. coli Isolates Antimicrobial susceptibility tests of Escherichia coli obtained from urine were carried out using the Kirby–Bauer disk diffusion method. Among the E. coli isolates, 95.7% were sensitive to meropenem, followed by gentamicin (86.7%) and nitrofurantoin (84.8%). Similarly, 64.2% and 55.7% of the isolates were sensitive to amikacin and cefotaxime, respectively. However, 57.6% and 53.4% of E. coli isolates showed resistance against ampicillin and cotrimoxazole, respectively. The sensitivity and resistance rates are elucidated in Table 1 . 3.4. Antibiotic Resistance Pattern of ESBL-Producing and Nonproducing E. coli Isolates The antibiotic resistance pattern obtained by disk diffusion tests was compared with the ESBL status detected by the combined disk diffusion method. The percentage of Escherichia coli isolates that produced ESBLs by the combined disk diffusion method was around 30.3% (50/165), while the percentage of isolates that did not produce ESBLs was 69.7% (115/165).
The resistance pattern for each antimicrobial disk was compared between ESBL-producing and non-ESBL-producing Escherichia coli to estimate the difference in resistance rates for each antibiotic. According to our results, a statistically significant difference in resistance rates was observed for ampicillin, cefotaxime, gentamicin, and meropenem ( P < 0.05). The results are summarized in Table 2 . 3.5. Multidrug Resistance (MDR) and Extensive Drug Resistance (XDR) among E. coli Isolates Out of 165 E. coli isolates, 86 (52.1%) were found to be multidrug resistant (MDR), while 10 (6%) were extensively drug resistant (XDR). 3.6. ESBL Producers and Nonproducers among E. coli Isolates Among the 165 E. coli isolates, 73 were suspected ESBL producers on primary screening using a cefotaxime disk; of these, 50 (68.5%) were confirmed as ESBL producers by phenotypic confirmation using the combined disk diffusion method. 3.7. Statistical Relationship of ESBL with MDR and XDR Among the 165 E. coli isolates, 50 (30.3%) were found to be ESBL producers, of which 96% were MDR and 4% were XDR. Significant associations ( P =0.001 <0.05) were observed between the MDR and XDR patterns and ESBL production in Escherichia coli . The results are depicted in Table 3 . 3.8. Molecular Prevalence of bla CTX-M and bla TEM Genes among Cefotaxime-Resistant E. coli Isolates Among the 50 ESBL-producing E. coli isolates, 50%, 46%, and 26% harbored bla CTX-M , bla TEM , and bla CTX-M + bla TEM genes, respectively. However, among the 23 non-ESBL-producing (cefotaxime-resistant) E. coli isolates, 47.8%, 73.9%, and 47.8% showed positive amplification of bla CTX-M , bla TEM , and bla CTX-M + bla TEM genes, respectively ( Table 4 ).
To determine the prevalence of the bla CTX-M and bla TEM genes, bacterial DNA amplification was carried out using the conventional PCR method. The amplification of the bla CTX-M and bla TEM genes is shown in Figures 2 and 3 . Among the 73 cefotaxime-resistant E. coli isolates, the bla CTX-M , bla TEM , and both ( bla CTX-M + bla TEM ) genes were found in 49.3%, 54.8%, and 32.87%, respectively. The prevalence of the bla TEM gene was thus greater than that of the bla CTX-M gene and of both co-producer genes ( bla CTX-M + bla TEM ). The rank order of the prevalence of these genes is bla TEM > bla CTX-M > bla CTX-M + bla TEM , as depicted in Figure 4 . 3.9. Correlation of the Phenotypic and Molecular Methods concerning the bla CTX-M Genes of E. coli Altogether, n = 73 cephalosporin-resistant E. coli isolates were subjected to conventional PCR, and the comparison with the phenotypic method was made in the context of bla CTX-M gene-associated ESBL. All 73 cefotaxime-resistant Escherichia coli isolates were examined for the presence of the bla CTX-M and bla TEM genes. Twenty-five of the 35 E. coli isolates that were positive for the bla CTX-M gene by PCR were phenotypically detected by the combined disk diffusion method. The sensitivity and specificity of the phenotypic method when compared with PCR in terms of bla CTX-M genes were 69.44%, 95% CI (51.89% to 83.65%), and 42.50%, 95% CI (27.04% to 59.11%), respectively, whereas the positive predictive value (PPV) and negative predictive value (NPV) were found to be 52.08% and 60.71%. Hence, the overall accuracy of the phenotypic method for ESBL detection concerning the bla CTX-M gene was found to be 55.26%, with a 95% CI (43.41% to 66.69%). The results are summarized in Table 5 .
4. Discussion Urinary tract infection (UTI) severity varies, ranging from asymptomatic bacteriuria to bacteria ascending to the kidneys, which can subsequently lead to sepsis [ 24 ]. In the community, UTIs are the second most common infection; elderly people are more prone to them because of lower immunity, decreased secretion of various hormones, poor sanitation, and so on [ 25 ]. In our study, E. coli was isolated in culture from 165 (15.7%) urine samples. E. coli was the most prevalent bacterium from the urine samples, which is well represented by a study conducted in Nepal by Poudel et al., in which the prevalence of E. coli was estimated to be 42.4% [ 26 ]. Many other studies conducted in Nepal have reported a higher prevalence of E. coli [ 19 , 21 , 22 , 24 ]. A comparable study conducted by Batra et al. showed 23.6% culture growth among various specimens [ 27 ]. However, in our study, the growth rate was lower than in the study carried out by Kateregga et al., where the growth rate was 46.9%. Prior antimicrobial therapy may hinder the ability of organisms to flourish in culture media [ 28 ]. During antibiotic susceptibility testing, antibiotics of different classes were tested against the isolates. Among the 165 E. coli isolates, 95.7% and 86.7% were sensitive toward meropenem and gentamicin, respectively. This finding is in line with those of Sah et al. and Parajuli et al., who reported susceptibility towards meropenem of more than 80% [ 29 , 30 ]. Another study reported that 86.4% of E. coli were sensitive to carbapenem drugs such as meropenem, followed by gentamicin (72.8%) and amikacin (66%), which was nearly similar to our study; carbapenems are utilized as a supplementary therapy option for infections brought on by MDR Gram-negative pathogens [ 31 , 32 ].
Differences between our study and other research in sample types, sample sizes, organism growth rates, and antibiotic resistance patterns may account for the differing proportions of sensitive isolates across investigations. In our study, out of 165 E. coli isolates, 85 (51.5%) showed multidrug resistance (MDR), and 10 (6%) showed extensive drug resistance. The present finding on MDR was nearly similar to a previous finding from Bir Hospital in Nepal, where the rate of MDR was 67.4% [ 33 ]. However, the present MDR findings were comparatively lower than those of other studies [ 30 – 34 ]. Misuse, overuse, and inappropriate use of antimicrobial therapy by patients raise the rate of MDR. The primary contributing factor to the greater multidrug-resistant pattern may be incorrect antibiotic treatments from general practitioners, nurses, or over-the-counter medications, typically given in inappropriate doses, before presenting to the hospital [ 34 ]. The antibiotic resistance patterns of Escherichia coli have been significantly associated with the presence of various beta-lactamase genes and plasmid-borne resistance traits for quinolones and aminoglycosides. Antibiotic resistance is primarily caused by alteration of target sites, drug inactivation, modifying enzymes, and efflux pump systems [ 35 , 36 ]. In our study, 30.3% of E. coli isolates were ESBL producers, which is similar to the findings of Shilpakar et al., who reported 35.5% ESBL [ 37 ]. Another study by Uc-Cachon et al. [ 38 ] reported 83.13% ESBL among E. coli isolates, which was higher than our study. Similarly, in the report by Poudyal et al., ESBL-producing E. coli was as high as 80% [ 39 ]. Another study in Nepal showed 34.5% ESBL overall, with 33.3% observed in E. coli , which was a similar finding to our study [ 40 ]. According to a study by Shristi et al., the prevalence of ESBL-producing E. coli was just 18.2% [ 41 ].
The diverse patient populations enrolled across studies, including outpatients, inpatients, intensive care unit (ICU) patients, and patients with various underlying diseases, could be one explanation for the apparent variation among research studies conducted internationally. These characteristics of hospitalized patients introduce significant factors that contribute to antibacterial resistance. These elements may involve invasive procedures and devices, the regular administration of broad-spectrum antibiotics, more patients with co-morbid conditions, and extended hospital stays [ 42 ]. Our study revealed that the major common ESBL genes in cefotaxime-resistant E. coli isolates were TEM and CTX-M, and the prevalence of bla TEM (54.8%) was in strong concordance with an Indian study [ 43 ]. The bla CTX-M gene has also been identified as the most prevalent ESBL gene among Enterobacteriaceae [ 43 – 45 ]. Multiple gene occurrences within the same organism were also seen, with bla TEM + bla CTX-M (32.87%) being the most prevalent combination [ 19 ]. These genes are typically found on plasmids or chromosomal DNA [ 43 , 44 ]. The presence of the genes on plasmids further facilitates their transfer to different species of bacteria. The TEM beta-lactamase was the first plasmid-mediated enzyme, from which many of the ESBLs have been derived, as TEM can hydrolyze third-generation cephalosporins, particularly ceftazidime. At the beginning of the 21st century, a new class of plasmid-borne ESBLs dubbed bla CTX-M , which preferentially hydrolyze cefotaxime, became predominant in European countries and started to spread in Southeast Asia [ 44 ]. Monitoring of ESBLs should not be limited to phenotypic screening, since there have been reports illustrating the disparity between phenotypic and genotypic detection [ 45 ]. Our study is focused on the isolation of plasmids from ESBL-producing and non-ESBL-producing E.
coli isolates and PCR detection of plasmid-borne beta-lactamase genes ( bla TEM and bla CTX-M ) that confer an antibiotic resistance phenotype. The relative distribution of each gene among the E. coli isolates studied was analyzed, and the presence of ESBL-associated genes in phenotypically non-ESBL isolates was evaluated. The presence of ESBL genes (CTX-M) in phenotypically undetected cases among the tested E. coli isolates may confer an ESBL phenotype on pathogens after they acquire the needed mutation. It is therefore necessary to consider prevalent ESBL genes ( bla CTX-M ) in phenotypically undetected cases and to take appropriate steps to address the increasing frequency, rapid spread, and horizontal gene transfer of ESBL-producing bacteria. Consequently, this new information can be useful in developing accurate protocols for detecting ESBLs and treating infections. Comparing the efficiency of the phenotypic test with PCR detection as the reference method, our study revealed a sensitivity of 69.4% and a specificity of 42.5%, with an overall accuracy of 55.2%. The proportion of sensitivity differs based on the study, sample size, burden of ESBL, and nature of the genes recognized by phenotypic methods. In our study, the specificity was lower than the sensitivity, and we assume this is because of the presence of ESBL genes in phenotypically undetected strains of Escherichia coli that were resistant to cefotaxime. This study revealed data on the growing dominance of MDR E. coli and ESBL in Nepal. A previous study conducted by Regmi et al. in Nepal also revealed that 63.04% of E. coli were MDR [ 46 ]. Another study conducted by Manandhar et al. estimated 62% MDR in E. coli and found 16 ESBL-producing E. coli out of 19 ESBL-producing Gram-negative bacteria in Kathmandu, Nepal [ 47 ], determining increasing trends of MDR and ESBL production in Escherichia coli .
According to our study, 50.6%, 54.8%, and 32.8% of the 73 cefotaxime-resistant E. coli isolates had the genes bla CTX-M , bla TEM , and bla CTX-M + bla TEM , respectively, suggesting that these genes could spread from one healthcare facility to other hospitals when patients are transferred. Ongoing monitoring and surveillance of genes coding for antibiotic resistance, and of other resistance traits, supports better treatment options, discourages the use of unnecessary antibiotics to curb the spread of antibiotic-resistant E. coli , and preserves the number of antibiotics available for effective use by future generations [ 48 ]. Bacteria that are becoming increasingly resistant to regularly used antibiotics are often identified as the primary cause of prolonged infection, increased hospitalization, higher medication costs, and increased morbidity and mortality rates [ 49 ]. Antibiotic usage guidelines at the national level should be developed and implemented, and antibiotics should be administered only when required and only by medical professionals.
6. Conclusions E. coli was the predominant uropathogen in our investigation. Females were more highly infected than males. E. coli isolates were most sensitive to meropenem and gentamicin, while ampicillin was least effective. Multidrug-resistant and extensively drug-resistant E. coli increase the complications of treating UTIs. The application of the PCR method in our study determined that even phenotypic non-ESBL producers harbor ESBL genes such as TEM, CTX-M, or both; this crucial finding indicates that molecular methods should be performed alongside phenotypic methods on cefotaxime-resistant strains. Nevertheless, the phenotypic methods are easy to perform, cheap, and reasonably sensitive for the detection of ESBL, are still recommended by the CLSI guidelines, and remain useful for detection in the least developed countries like ours. To accurately estimate the burden of AMR, future research involving a larger sample size and the detection of a larger number of different genes is advised, as this can improve the detection of ESBL cases missed by the phenotypic method.
Academic Editor: Abdelaziz Ed-dra
Urinary tract infections (UTIs) are highly prevalent globally, and various antibiotics are employed for their treatment. However, the emergence of uropathogens resistant to these antibiotics causes a high rate of morbidity and mortality. This study was conducted at the Microbiology Laboratory of Grande International Hospital from November 2021 to May 2022 and aimed to assess the prevalence of UTI caused by Escherichia coli and its antibiotic susceptibility pattern, with a focus on extended-spectrum beta-lactamases (ESBLs) and the prevalence of two genes ( bla CTX-M and bla TEM ) in cephalosporin-resistant E. coli . Altogether, 1050 urine samples were processed, yielding 165 isolates of E. coli . The isolates were identified by colony morphology and biochemical characteristics. Antimicrobial susceptibility tests (ASTs) were performed by the Kirby–Bauer disk diffusion method, and ESBL enzymes were detected by the combined disk method (CDM). Two ESBL genes ( bla CTX-M and bla TEM ) were investigated by polymerase chain reaction (PCR) in cefotaxime-resistant E. coli . Among the 1050 urine samples processed, 335 (31.9%) were culture-positive, with 165 (49.2%) identified as E. coli. The age group ≥60 years (30.3%) had the greatest susceptibility to bacterial infections. AST revealed that meropenem was highly effective (95.7% susceptibility), while ampicillin showed the least sensitivity (42.4%). Among the E. coli isolates, 86 were multidrug resistant (MDR) and 10 were extensively drug resistant (XDR). Of these, 46 MDR (96%) and 2 XDR (4%) isolates were ESBL producers. The prevalence of the ESBL genes bla CTX-M and bla TEM was 49.3% and 54.8%, respectively. The overall accuracy of CDM as compared to PCR for the detection of the bla CTX-M gene was 55.26%. The prevalence of MDR E.
coli harboring the bla CTX-M and bla TEM genes underscores the imperative role of ESBL testing in accurately identifying both beta-lactamase producers and nonproducers.
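The reported overall accuracy of CDM against PCR (55.26% for bla CTX-M) is an agreement measure computed from a 2×2 cross-tabulation of test results. The sketch below shows that arithmetic; the counts are invented for illustration (chosen so that overall agreement reproduces 55.26%) and are not the study's actual breakdown.

```python
# Sketch: agreement of a phenotypic test (CDM) against PCR as the reference.
# tp/fp/fn/tn = CDM+/PCR+, CDM+/PCR-, CDM-/PCR+, CDM-/PCR- (hypothetical counts).

def diagnostic_accuracy(tp, fp, fn, tn):
    """Fraction of isolates on which CDM and PCR agree."""
    return (tp + tn) / (tp + fp + fn + tn)

def sensitivity(tp, fn):
    """Fraction of PCR-positive isolates detected by CDM."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of PCR-negative isolates correctly negative by CDM."""
    return tn / (tn + fp)

tp, fp, fn, tn = 30, 14, 20, 12  # hypothetical 2x2 counts for 76 isolates
print(f"accuracy    = {diagnostic_accuracy(tp, fp, fn, tn):.2%}")  # 55.26%
print(f"sensitivity = {sensitivity(tp, fn):.2%}")
print(f"specificity = {specificity(tn, fp):.2%}")
```

With these counts, 42 of 76 isolates agree, matching the reported 55.26% overall accuracy figure.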
5. Strengths and Limitations The prevalence of E. coli , their antibiogram, and the status of MDR, XDR, and PDR in urine samples from our single-hospital study can serve as useful additional reference data for the existing literature and for treating physicians. This study's examination of the incidence of resistance genes in ESBL-producing E. coli emphasizes the value and necessity of molecular diagnostic facilities for more accurate identification of infectious diseases. However, our study has some limitations. The prevalence of AMR cannot be generalized, as the study was confined to a single institution and used a small number of clinical samples. We were unable to characterize the resistance genotypes of other Gram-negative organisms because of limited laboratory resources and funding. Furthermore, our study did not encompass the characterization of genotypes for other Ambler class beta-lactamases, and we did not examine the presence of other ESBL gene families, such as SHV.
Acknowledgments We are grateful to the staff and faculty members of the Department of Microbiology of National College and Grande International Hospital, Kathmandu, Nepal, for their support and coordination in accomplishing the study. We express our sincere gratitude to all the patients for their active participation in our study. Abbreviations ATCC: American Type Culture Collection; CLSI: Clinical and Laboratory Standards Institute; MHA: Mueller–Hinton agar; CDM: combined disk method; bla : β-lactamase coding gene; CTX-M: Cefotaximase-Munich; CFU: colony-forming unit; ESBL: extended-spectrum beta-lactamase; GI: gastrointestinal; LF: lactose fermenting; LPS: lipopolysaccharide; MDR: multidrug-resistant; XDR: extensively drug-resistant; PDR: pandrug-resistant; NLF: nonlactose fermenting; PBP: penicillin-binding proteins; PCR: polymerase chain reaction. Data Availability All the data generated in our study are within the manuscript. The additional raw data have been uploaded in Supplementary File 1 and Supplementary File 2 . Ethical Approval Ethical approval for this study was obtained from the Institutional Review Committee (IRC) of the Grande International Hospital, Kathmandu, Nepal, GIH (Ref. No.: 03/2022). Consent Written informed consent was obtained from each patient for their voluntary participation in the study. Disclosure The research work is an experimental part of a dissertation conducted in Grande International Hospital. Conflicts of Interest The authors declare that they have no conflicts of interest. Authors' Contributions R.K.S., P.D., G.R.G., R.P., and E.T. designed the study. R.K.S. performed lab work and data collection. E.T., R.P., P.D., and G.R.G. supervised the study. R.K.S., E.T., and P.D. contributed to data curation. R.K.S. and P.D. performed data analysis. P.D. and E.T. drafted the manuscript. P.D., E.T., and R.P. reviewed, edited, and finalized the manuscript. All authors have read and agreed to the published version of the manuscript. Supplementary Materials
CC BY
no
2024-01-16 23:47:21
Can J Infect Dis Med Microbiol. 2024 Jan 8; 2024:5517662
oa_package/a8/df/PMC10789516.tar.gz
PMC10789521
37417898
Methods Establishment of the study population The study population included as cases all persons with a positive polymerase chain reaction (PCR) test for the SARS-CoV-2 virus, with the information being obtained through the system of mandatory reporting of communicable diseases in Sweden, the SmiNet registry, as previously reported ( 16 ). We restricted the cases that were eligible for inclusion to subjects aged 18–64 years and to reports that were received between 1 October 2020 and 31 December 2021 (N=683 566). This was a period of widespread testing for the virus in society, without any focus on certain occupations in the second and third waves of the pandemic. From the SmiNet registry, we extracted the Swedish personal identity number of each case and the date (index date) when the positive PCR sample was obtained. We selected four living controls for each case, matched for gender, age (case year of birth) and region of residency on the index date, from the Swedish Historical National Population Registry (N=3 404 166). We extracted information from the Swedish national socioeconomic database, called LISA (the longitudinal integration database for health insurance and labor market studies), regarding the highest educational level attained [categorized as: pre-high school (up to 9 years), completed high school, or university examination]; country of birth; and dwelling-area including the number of inhabitants in the residence. From LISA, we also obtained information about annual occupational history for the period of 2014–2020. Finally, we included only those cases (N=561 582) and corresponding controls (N=2 211 372) for whom there was information regarding occupation during the period of 2014–2020. 
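The 4:1 matched-control selection described above (controls matched on gender, year of birth, and region on the index date) can be sketched as follows. The record layout and field names are hypothetical; a real registry extraction would additionally enforce the index-date and residency conditions and the exclusions applied in the study.

```python
# Sketch of 4:1 matched-control sampling by (gender, birth_year, region) stratum.
# Records and field names are hypothetical, for illustration only.
import random
from collections import defaultdict

def select_controls(cases, population, k=4, seed=1):
    """Draw k random controls per case from that case's matching stratum."""
    rng = random.Random(seed)
    # Index the population registry by matching stratum
    strata = defaultdict(list)
    for person in population:
        strata[(person["gender"], person["birth_year"], person["region"])].append(person)
    matched = {}
    for case in cases:
        key = (case["gender"], case["birth_year"], case["region"])
        pool = [p for p in strata[key] if p["id"] != case["id"]]  # never self-match
        matched[case["id"]] = rng.sample(pool, min(k, len(pool)))
    return matched

# Toy data: one case and ten eligible controls in the same stratum
cases = [{"id": 1, "gender": "F", "birth_year": 1980, "region": "Stockholm"}]
population = [{"id": i, "gender": "F", "birth_year": 1980, "region": "Stockholm"}
              for i in range(2, 12)]
controls = select_controls(cases, population)
print(len(controls[1]))  # 4
```

In the actual study design, each control also inherits the case's index date, so that exposure and covariates are assessed at the same point in time for both.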
Among the cases, we defined two separate types: for SARS-CoV-2, all persons with a positive PCR test for SARS-CoV-2 (N=561 582) and, for severe COVID-19, the cases with a positive PCR test for SARS-CoV-2 and admission to hospital for a COVID-19-related diagnosis at some time in the period from 7 days prior to the index date to 30 days after the index date. We also included deceased persons who had a diagnosis of COVID-19 (U07.1 or U07.2) as an underlying cause of death up to 90 days after the index date. This resulted in a final population of 5985 cases with severe COVID-19. A COVID-19-related diagnosis was defined as a definitive diagnosis where three infectious disease specialists had listed it as a COVID-19-related disease, in combination with a diagnosis of U07.1 or U07.2 ( 17 ). These diagnoses were obtained from the Swedish National Hospital Discharge Registry and the Swedish National Mortality Register (supplementary material, www.sjweh.fi/article/4103 , table S1). Comorbidities We used the Swedish National Hospital Discharge Registry and the Swedish Prescribed Drug Registry to identify the following comorbidities as confounders, based on their ICD-10 codes: chronic obstructive pulmonary disease (COPD, ICD-10 J43-J44); ischemic heart disease (IHD, ICD-10 I20-I25); and diabetes mellitus (ICD-10 E10-E14) during the three years preceding the index date. We defined the use of oral and systemic corticosteroids according to the Anatomical Therapeutic Chemical (ATC) codes (ATC H02) if these drugs were dispensed at any time within one year preceding the index date. Classification of occupational exposures The occupation of each individual in the year closest to and preceding the index date, in 2014–2020, was classified at the four-digit level according to the ISCO-88 and ISCO-08 codes ( 18 , 19 ). We applied two previously described JEM to assess the risk of becoming infected with the SARS-CoV-2 virus: an international ( 9 ) and a Swedish ( 10 ) JEM ( 20 ).
The international JEM was designed to capture eight dimensions that were judged to be important for the risk of being infected, divided into the categories of low, increased, and high risk. All the dimensions were compared with the 'no risk' category – defined as homeworkers or persons not working with others. We used the Danish application of the JEM, as it was assumed to reflect Swedish conditions. Thus, in the present study, we applied the five following categories of risk dimensions: (i) number of workers in close vicinity to each other per day: high (>30), increased (10–30), and low (<10) risk; (ii) nature of contacts: high [working in workspaces with regular contacts with persons with suspected or diagnosed COVID-19 (for this application, infected patients)], increased (working with the general public), and low (working in workspaces with coworkers only) risk; (iii) contaminated workspaces: high (frequently sharing materials/surfaces with the general public ≥10 times/day), medium (sometimes sharing materials/surfaces with the general public <10 times/day), and low (frequently sharing materials/surfaces with coworkers ≥10 times/day) risk; (iv) working location: high (mostly inside for >4 hours/day), medium (partly inside for 1–4 hours/day), and low (mostly outside) risk; (v) social distancing, ie, the possibility to maintain ≥1 m of social distance: high (can never be maintained), increased (cannot always be maintained), and low (can always be maintained) risk. We also applied a Swedish JEM, based on the O*NET data, which maps physical proximity and exposure to diseases or infections, as previously described ( 10 , 20 ). This JEM has standardized scores for each occupational group, yielding scores in the range of 0–100.
Thus, the scale for physical proximity was: 0 – I do not work near other people (>30 m distance); 25 – I work with others but not in close proximity (eg, private office); 50 – I work in slightly close proximity (eg, shared office) to other persons; 75 – I work in moderately close proximity (at arm’s length) to other persons; and 100 – I work in very close proximity (near touching) to other persons. The scale for daily exposure to diseases or infections at the current workplace was as follows: 0–24 – at least once a year, but not every month (1st group); 25–49 – at least once a month, but not every week (2nd group); 50–74 – at least once a week, but not daily (3rd group); 75–100 – daily (4th group, highest exposure). We present the results in four groups of the mean scores for each dimension. Statistical methods We used a conditional logistic multivariable regression analysis to calculate the odds for a positive test for SARS-CoV-2 and for severe COVID-19 in association with the JEM-defined categories of exposures, tested as indicator variables. The basic (matched) models were adjusted only for matching strata (ie, equivalent to adjusting for gender, age, geographic region, and index date). For SARS-CoV-2 infection, we present only the matched model. The further-adjusted models additionally included education, country of birth, location/inhabitants of the residence, number of inhabitants of the residence, chronic obstructive pulmonary disease (COPD), ischemic heart disease (IHD), diabetes, and dispensed corticosteroids. These confounders were selected a priori, and the underlying assumptions are visualized in a directed acyclic graph (DAG) model (supplementary figure S1). All the JEM-defined exposure categories were tested in separate models for each exposure. We also analyzed interactions by stratification with regard to gender and metropolitan area (Stockholm).
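The binning of a 0–100 JEM score into the four analysis groups described above can be written as a small lookup function. This is an illustration of the grouping rule only (the group labels follow the exposure-to-infections scale), not the study's analysis code.

```python
# Sketch: mapping a 0-100 O*NET-based JEM score to the four exposure groups.

def exposure_group(score):
    """Return the exposure group (1 = lowest, 4 = highest) for a 0-100 JEM score."""
    if not 0 <= score <= 100:
        raise ValueError("JEM score must be in the range 0-100")
    if score < 25:
        return 1   # at least once a year, but not every month
    if score < 50:
        return 2   # at least once a month, but not every week
    if score < 75:
        return 3   # at least once a week, but not daily
    return 4       # daily: highest exposure

print([exposure_group(s) for s in (10, 30, 60, 90)])  # [1, 2, 3, 4]
```

Each occupational group's mean score is then assigned to one of these four indicator categories before entering the regression models.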
Furthermore, we analyzed the risks for SARS-CoV-2 infection and severe COVID-19 disease for all occupations (4-digit level) with >500 cases (SARS-CoV-2 infection) or >50 cases (severe COVID-19). The reference group in this analysis comprised occupations classified as having the lowest level of potential exposure by both the international and Swedish JEM (the three most common being commercial sales representatives, software developers, and advertising and marketing professionals). For SARS-CoV-2 infection, the occupations were tested in unconditional matched models with adjustments for gender, age and geographic region. For severe COVID-19 disease, the occupations were tested in the adjusted models (see above). We also performed gender-stratified analyses for the different occupations, using >25 cases for severe COVID-19 and >250 cases for SARS-CoV-2 infection as cut-offs. For SARS-CoV-2 infection, we report only the 20 occupations with the highest and lowest odds ratios (OR), respectively. All statistical analyses were performed using SAS version 9.4 M7 software (SAS Inc, Cary, NC, USA), and 95% confidence intervals (CI) were calculated.
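The study estimates OR with conditional logistic regression; as a simplified illustration of the underlying odds-ratio arithmetic only, the sketch below computes an unadjusted OR with a Woolf-type 95% CI from a single 2×2 table. The counts are invented and do not correspond to any table in the study.

```python
# Illustration of odds-ratio arithmetic (not the study's conditional logistic model).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR with Woolf 95% CI. a,b = exposed/unexposed cases; c,d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) on the log scale
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 120 exposed / 400 unexposed cases,
# 300 exposed / 1700 unexposed controls
or_, lo, hi = odds_ratio_ci(120, 400, 300, 1700)
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Conditional logistic regression generalizes this by estimating the OR within matched strata and adjusting for covariates, which is why the matched and further-adjusted models in the study can yield different estimates for the same exposure.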
Results The study comprised 561 582 cases of SARS-CoV-2 infection and 5985 cases with severe COVID-19 disease. Of the cases with severe COVID-19, 121 were deceased within 90 days of the index date. Descriptive data for the cases and corresponding controls, including the prevalences of occupational exposures, are described in table 1 . For severe COVID-19 disease, the highest odds in the matched models were seen for the dimensions of: regular contact with infected patients (OR 1.86, 95% CI 1.68–2.05), physical proximity (OR 1.86, 95% CI 1.71–2.03), and the highest exposure group of exposure to diseases or infections (OR 1.87, 95% CI 1.66–2.10) ( table 2 ). In the additionally adjusted models, the odds were generally lower but the pattern was similar. The highest odds were still for regular contact with infected patients (OR 1.37, 95% CI 1.23–1.52), physical proximity (OR 1.47, 95% CI 1.34–1.61), and the highest exposure group of exposure to diseases or infections (OR 1.72, 95% CI 1.52–1.96). The odds were moderately increased for frequently sharing materials/surfaces with the general public (OR 1.30, 95% CI 1.19–1.41), and the odds were decreased (although the confidence interval comprised unity) for mostly working outside (OR 0.77, 95% CI 0.57–1.06). For SARS-CoV-2 infection, in the matched models, the highest odds were for the dimensions of regular contact with infected patients (OR 1.37, 95% CI 1.35–1.38), physical proximity (OR 1.38, 95% CI 1.37–1.40), and the highest exposure group of exposure to diseases or infections (OR 1.44, 95% CI 1.42–1.45) ( table 2 ). The odds for SARS-CoV-2 infection were decreased for mostly working outside (OR 0.83, 95% CI 0.80–0.86). The results were similar for men and women, both regarding severe COVID-19 disease ( table 3 ) and SARS-CoV-2 infection (supplementary table S2). We also separately analyzed the metropolitan area of Stockholm (the capital city) and found similar results (data not shown).
Table 4 lists the odds for severe COVID-19 disease in all the occupations with >50 cases, as compared with the unexposed control occupations. The five occupations with the highest odds for severe COVID-19 disease were bus and tram drivers, nursing professionals, primary school teachers, early childhood educators and childcare workers. Of note is that heavy truck and lorry drivers did not have increased odds for severe COVID-19. Table 5 lists the odds for the 20 occupations with the highest odds for SARS-CoV-2 infection. The five occupations with the highest odds for SARS-CoV-2 infection were prison guards, early childhood educators, primary school teachers, firefighters, and midwives. The 20 occupations with the lowest OR for SARS-CoV-2 infection are listed in supplementary table S3. The odds for severe COVID-19 among men and women are shown in supplementary table S4. Among men, the highest odds were for bus and tram drivers (OR 2.04, 95% CI 1.49–2.79), security guards and elementary workers. Among women, the highest odds were for certified specialist physicians (OR 2.05, 95% CI 1.31–3.21), early childhood educators, and nursing professionals. The odds for SARS-CoV-2 infection among men and women are shown in supplementary tables S5 and S6.
Discussion In the present study with national coverage, we show that during the second and third waves of the pandemic, close contact with infected or diseased patients/persons and close physical proximity still increased the odds of having severe COVID-19 disease, as well as of having been infected with the SARS-CoV-2 virus. Outdoor work seems to be protective against infection. The observed pattern among the occupations supports the notion that contact with infected patients/persons and close proximity persist as important risk factors. A major strength of our study design is that we used national databases with high levels of coverage to assess the outcomes of interest: SARS-CoV-2 infection and severe COVID-19 disease. We also utilized a more specific definition of severe COVID-19 disease compared with most other studies. In addition to an in-hospital diagnosis and the COVID-19-related U07 diagnosis, we required a diagnosis that was clinically linked to COVID-19 disease ( 17 ). This diminishes the risk of false positive associations, as we avoid classifying as cases persons with unrelated diseases and incidentally detected COVID-19, ie, comorbidity not clearly related to COVID-19. We used the SmiNet registry, which has comprehensive coverage of all SARS-CoV-2 tests in Sweden, although we acknowledge that not all cases with positive detection of the virus will be captured. We also used the Swedish Inpatient Register, which is acknowledged to be of high quality ( 21 ). However, we acknowledge that the diagnoses may be misclassified, but as we studied the younger part of the population (<65 years), we consider that this misclassification will not severely bias the results. Another strength of our study is that we employed random controls from the same national population. Furthermore, we were able to consider a number of key potential confounders using Swedish registry data.
These confounders included level of education (as a proxy for socioeconomic status, SES), living density in dwellings, and comorbidities that might modify the risk, such as diabetes, COPD, IHD and dispensed corticosteroids. Despite these adjustments, we cannot exclude residual bias from, for instance, smoking habits or use of public transportation. We did not control for COVID-19 vaccination. However, in the latter part of our study period, in December 2021, 80.5% of the adult population had received two doses of COVID-19 vaccine. A study from the County of Stockholm analyzed the age-standardized prevalence of vaccination against COVID-19 in different occupations, covering the period until January 2022 ( 22 ). Among women, the high-risk occupations of certified specialist physicians, early childhood educators and nursing professionals had rather high prevalences of vaccination: 86.0%, 75.7% and 82.9%, respectively. Among men, the prevalences in high-risk occupations were lower: bus and tram drivers (63.7%), security guards (76.5%) and elementary workers (66.6%). From these data, it is difficult to conclude whether the vaccinations were effective in protecting workers at risk. Another limitation regarding the occupational analyses is the risk of false positive (or negative) results due to multiple testing. Hence, the results regarding the different occupations have to be regarded as hypothesis-generating. A key analytic strength of this study is our approach to categorizing occupational exposure. It is generally acknowledged that the JEM approach avoids the recall bias inherent to respondent-elicited exposure histories ( 23 ). Furthermore, we limited the analyses to occupational exposures during the year preceding the diseased state, as we assumed that this period was critical in terms of increased risk. The Swedish JEM for proximity and exposure to diseases is based on data collected in the US ( 24 ).
We do not consider this to be a problem, although US military personnel were not included; military personnel constitute a very small fraction of the Swedish working-age population. We did not include the dimension of face covering, which has been applied differently in Sweden compared to many other countries. In addition, we did not include the dimensions of income insecurity or migrant background. The reasons for this are twofold. First, we have data on socioeconomic status and migrant status at the individual level from our national registries. Second, we used the Danish application of the international JEM, and we consider that the Swedish and Danish labor markets differ with regard to migration and income security. We performed the JEM analyses based on assumptions of exposure to infected persons and the risk of close proximity. The subsequent analyses of a high number of occupations may have resulted in some spurious associations of increased or decreased risk for some occupations due to random variation, regardless of the presence of statistical significance. We may also have missed some occupations that are at risk due to low numbers, ie, occupations with <50 persons with COVID-19 disease or <500 SARS-CoV-2-positive persons; in the gender-stratified analyses, the corresponding limits were 25 and 250 persons. Therefore, cautious interpretation and critical discussion are needed before the results for specific occupations are communicated ( 3 ). The main outcome of the present study is that close contacts in the workplace (ie, physical proximity and contact with infected or diseased persons or patients) increase the risks for severe COVID-19 and a positive test for SARS-CoV-2. Even if this is in line with inferences drawn from studies that have looked at specific occupations, few studies have investigated these dimensions in such detail.
In a British study, workers who had close daily contacts with others were more likely to be seropositive for SARS-CoV-2 compared to homeworkers ( 8 ). The results of meta-analyses also support the idea that physical distancing in general decreases the incidences of SARS-CoV-2 infection and COVID-19 ( 25 ). In a recent study that applied the British version of the international JEM, associations were noted between SARS-CoV-2 infection and the number of contacts and social distancing, and the authors observed that in three domains – number, nature of contacts, and social distancing – there was an exposure–response relationship between the exposure levels and risk of SARS-CoV-2 infection ( 26 ). In a British study that applied the O*NET-based JEM, frequent occupational exposure to disease/infections and working in close proximity with others were associated with increased risk for a positive SARS-CoV-2 test ( 27 ). Taken together, our results and those of previous studies support the reasonable conclusion that close physical proximity in the workplace and contacts with infected/diseased persons/patients increase the risks for a positive test for SARS-CoV-2 and the clinical disease of COVID-19. In the present study, frequently sharing materials or surfaces with the general public did not appear to be a strong risk factor. In the British application of the international JEM, the importance of sharing surfaces was also uncertain, showing either a lack of dose–response or no increase in risk ( 26 ). The SARS-CoV-2 virus is often detected on surfaces, although whether or not these viruses are viable remains uncertain ( 28 ). This may explain the unclear results with regard to this dimension. In our study, outdoor work seems to be protective. However, in other studies, the converse has been found. In the British application of the international JEM, outdoor work was associated with increased odds for SARS-CoV-2 infection ( 26 ). 
In a US study, outdoor workers had higher odds for SARS-CoV-2 infection compared to indoor workers ( 29 ). One possible explanation for these inconsistencies is that, although virus concentrations are generally likely to be attenuated in larger outdoor air volumes, outdoor workers may to a varying degree share high-risk indoor facilities, eg, for breaks and changes of clothes. Hence, our observation may be highly dependent on such specific circumstances and may not be applicable to other contexts. Thus, our observations have to be replicated to reduce uncertainty regarding the evidence. Our findings regarding different occupations and the risks for both SARS-CoV-2 infection and severe COVID-19 disease are broadly consistent with results reported from smaller studies conducted in other countries and contexts. Our results showing increased risks for nurses, midwives and certified specialist physicians corroborate earlier studies reporting increased risks among healthcare workers ( 3 , 4 , 6 , 8 , 30 , 31 ). Other occupations that involve close contacts with both the general public and infected persons are taxi drivers and bus drivers. In a Chinese study, using a taxi more than once a week was a clear risk factor for severe acute respiratory syndrome (SARS) ( 32 ). Infected persons may use public transportation; a British study noted that during the influenza season of 2008–2009, patients attending their primary care physician had more often used the bus or tram prior to coming in contact with their physician, as compared with controls ( 33 ). This may represent a way for bus/tram drivers to become infected. We found almost doubled odds for severe COVID-19 disease among bus and tram drivers, although such a relationship with a positive test for SARS-CoV-2 was not found among these workers. In an Italian study, bus drivers (especially male drivers) had an almost three-fold increased risk of COVID-19 ( 34 ).
A similar occupation, albeit with less contact with the public, is heavy truck and lorry driving. This occupation did not show increased odds for severe COVID-19. Therefore, we conclude that bus and tram drivers are probably at increased risk for severe COVID-19 disease due to occupational exposure. We also observed increased odds, both for severe COVID-19 and SARS-CoV-2 infection, among primary school teachers and early childhood educators. In Sweden, both elementary schools and kindergartens remained open during the pandemic and, with some exceptions, so did the upper secondary schools. The Swedish COVID-19 Commission concluded that keeping schools open was a clear benefit for the children and for society ( 15 ). We agree with this, although the increased risk of disease for the front-line workers in primary schools and kindergartens underscores the need for additional safety measures to reduce viral transmission and for maintaining a high frequency of vaccination in these occupational groups. Teachers in primary schools and kindergartens are mostly in the age groups for which the vaccination rates are rather low. The gender stratification of occupations clearly showed that among women the occupations associated with increased risk were in health care and childcare: certified specialist physicians, early childhood educators, nursing professionals and childcare workers. Among men, the increased risk was for other kinds of service occupations, such as bus and tram drivers, security guards, elementary workers and kitchen helpers. We conclude that close contact with infected or diseased patients/persons and close physical proximity increase the odds of having a positive test for the SARS-CoV-2 virus or of suffering from severe COVID-19 disease. The findings for different occupations support the hypothesis that contact with infected patients/persons and close proximity are important risk factors.
The results indicate that occupational groups outside the healthcare sector may also be considered for occupational compensation. There is a need to introduce additional safety measures, including vaccinations, to reduce viral transmission in these work environments. Ethics approval The Gothenburg Ethics Committee approved the study (Dnr 04792-19).
Objective This study aimed to investigate whether workplace factors and occupations are associated with SARS-CoV-2 infection or severe COVID-19 in the later waves of the pandemic. Methods We studied 561 582 cases with a positive test for SARS-CoV-2 in the Swedish registry of communicable diseases, and 5985 cases with severe COVID-19 based on hospital admissions from October 2020 to December 2021. Four population controls were assigned the index dates of their corresponding cases. We linked job histories to job-exposure matrices to assess the odds for different transmission dimensions and different occupations. We used adjusted conditional logistic analyses to estimate odds ratios (OR) for severe COVID-19 and SARS-CoV-2 with 95% confidence intervals (CI). Results The highest OR for severe COVID-19 were for regular contact with infected patients (OR 1.37, 95% CI 1.23–1.52), close physical proximity (OR 1.47, 95% CI 1.34–1.61), and high exposure to diseases or infections (OR 1.72, 95% CI 1.52–1.96). Mostly working outside had lower OR (OR 0.77, 95% CI 0.57–1.06). The odds for SARS-CoV-2 when mostly working outside were similar (OR 0.83, 95% CI 0.80–0.86). The occupation with the highest OR for severe COVID-19 (compared with low-exposure occupations) was certified specialist physician (OR 2.05, 95% CI 1.31–3.21) among women and bus and tram drivers (OR 2.04, 95% CI 1.49–2.79) among men. Conclusions Contact with infected patients, close proximity and crowded workplaces increase the risks for severe COVID-19 and SARS-CoV-2 infection. Outdoor work is associated with decreased odds for SARS-CoV-2 infection and severe COVID-19.
Occupational exposures and dimensions are important determinants of respiratory infections ( 1 ). This has been evident in the coronavirus disease (COVID-19) pandemic caused by infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ( 2 ). Several studies have identified higher risk levels for different occupational groups, mainly healthcare and transportation workers, as well as occupations that involve personal service duties ( 3 – 7 ). These studies have indicated that workers who come in close contact with either the general public or infected patients are at increased risk for COVID-19. However, few studies have analyzed the importance of different occupation-related contact modalities. In a British study, it has been shown that workers who were in close contact daily with others had a higher prevalence of antibodies against SARS-CoV-2, as compared to homeworkers ( 8 ). Several job-exposure matrices (JEM) have been developed to assess workplace factors that are associated with exposure to SARS-CoV-2 (9–11). The majority of the studies conducted to date have focused on the first wave of the pandemic, and only a few have examined the development of occupational risk in the subsequent waves of the pandemic. There are probably considerable differences between the waves depending on the extent of implementation of preventive measures, which include improved access to appropriate personal protection equipment, adherence to disease-prevention guidelines, and expanded vaccination programs. In addition, more-contagious SARS-CoV-2 variants emerged, increasing viral transmission in society ( 12 , 13 ). The extent of this may also differ between occupational groups ( 14 ). It is of importance to emphasize that the course of the pandemic and its associated occupational risks should be discussed in the specific context of a given country. 
Sweden differs from many other countries in that there were less-strict rules regarding social distancing and the use of face masks, and schools were not closed ( 15 ). During the first wave of the pandemic, there was selective screening of SARS-CoV-2 for healthcare workers, which is why analyses of occupational risks based on positive tests for SARS-CoV-2 might be biased for that period. We hypothesized that, even in the later waves of the pandemic, workers who were in close contact with either the general public or with infected patients had increased odds for severe COVID-19 and SARS-CoV-2 infection.
Conflicts of interest The authors declare no conflicts of interest. Data availability statement The study outcomes are based on matching with Swedish national registers. Data are available upon reasonable request. Requesters for data sharing need to have a Swedish ethical permit. Funding This study was supported with grants from the Swedish Heart and Lung Foundation, Swedish Council for Working Life, Health, and Welfare (FORTE) (no. 2021-00304), and the Swedish State under the agreement between the Government of Sweden and the County Councils, the ALF agreements (74570, 77990 and 965149).
CC BY
no
2024-01-16 23:47:21
Scand J Work Environ Health. 49(6):386-394
oa_package/33/f4/PMC10789521.tar.gz
PMC10789547
38226091
Introduction Vertebral osteomyelitis (VO) is a spinal infection resulting in inflammation and destruction of the vertebral bodies and intervertebral discs [ 1 - 3 ]. VO can be caused by bacteria, fungi, or other organisms, with Staphylococcus aureus being the most common pathogen of pyogenic vertebral osteomyelitis (PVO), responsible for 55%-80% of cases [ 1 - 3 ]. This condition is increasingly prevalent in the elderly [ 4 ]. The proportion of patients aged 60 years or more among patients with PVO is 52% in Japan, and a similar trend has been reported in several studies [ 5 - 7 ]. Outcomes of treatment of PVO, such as the incidence of paralysis and mortality within one year, are worse in the elderly than in younger adults [ 5 , 6 ]. Thus, the treatment of PVO in the elderly is an important issue. Traditional treatment options for PVO include antibiotics and surgical intervention, such as debridement and instrumentation, which are effective in some cases and ineffective in others [ 8 - 10 ]. When PVO is complicated by iliopsoas and epidural abscesses, treatment becomes complex and demands careful management. Patients should be carefully monitored for the appearance of neurologic symptoms and, in some cases, strict bed rest is required [ 11 - 14 ]. Additionally, the duration of antibiotic therapy, including oral antibiotics, may reach 6-12 weeks [ 15 ]. These abscesses are serious conditions strongly associated with poor clinical outcomes and higher mortality in patients with PVO [ 16 - 18 ]. Continuous local antibiotic perfusion (CLAP) has been reported to have good results in the treatment of difficult-to-treat infections and instrumentation-related infections [ 19 - 21 ]. CLAP administers appropriate (high) concentrations of antibiotics locally and reduces the systemic side effects of antibiotics because it applies continuous drainage via negative pressure [ 19 - 21 ].
Gentamicin was selected as the antibiotic because it is bactericidal, concentration-dependent, and its blood levels can be monitored [ 13 - 15 ]. The usefulness of CLAP is that it can eliminate antimicrobial-resistant bacteria and biofilm-associated bacteria, and control implant-associated infections while sparing the implants. In spine surgery, CLAP has been reported to have favorable outcomes for postoperative surgical site infections [ 20 ]. Therefore, CLAP has been proposed as a novel approach to treating difficult-to-treat infections. Despite this, there have been no documented instances of its application in cases of PVO accompanied by iliopsoas and epidural abscesses. Conventionally, CLAP has been utilized within compact, enclosed spaces, such as the medullary cavity or subcutaneous tissue, because it requires negative pressure for continuous perfusion [ 19 - 21 ]. This requirement is likely why its application in more expansive spaces like the retroperitoneal region remains unexplored. The present article presents a unique case of PVO with iliopsoas and epidural abscesses, successfully managed with a retroperitoneal CLAP procedure.
Discussion The present case illustrates two important findings. First, it elucidates that PVO with iliopsoas and epidural abscesses could be treated by combining anterior-posterior fixation with retroperitoneal CLAP. Second, it demonstrates the feasibility of the CLAP procedure for the retroperitoneal space, vertebral body, and intervertebral disc through an approach similar to the LLIF technique. A previous report demonstrated the feasibility and usefulness of CLAP from the posterior side for surgical site infection in four patients after spinal instrumentation surgery [ 20 ]. However, there have been no reports of CLAP from the anterior or lateral side for approaching the vertebral bodies, intervertebral discs, or iliopsoas muscle. For retroperitoneal CLAP, it is necessary to place the tube close to the site of infection to increase the local antimicrobial concentration, because the retroperitoneal space is large [ 23 ]. In addition, there is a possibility that the tube may deviate from its intended position. In the present case, the placement of the tube in the scar or the iliopsoas muscle prevented tube deviation. The contrast agent injected through the dual-lumen tubes had a contrast effect within the iliopsoas muscle and intervertebral space, confirming that the antibiotics were correctly delivered to the infected site even when placed in the retroperitoneal space, which is sparser and more complex than posterior structures [ 23 ]. Moreover, no complications due to the spread of gentamicin over this large space occurred. When infection control is anticipated, CLAP at other musculoskeletal sites is usually completed within two weeks. Thus, in this case, multiple blood tests were performed within two weeks postoperatively and contrast studies were performed one week postoperatively. However, the optimal timing and frequency of blood tests and contrast studies remain unclear. 
Further cases are needed to establish the appropriate timing and frequency of these tests to prevent inappropriate perfusion of gentamicin. Our second key finding is that treatment combining anterior-posterior fixation with retroperitoneal CLAP was effective for PVO with iliopsoas and epidural abscesses. Conventional surgical treatments can sometimes result in poor outcomes [ 2 ]. Factors like advanced age, diabetes mellitus, and immunocompromised conditions are associated with severe symptoms of PVO [ 2 ]. Furthermore, incomplete debridement, antimicrobial-resistant bacteria, and negative cultures contribute to poorer outcomes in the surgical treatment of PVO [ 24 ]. The high concentration of gentamicin used in CLAP is effective regardless of the susceptibility of the bacteria. Therefore, CLAP might be a viable option in cases of suspected resistant bacterial involvement or when choosing the appropriate antibiotic is challenging due to negative cultures. However, spinal fusion surgery, including anterior debridement, is a standard and useful treatment for refractory PVO with severe vertebral body destruction and instability [ 25 , 26 ]. In this case, these treatments were the primary contributors to infection control. Further studies are needed to evaluate the effectiveness and safety of retroperitoneal CLAP.
Conclusions This case demonstrates that PVO with iliopsoas and epidural abscesses could be treated by combining anterior-posterior fixation with retroperitoneal CLAP. Additionally, it highlights the clinical feasibility of using retroperitoneal CLAP, or “anterior CLAP in spine surgery.” Retroperitoneal CLAP may be a treatment option for refractory PVO.
Pyogenic vertebral osteomyelitis (PVO) is a prevalent infection in the elderly, frequently complicated by iliopsoas and epidural abscesses. Traditional treatments are often ineffective for refractory cases. In this report, a 76-year-old man with PVO complicated by iliopsoas and epidural abscesses was unresponsive to antibiotics, presenting with severe lower back pain and functional impairments. A two-stage surgical intervention was implemented: anterior debridement, autogenous bone graft fixation, and a novel application of retroperitoneal continuous local antibiotic perfusion (CLAP), followed by posterior fixation. A contrast test verified correct CLAP perfusion into the iliopsoas abscess and intervertebral disc space. Substantial improvements were noted postoperatively, including a marked reduction in pain, inflammation, and the size of both abscesses. In conclusion, this case demonstrates the feasibility and effectiveness of retroperitoneal CLAP in treating refractory PVO, offering a potential innovative solution for cases resistant to conventional therapies.
Case presentation A 76-year-old man presented with severe back pain and fever. His medical history included immunoglobulin A (IgA) nephropathy, atrial fibrillation, diabetes mellitus, and an abdominal aortic aneurysm that had been previously stented. Laboratory parameters showed a high white blood cell count (10,900 per μL) and elevated C-reactive protein (CRP, 34.48 mg/dL). He had no other infection source or previous infections and was diagnosed with PVO based on his spinal MRI findings. He had been treated with intravenous antibiotics and bed rest. Initially, he was treated with cefazolin sodium (1.0 g every 12 hours) based on the susceptibility of Escherichia coli detected in his first blood culture. However, his laboratory parameters did not fully improve after eight weeks of initial treatment. Subsequently, he was treated with tazobactam/piperacillin (4.5 g every 8 hours) because his second blood culture identified extended-spectrum beta-lactamase-producing Escherichia coli. Despite four weeks of this second treatment, there was no improvement in his laboratory parameters and back pain. As a result, after three months of treatment with intravenous antibiotics and bed rest, he was transferred to our hospital for surgical treatment. Upon admission, he had difficulty walking and sitting due to back pain but exhibited no neurological deficits. Laboratory parameters showed decreased hemoglobin (8.0 g/dL) and elevated CRP (3.64 mg/dL), suggesting chronic inflammation (Table 1 ). One month after the second blood culture, his third blood culture was taken and was negative at the time of admission to our hospital. Radiological findings revealed diffuse idiopathic skeletal hyperostosis above the T12 vertebral body and bone destruction at the L2 inferior endplate and L3 superior endplate. Computed tomography showed bone destruction of the L2 and L3 vertebral bodies, with the L3 vertebral body showing a gas-forming infection (Figure 1 ). 
MRI showed high signal intensity at the L2-3 disc on T2-weighted images and a large multifocal cyst within the right iliopsoas muscle (Figures 2A , 2B ). At the level of the L3 vertebral body, epidural fluid was shown on T2-weighted images (Figure 2C ). Based on these findings, the diagnosis was PVO with right iliopsoas and epidural abscesses. Intravenous meropenem hydrate was empirically started with reference to past blood culture results, and a two-stage surgical approach was planned. The first stage included anterior debridement, anterior fixation using a mesh cage with an autogenous bone graft, and a retroperitoneal CLAP procedure. Anterior fixation was chosen because a large bony defect could be created if the vertebral body with gas-forming infection was sufficiently debrided. Considering that infection control of the gas-forming infection of the L3 vertebral body and the large iliopsoas abscesses might be difficult with anterior debridement and fusion alone, we elected to add a retroperitoneal CLAP procedure. The second stage included posterior fixation with a percutaneous pedicle screw technique from the T12 to L5 vertebral body (Figure 3 ). The retroperitoneal CLAP procedure was performed with the patient in a left lateral decubitus position to access the right iliopsoas muscle. After an incision along the tenth rib and its resection, the lateral lumbar interbody fusion (LLIF) technique was used to access the psoas major muscle and the L2-3 intervertebral disc. After debridement of the iliopsoas abscess, intervertebral disc, and necrotic vertebral body, a mesh cage filled with autologous iliac bone and ribs was inserted. Subsequently, two dual-lumen tubes (Salem Sump Tube, Cardinal Health, Dublin, Ohio, US) were inserted percutaneously through a site separate from the original skin incision (Figure 3C ). 
A 16 Fr dual-lumen tube was placed along the retroperitoneal curvature to the intervertebral disc, and its tip was inserted into a pocket created by ligating the scar tissue and the iliopsoas muscle. A 24 Fr dual-lumen tube was inserted into the capsule of the iliopsoas muscle abscess. The wound was closed and covered with negative-pressure wound therapy (NPWT, Renasys; Smith & Nephew Medical, Kingston upon Hull, UK). The suction pressure of the NPWT was set at -40 mmHg. The suction ports of the dual-lumen tubes were connected with a Renasys Y-connector to apply a common negative pressure. Gentamicin (60 mg/50 mL) was continuously administered using a syringe pump at a low flow rate (2 mL/h) through the dual-lumen tubes [ 19 - 21 ]. Posterior fixation was performed two days after the anterior surgery. Immediately after the surgery, the patient’s lower back pain improved markedly. Rehabilitation was started without restrictions while wearing a rigid brace, even during CLAP treatment. One week postoperatively, a contrast test using a diluted nonionic contrast agent (iopamidol) through the two dual-lumen tubes displayed adequate contrast in the iliopsoas abscess cavity and L2-3 intervertebral disc (Figure 4 ). Iopamidol was selected because it has been reported that it can be administered into the retroperitoneal space [ 22 ]. Two weeks postoperatively, CRP laboratory values improved to 0.35 mg/dL (Table 1 ), and CLAP treatment was concluded. He was able to walk with a walker. Blood gentamicin levels were maintained in a safe range during the CLAP treatment (Table 1 ). Three weeks postoperatively, MRI indicated a substantial reduction of the iliopsoas and epidural abscesses (Figure 5 ). Six weeks postoperatively, CRP values improved to 0.12 mg/dL (Table 1 ), and antibiotics were changed from intravenous meropenem hydrate (0.5 g every 12 hours) to oral sulfamethoxazole-trimethoprim (2 tablets of 400 mg of sulfamethoxazole and 80 mg of trimethoprim every 12 hours). 
He was able to walk with a T-shaped cane. Twelve weeks postoperatively, he achieved independent gait.
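For orientation, the perfusion settings reported above (gentamicin 60 mg in 50 mL, infused at 2 mL/h) determine the locally delivered daily dose. The small helper below simply makes that arithmetic explicit; the function name is ours, and it assumes the stated 2 mL/h is the combined flow through both tubes, which the report does not specify.

```python
def daily_gentamicin_dose_mg(dose_mg: float, diluent_ml: float, flow_ml_per_h: float) -> float:
    """Gentamicin delivered per 24 h of continuous perfusion, in mg.

    Concentration (mg/mL) times flow (mL/h) times 24 h.
    """
    concentration_mg_per_ml = dose_mg / diluent_ml  # 60 mg / 50 mL = 1.2 mg/mL
    return concentration_mg_per_ml * flow_ml_per_h * 24

# Parameters from the case: 60 mg in 50 mL at 2 mL/h.
print(f"{daily_gentamicin_dose_mg(60, 50, 2):.1f} mg/day")  # 57.6 mg/day
```

At roughly 57.6 mg/day delivered directly to the infected site, the rationale for monitoring blood gentamicin levels during CLAP, as was done here, is clear.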
ShO: Data curation, Writing - original draft, Conceptualization, Methodology; MI: Conceptualization, Writing - review & editing; NT: Conceptualization; KO: Writing - review & editing; ST: Writing - review & editing; NS: Writing - review & editing; YS: Writing - review & editing, Conceptualization; KI: Writing - review & editing, Methodology; YE: Writing - review & editing, Project administration; SuO: Writing - review & editing, Project administration; SeO: Supervision, Project administration
CC BY
no
2024-01-16 23:47:21
Cureus.; 15(12):e50636
oa_package/8e/4c/PMC10789547.tar.gz
PMC10789563
37951143
Materials and methods Generation of human induced pluripotent stem cells (iPSCs) PBMCs were isolated from the patients’ whole blood samples by Percoll gradient separation, purified by multiple washes in DPBS, and cultured in StemPro®-34 SFM medium supplemented with 100 ng/mL SCF, 100 ng/mL FLT3, 20 ng/mL IL-3, 20 ng/mL IL-6, and 20 ng/mL EPO (ThermoFisher Scientific). The CytoTune™-iPS 2.0 Sendai Reprogramming Kit (ThermoFisher Scientific) was used for reprogramming the PBMCs following the manufacturer’s instructions. Cell culture The iPSCs were passaged at 90% confluency using Gentle Cell Dissociation Reagent (STEMCELL Technologies). Detached cells were resuspended in Brew medium with 5 μM ROCK1 inhibitor (SelleckChem) and replated onto Matrigel-coated (1:500) 6-well plates. Cells were cultured for 24 hr at 37 °C with 5% CO₂, after which the medium was replaced with Brew medium every two days. RNA extraction and RT-qPCR Total RNA was extracted from iPSCs at passage 13 using the miRNeasy Micro Kit (Qiagen), and cDNA was synthesized using the iScript™ Reverse Transcription Supermix (BIO-RAD). Target genes were examined using the TaqMan™ Universal PCR Master Mix (ThermoFisher Scientific) and probes ( Table 2 ). Immunofluorescence staining Cells at passage 18 were fixed with 4% paraformaldehyde for 20 min. After two washes with DPBS, the cells were permeabilized using 0.1% Triton X-100 in DPBS for 10 min. Subsequently, blocking was performed with 10% goat serum in DPBS for 1 hr. Cells were then incubated with primary antibodies ( Table 2 ) overnight at 4 °C. On day 2, the cells were washed with DPBS and incubated with the corresponding secondary antibodies ( Table 2 ) for 1 hr at room temperature. The nuclei were counterstained with NucBlue Probes (ThermoFisher Scientific). Karyotyping Karyotyping was performed on iPSCs at passage 12 using the KaryoStat™ assay (ThermoFisher Scientific). 
Targeted sequencing Genomic DNA was extracted using the QuickExtract™ DNA Extraction Solution (Biosearch Technologies). The PCR assay was performed using PrimeSTAR GXL DNA Polymerase (Clontech) and the primers ( Table 2 ) under the following conditions: 98 °C for 5 s, 60 °C for 15 s, and 72 °C for 30 s, for 35 cycles. The PCR products were purified, and sequencing was performed at the Stanford Protein and Nucleic Acid (PAN) facility. Mycoplasma detection The mycoplasma test was conducted on iPSCs at passage 13 using the MycoAlert™ Detection Kit (Lonza), following the manufacturer’s instructions. Trilineage differentiation iPSCs at passage 17 were used for differentiation into all three germ layers. The StemXVivo Ectoderm Kit (R&D Systems) and the STEMdiff™ Definitive Endoderm Differentiation Kit (STEMCELL Technologies) were used to derive the ectoderm and the endoderm, respectively. Mesoderm differentiation was induced by 6 μM CHIR-99021 (Selleck Chemicals) in RPMI medium supplemented with B27 minus insulin for 48 hr. Short tandem repeat (STR) analysis Genomic DNA was extracted using the DNeasy Blood & Tissue Kit (Qiagen). PCR and capillary electrophoresis were performed using the CLA IdentiFiler™ Direct PCR Amplification Kit and an ABI 3130xl at the Stanford PAN facility.
We generated two induced pluripotent stem cell (iPSC) lines from peripheral blood mononuclear cells (PBMCs) of breast cancer patients carrying germline mutations in ATM, a gene associated with a 7% prevalence in breast cancer. These iPSC lines displayed typical morphology, expressed pluripotency markers, maintained a stable karyotype, and retained the ability to differentiate into the three germ layers. These patient-specific iPSC lines hold great potential for mechanistic investigations and the development of drug screening strategies aimed at addressing ATM-related cancer.
Resource utility Germline mutations in the ATM gene are known to cause the autosomal recessive disease Ataxia Telangiectasia and also confer an increased risk of developing breast cancer. Generating iPSC lines carrying ATM variants provides an unlimited source for disease modeling, gene therapy, and screening of compounds for potential therapeutic effects ( Table 1 ). Resource details The Ataxia Telangiectasia Mutated (ATM) gene, also known as ATM serine/threonine kinase, is a crucial tumor suppressor gene that belongs to the phosphatidylinositol 3-kinase-related protein kinase (PIKK) superfamily. It plays a pivotal role in DNA repair and cell cycle control ( Moslemi et al., 2021 ). Mutations in the ATM gene are the primary cause of Ataxia Telangiectasia, an autosomal recessive neurodegenerative disorder. Recent studies have revealed a significant association between ATM variants and the risk of multiple types of cancers, particularly breast cancer ( Stucci et al., 2021 ). Carriers of pathogenic ATM variants have a 2 to 4-fold increased risk of developing breast cancer ( McDuff et al., 2021 ), especially early-onset cancer and bilateral breast cancer ( Renwick et al., 2006 ). To understand these genetic associations, we successfully generated and characterized iPSC lines derived from female donors carrying specific ATM variants: c.4143dup and c.5697C > A, respectively. These iPSC lines serve as renewable and genetically relevant cellular models for investigating disease pathology and conducting drug screening for precision medicine. We recruited two breast cancer patients, a 72-year-old White female who developed breast cancer, stage II (T2N1M0), of the left breast and a 23-year-old East Asian female who developed breast cancer, stage IV (T3N3M1), of the left breast. Genetic testing demonstrated that they carried the pathogenic variants ATM c.4143dup (ClinVar ID: 181880) and c.5697C > A (ClinVar ID: 421488), respectively. 
Using a Sendai virus-based vector carrying the Yamanaka factors OCT4, SOX2, KLF4 , and c-MYC ( Mondéjar-Parreño et al., 2021 ), we successfully generated iPSCs from the patients’ peripheral blood mononuclear cells (PBMCs), named SCVIi083-A and SCVIi084-A. Both lines exhibited typical stem cell morphology when observed under a bright-field microscope ( Fig. 1A ). These cells expressed pluripotency markers (SOX2, NANOG, and POU5F1) and lost expression of the Sendai virus vector (SEV), as shown by quantitative RT-PCR ( Fig. 1C ). The iPSCs were further analyzed for pluripotency markers using immunofluorescence staining ( Fig. 1B ). The iPSC lines tested negative for mycoplasma ( Fig. 1D ). Karyotype analysis exhibited normal female chromosomes ( Fig. 1E ). Sanger sequencing demonstrated a heterozygous mutation of c.4143dup in SCVIi083-A and c.5697(C > A) in SCVIi084-A ( Fig. 1F ). Short tandem repeat (STR) analysis of the parental PBMCs and derived iPSCs confirmed clonal identity (submitted in the archive with the journal). Both lines were successfully differentiated into the three germ layers ( Fig. 1G ).
Acknowledgments This work was supported by National Institutes of Health 75N92020D00019, R01 HL141851, R01 HL141371, R01 HL150693, R01 HL163680 (JCW), and the Tobacco-Related Disease Research Program (TRDRP) T32FT4853 (M. Z.). Data availability Data will be made available on request.
CC BY
no
2024-01-16 23:47:21
Stem Cell Res. 2023 Dec 6; 73:103246
oa_package/4b/50/PMC10789563.tar.gz
PMC10789578
37196659
INTRODUCTION The prefrontal cortex (PFC) plays a central role in orchestrating the activities of various brain regions. The cognitive and emotional functions of the PFC enable complex behaviors, and its malfunction contributes to diverse mental disorders. 1 , 2 PFC contains a mosaic of distinct cortical areas, with an estimated 45 PFC areas out of 180 total neocortical areas in humans 3 and 35 PFC areas out of 130 total in macaques, 4 and the marmoset has 26 PFC areas out of a 117-area parcellation. 5 These areas have complex patterns of connections with many cortical and subcortical regions. 1 , 2 , 6 , 7 However, our knowledge of PFC connectivity remains fragmentary, particularly for quantitative connectivity data of primates. An existing retrograde tracer database covers most but not all PFC subregions, 6 , 8 and no quantitative analysis has been reported using anterograde tracers. In primates, columnar organization has been studied extensively, particularly in the visual cortex. It generally reflects commonalities along the radial axis (from white matter to pia), exemplified by orientation columns and ocular dominance columns in area V1 and tuning for other features in extrastriate visual areas. 9 – 11 Patchy anatomical connections of these columnar modules in the tangential domain correlate with repeating representations of various features such as stimulus orientation or eye dominance. 9 , 10 However, because preferred orientation is represented as a smoothly changing variable, a single orientation “column” in visual cortex is more a conceptual abstraction than a discretely demarcated three-dimensional (3D) cortical domain. By contrast, submillimeter-scale discrete projections suggestive of a different type of columnar organization have been reported using anterograde tracer injections in macaque PFC, but major questions remain as to whether such patterns reflect discrete, segregated modules vs. 
highly overlapping connectivity profiles, whether they are predominantly patchy vs. stripe-like, and whether their origins and terminations are consistently columnar or are often layer-specific. 12 – 16 In this study, we provide evidence bearing on these and other issues using a dataset based on 44 anterograde and 13 retrograde tracer injections into PFC and adjacent frontal lobe regions in common marmosets. The marmoset is an increasingly popular non-human primate (NHP) model for neuroscience studies. 17 – 24 Its cortex is one-tenth the size and far less convoluted than the macaque cortex. Yet it contains the frontal eye field (FEF), V5 (middle temporal area [MT]), and granular PFC, all common to primates but lacking clearly defined homologs in rodents. 1 , 22 , 23 Systematic analysis of our high-quality dataset revealed patchy and columnar corticocortical and corticostriatal axonal projections plus a complementary pattern of diffuse projections generally restricted to one or a few layers. By combining anterograde and retrograde double-tracing in some animals, we demonstrated a striking reciprocity of patchy cortical connectivity. We also mapped the topographic organization of PFC connectivity to multiple regions of the association cortex, revealing a pattern similar but not identical to that demonstrated by stimulation-fMRI mapping of macaque lateral PFC. 25 Our datasets are freely accessible ( https://dataportal.brainminds.jp/ ; see also Skibbe et al. 26 ) and add to a growing collection of publicly available marmoset neuroscience-related datasets. 27
STAR★METHODS RESOURCE AVAILABILITY Lead contact Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Akiya Watakabe ( [email protected] ). Materials availability Plasmids generated in this study have been deposited to Addgene: pAAV-TRE3_Clover (#135179), pAAV-TRE3_mTFP1-Vamp2 (#201214), and pAAV-EF1_Cre (#201198). Data and code availability All the section image data from STPT, standardized 3D data for the whole brain, and the flatmap stack data for the cortical signals of the marmosets have been deposited at the Brain/MINDS data portal ( https://dataportal.brainminds.jp/marmoset-tracer-injection ) and are publicly available as of the date of publication. The high-resolution images used in the paper are labeled with the marmoset number and the section number (e.g., #82–139) and are available through the zoomable section viewer. In addition to the visualization and search tools available at these sites, users can download standardized 3D data for tracer segmentation, fluorescence-weighted segmentation, AAV-transduced cell segmentation, and original and standardized flatmap stack data in nifti format and other information from The Brain/MINDS Marmoset Connectivity Atlas 26 ( https://dataportal.brainminds.jp/marmoset-connectivity-atlas ) or on request. The mouse analyses used the existing, publicly available data of the Mouse Brain Connectivity Atlas 28 , 29 ( https://connectivity.brain-map.org/ ). The accession numbers for the used datasets are listed below. Other microscopy data reported in this paper will be shared by the lead contact upon request. Any additional information required to reanalyze the data reported in this paper is available from the corresponding authors upon request. EXPERIMENTAL MODEL AND SUBJECT DETAILS All experimental procedures were carried out following the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publications No. 
80–23) revised in 1996 and the Japanese Physiological Society’s “Guiding Principles for the Care and Use of Animals in the Field of Physiological Science,” and were approved by the Experimental Animal Committee of RIKEN (W2020–2-009(2)). The age and sex of the marmosets used are listed in Table S1 . We did not distinguish these factors in this study. METHOD DETAILS Marmoset experiments In our standard procedure, we acquired a structural MRI scan (T1w, T2w, DWI) using a 9.4-T BioSpec 94/30 unit (Bruker Optik GmbH, Ettlingen, Germany) and a transmit and receive coil with an 86-mm inner diameter 76 under anesthesia at least one week before surgery to plan injections. The presumed positions of cortical areas were determined by registration of the Brain/MINDS reference 24 and the stereotaxic positions were aligned by using the interaural plane and the anterior limit of the cortex. Surgery for tracer injections was performed as previously described with some modifications. 77 Pressure injection was performed using a glass micropipette with an outer diameter of 25–30 μm connected to a Nanoliter 2000 injector with a Micro4 controller (World Precision Instruments). For exposed cortical areas, we injected 0.1 μl each of tracers at two depths (0.8 and 1.2 mm from the surface), aiming to deliver the AAV to all cortical layers. For deep injections (e.g., OFC), we injected 0.2 μl of tracer at one depth. With these volumes, we did not experience overflow, unlike in our previous study. 77 To avoid fluorescence cross-talk, all subjects received a fluorescent tracer at only one site. However, in some cases, we injected nonfluorescent tracers, such as BDA, into several other locations, but these results are not reported here. After surgery, the marmosets were returned to their cages and euthanized four weeks later. 
The variability of injections (injection volume and depth) was analyzed in a post-hoc manner based on the spread of AAV-transduced cell bodies (see “ image processing pipeline ” below). AAV tracers We used a TET system to amplify fluorescence signals. 68 , 69 , 78 , 79 This system labeled all the known projections from the PFC and did not show strong cell-type bias. The detection of two types of projections is also not due to our TET labeling system, as we observed a similar convergence and spreading of axon fibers using the conventional biotinylated dextran amine (BDA) method (data not shown). Our standard tracer mix included AAV1-Thy1S-tTA (1 × 10^9 vg/μL), AAV1-TRE-clover (1 × 10^9 vg/μL; a GFP derivative), and AAV1-TRE3-mTFP1-Vamp2 (0.25 × 10^9 vg/μL; cyan fluorescent protein targeted to the pre-synapse). In later experiments, we also included AAV2retro EF1-Cre (1.5 × 10^9 vg/μL) in the tracer mix for co-injection. We deposited plasmids for these AAV vectors to Addgene ( https://www.addgene.org/ ). PFC injections In this study, we present the data from 44 high-quality datasets involving injections into various frontal areas of the left hemisphere ( Figures 1H and S1G ; Table S1 ). To plan injections, we referred to previous studies. 5 , 6 , 46 , 80 , 81 We divided the frontal areas of marmoset into nine subregions, including the frontopolar cortex (FP), dorsolateral PFC_ventral (dlPFCv), dorsolateral PFC_dorsal (dlPFCd), dorsomedial PFC (dmPFC), ventrolateral PFC (vlPFC), anterior cingulate cortex (ACC), and orbitofrontal cortex (OFC), plus premotor areas (PM) and dorsal ACC (dACC) ( Figure S1G ). The ACC subregion in this study comprises areas A32, A14, A25, and A24a and corresponds to the subgenual anterior cingulate cortex (sgACC) and perigenual ACC (pgACC) in other studies 82 and was differentiated from the dACC comprising A24b and A24c. 
We injected most densely into the aforementioned dlPFC subdivisions dlPFCv and dlPFCd, corresponding to areas 8aV/A45 and 8aD/46D, respectively, by our cortical annotation; dlPFCv most likely overlaps extensively with the frontal eye field (FEF). 33 , 83 The relationship of “dlPFCd” to macaque areas 46, 9/46, and 8aD is currently unclear. However, our results suggest the inclusion of A46-like areas in dlPFCd, judging from the auditory projection ( Figure S4G ). Throughout the manuscript, we used 29 datasets with clean injections in six PFC subregions (FP, dmPFC, dlPFCd, dlPFCv, ACC, and OFC) for subregion comparison. These injections are shown color-coded in Figures 1H and S1G . Some of the injections localized at the border of these subregions and were used only for parcellation-free analyses (gray dots in Figure 1H ). Injections into the vlPFC, PM, and dACC subregions were also used only for the selected analyses. Serial two-photon tomography imaging (STPT) After transcardial perfusion with 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4), the marmoset brains were removed, post-fixed at 4 °C for 2–3 days, and transferred to 50 mM phosphate buffer (pH 7.4). All marmoset brains had an ex vivo MRI scan (T2w and DWI) before further processing. MRI was performed using a 9.4-T BioSpec 94/30 unit (Bruker Optik GmbH) and a transmit and receive solenoid type coil with a 28-mm inner diameter. 76 Agarose embedding was performed as previously described 30 in 5 % agarose using a custom-made mold. Before embedding, we treated the brain with 1 mg/ml collagenase (Wako 031–17601) at 37 °C for one hour and manually removed the meninges as thoroughly as possible. This step ensured direct crosslinking of agarose to the brain for stable sectioning at a 50-μm interval. STPT was performed as described 28 – 30 using TissueCyte1000 or TissueCyte1100 (TissueVision). We immersed the agarose block in 50 mM PB supplemented with 0.02 % sodium azide for imaging and sectioning. 
We used a 920–940 nm laser for excitation, and the two-photon excited fluorescence was recorded in three-color channels. Three-color imaging was critical in our study to (1) distinguish tracers from lipofuscin granules, which are characteristic of aged brains 84 ( Figure S1F ), and (2) delineate AAV-infected cell bodies around the injection site, which was only possible in the unsaturated blue channel (ch3; Figure S1D ). In our setup, the pixel spacings for x and y were 1.385 μm/pixel and 1.339 μm/pixel, respectively. Both hemispheres were sectioned concurrently in the coronal plane and in the rostral-to-caudal direction. We imaged four optical planes (12 μm intervals) per 50 μm slice for the rostral part and one optical plane per slice for the caudal part. We used only the first optical plane in this study. Complete sectioning of each brain was carried out in 20 – 30 sessions with different XY coverage, and it took approximately 10 days to process the whole brain. Post-STPT histology Tissue sections were manually collected for further histological analysis. To confirm the cytoarchitecture, we used one in ten sections for Nissl staining. The floating sections were mounted onto a glass slide after the agarose was removed and air-dried. Mounted sections were later rehydrated with PBS, and dark-field backlit images were taken using an all-in-one microscope (Keyence BZ-X710) before Nissl staining. The backlit image showed tissue contrast very similar to a conventional histological myelin stain (manuscript in preparation). We used the same section for Nissl staining, and the combined information of the backlit and Nissl images provided useful information for anatomical delineation ( Figure S3 ). AAV2retro-Cre was detected by an anti-Cre recombinase antibody (Millipore, clone 2D8), followed by a Cy3-conjugated secondary antibody. 
Retrograde nuclear staining was imaged with the all-in-one microscope, and the images obtained were processed by the image processing pipeline for retrograde signal detection. 26 For confocal microscopy, we stained the retrieved sections with an anti-GFP antibody (abcam, ab13790) and an anti-Homer1 antibody (Nittobo Medical; previously FRONTIER INSTITUTE, MSFR103200) in conjunction with an Alexa488-conjugated anti-chick antibody and a Cy3-conjugated anti-rabbit secondary antibody. After mounting on a glass slide, the stained sections were imaged using an Olympus Fluoview FV3000 confocal microscope and a 40× silicone immersion lens (UPLSAPO40XS).

Image processing pipeline

The details of the image processing pipeline are described elsewhere. 26 , 85 Briefly, the raw images in the three channels were stitched after background correction. The Ch1 (red) image provided the tissue background image with some tracer bleed-through, and the Ch2 (green) image provided the tracer fluorescence and tissue background. In our setup, we observed very weak signals for Ch3 (blue), which helped identify AAV-transduced cell bodies that were not identifiable in the other channels due to signal saturation. Using this information, the pipeline automatically determined the precise spread of AAV transduction for each injection. This spread was calculated to be 2.5 ± 1.3 mm 3 (mean ± SD) in the STPT template space (see below). We also calculated the depth bias of each injection (Deep Layer Index) by dividing the number of transduced-cell-positive voxels in the deep layers (layer levels 1–20) by that in all layers of the flatmap stack (see below), as shown in Table S1 . The correlation coefficients between injection volume and patch number, patch size, and solitary ratio were 0.001, 0.447, and 0.466, respectively, and those between the deep layer index and the same variables were 0.13, 0.05, and 0.33.
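The Deep Layer Index described above reduces to a simple ratio of voxel counts. A minimal sketch in Python, assuming the flatmap stack is a boolean array whose third axis holds the 50 layer levels (function and variable names are illustrative, not the pipeline's actual code):

```python
import numpy as np

def deep_layer_index(cell_mask, n_deep=20):
    """Fraction of transduced-cell voxels that fall in the deep layers.
    `cell_mask` is boolean with shape (x, y, 50); axis 2 runs from the
    white matter (layer level 1) to the pia (layer level 50), and levels
    1..n_deep count as deep. Names here are illustrative assumptions."""
    total = cell_mask.sum()
    if total == 0:
        return 0.0
    return float(cell_mask[:, :, :n_deep].sum() / total)

# toy injection: 30 of 100 transduced voxels sit in the deep layers
mask = np.zeros((10, 10, 50), dtype=bool)
mask[0, :3, :10] = True      # 30 voxels at layer levels 1-10 (deep)
mask[1, :7, 40:] = True      # 70 voxels at layer levels 41-50 (superficial)
print(deep_layer_index(mask))   # 0.3
```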
The correlation between the deep layer index and the DL ratio of patches was 0.5 ( Figure S10D ); the differences across subregions in the DL ratio persisted after correction ( Figure S10E ). The pipeline also accurately separated axon-specific fluorescence from the background based on the Ch1 and Ch2 images. On visual inspection, we encountered virtually no false positives (e.g., lipofuscin granules misidentified as axons) in this process. Axonal detection was quite sensitive, although it was usually partial, meaning that not all visually recognizable axon fibers were labeled as positive signals. The stitched coronal section slices (more than 600) were placed in the 3D space based on the recorded stage coordinates and non-linearly registered in 3D to the standard reference space, or STPT template (see below). In this data transformation, a unit isocube ('STPT-voxel') corresponds to a 50 μm × 50 μm square of one slice image (50 μm interval), whose signal intensity was determined by the sum of positive pixels (out of ~1400), each of which was weighted for fluorescent intensity by its 16-bit encoding. In this way, we secured a wide dynamic range of quantification and a high signal-to-noise ratio. All quantitation of anterograde labeling was based on these fluorescence-weighted axon signal values. Before normalization (see below), the total sum of the thus-defined signal intensities was highly variable between samples due to various experimental factors, including the variable spread of the tracer near the injection site. However, there was little correlation with age (r = −0.062; p = 0.69) or sex (p = 0.38, by a t-test), which we did not analyze further in the current study. To evaluate the registration accuracy, we compared the registered Ch1 images of individual samples with the STPT template.
More precisely, we determined the boundary positions separating the cortex, putamen, globus pallidus, and internal capsule for each sample along a line ROI on horizontally sliced images ( Figure S1A ). These boundaries were determined automatically by selecting the peaks of optical density changes near each target boundary (red arrows (i)–(v)). As shown in the box-and-whisker plot, the deviations were within a few STPT voxel units (corresponding to 50-μm isocubes). Although this evaluation does not guarantee accurate registration for all the voxel points of the entire brain, it demonstrates highly accurate registration for regions near high-contrast landmarks, such as the white matter.

Standard reference space (STPT template)

To facilitate 3D-3D registration between optically-based and MRI-based volumes, we generated a standard reference image for a marmoset brain based on iteratively averaged Ch1 images. 26 We refer to this 50-μm isocubic 3D image as the "STPT template." To allow multimodal data integration, the initial averaging step was performed using BMA2019 Ex Vivo (Space 2), which is a 100-μm isocubic reference image based on a population average of 25 ex vivo MRI (T2w) contrasts. 86 A Nissl-based cortical annotation originally performed on a single brain 24 , 87 was transferred to the STPT template and used without further adjustment.

Handling of 3D data for visualization and substructure analyses

After completing the pipeline, the tracer intensity data and AAV-transduced area data were registered to the STPT template by ANTs using a multi-scale affine image registration followed by a multi-scale SyN registration. 26 , 73 As a similarity metric, we used normalized mutual information. These 3D data were converted to the NIFTI format ( https://nifti.nimh.nih.gov/ ), and structures of interest could be easily excised from each registered image volume using the labelmap.
The labelmap for the definition of substructures was manually annotated based on the image contrast of the STPT template ( Figure S1B ) using 3D Slicer 72 [ https://www.slicer.org/ ]. After excising the structures of interest, the tracer intensities were standardized to the maximum value within the substructure, converted to 8-bit or 16-bit, and saved as a TIFF stack file. FluoRender 71 was used to visualize 3D data for the presentation of images and movies. Striatal data sometimes included strong contaminating signals from axon fibers passing within the internal capsule and white matter. In such cases, we manually masked these regions using 3D Slicer and repeated the normalization.

Conversion of cortex data into a flatmap stack and its normalization

The outer (pial) and inner (white matter) segmentation surfaces of the cerebral cortex were defined based on the image intensity of the STPT template and manually corrected when necessary ( Figure S1B ). A 167,082-vertex surface mesh from the STPT template, with vertices approximately midway through the cortical sheet, provided an initial geometric substrate for generating pial, white, and midthickness surfaces in topological correspondence with one another. Radial trajectories determined by a dense orientation field created by a heat propagation model were used to generate the pial surface via vertex migration to the outer cortical boundary, the white surface via vertex migration to the inner cortical boundary, and the midthickness surface via migration to the midpoint of the radial trajectory linking the pial and white vertices. Fifty equally spaced vertices were identified along each radial trajectory connecting the inner and outer surface vertices ( Figure S1C , right hemisphere).
The Brain/Minds cortical midthickness flatmap 86 based on the same 167,082-vertex mesh as the above 3D surface was used to define the x-y coordinates of the flatmap representation; the z dimension was represented by the spacing between vertices along each radial trajectory in the 3D volume. ANTs nonlinear registration 73 was then used to generate a deformation field that aligned the 50-layer array of vertices in the 3D cortical model to the corresponding set of vertices in the flatmap stack. The deformed cortical volume was then resampled using trilinear interpolation to generate a flatmap stack volume based on a 500 × 500 × 50 array of 'flatmap-stack-voxels' (fm-vox; Figure S1C , left hemisphere). 26 , 85 , 86 To evaluate the relationship between tangential distances in the 3D model vs. the flatmap stack, we placed seed points at the midthickness layer in the flatmap stack with 50 fm-vox spacing. These seed points were mapped to the STPT template space using the deformation field. Spheres of 15-STPT-vox radius were placed at each seed point in the STPT template space and then mapped back to the flatmap stack using the deformation field ( Figure S1E ). The mean cross-sectional area was 247 fm-vox 2 . Given that the cross-section of a 15-STPT-vox sphere can be approximated by a circle of 15-STPT-vox radius (706.5 STPT-vox 2 ), and one STPT-vox corresponds to 50 μm in the STPT template space, one fm-vox in the flatmap stack is, based on these values, on average 84 μm on a side. To standardize the intensities of the cortico-cortical projection patterns of the flatmap stack, we prioritized the constancy of signals in the upper layers (levels 26–50), because the lower layers contained more diffuse and widespread signals, some of which may be passing fibers. We first averaged layer levels 26–50 of the flatmap stack to make a 2D flatmap in which columnar convergence could be detected as peaks of fluorescence intensity.
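The ~84 μm estimate for the fm-vox side length follows from simple geometry. The short calculation below re-derives it from the numbers quoted above (using π·15² ≈ 706.9 STPT-vox², which the text rounds to 706.5):

```python
import math

stpt_vox_um = 50.0        # one STPT voxel is a 50-μm isocube
sphere_r = 15             # probe-sphere radius, in STPT voxels
mean_area_fmvox = 247.0   # measured mean cross-section in the flatmap stack (fm-vox^2)

# equatorial cross-section of the sphere, approximated as a circle
area_stpt = math.pi * sphere_r ** 2              # ~706.9 STPT-vox^2
# linear scale factor between the two spaces (STPT-vox per fm-vox)
scale = math.sqrt(area_stpt / mean_area_fmvox)
fm_vox_um = scale * stpt_vox_um
print(round(fm_vox_um, 1))   # 84.6, i.e., ~84 μm on a side
```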
Here, one pixel corresponds to the tangential components of one fm-voxel. We masked the injection area, based on the MIP of all layers of the flatmap stack for the injection site segmentation, because fluorescence intensity is usually saturated and unreliable in this region. We then selected the pixels with the top 0.1% of intensity values among those constituting the 2D flatmap and set the minimum value of the selected pixels at 10,000. This standardization method provided very similar patterns across different intensity ranges within the same frontal group, irrespective of the original tracer intensity. The same coefficient was used to standardize the flatmap stack. For the six-color overlay of different PFC groups in Figure 1I , the MIP images for layer levels 26–50 from the same PFC subregions were MIP-merged to generate a 2D flatmap. The flatmaps of different groups were then overlaid with different colors in a "winner-take-all" style, in which the color of the strongest signal was displayed to avoid color mixing. For the six-color overlay in Figure S1H , the colors were merged using the "Merge Channels" function of ImageJ, by which overlapping regions have their values added, with a cap at 255. To convert the tracer intensity data into a log-scale representation, the standardized values were log-transformed after adding 0.001 and thresholding at 0. For visualization, the log-transformed data were scaled to an 8-bit representation. We routinely used the Videen color palette (obtained from the Connectome Workbench color palette; https://github.com/Washington-University/workbench ) for pseudocolor intensity scaling.

Calculation of wiring distance

To estimate the distances between an anterograde tracer injection site and projection targets in the cortex, we used a minimal-cost-path algorithm. Our approach involved finding the cheapest paths within the tracer signal image that connect the center of the injection site with each tracer-positive location in the cortex.
To achieve this, we assigned a cost to each voxel in the marmoset brain. The cost of a path was calculated as the sum over all voxels it needed to traverse to connect the injection site with the target location. We determined the cost of a voxel based on whether it had tracer signal, its location in white or gray matter, and whether it was located where fibers do not pass (i.e., outside the brain or in brain fluid). Voxels inside the tracer had the lowest cost, followed by those in white matter, while the costs for voxels located outside gray and white matter were set to infinity. We used the route_through_array function from the scikit-image image processing toolbox 74 to compute the minimum cost paths. All calculations were done in the STPT reference space, and the resulting connection distance field was mapped to a cortical flatmap stack. To create the binary mask GWM, we combined all gray matter and white matter voxels. We used the serial two-photon template image STPT, normalized to the interval [0,1], as the likelihood map for gray matter, since it is bright in cell-dense areas. We also had a binary mask CM comprising cortical and subcortical regions, which we used to determine projection targets. Additionally, we used the cell density image CELL indicating the injection site, as well as the inverted normalized tracer image TR (1 for no signal, 0 for the strongest signal), both scaled to the interval [0,1]. All images were scaled down to a 200 μm isotropic resolution. To determine the injection site, we first calculated the center of gravity of the cell density map. We then identified all projection targets as voxels that overlapped with the CM binary mask, had no signal in the cell density map CELL (i.e., not in the injection site), and had a signal in the tracer image (part of the tracer signal).
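The minimal-cost-path idea can be illustrated with a small pure-Python stand-in for scikit-image's route_through_array (Dijkstra over an 8-connected grid). The toy cost values below follow the heuristics described in the text (cheap tracer/white-matter voxels, expensive gray matter, impassable non-brain voxels) but are otherwise made up:

```python
import heapq

def min_cost_path(cost, start, goal):
    """Dijkstra over a 2D cost grid (8-connected), mirroring what
    skimage.graph.route_through_array computes for the wiring-distance
    analysis. Entering a cell adds that cell's cost; float('inf')
    marks impassable voxels (outside gray and white matter)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float('inf')):
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float('inf')):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
    return float('inf')

# gray matter costs 50, a tracer/white-matter corridor costs 1,
# and the top row is outside the brain (infinite cost)
INF = float('inf')
grid = [[INF] * 7,
        [50.0] * 7,
        [1.0] * 7,
        [50.0] * 7]
print(min_cost_path(grid, (2, 0), (2, 6)))  # 7.0: start voxel plus six steps at cost 1
```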
We computed the cost for voxels with tracer signal as tr_in = (TR + TC*10) (voxels where TR < 1), and the cost for the other voxels as tr_out = (TC*10 + 50) (voxels where TR = 1). In both cases, voxels located in gray matter were more expensive than those in white matter (term TC*10). For the latter case, we added a constant of 50 to increase costs. We found these values heuristically. To favor paths through voxels with bright tracer signals over voxels with low intensities, we added the inverted signal TR to the first term. We then created the total cost image W = GWM*(tr_in + tr_out), where we set all costs outside GWM to infinity. We computed the cheapest paths for all pairs between the center of the injection site and the positions of the target voxels, and then created a path distance image by calculating the path lengths and storing the values at the target voxel locations. Finally, we mapped the path distance image to a flatmap stack ( Figure S2C ).

Exponential Distance Rule

Having established the injection-to-target distances, we investigated the relationship between connectivity strength and wiring distance. In previous retrograde tracer studies, connectivity strength was determined by the "fraction of extrinsic labeled neurons (FLNe)," that is, the number of retrogradely labeled neurons in a given area relative to the total number of labeled neurons. FLNe thus represents the normalized connection weight between the target areas and the source area. 31 , 32 , 88 Alternatively, distance-connection relationships were determined by calculating the probability of projection distance (p(d)), based on the histogram of the number of labeled neurons in bins of projection distance. 32 The latter value is independent of area specification, but both values matched well in the marmoset study. 32 In our calculation, we used the flatmap-based approach to investigate the distance-intensity relationship.
For each injection sample, a distance map and a strength map were made; the distance map shows the distance of each tracer-positive fm-vox from the injection site (determined in 3D as described above), and the strength map shows the log10 value of the normalized tracer signal intensity. For normalization, the tracer intensity values for all layers were averaged and adjusted using the same standardization factor used for the upper-layer flatmap (see above). Using these two maps, we were able to determine the relationship between connection strength and wiring distance at the voxel level. An important question was whether it follows the Exponential Distance Rule (EDR), as reported in the macaque and marmoset retrograde studies. 31 , 32 , 88 The EDR postulates p(d) = ce^(−λd), where p(d) is the probability of connection and d is the projection distance. 31 At synaptic targets, axon fibers tend to branch heavily ( Figure S8F ), leading to higher tracer signals per unit voxel. Thus, the tracer signal intensity in our dataset is considered to reflect the connection probability. As shown in Figure S2G , the histogram of distance calculated from the summed p(d) approximately follows an exponential decay with decay rate λ = 0.27. This histogram and λ are surprisingly similar to those reported for marmoset retrograde data (λ = 0.3; see Figure 5C of Theodoni et al.). 32 Since the macaque study calculated the decay rate λ based on FLNe, we also performed a similar area-based analysis. FLP, or "fraction of labeled projection," is based on the sum of tracer signals in a given area relative to the total tracer signals and corresponds to the FLNe of the retrograde data. The scatter plot showing the log10 FLP values of each area plotted against wiring distance ( Figure S2J ) shows considerable variability but has a slope λ = 0.27. The large variability is comparable to that reported for marmoset 32 and macaque.
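As a sanity check of the fitting logic (not the authors' code), the decay rate λ can be recovered from a distance histogram by a least-squares line fit on log counts. Here this is done on synthetic distances drawn with λ = 0.27, the value reported above:

```python
import math
import random

# synthetic projection distances from an exponential with λ = 0.27
random.seed(0)
lam_true = 0.27
dists = [random.expovariate(lam_true) for _ in range(50_000)]

# histogram in 1-mm bins, then least-squares slope of log(count) vs distance
bins = [0] * 30
for x in dists:
    if x < 30:
        bins[int(x)] += 1
xs = [i + 0.5 for i in range(30) if bins[i] > 0]
ys = [math.log(bins[i]) for i in range(30) if bins[i] > 0]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
lam_est = -slope
print(round(lam_est, 2))   # close to 0.27
```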
31 Our data suggest that a part of this variability comes from subdivision differences ( Figure S2K ). This observation is not surprising, considering that some PFC areas (e.g., dlPFCv) project to remote association areas while skipping primary somatomotor areas. When we tested the EDR with columnar patches, which represent strong signals ( Figure S2M ), our results were consistent with previous EDR estimates up to a distance of 10–12 mm. At longer ranges, we found a substantially shallower slope ( Figures S2L – S2O ).

Detection and characterization of columnar patches in corticocortical projections

Columnar patches were detected as local maxima in the 2D flatmap generated by averaging the upper layers (layer levels 26–50) of the 3D flatmap stack, with the injection sites masked and the signal values standardized as described above. Averaging across all flatmap layers gave similar results but with about 10% fewer patches, because diffusely spread signals in the deep layers sometimes obscured patches in the upper layers. Local maxima in 2D were identified using the "find peaks" function of MATLAB 2019a, inspired by the Fast 2D peak finder 89 ( https://www.mathworks.com/matlabcentral/fileexchange/37388-fast-2d-peak-finder ), MATLAB Central File Exchange. Briefly, the 2D image was Gaussian-smoothed, and peaks larger than the defined minimum height value were searched in the x- and y-directions, pixel by pixel. Only 2D spots that peaked in both the x- and y-direction searches were identified as local maxima. If peaks were detected in consecutive fm-voxels, we counted them as one. Thus, the theoretical minimum separation is ~1.41 fm-vox (2^0.5), which occurs when the detected peak voxels are in a diagonal position. The minimum height value for columnar patch detection was set at 100, i.e., two orders of magnitude lower than the standardization value of 10,000.
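The smooth-then-search peak detection described above can be sketched in Python. This is a toy re-implementation, not the MATLAB code: the sigma value, threshold handling, and all names are illustrative assumptions:

```python
import numpy as np

def find_patches(flat2d, min_height=100.0, sigma=3.0):
    """Toy patch detector: Gaussian-smooth a 2D flatmap, then keep pixels
    exceeding `min_height` that are maxima of both their row and column
    neighborhoods (mixed >/>= comparisons avoid double-counting ties)."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    # separable Gaussian smoothing along both axes
    sm = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, flat2d)
    sm = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, sm)
    peaks = []
    for i in range(1, sm.shape[0] - 1):
        for j in range(1, sm.shape[1] - 1):
            v = sm[i, j]
            if (v > min_height
                    and v > sm[i - 1, j] and v >= sm[i + 1, j]
                    and v > sm[i, j - 1] and v >= sm[i, j + 1]):
                peaks.append((i, j))
    return peaks

# two bright blobs above threshold and one faint blob below it
yy, xx = np.mgrid[0:80, 0:80]
img = (900 * np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 20)
       + 700 * np.exp(-((yy - 60) ** 2 + (xx - 55) ** 2) / 20)
       + 40 * np.exp(-((yy - 40) ** 2 + (xx - 70) ** 2) / 20))
print(len(find_patches(img)))   # 2
```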
Columnar patches detected at the fringe of the injection site due to masking were removed manually by visual inspection. We detected 1867 columnar patches from 44 datasets. In this algorithm, we set the filter size of the Gaussian smoothing to 7 fm-vox. Setting it to 3 fm-vox resulted in the detection of only 10% more patches, and most patches were identical. The additional patches were generally weak and surrounded by high background signals. We conclude that the current filter size effectively captures a patchy signal distribution while reducing noisy signals. To examine the extent of convergence of tracer signals to the centers of columnar patches, we set three 3D ROIs for measurement ( Figure 2C ): a central cylinder with a 4-fm-vox radius, a 19×19×50 fm-vox rectangular cuboid (hereafter referred to as the 19-vox cuboid), and a 50×50×50 fm-vox cube (hereafter referred to as the 50-vox cube). The idea is to measure the ratio of the number of tracer-positive voxels contained within the cylinder to that in the 19-vox cuboid and the 50-vox cube. First, we expressed the intensity of the tracer signals in each voxel as a fraction of the maximum value within the upper half of the central cylinder. Second, the tracer signals were binarized at 0.5, half the maximum value. Third, the largest clump of the binarized tracer signals within the upper half of the central cylinder was chosen to represent the columnar patch of choice ( Figures 2B and 2C , green "clump"). The ratio of the number of voxels within the central cylinder to that of all voxels constituting this binary clump within the 19-vox cuboid and 50-vox cube was calculated as the index of central convergence. Columnar patches were often clustered, with stronger tracer signals at nearby locations connected to the central clump. Therefore, we classified the columnar patches into "solitary" and "connected" groups based on the degree of central convergence.
When we set 50% central convergence for the 50-vox cube ROI as the threshold, 1008 patches out of 1867 fulfilled this criterion. The central convergence for this group of patches was equal to that of the 19-vox cuboid ROI, which means that the segmented central clumps of the tracer signals were contained within the 19-vox cuboid and disconnected from the surrounding signals. Hence, we called this group "solitary." The patches that did not fulfill this criterion were called "connected," because the central clumps of over 97% of the remaining 859 patches extended outside the 19-vox cuboid. Here, the size of the central cylinder was adjusted to maximize the inclusion of columnar signals and minimize the inclusion of surrounding signals based on the planar average ( Figure 2D ). We reasoned that the presence of 50% of the voxels of the continuous binary clump in the 50-vox cube ROI represents reasonably high convergence. The 19-vox cuboid ROI was used as a middle ROI to confirm the connectedness of the clump. Because of the relatively high degree of central convergence, the morphology of the central clumps of the solitary group could be reasonably well approximated by ellipsoids, whose major axis lengths, centroid positions, and angles were estimated by the regionprops3 function (CY Y, 2022; https://github.com/joe-of-all-trades/regionprops3 ). To estimate the size of the patches, we measured the tangential spread of the above-mentioned binary clumps by projecting the positive voxels across all layers. We measured only the solitary patches, to avoid including surrounding signals. We also measured the ratio of solitary patches among all patches for each injection. We expected this value to indirectly reflect patch size, becoming smaller as patches grow and connect to the surrounding signals. Indeed, we found a tendency of dmPFC, FP, and ACC patches to show larger solitary patch sizes and lower solitary ratios ( Figures S4A – S4C ).
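The core of the solitary/connected classification is a convergence ratio: the fraction of a clump's voxels lying inside the central cylinder. A minimal sketch under simplified assumptions (a pre-segmented boolean clump, cylinder test applied in the tangential plane only; names are illustrative):

```python
import numpy as np

def central_convergence(clump, center, radius=4):
    """Fraction of a binary clump's voxels that fall inside the central
    cylinder of the given radius (in fm-vox) around `center`.
    `clump` has shape (x, y, layers); axis 2 is the layer axis."""
    xs, ys = np.mgrid[0:clump.shape[0], 0:clump.shape[1]]
    cylinder = (xs - center[0]) ** 2 + (ys - center[1]) ** 2 <= radius ** 2
    total = clump.sum()
    return float(clump[cylinder].sum() / total) if total else 0.0

# a tight 3x3 column centered in a 50-vox cube lies entirely in the cylinder
cube = np.zeros((50, 50, 50), dtype=bool)
cube[24:27, 24:27, :] = True
ratio = central_convergence(cube, center=(25, 25))
print(ratio >= 0.5)   # True -> would be classified as "solitary"
```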
In rare cases, detected patches in close proximity were aggregated when measuring patch size. We did not adjust for such cases.

Comparison of mouse and marmoset data for columnar projections

All the mouse data used in this study were obtained from the Mouse Brain Connectivity Atlas 28 , 29 ( https://connectivity.brain-map.org/ ). First, we selected the datasets with injections into the frontal areas (ACAd, ACAv, FRP, ILA, MOs, ORBl, ORBm, ORBvl, PL) of wild-type mice. The experimental IDs for these samples are: 112458114, 139426984, 146593590, 112514202, 139520203, 100140756, 157556400, 100141454, 112952510, 141602484, 141603190, 157710335, 180709942, 180719293, 180916954, 585025284, 112306316, 170721670, 180673746, 180709230, 126860974, 112423392, 158435116, 157711748. The whole-brain tracer data (projection energy) at 10 μm resolution for these datasets were then acquired via the application programming interface (API) and converted to flatmaps as previously described. 90 The flatmaps for both hemispheres were resized to 816×408 pixels. The image intensity was standardized similarly to the marmoset flatmap, except that the injection center was not excised before measurement. The columnar patches were detected using the same code used for the marmoset flatmap. The mouse flatmaps represented the average of all layers, whereas the marmoset analysis had used flatmaps representing only the upper half of the layers. For comparison, we performed a new patch detection using the marmoset flatmaps of all layers. As shown in Figure S6 , columnar patches were far less conspicuous for inter-areal mouse PFC projections; the strong within-area patchiness observed for marmosets (e.g., robust columnar projections near the injection site in Figures S6E and S6F ) occurred rarely in mice. This difference was confirmed quantitatively ( Figure S6G ). Our detection algorithm simply finds the peaks above a threshold in the X and Y directions after smoothing.
It can therefore detect broad peaks that do not necessarily extend radially like those of the marmoset cortex. It is also sensitive to standardization methods. The mouse data differ from our marmoset data in several ways. (1) The mouse study used regular AAV-GFP, which is less efficient at labeling axons than our TET-enhanced AAV. (2) Mouse brains were sliced at 100 μm intervals, as opposed to 50 μm intervals for our marmoset brains, and the data are accordingly less reliable for detecting submillimeter-scale structures. (3) The frontal region of the mouse is curved, and flatmapping causes large distortions. (4) We could not subtract the tracer signals of the cell bodies before standardization for the mouse data. All these differences might in principle result in differences in the detectability of columnar patches between the two species. However, the original slice images suggest that the projection patterns of axons are very different between mice and marmosets (e.g., Figures S6B and S6F ). Therefore, we consider the large species difference in the incidence of patches ( Figure S6G ) to be a robust finding.

Prediction of columnar patch distribution patterns by polynomial regression for corticocortical projections

To examine the topographic projections to the cingulate and temporal fields, we set the ROIs for these regions as shown in Figure 2G and determined the center of mass of the columnar patches present within each ROI for each sample from the six PFC subregions. Among the 29 injections, 24 had at least one columnar patch in the cingulate field, and 21 had columnar patches in the temporal field. To find a regression model predicting the projection from the injection, we used a polynomial regression of degree 2, which exhibited a better corrected AIC score than degree 3. As predictor variables, we used the X and Y coordinates of the injection center in the flatmap.
As response variables, we used the X or Y coordinates of the averaged positions of the columnar patches in the cingulate and temporal ROIs. As shown in Figures S4D and S4E , we achieved reasonable fits (R² = 0.68–0.92) for the injections in the PFC. A permutation test, in which models constructed from the original and shuffled data were compared for prediction accuracy, confirmed the effectiveness of the regression model. In this test, the accuracy of the predictions by the regression models was estimated by correlation coefficients between the predicted and true values. Comparison of the model constructed from the original dataset with those from 1000 shuffled datasets showed that none of the shuffled models surpassed the accuracy of the original data (p < 0.001). To extrapolate the obtained topography, we calculated the projections from every point in the PFC map to the cingulate and temporal fields, as shown in Figure 2H . This calculation resulted in some projections falling outside the flatmap, suggesting the presence of distortions at the fringes. Figure 2H displays only the projections within the flatmap region.

Analysis of the frontal patch distribution pattern

We set the extent of the frontal cortex at the boundary between motor and somatosensory cortex posteriorly and between OFC and insular cortex laterally (see Figures 2G and 2I ). To quantify the distribution of the columnar patches in the frontal areas, we used ellipse fitting on the flatmap representation. Because the columnar patches tended to show a distorted distribution along the narrowly stretched area 24a, we excluded this area from the ellipse fitting (see Figure S5B for the ROI). As shown in Figure S5B , the centers of the ellipses were generally offset from the injection centers, but the relative positions were maintained. This relationship was approximated by polynomial regression models of degree 2 ( Figure S5C ).
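The degree-2 regression and its permutation control can be sketched with ordinary least squares on synthetic injection coordinates. All data below are made up, and the helper name `poly2_design` is illustrative:

```python
import numpy as np

def poly2_design(xy):
    """Degree-2 polynomial design matrix in two variables: 1, x, y, x^2, xy, y^2."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

rng = np.random.default_rng(1)
inj = rng.uniform(0, 100, size=(24, 2))      # toy injection-center coordinates
# hidden quadratic topography generating one target coordinate, plus noise
target = (5 + 0.8 * inj[:, 0] - 0.3 * inj[:, 1] + 0.002 * inj[:, 0] ** 2
          + rng.normal(0, 1, size=24))

A = poly2_design(inj)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
r2 = 1 - np.sum((target - A @ coef) ** 2) / np.sum((target - target.mean()) ** 2)

# permutation control: refit after shuffling the responses
shuf = rng.permutation(target)
coef_s, *_ = np.linalg.lstsq(A, shuf, rcond=None)
r2_s = 1 - np.sum((shuf - A @ coef_s) ** 2) / np.sum((shuf - shuf.mean()) ** 2)
print(r2 > 0.9, r2 > r2_s)   # the true mapping fits far better than shuffled data
```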
A permutation test similar to that used for the patch center prediction (see the preceding section) confirmed the robustness of the regression models (p < 0.001). To visualize the spread of columnar patches generated by injections into different PFC subdivisions, ellipses encompassing 70% of the patches for each injection were overlaid, and the region common to over half (>49%) of the injection samples was used as an indicator of the spread of frontal projections for that subdivision (enclosed by thick colored lines in Figure S5B ). To analyze the spacing of patches, we examined the shortest inter-patch distance for each patch within the ellipse for that injection and calculated the mean for each injection sample ( Figures S5D – S5G ). These measurements were based on distances within the flatmap and can differ modestly from the 3D distances due to distortion, particularly for ACC patches, as medial and orbital areas that are widely separated in the flatmap are substantially closer in the 3D configuration. Overall, we observed generally similar values for the mean shortest inter-patch distance across samples (~1 mm). This value was statistically indistinguishable from the values for randomly shuffled controls (×1000 repetitions; data not shown). In a previous study, a center-to-center distance of 500–600 μm was reported as the spacing of intrinsic connectivity in macaque PFC. 13 Our value is slightly larger but still in reasonable agreement, considering the very different labeling and detection methods and the species difference.

Analysis of contralateral projection patterns

To evaluate the bilateral symmetry of projection patterns, we compared the mirror-flipped images of the contralateral projections with the ipsilateral images. For the 2D comparison, we used the mean values for the upper layers (26–50).
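The mean shortest inter-patch distance is a nearest-neighbor statistic. A minimal sketch with hypothetical patch coordinates, converting fm-vox to mm using the ~84 μm voxel size estimated in the flatmap section:

```python
import math

def mean_nearest_neighbor(points):
    """Mean over patches of the distance to the nearest other patch
    (in flatmap fm-vox coordinates); a sketch of the spacing metric."""
    nn = []
    for i, p in enumerate(points):
        nn.append(min(math.dist(p, q) for j, q in enumerate(points) if j != i))
    return sum(nn) / len(nn)

# four hypothetical patches on a square lattice, 12 fm-vox apart;
# at ~0.084 mm per fm-vox this corresponds to ~1 mm spacing
patches = [(0, 0), (12, 0), (0, 12), (12, 12)]
spacing_mm = mean_nearest_neighbor(patches) * 0.084
print(round(spacing_mm, 2))   # 1.01
```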
As expected from previous reports of "homotopic" contralateral projections, 91 , 92 we observed very similar patterns in the log10 images for both hemispheres after intensity adjustment (the right intensity was 12.8 ± 7.8% of the left intensity). To quantitatively evaluate the similarity of strong projections around the injection site, we set a 200×200 fm-vox ROI surrounding the injection site for each sample in the left (ipsi) and right (contra) flatmaps. We excluded the cell-positive injection regions from both ipsi and contra images and binarized the images to extract the top 5% of tracer-positive voxels. This made the patterns of tracer distribution comparable between contra and ipsi sides and across samples. To evaluate the similarity between ipsilateral and contralateral patterns, we compared the overlap of the true pairs with shuffled pairs and found that the true pairs showed greater overlap than shuffled pairs in all but three cases, in which some shuffled pairs showed higher overlap than the true pair ( Figure S5K ).

Non-negative matrix factorization (NMF) analyses of corticocortical and corticostriatal projection patterns

To find common patterns of corticocortical (or corticostriatal) projections, we performed NMF analyses using log-transformed data. NMF is an effective method of dimensionality reduction for neuroimaging data. 93 Unlike principal component analysis, all coefficients and basis images are nonnegative, allowing a more straightforward interpretation of the results. 94 For corticocortical projections, 500 × 500 fm-vox MIPs of all layers with injection site masks were downsized to 100 × 100, and the data for 44 injections were converted to a 44 × 10,000 matrix for the "nnmf" function of MATLAB. For corticostriatal projections, a 180 × 300 × 220 STPT-vox space containing the left striatum was downsized to a 45 × 75 × 55 space, and a 44 × 185,625 matrix was used for the NMF analyses.
NMF analysis attempts to decompose the original matrix A (sample × pattern) into basis images (or components) W and coefficients H (A ≅ WH). Given the nonnegativity constraints, A is not exactly equal to WH. Furthermore, the nnmf function of MATLAB uses an iterative algorithm starting with random initial values for W and H, and the results vary slightly from run to run. To find the best result, we repeated the analyses 100 times and examined the residuals between A and WH. The residuals were relatively constant but occasionally became large. Inspection of the basis images showed that very similar images were generated when the residuals were small. Therefore, we selected the W and H with the smallest residuals as the basis images. Because we had only 44 datasets for analysis, we tried to minimize the number of basis images while achieving adequate reconstitution efficiency. With four basis images, the averaged correlation coefficient between the reconstituted and the original images was approximately 0.927 ( Figure S8B ). Reconstitution by five basis images did not result in substantial improvement (0.934), so we chose four basis images for further analysis. Conversion of NMF_W1, 2, 3, and 4 into a single color map was performed by first combining the W1/W4 and W3/W2 pairs by subtraction. After this subtraction, W1 (or W3)-dominant regions take positive values, whereas W4 (or W2)-dominant regions take negative values because of the nonnegativity constraint. Furthermore, because of the antagonistic relationship of these pairs, the overlapping regions that cancel each other's values are small in extent. As a result, the positional information of the four components (or basis images) was mostly retained in the positive and negative domains of the subtracted values (compare Figure 3B with Figure 3D ). Using the 2D colormap shown in Figure 3G , 75 these two value sets can be jointly represented by color-coding.
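The restart strategy described above (repeat the factorization from random initial values and keep the lowest-residual solution) can be sketched with a minimal multiplicative-update NMF in Python; this stands in for MATLAB's `nnmf`, and the 44 × 100 matrix is a toy stand-in for the real 44 × 10,000 data matrix:

```python
import numpy as np

def nmf(A, k, n_iter=200, rng=None):
    """Basic multiplicative-update NMF: A (m x n) ~= W (m x k) @ H (k x n)."""
    rng = rng or np.random.default_rng()
    m, n = A.shape
    W = rng.random((m, k)) + 1e-9
    H = rng.random((k, n)) + 1e-9
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + 1e-9)
        W *= (A @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H, np.linalg.norm(A - W @ H)

rng = np.random.default_rng(1)
A = rng.random((44, 100))  # toy stand-in for the 44-injection data matrix

# Repeat from random initializations and keep the smallest-residual solution
# (the text uses 100 repetitions; 10 are used here for brevity)
W, H, residual = min((nmf(A, k=4, rng=rng) for _ in range(10)),
                     key=lambda t: t[2])
```

Because each run converges to a different local minimum, selecting the smallest residual approximates the "best of 100" procedure; the nonnegativity of W and H is what makes the basis images directly interpretable.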
This color-coded map contains only information about the ratio of the two coefficient differences and not about coefficient magnitude. In the associated spatial map ( Figure 3H ), regions with a low contribution from any of the four components were excluded. Conversion of coefficients 1, 2, 3, and 4 into a colored dot map ( Figure 3H ) was performed similarly.

Analysis of retrograde labeling

We used AAV2retro-EF1-cre as a non-fluorescent retrograde tracer. This construct accumulates CRE protein in the nucleus, which results in relatively even labeling of neurons of diverse sizes and is also advantageous for the automatic identification of labeled cells. After immunohistological detection and imaging, the retrieved image data were processed for automated segmentation of labeled nuclei and registration to the corresponding STPT image (FR and HS, manuscript in preparation). Although we collected only one in ten sections to detect the retrograde tracer, we were able to map the result of retrograde tracing to the STPT template by registering the slice images to the STPT slice data. To evaluate the colocalization of anterograde and retrograde tracers at the columnar scale, we measured the image intensity with the “Plot Profile” function of FIJI 70 [ https://imagej.net/software/fiji/ ] using the vertical line ROI shown in Figure 5B . We also measured the amount of each tracer in the 117 cortical areas of the flatmap stack format for the examination of correlations. The correlation coefficients in Figures 5G and S11 were calculated based on the log10 of the original signal values. We did not adjust the signal values for the size of each area, and we excluded areas with retrograde signals below threshold (10^0.5 ≈ 3.2) from calculations of correlation coefficients. Visual inspection of antibody-stained images revealed some injection cases with a low signal-to-noise ratio, as judged by an abundance of false-positive artifacts across multiple cortical areas.
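The area-wise correlation described above (log10-transform the per-area tracer sums, excluding areas whose retrograde signal falls below 10^0.5 ≈ 3.2) can be sketched as follows, with synthetic per-area values standing in for the measured data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-area tracer sums for the 117 cortical areas: retrograde
# values are modeled as a noisy multiple of the anterograde values
antero = rng.lognormal(mean=3.0, sigma=1.5, size=117)
retro = antero * rng.lognormal(mean=0.0, sigma=0.5, size=117)

# Exclude areas below the retrograde threshold, then correlate log10 values
keep = retro >= 10 ** 0.5
r = np.corrcoef(np.log10(antero[keep]), np.log10(retro[keep]))[0, 1]
```

With genuinely reciprocal labeling, `r` is high; the thresholding step prevents near-zero retrograde counts from dominating the log-scale correlation.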
We distinguished such samples by calculating the ratio between the number of areas having low values and the number with above-threshold values, excluding areas with no signals. The samples that failed this quality check exhibited lower correlation coefficients ( Figure S11B ), suggesting that the signal-to-noise ratio of the retrograde tracing data, which is determined by the sensitivity and specificity of immunolabeling, affects the correlation with the anterograde data. In two cases, we also observed substantial leakage of the retrograde AAV into the white matter, which appeared to have transduced cells via their passing fibers. We excluded these two cases from further analyses. Although the anterograde and retrograde tracer distributions were well correlated overall, there were mismatches of several types. Some mismatches were attributable to purely technical confounds or to differences in methodology. Technical confounds included gaps in data arising from the wider interval between sections analyzed for retrograde data and also misregistration, especially in distorted regions at the fringes. In addition to these technical reasons, methodological differences could lead to mismatches. One significant difference is that each retrogradely labeled cell is counted as one, irrespective of the amount of tracer contained in the nucleus. We suspect that diffuse connectivity is less efficiently labeled by the retrograde tracer and that its identification is more sensitive to experimental conditions. On the other hand, our anterograde tracing cannot distinguish passing fibers from synaptic connections and may overestimate genuine “connectivity”, especially for sparse signals. Thus, the evaluation of reciprocity using the dual-tracer system adopted in this study requires careful consideration of each case.
Lamina-profiling of the columnar and diffuse cortical projections

Inspection of signal distributions for individual columnar patches indicated various laminar preferences for these patches. Because the orientation of axonal profiles was not always orthogonal to the flatmap but was largely contained within the central cylinder of 4-fm-vox radius, we measured the maximum intensity of tracer signals within the central cylinder for each of the 50 layers. We defined this as the laminar profile of the columnar patch of interest. We performed hierarchical clustering to classify the laminar profiles of the 1,867 columnar patches. Pearson’s correlation coefficient was used to measure distance, and Ward’s method was used for the clustering. Although we selected columnar patches based on the values of layer levels 26–50, the profiles of all 50 layers were used to calculate the correlation coefficient. Figure 4A shows the original laminar profiles of the columnar patches (before averaging) aligned according to the tree structure of the hierarchical cluster analysis. We divided them into eight clusters (separated by white border lines) and averaged each of the laminar profiles shown in Figure 4A . We further defined three lamina types, “DL”, “ML”, and “UL”. The defined lamina types were color-coded blue, green, and red, respectively, for representation in flatmap format in Figure S10B . To examine the laminar profiles of the overall areal projections, including both the columnar and diffuse projections, we divided the cortical hemisphere into 117 areas and examined the averaged laminar profile in each area. When we calculated the sum of tracer signals for the 44 × 117 injection-to-target area pairs, 97% of the pairs had non-zero signals. To assess the significance of these signals, we sorted the log10 values of the summed signals and found that the decline in log10 values gradually accelerated around the 4,000th pair and dropped sharply after the ~4,500th pair.
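The laminar-profile clustering described above (correlation-based distance with Ward's method) can be sketched with SciPy on synthetic profiles. The two Gaussian template shapes below are hypothetical stand-ins for upper-layer- and middle-layer-dominant patches, and note that applying Ward linkage to a correlation distance follows common practice rather than the method's strict Euclidean derivation:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)

# Synthetic laminar profiles (patches x 50 layer levels)
levels = np.arange(50)
ul = np.exp(-((levels - 40) ** 2) / 50)  # upper-layer-dominant template
ml = np.exp(-((levels - 25) ** 2) / 50)  # middle-layer-dominant template
profiles = np.vstack(
    [ul + rng.normal(0, 0.05, 50) for _ in range(30)]
    + [ml + rng.normal(0, 0.05, 50) for _ in range(30)]
)

# Correlation distance (1 - Pearson r) with Ward linkage, then cut the tree
Z = linkage(pdist(profiles, metric="correlation"), method="ward")
clusters = fcluster(Z, t=2, criterion="maxclust")
```

Cutting the tree at a larger `t` (e.g., eight, as in the text) would yield finer clusters to be merged into the DL/ML/UL groups.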
Accordingly, we decided to use the top 4,000 injection-to-target area pairs, as the pairs with significant connections, for laminar profiling. The hierarchical clustering was performed in a similar manner to that used for the focal projections, except that we added a new group called “DL2”, which showed a very restricted signal distribution near the gray/white matter interface. Some of these signals may represent fibers of passage. At present, we cannot distinguish between synaptic contacts and fibers of passage, although we did observe bouton-like varicosities associated with some deep-layer axons by confocal microscopy ( Figure S8F ).

Occupancy and colocalization of focal (patchy) tracer signals in the striatal region

We noticed that the occupancy of tracer signals in the striatal regions varied widely among the PFC subregions when focal/patchy projections were visualized ( Figure 6A , linear view). To quantitatively estimate this feature, we set the threshold at 50% of the maximum intensity ( Figure 6C ) and computed the ratio of positive voxels in the caudate nucleus, putamen, and nucleus accumbens separately, relative to the entire set of striatal voxels, for each sample. The values in the three striatal compartments were summed for analysis by ANOVA to compare the six PFC subregions. For the colocalization analysis, the ratio of the number of overlapping voxels to the number of combined voxels was calculated for every pair of 29 injections ( Figure S12C ). Due to the patchy distribution, overlap within the same PFC subregions was low on average but varied greatly across pairs.

Detection of patches of corticostriatal projections in 3D

For detecting patches of corticostriatal projections in 3D, the 3D data were first converted to a set of XY and YZ MIP images for the detection of local maxima in each 2D image, in a similar way to that used for the detection of corticocortical columnar patches, and the 3D positions that fit both MIP images were selected as the local maxima in 3D.
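The MIP-based 3D peak detection described above can be sketched as follows: find 2D local maxima in the XY and YZ maximum-intensity projections, then keep the 3D positions consistent with both. The toy volume with two isolated bright voxels is a synthetic stand-in for tracer data:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima_2d(img, size=5, frac=0.5):
    """Pixels equal to their neighborhood maximum and above frac * global max."""
    peaks = (img == maximum_filter(img, size=size)) & (img > frac * img.max())
    return np.argwhere(peaks)

def local_maxima_3d_from_mips(vol, size=5, frac=0.5):
    """Intersect peaks of the XY and YZ MIPs to recover 3D peak candidates."""
    xy = local_maxima_2d(vol.max(axis=2), size, frac)  # peaks in (x, y)
    yz = local_maxima_2d(vol.max(axis=0), size, frac)  # peaks in (y, z)
    out = []
    for x, y in xy:
        for y2, z in yz:
            if y == y2 and vol[x, y, z] > frac * vol.max():
                out.append((int(x), int(y), int(z)))
    return out

# Toy volume containing two bright, well-separated voxels
vol = np.zeros((40, 40, 40))
vol[10, 12, 8] = 1.0
vol[30, 25, 20] = 0.9
peaks = local_maxima_3d_from_mips(vol)
```

In the full pipeline, nearby detections would additionally be merged (within a six-voxel distance, as stated in the text), and fiber-bundle regions would be masked or deselected.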
When two patches were detected within a six-STPT-vox distance in 3D, we merged them for simplicity. The detected patches were projected onto the XY, YZ, and XZ MIP images for visual inspection and accurate detection. Tracer intensities were standardized to the maximum values within the striatal region for each sample, and the minimum height value for patch detection was set at half the maximum value. In the detection of caudate patches, we found that strong signals in the Muratov bundle or internal capsule were sometimes included and selected as patches. In such cases, we either masked those regions for re-standardization or simply deselected them. An example of such patch detection is shown in Figure S12D . The distribution of these patches could generally be well approximated by an elongated ellipsoid ( Figure S12D , red ellipses), consistent with the previous observation that macaque corticostriatal projections are longitudinally aligned. 41

Prediction of corticostriatal patch distribution patterns by polynomial regression model

To find regression models that can predict the projection coordinates from the injection coordinates, we tested the polynomial regression model with degree 2 and searched for the optimal fit. As predictor variables, we used the x, y, and z coordinates of injections in the STPT template space. As response variables, we used the x, y, or z coordinates of the average positions of the detected patches within STPT-vox intervals that roughly correspond to AP = +13.5, +14.5, +15.5, +16.5, and +17.5 in the Paxinos atlas, and we visualized the corresponding lines in the caudate.

QUANTIFICATION AND STATISTICAL ANALYSIS

Statistical tests

We generally used 8, 6, 5, 2, 4, and 4 samples for dlPFCv, dlPFCd, dmPFC, FP, ACC, and OFC, respectively, for subregion comparisons. Injections positioned near the borders of these subregions were excluded from such comparisons. Values are reported as mean ± standard deviation (SD) throughout the manuscript.
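The degree-2 polynomial regression described above (predicting projection coordinates from the x, y, z injection coordinates) can be sketched with an explicit quadratic feature expansion and least squares; the injection coordinates and the quadratic "ground truth" below are synthetic:

```python
import numpy as np

def poly2_features(X):
    """Degree-2 expansion of (x, y, z): [1, x, y, z, x2, y2, z2, xy, xz, yz]."""
    x, y, z = X.T
    return np.column_stack([np.ones_like(x), x, y, z,
                            x * x, y * y, z * z, x * y, x * z, y * z])

rng = np.random.default_rng(4)
inj = rng.uniform(-1, 1, size=(44, 3))  # 44 injection coordinates

# Hypothetical response: one target coordinate as a smooth quadratic map
target = 2.0 + inj[:, 0] + 0.5 * inj[:, 1] ** 2 + rng.normal(0, 0.01, 44)

F = poly2_features(inj)
coef, *_ = np.linalg.lstsq(F, target, rcond=None)
pred = F @ coef
r2 = 1 - np.sum((target - pred) ** 2) / np.sum((target - target.mean()) ** 2)
```

One such model would be fitted per response coordinate; the robustness of the fit would then be checked by the permutation test described earlier in the STAR Methods.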
The correlation coefficients (r) in Figures 5G , 5H , S8B , and S11 refer to the Pearson correlation. Those in Figure 6H refer to Spearman’s rank correlation. R² for the regression models refers to the coefficient of determination in Figures S4D , S4E , and S8C . The p-values were calculated using one-way ANOVA, with Tukey’s post hoc test for significant factors in the ANOVA, for Figures 2E , 3C , 6D , and 6G .
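The subregion comparison can be sketched with SciPy's one-way ANOVA, using the sample sizes given above; the group means are hypothetical, and a post hoc Tukey HSD test (e.g., via statsmodels) would follow a significant ANOVA:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)

# Sample sizes per PFC subregion, as given in the text; means are made up
sizes = {"dlPFCv": 8, "dlPFCd": 6, "dmPFC": 5, "FP": 2, "ACC": 4, "OFC": 4}
means = {"dlPFCv": 10, "dlPFCd": 12, "dmPFC": 8, "FP": 9, "ACC": 15, "OFC": 11}
groups = [rng.normal(means[k], 1.0, n) for k, n in sizes.items()]

stat, p = f_oneway(*groups)  # one-way ANOVA across the six subregions
```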
RESULTS

STPT imaging in the marmoset brain

A key to our project was the implementation of serial two-photon tomography (STPT) 28 – 30 in the marmoset brain. STPT captures high-resolution serial section images in accurate 3D coordinates. Combined with enhanced GFP expression in a Tet-based adeno-associated virus (AAV) vector system, we achieved highly sensitive and detailed volume imaging of the entire brain, not attainable by conventional methods. This is illustrated by the reconstruction of columnar axonal spread for an exemplar dorsolateral PFC (dlPFC) injection ( Figures 1A – 1D : arrow and arrowheads 1–3; Videos S1 and S2 ). Depending on section obliquity relative to the radial axis, a single section may capture most of a projection column ( Figure 1B ) or only part of it ( Figure 1A ), but in all cases serial sections revealed the full 3D pattern ( Figures 1C and 1D ). Registration fidelity to the common template space 26 was estimated to be 100–200 μm ( Figure S1A ; see STAR Methods ), which enabled reliable automatic anatomical annotations ( Figure S1B ) and integration across multiple datasets. The slice interval was set at 50 μm, thereby enabling smooth conversion of cortical layers to a stack of flatmaps (“flatmap stack”) ( Figure S1C ) even in regions of cortical curvature (e.g., frontal pole, dorsomedial convexity) (see Figure S1E for distortion of intracortical areas). We separated the axon signals from background noise (e.g., lipofuscin granules; Figure S1F ) using a machine learning-based algorithm, achieving more than five orders of magnitude range of signal intensity ( Figure S2F ). This wide intensity range is consistent with previous retrograde studies in macaques 31 and marmosets. 32 The distribution of the log10 values of projection intensity and its relationship to distance from the injection site along axonal trajectories was similar but not identical to that in previous studies (see Figure S2 ; also see STAR Methods ).
Pseudocolor scaling of logarithmic values revealed both the convergence of axons into patches and diffusely spread weak signals ( Figures 1E – 1G ). The columnar nature of these exemplar patches is particularly evident in an oblique 3D view of PFC ( Figure 1G ). Importantly, the patches were not randomly scattered in the tangential domain but were mostly grouped into rows, consistent with a stripe-like arrangement 13 if the data were analyzed at lower resolution (see also Figure 2A ). Figures 1H and S1G show the estimated locations of injection sites on the cortical flatmap, which includes a 117-area cortical parcellation ( Figure S3 ) based on an architectonic analysis of a single hemisphere. 5 , 24 The areal boundaries provide a useful reference frame, but our regional analysis was based on clustering the atlas areas into six larger PFC “core” subregions identified by geographic location, shaded in different pastel colors in Figure 1H and assigned geographic labels as indicated in the figure and legend. Figure 1I represents the overlay of projections from the six core PFC subregions using linear scaling. The projections from different subregions were largely segregated in the three posterior association area fields and had very different weightings, with dorsolateral PFC-dorsal (dlPFCd) projections (green) dominating the cingulate field, dorsolateral PFC-ventral (dlPFCv) projections (red) dominating the parietal field, and orbitofrontal cortex (OFC) and anterior cingulate cortex (ACC) projections dominating the temporal field. There is a quasi-orderly topographic representation in each of the association fields (see below and Figures S1H and S1I ). Using this coverage, the distribution patterns and organizing principles of the patchy and diffuse projections are examined in detail below.
Columnar projections recapitulate PFC topographic gradients locally in each association field

To characterize axonal convergence patterns in the cortex in greater detail, we searched for local maxima in the flatmap ( Figure 2A ) and identified 1,867 patches of strong labeling from all injections combined. To assess how consistently these patchy patterns are aligned as columns parallel to the radial axis, we binarized the tracer signals for morphological assessment. We found that more than half (54%, 1,008/1,867) of the binarized 3D clumps were solitary (disconnected from surrounding label) and were reasonably well approximated by ellipsoids. Of the 1,008 solitary patches, 569 (56%) had ellipsoids whose major axes were within 10° of the estimated radial axis ( Figure 2B , green histogram and below the dotted line in the scatter plot on the right); we consider all of these to be oriented radially. In the remaining 44%, most patches were insufficiently elongated to identify a clear axis of orientation, and most of them represent isolated patches in superficial layers ( Figure 2B , red dots in the scatter plot on the right). For the 859 “connected” cases (46%), the averaged data suggested that most of them were associated with radially oriented signal convergence, especially in the upper layers (ULs) ( Figure 2C ). Indeed, a translucent display of UL portions of all patches aligned as a single stack ( Figure 2D ) indicated that patches were consistently narrow in diameter (~8–10 voxels) for both solitary (“Sol”) and connected (“Con”) cases. This corresponds to a ~672–840 μm diameter in the flatmap stack (see Figure S1E ). We conclude that the submillimeter connectivity patches, both local and long-distance (including those projecting outside PFC), predominantly have a radially oriented columnar architecture, although intensity profiles can vary considerably across layers, and many patches are restricted to superficial cortical layers.
Accordingly, we refer to these as “columnar patches.” Each injection generated a variable number of columnar patches within and outside the frontal cortex ( Figure 2E , one-way ANOVA, p < 0.05). Within the frontal cortex, frontopolar cortex (FP), dlPFCd, and OFC injections averaged ~40–60 patches/injection, whereas dlPFCv, dorsomedial PFC (dmPFC), and ACC injections had significantly fewer. Outside the frontal cortex, dlPFCd, dlPFCv, and FP injections averaged ~10–20 patches/injection, and the others had fewer still. A map of the total number of columnar patches (circle diameter) and the frontal/total ratio (circle hue) for each injection locus showed that the total numbers were highest for dorsolateral injections and that high frontal ratios occurred mainly in orbitomedial subregions ( Figure 2F ), suggesting regional differences in the number and distribution of columnar projections. Furthermore, the size of the solitary patches and the solitary ratio (solitary/all) differed across PFC subregions: in dlPFCv, dlPFCd, and OFC, the average patch size was relatively small, and the average solitary ratio was relatively high, whereas the converse was the case for dmPFC, FP, and ACC ( Figures S4A – S4C ). These observations suggest qualitative and quantitative differences in the interactions of each area with nearby and long-range neuronal populations. Importantly, the distribution of columnar patches in the cingulate, parietal, and temporal association cortices was systematically related to the location of the injection sites ( Figure 2G ). In the cingulate and temporal cortex, the columnar patches from all six subregions reflect key aspects of PFC topography. Specifically, we could infer the mean target positions in each region from the injection coordinates using polynomial regression models ( Figures S4D and S4E ). These models indicate that PFC topographic gradients are represented as shown in Figure 2H .
The projections to the parietal cortex differed in that columnar patches arose only from dorsolateral injections (dlPFCd, dlPFCv, FP: red, green, cyan, Figure 2G ). A seed analysis using all injections showed segregated projections from the anterior and posterior compartments of the dorsolateral frontal lobe, including premotor cortex (PM) ( Figure S4F ). We also found columnar patches in early auditory areas (the core and belt) originating from two portions of dlPFCd and dlPFCv ( Figure S4G ). We propose that the patchy projections from the PFC take parallel pathways toward these extra-frontal fields, where they recapitulate key aspects of PFC topography in region-specific ways. Figures S4H and S4I show that these topographic projections are impressively similar to those reported for macaque lateral PFC. 25 We also found orderly relationships in the distribution of columnar patches within the frontal cortex ( Figures S5A – S5C ). The frontal patches from each injection were distributed over a quasi-elliptical region extending up to 1 cm tangentially. The mean shortest inter-patch distance was ~1 mm ( Figures S5D – S5G ), similar to but modestly larger than that reported in the macaque (500–600 μm). 13 , 16 We also found that the pattern of projections to the contralateral hemisphere was strikingly similar to that of the ipsilateral projections, especially in the frontal cortex, as illustrated by comparing an ipsilateral image to a mirror-flipped contralateral image adjusted for global intensity differences ( Figure S5H ). Although the overlay of the contra- and ipsilateral projections was often slightly offset, comparison with shuffled data indicates a largely symmetric pattern of bilateral projections ( Figures S5I – S5K ). Patchy projections appear to be much less prominent in the mouse PFC ( Figure S6 ), suggesting they may have a distinctive role in the primate brain.
Diffuse projections recapitulate PFC topographic gradients globally and locally

Figure 3A shows a highly distributed pattern of weak anterograde labeling, visible using log scaling and gray-scale encoding, for injections in dlPFCv (top) and dmPFC (middle); Figures S7 and S8A show additional injections. Diffuse projections were even observed in regions lacking any columnar patches ( Figure 3A , green arrows), suggesting that the regional specificities of the patchy and diffuse projections are not identical. To analyze the spatial patterns of the diffuse projections, we performed a nonnegative matrix factorization (NMF) analysis. Figures 3B and 3C show the results of NMF, which generated four common components (basis images) (NMF_W1, 2, 3, and 4) and associated coefficients (coeff. 1, 2, 3, and 4). The reconstruction from these values was in general well correlated with the original images (average correlation = 0.93) ( Figure S8B ). The four basis images can be grouped into two quasi-orthogonal pairs on the flatmap ( Figure 3B ; separated by a vertical blue bar). A dorso-medial (NMF_W1) vs. ventro-lateral (NMF_W4) pair is reminiscent of the dual origin concept proposed by Pandya and coworkers. 7 An anterior/rostral (NMF_W2) vs. posterior/caudal (NMF_W3) pair resembles the antagonism between the “apex transmodal network” and the “canonical sensory-motor network” (primarily visuo-motor and distinct from the somato-motor network) previously described in the marmoset 33 based on retrograde tracer data. 6 The subregion-specific patterns are reflected in the coefficient values for these components, namely, coeff. 1 through 4 ( Figure 3C , one-way ANOVA, post hoc Tukey test, *p < 0.05). Notably, dlPFCv was the only subregion with high coeff. 3 values, indicating that NMF_W3 was nearly unique to dlPFCv. In the other cases, 2, 3, or 4 subregions had substantial coefficients, suggesting that the positional gradients of injections strongly influence these coefficients.
To visualize the differential contributions of the four parameters in a simpler form, we combined the antagonistic pairs by subtraction, retaining most of the original information in the positive and negative domains ( Figure 3D ). Similarly, we combined the coefficients by subtraction ( Figure 3E ). This data conversion highlighted the orthogonal trends of NMF_W1 through W4 ( Figure 3F ). Using a 2D color index strategy 25 ( Figure 3G ), these trends were represented in a single map ( Figure 3H , right panel). A similar strategy converted the coefficient values of each injection into a hue representation ( Figure 3H , left panel). The gradual progression of hues suggests graded changes in the combination of coefficient values according to injection location. Indeed, we were able to model their relationship by polynomial regression ( Figure S8C , right two panels). Since the projection patterns reflect the coefficient values, we infer that each source region mainly projected to target regions similar in hue (compare the left and right panels of Figure 3H ). Given the presence of global gradients for the diffuse projections, we quantified the patch intensity and patch numbers and found that the patchy projections also show similar gradients ( Figures S8D and S8E ). In comparison, when we examined the center of mass of the diffuse projections, it showed a modest but genuine bias ( Figure S9A ), albeit not as pronounced as that for the patchy projections. Furthermore, NMF analyses using only the local values successfully visualized the recapitulation of the PFC layouts in the cingulate and temporal areas, although some variability was observed in the distribution of the coefficient values ( Figures S9B – S9G ). We suggest that PFC gradients are mapped globally across the cortical hemisphere to determine the overall projection patterns and are also mapped locally in each association field to determine the positioning of both patchy and diffuse projections.
Columnar and diffuse projections have different laminar profiles

The laminar profile of axon terminals has been used to infer hierarchical relationships between connected areas. 34 – 37 To investigate this issue in our data, we classified the laminar patterns of columnar patches by hierarchical clustering ( Figures 4A and S10A ). Based on the stereotypical patterns reported previously, particularly for the visual cortex, one might suspect that clusters targeting upper and deeper layers (grouped as ULs and DLs, respectively) represent feedback connections, whereas those that target widely across the middle layers (MLs) might be feedforward or lateral (horizontal) connections. We obtained eight clusters that were combined into DL, ML, and UL groups, with the majority being the ML type (green), and examined their areal distribution patterns ( Figure 4B for #47; Figure S10B ). These examples showed a tendency for similar laminar types to cluster, albeit with considerable intermingling. Next, we examined the laminar profiles of the diffuse projections. As regions of interest (ROIs) for profiling the continuous diffuse patterns, we used the 117-area parcellation ( Figure S3A ). We selected 4,000 injection-to-target area pairs out of the 44 × 117 possible combinations (78%) as putatively positive (see STAR Methods ) and classified them by hierarchical clustering into four types: UL, ML, DL, and DL2 ( Figure 4C ). The incidence of ML laminar types for diffuse terminations was much lower than for columnar patches in the frontal cortex ( Figure 4C , upper panel) and even more so outside the frontal cortex ( Figure 4C , lower panel), as is also apparent for the exemplar injection (#47, Figure 4D ). In many areas, the area-wise laminar types differed from the dominant patch types, presumably because strong but restricted columnar signals in the MLs were dwarfed by diffuse, widespread projections in DLs ( Figures 4E and 4F ).
Figure S10F compares the percentage of the dominant laminar types of the columnar patches with that of the area-wise measurement (left panel), which showed a decrease in the ML type and an increase in the DL and DL2 types. Importantly, some areas are reciprocally connected with one another by DL pathways in both directions ( Figure S10G ). If these represent genuine inter-areal connections and not just fibers of passage, this pattern would be inconsistent with a traditional hierarchical model in which DL represents a feedback pathway (see Discussion ).

Reciprocity of corticocortical projections determined by anterograde/retrograde double-tracing

It is well established that most but not all inter-areal corticocortical connections are reciprocal, 8 , 15 but the degree of spatial precision largely remains to be determined. Thus, we investigated whether both patchy and diffuse projections in the marmoset PFC are associated with reciprocal projections. To test this, we co-injected a non-fluorescent retrograde tracer in 14 cases and compared its distribution pattern with that of the anterograde tracer by immunostaining. Figure 5 shows our main findings using case #80 as an exemplar. The retrograde signals exhibited striking colocalization with the anterograde signals by visual inspection ( Figure 5A ), in densitometry ( Figure 5B ), and in cross-correlation analyses ( Figure 5C ). Retrogradely labeled neurons occurred in all cortical layers except layer 1 in many patches ( Figures 5A and 5B ) but could be concentrated in DLs outside patches ( Figure 5E ). Strong anterograde columnar patches were consistently associated with moderate to strong retrograde labeling. We also observed a widespread but sparser pattern of retrogradely labeled neurons in locations containing diffuse anterograde label ( Figure 5E ). Thus, reciprocal connectivity applies to both patchy and diffuse projections ( Figure 5D ).
To quantify these relationships, we projected the retrograde data into a flatmap for comparison with the anterograde data ( Figure 5F ). We confirmed that strong anterograde signals were consistently associated with clusters of retrograde signals (e.g., Figure 5F , white arrows, compare both panels). We also observed the colocalization of retrograde signals with the sparse diffuse anterograde signals in the temporal and cingulate cortex (cyan arrowheads). To quantify this reciprocity, we integrated the total amount of anterograde and retrograde tracer signals in each cortical area and plotted their correlations ( Figure 5G ). The correlation was high (r = 0.94) in the frontal cortex (red dots), where strong columnar patches were abundant, and lower but still robust (r = 0.76) when non-frontal areas (open circles), where diffuse signals dominated, were included. Similar results were found in the other 13 dual-injection cases, supporting the generality of these observations ( Figures 5H and S11 ). Furthermore, we also found a generally strong correlation of area-wise anterograde/retrograde labeling for the contralateral projections ( Figures S5L and S5M ).

Corticostriatal projections also consist of patchy and diffuse projections

PFC has massive unidirectional projections to the striatum, constituting the first step of the cortico-basal ganglia-thalamo-cortical loop that plays an integrative role in goal-directed behaviors. 38 , 39 Anterograde tracer injections in the macaque showed that the corticostriatal projections include both focal and diffuse projections, 40 which may correspond to the patchy and diffuse projections in our study. In marmosets, we also observed dense patches of anterograde label surrounded by sparser and more widespread anterograde label; here, we characterize these patterns in detail using an analysis strategy similar to that applied above to corticocortical projections.
The striatum includes the caudate nucleus (Cd), putamen (Pu), nucleus accumbens (Ac), and tail of the caudate nucleus (Cdt), which are implicated in different functional circuits. 39 Typical termination patterns for three exemplar pairs of injections in these structures are shown using linear scaling ( Figure 6A ) to emphasize the strong patchy projections, and in a log-scale view ( Figure 6B ) to emphasize the diffuse spread of weaker projections. In the linear view, we observed multiple patches along the rostrocaudal axis, mainly within the Cd, consistent with previous observations in macaques. 41 The patches were discrete for the dlPFCd (#29, green) and dlPFCv (#42, red) injections ( Figure 6A , left panel) and were more distributed for the other injections, particularly for the A25 injection (#81, yellow) that primarily targeted the Ac ( Figure 6A ; see also Video S3 ). In the log-scale view, the tracer spread widely, and there was extensive overlap across the different injections ( Figure 6B , lower panels). The patchiness of corticostriatal projections was evaluated by measuring the signal spread at a threshold of half the maximum value ( Figures 6C and 6D ). Strong signals were concentrated in the Cd, except for the A25 injection (targeting Ac) ( Figure 6D ). The projections from dlPFCv were the most focal among the subregions, whereas those of dmPFC and ACC were the broadest, occupying more than 7-fold as much space. The location of these projections varied systematically within the Cd, with little overlap between adjacent subregions ( Figures 6E and S12C ). As with the corticocortical projections, we could predict the centers of patches from the injection coordinates using polynomial regression models ( Figures S12D – S12G ). These observations support the importance of topography in determining the specificity of patchy corticostriatal projections, consistent with previous studies in the macaque.
40 , 42 The axonal terminations included fine axonal fibers with bouton-like varicosities ( Figures S12A and S12B ). To characterize the distribution of diffuse projections, we performed NMF analyses and generated four components ( Figure 6F ). The NMF_W1 component, spanning most of Cd plus the anterior Pu, was strongly represented in all PFC subregions ( Figure 6G , coeff. 1), suggesting that the Cd is an important target for all the PFC subregions. Other striatal regions (e.g., Pu, Ac, and Cdt) also received projections from some subregions of the PFC (compare Figures 6F and 6G ). Interestingly, the profiles of these coefficient value sets were similar to those of the corticocortical projections ( Figure 6H ; Spearman’s rank correlation test). These observations strongly suggest that the global patterns of corticocortical and corticostriatal projections are governed by similar PFC topographic gradients, albeit in a nonlinearly skewed form. Visualization by the color indexing strategy confirmed that the PFC gradients were recapitulated globally in the striatum ( Figure S12H ).
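Two quantitative steps described above, the half-maximum spread measure of patchiness and the NMF decomposition of diffuse projections, can be sketched as follows. This is a minimal illustration under our own assumptions (a flattened signal array; Lee-Seung multiplicative updates for the factorization), not the authors' actual code.

```python
import numpy as np

def halfmax_spread(signal, voxel_volume=1.0):
    """Patchiness index: volume of voxels whose projection signal
    exceeds half the maximum value. Focal (patchy) projections yield
    a small spread; broad projections a large one."""
    signal = np.asarray(signal, dtype=float)
    return np.count_nonzero(signal >= 0.5 * signal.max()) * voxel_volume

def nmf(X, k, n_iter=300, seed=0):
    """Nonnegative matrix factorization by Lee-Seung multiplicative
    updates: X (injections x voxels) ~ W @ H with W, H >= 0.
    Rows of H are spatial components; W holds per-injection
    coefficients (cf. the NMF_W components and coefficient sets)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Comparing rows of `W` across injections is the kind of coefficient-profile comparison (e.g., by Spearman rank correlation) used above to relate corticocortical and corticostriatal patterns.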
DISCUSSION

Topographic gradients in the primate brain

We found that both patchy and diffuse projections recapitulate PFC topography in their local and global patterns ( Figure 7A ). Whereas the patchy projections showed segregated parallel streams to extra-frontal association areas, the diffuse projections spread widely and overlapped extensively. PFC projections are considered a mixture of these projection patterns ( Figures 7A and 7B ). Topographic organization has been proposed as a key organizing principle for various PFC extrinsic projections 34 , 39 , 41 , 43 – 46 and functional properties. 47 – 49 A recent study of electrical microstimulation (EM)-fMRI reported a topographic mapping of lateral PFC functional connectivity in five association cortex regions. 25 Our data suggest that a similar topographic mapping of anterograde connectivity occurs in at least three cortical regions in the marmoset. Furthermore, it likely involves orbitomedial PFC and applies to corticostriatal connections as well. Precise correspondences between macaque and marmoset association areas have yet to be fully established, despite many cytoarchitectonic and connectional similarities. 5 , 24 The presence of gradients that are similar but not identical in the temporal, parietal, and cingulate fields provides important insights into both conserved and divergent connectional architecture across primate species. Macroscale network organization in humans may share features in common with marmosets, 33 , 50 suggesting the generality of dual global/local topographic organization across the primate lineage. Although we consider our evidence for topographic mapping to be compelling, we observed considerable diversity in patch distributions (e.g., Figures S5A – S5C ). Considering the variability even in the same individual in EM-fMRI mapping, 25 mesoscale connectivity may include a stochastic component.
Alternatively, individual differences that remained after registration may account for the variability we observed.

Functional implications of PFC connectivity patterns

The concept of “parallel distributed networks” has been used to characterize subdivision-specific connectivity patterns in the macaque PFC 45 as well as parallel interdigitated networks in fMRI studies of individual human subjects. 51 We consider this concept applicable to the patchy projections in our study ( Figure 7B , right panel, green and magenta columns). An intriguing question is whether each column in PFC is a discrete entity that has well-defined borders with minimal overlapping inputs and outputs with neighboring columns ( Figure 7C , left). Alternatively, a column identified by one tracer injection might partially overlap and be “blended” with a column identified by a tracer injected in a different location ( Figure 7C , right). Our observation that anterograde and retrograde columnar patches were in most cases precisely coextensive ( Figures 5A – 5C ) is consistent with the discrete columnar model but does not prove it. If further experiments indeed confirm the discrete columnar model for PFC, a host of fascinating questions arise: how many columns are in each PFC area? How many other columns does a given PFC column project to and receive inputs from? Are there gaps between neighboring columns, or is the PFC completely tiled by a mosaic of columns? How do neighboring columns differ in connectivity and function, gene expression patterns, and/or cell type composition? Recent advances in anatomical tracer methods, 52 , 53 submillimeter fMRI in monkeys 54 and in humans, 55 , 56 and spatial transcriptomics 57 will likely yield important insights regarding these issues. A discrete columnar system in PFC would differ fundamentally from columnar systems in the visual cortex that have been intensively studied, particularly in the macaque (see introduction ).
An iso-orientation domain in V1 is in essence a thin ribbon that winds through the cortical sheet, but the ribbon lacks a well-defined thickness because orientation preferences change continuously rather than in discrete steps. An ocular dominance stripe in V1 has a finite thickness (width) related to eye-specific geniculocortical terminations, but along the length of a stripe, features such as orientation and spatial frequency appear to be mapped continuously. 10 , 58 Area V2 has a tripartite arrangement of thick stripes, thin stripes, and interstripes and putative columnar systems for representing multiple dimensions, including orientation, binocular disparity, and hue (reviewed in Vanni et al. 9 and Sincich and Horton 59 ). In higher visual areas, modular organization has been reported in area V4 related to color and shape and in the inferotemporal cortex related to faces, bodies, color, disparity, and objects. 11 , 60 – 62 However, we are not aware of compelling evidence in any extrastriate visual area for discrete columns at a submillimeter scale of the type hypothesized for PFC above and previously. 12 – 16 On the other hand, columnar patches found in premotor, cingulate, parietal, and temporal areas in our study and in macaque anterograde studies 63 may well reflect a similar columnar system. Multi-electrode recordings in macaque lateral PFC suggest a spatially non-monotonic tangential correlation structure 64 to which patchy projections might contribute. In contrast to the patchy PFC columnar system, diffuse projections spread widely and overlap extensively ( Figure 7B , mixed color area in the top and bottom layers). An important question is the degree to which anterograde signals in DLs represent fibers of passage rather than just axonal terminations. In this regard, our confocal microscopy observations suggest the presence of boutons along axon fibers in DLs ( Figure S8F ). 
The dual tracer experiments involving DL and DL2 domains suggest a modest but non-zero incidence of reciprocal connections ( Figure S11C ). Thus, while we acknowledge that fibers of passage may be common, diffuse projections appear likely to contribute functionally relevant connections. Such connections might have significant modulatory effects when a large population of neurons exhibit coordinated activity. Resting-state fMRI in marmosets suggests the existence of such coactive networks involving frontal areas, including a candidate for the default mode network. 17 , 20 This raises the intriguing question of the degree to which diffuse vs. patchy projections arise from and/or terminate on separate vs. overlapping neuronal populations. Neural recordings from the monkey PFC suggest that computations in the PFC emerge from the concerted dynamics of large populations of neurons, 65 in which multidimensional activities may be superimposed. 66 , 67 The anatomical features we found for PFC might contribute to the segregation and integration of population activities. For example, the reciprocal columnar architecture is well-suited for forming recurrent networks. As discussed above, neural computations might occur in completely or partially segregated parallel networks with modulation through diffuse connectivity ( Figure 7C ). It is also important to know how lamina-specific connections contribute to the network organization. We find it notable that the laminar profile of projections commonly differs for the patchy and diffuse projections ( Figure 7B ). Laminar profiles have been considered to reflect the directionality of information flow and hence the hierarchical organization of cortical areas. 8 , 35 , 37 Our data suggest inconsistencies with a traditional hierarchical scheme for PFC organization, but the issues of laminar profiles and hierarchical organization will benefit from further analyses using more extensive datasets, which are currently ongoing. 
Progress on this front may provide insights as to how layer-specific cell types contribute to the formation of PFC circuits. 13 , 16 Another fundamental issue is the need for a more accurate cortical parcellation of marmoset PFC, incorporating multimodal data including gene expression data. Finally, we note that our marmoset PFC connectivity database offers many exciting opportunities to perform fine-grained analyses that were not previously possible and should contribute to our understanding of PFC structure and function in primates.
AUTHOR CONTRIBUTIONS

Conceptualization, A. Watakabe, D.C.V.E., and T.Y.; methodology, A. Watakabe, M.T., and J.H.; software, A. Watakabe, H.S., M.F.R., A. Woodward, and R.G.; investigation, A. Watakabe, J.H., and J.W.; resources, A. Watakabe, M.T., H.M., H.S., M.F.R., A. Woodward, and R.G.; data curation, A. Watakabe, H.S., and A. Woodward; writing – original draft, A. Watakabe; writing – review & editing, A. Watakabe, H.A., K.N., N.I., H.S., D.C.V.E., H.O., S.I., and T.Y.; funding acquisition, K.N. and T.Y.; supervision, T.Y., S.I., and H.O.

SUMMARY

The prefrontal cortex (PFC) has dramatically expanded in primates, but its organization and interactions with other brain regions are only partially understood. We performed high-resolution connectomic mapping of the marmoset PFC and found two contrasting corticocortical and corticostriatal projection patterns: “patchy” projections that formed many columns of submillimeter scale in nearby and distant regions and “diffuse” projections that spread widely across the cortex and striatum. Parcellation-free analyses revealed representations of PFC gradients in these projections’ local and global distribution patterns. We also demonstrated column-scale precision of reciprocal corticocortical connectivity, suggesting that PFC contains a mosaic of discrete columns. Diffuse projections showed considerable diversity in the laminar patterns of axonal spread. Altogether, these fine-grained analyses reveal important principles of local and long-distance PFC circuits in marmosets and provide insights into the functional organization of the primate brain.

Graphical Abstract

In brief

In this article, Watakabe et al. perform extensive tracer mapping of the marmoset PFC, finding two types of projections (patchy and diffuse) to be topographically arranged in the cortex and striatum. Fine-grained analyses enabled by this new resource deepen our understanding of local and long-range connectivity of the primate PFC.
Supplementary Material
ACKNOWLEDGMENTS

We thank N. Hasegawa and RIKEN ARD/RRD for marmoset care. We thank Drs. Kathleen Rockland, Nenad Sestan, and Takuya Hayashi for critical reading of the manuscript. We thank the RIKEN CBS-Olympus Collaboration Center for the technical assistance with confocal image acquisition. This work was supported by the program for Scientific Research on Innovative Areas (grant no. 22123009) from MEXT, Japan; the program for Brain Mapping by Integrated Neurotechnologies for Disease Studies (Brain/MINDS: JP15dm0207001 to T.Y. and JP19dm0207088 to K.N.) from AMED, Japan; JSPS KAKENHI grant nos. JP22H05154 and 22H05163 to K.N.; the Cooperative Study Program of Exploratory Research Center on Life and Living Systems (ExCELLS; program no. 19-102 to K.N.); and NIH grant R01 MH-060974 to D.C.V.E. This work was made possible in part by software funded by the NIH: FluoRender: Visualization-Based and Interactive Analysis for Multi-channel Microscopy Data, 1R01EB023947-01, and the National Institute of General Medical Sciences of the National Institutes of Health under grant nos. P41 GM103545 and R24 GM136986.
CC BY
Neuron. 2023 Jul 19; 111(14):2258-2273.e10
PMC9284965
35840902
Background

Coronaviruses are large, enveloped, single-stranded RNA viruses found in humans and other animals, such as dogs, cats, bats, chickens, cattle, pigs, and birds. These viruses have the potential to cause respiratory, enteric, hepatic, and neurologic diseases. The most common coronaviruses in clinical practice are 229E, OC43, NL63, and HKU1, which typically cause common cold symptoms in immunocompetent individuals and account for 15% to 30% of common cold cases [ 1 , 2 ]. Two other strains, the severe acute respiratory syndrome coronavirus (SARS-CoV) and the Middle East respiratory syndrome coronavirus (MERS-CoV), are associated with severe respiratory disease and were responsible for the first significant coronavirus outbreaks [ 2 , 3 ]. On December 21, 2019, a novel coronavirus was identified in hospitalized patients with pneumonia in Wuhan, China. Genetic analysis revealed that this novel coronavirus fits into the genus betacoronavirus. Further phylogenetic analysis showed that the SARS-CoV-2 virus belongs to the subgenus Sarbecovirus and that it is more similar to two bat-derived coronavirus strains, bat-SL-CoVZC45 and bat-SL-CoVZXC21, than to known human-infecting coronaviruses, including SARS-CoV [ 3 , 4 ]. Because seasonal coronaviruses are regarded as mild upper respiratory pathogens with a known peak prevalence during December–March each year in the U.S. (coinciding with the winter respiratory virus season), molecular testing is not frequently performed in the clinical outpatient practice, and it is reserved for surveillance purposes [ 5 ].
However, because of the increased availability of molecular test methods and the adoption of sCoV testing as part of routine multiplex diagnostic screens, particularly for patients with severe respiratory illness or admitted to critical care units where a precise microbiologic diagnosis is more clinically relevant, it is now possible to recognize and characterize the associated disease spectrum of severe sCoV infections and compare it to that of COVID-19 [ 5 , 6 ]. The clinical presentation, diagnostics, and outcomes of patients with COVID-19 have been well described in multiple case series and cohort studies [ 7 – 10 ] and compared to hospitalized patients with other respiratory viruses [ 11 – 14 ]. Nevertheless, there is limited data on how COVID-19 compares clinically to seasonal coronaviruses (sCoV). Unlike SARS-CoV and MERS-CoV, SARS-CoV-2 carries the potential to become a recurrent seasonal infection; hence, it is essential to compare the clinical spectrum of COVID-19 to the existent endemic coronaviruses in an attempt to help clinicians distinguish both entities during potential co-circulation throughout winter seasons and guide further management [ 5 , 15 , 16 ]. Thus, this study compares the clinical characteristics, course, and outcomes of hospitalized patients with COVID-19 with hospitalized patients with sCoV infection.
Methods

Design, setting, and participants

This cross-sectional retrospective cohort study included 380 hospitalized adult patients (18 years or older) with sCoV or COVID-19 across four AMITA Health hospitals located in the Chicago metropolitan area. A total of 190 patients hospitalized with pneumonia (ICD-10-CM Code J18.9), upper respiratory tract infection (ICD-10-CM Code J06.9) or lower respiratory tract infection (ICD-10-CM Code J22), and a positive respiratory viral panel (BioFire® FilmArray Respiratory Panel) for sCoV from January 1, 2011, to March 31, 2020, were identified by the Electronic Health Records department and thus, no sample size calculation was performed. Those patients were compared with 190 patients randomly selected from a de-identified dataset that included 313 hospitalized adult patients with molecularly confirmed new-onset symptomatic COVID-19 (Abbott RealTime SARS-CoV-2 assay or Abbott ID NOW COVID-19 assay) admitted from March 1, 2020, to May 25, 2020.

Definitions

Respiratory failure was defined as room air oxygen saturation less than or equal to 90% or using any means of supplemental oxygen associated with shortness of breath. Sepsis and septic shock were defined according to the 2016 Third International Consensus Definition for Sepsis and Septic Shock [ 17 ]. Acute kidney injury (AKI) was diagnosed according to the KDIGO clinical practice guidelines [ 18 ], and acute respiratory distress syndrome (ARDS) was diagnosed according to the Berlin Definition [ 19 ]. Troponin leak was defined as non-ACS cardiac troponin elevation above reference range levels [ 20 ]. The severity of COVID-19 illness and sCoV infections was defined and unified according to the National Institutes of Health guidelines for the management of COVID-19 [ 21 ].
Other definitions include: residents of long-term care facilities as residents of group, board and care homes, assisted living facilities, nursing homes, or continuing care retirement communities; neurocognitive impairment as any dementia, Parkinson’s disease with cognitive impairment, intellectual disability, or cerebral palsy; altered mental status as any alteration in alertness, orientation or level of consciousness; immunosuppression as patients on daily dose ≥ 20 mg of prednisone or equivalent, active chemotherapy, immunotherapy, immunomodulators (immunosuppressants), or patients diagnosed with any hematological neoplasia.

Data collection

Clinical data were manually extracted and collected by the investigators via retrospective chart review from an electronic medical record system (Epic). Information collected included demographic data, medical history, underlying comorbidities, symptoms, signs, laboratory findings, imaging studies, treatment measures, survival to hospital discharge (survivors), and in-hospital death or referral to hospice (nonsurvivors). A 10% random sample was re-abstracted to ascertain agreement and monitor calibration. We calculated a Cohen’s kappa for each categorical variable and intraclass correlation coefficient for continuous variables included in the analysis. The mean (SD) Cohen’s kappa for categorical variables was 0.85 (0.15), with a percentage agreement of 94%, indicating a strong level of interrater agreement. The mean intraclass correlation coefficient for continuous variables was 0.94 (0.08), indicating excellent interrater reliability. The study was approved by the Institutional Review Board of AMITA Health System (2021-0180-02). The Ethics Commission waived the requirement for informed consent, given that this research involves no more than minimal risk to participants.
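For illustration, Cohen's kappa for a pair of raters' categorical codes can be computed as below. This is a generic sketch of the statistic, not the authors' code, and the example labels in the usage are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement)
    divided by (1 - chance agreement). Chance agreement comes from
    each rater's marginal label frequencies."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_chance = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)
```

For example, `cohens_kappa(["AKI", "no AKI"], ["AKI", "no AKI"])` gives 1.0 (perfect agreement), while agreement no better than chance gives a kappa near 0.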
Statistical analysis

Descriptive statistics were used to summarize the data; categorical variables were described as frequency and percentages, and continuous variables were described using median and interquartile range (IQR) values. Non-normal distribution was confirmed with the Shapiro–Wilk test. We used the Mann–Whitney U test, Chi-squared test, or Fisher exact test to compare differences between patients with sCoV infection and COVID-19 when appropriate. An exploratory unconditional multivariable logistic regression model with generalized estimating equations with exchangeable correlation structure correcting standard error estimates for site-level clustering was used to assess differences in case-fatality between patients with sCoV infection and participants with COVID-19 [ 22 ], adjusting for age, residence (home or long-term care facility [LTCF]), do-not-resuscitate/do-not-intubate (DNR/DNI) status and quick Sequential Organ Failure Assessment (qSOFA) score. We opted to fit these variables into the model based on clinical knowledge and previous literature. A two-sided alpha of less than 0.05 was considered statistically significant.
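As a small illustration of the nonparametric comparison named above, the Mann–Whitney U statistic can be computed from rank sums, with midranks for ties. This is a generic sketch, not the authors' code; a p-value would additionally require a normal approximation or exact tables.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples,
    using midranks for tied values."""
    pooled = sorted(list(x) + list(y))
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of 1-based ranks i+1..j
        i = j
    n_x, n_y = len(x), len(y)
    u_x = sum(ranks[v] for v in x) - n_x * (n_x + 1) / 2
    return min(u_x, n_x * n_y - u_x)
```

Completely separated samples give U = 0; heavily overlapping samples give U near n_x * n_y / 2.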
Results

Demographics and baseline characteristics

The median age of the base cohort was 72 years (IQR, 59.0–83.0 years; range 21–98 years) and 203 (53.4%) were male. Among patients with sCoV infection, the Human Coronavirus (HCoV)-OC43 was the most common coronavirus with 47.4% of the cases, followed by HCoV-HKU1 (20.5%), HCoV-229E (17.4%), and HCoV-NL63 (14.7%) (Fig. 1 ). Baseline characteristics, disease severity, and inpatient case-fatality rates were not significantly different between each sCoV, except for a significantly higher rate of inpatients with CoV-HKU1 and a history of COPD and a significantly higher rate of patients with CoV-229E who required IMV (Table 1 ). When comparing demographics and baseline characteristics between inpatients with sCoV and COVID-19, both groups were of similar age; more patients with sCoV infection were female, White, and admitted from home, while patients with COVID-19 were more likely to be male and admitted from an LTCF. Of note, more patients with COVID-19 were admitted with DNR/DNI orders (Table 2 ). The proportion of patients with two or more comorbidities, obesity and a history of smoking was not significantly different between patients with sCoV infection and COVID-19. However, patients with sCoV infection presented higher rates of cardiovascular disease, history of malignancies, COPD or asthma, and immunodeficiency, whereas patients with COVID-19 presented higher rates of diabetes and neurocognitive disorders (Table 2 ).

Clinical presentation and interventions

Upon presentation to the hospital, more patients with sCoV infection reported chills and cough, while more patients with COVID-19 reported fever, anosmia, and diarrhea. The rates of shortness of breath were not different between groups. Clinically, patients with COVID-19 presented higher rates of altered mental status, higher body temperature, and lower blood pressure than patients with sCoV infection (Table 2 ).
Patients with sCoV infection presented a higher white blood count, while patients with COVID-19 presented higher serum creatinine levels and blood urea nitrogen (Table 2 ). Between patients with sCoV and COVID-19, there were no differences in the rates of leukopenia (white blood cells < 4.0 × 10 9 /L, 6.3% vs. 9.5%; p = 0.254), lymphopenia (lymphocyte count < 0.6 × 10 9 /L, 71.6% vs. 78.9%; p = 0.096), or thrombocytopenia (platelet count < 150 × 10 9 /L, 13.2% vs. 19.5%; p = 0.096). On imaging, a larger proportion of patients with sCoV infection showed no acute findings or unilateral opacities, whereas more patients with COVID-19 were found to have bilateral or diffuse opacities (Table 2 ). With regards to interventions (Table 3 ), more patients with sCoV infection were placed on nonrebreather masks (12.1% vs. 6.3%) and noninvasive ventilation (13.2% vs. 1.1%) in the emergency department. On the other hand, more patients with COVID-19 were placed on high-flow nasal cannula (8.9% vs. 0.5%) and humidified high-flow system (3.7% vs. 0%). A similar proportion of patients required invasive mechanical ventilation (IMV) on presentation and later during the hospital stay. Both groups of patients with sCoV infection and COVID-19 were administered similar rates of steroids (45.3% vs. 43.7%) and antibiotics (95.8% vs. 91.1%). A larger proportion of patients with COVID-19 required vasopressors (16.8% vs. 10%), neuromuscular blockers (17.9% vs. 0.5%), and prone positioning (11.1% vs. 1.1%).

Outcomes

Regarding inpatient outcomes (Table 3 ), patients with sCoV infection and COVID-19 developed similar respiratory failure rates. Patients with COVID-19 presented higher rates of sepsis, AKI, and ARDS. A higher number of individuals with sCoV were found to have co-infective organisms than individuals with COVID-19. Rates of mild and moderate illness were similar among both groups of patients on presentation, but significantly more patients with COVID-19 presented with severe disease.
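Proportion comparisons like those above (e.g., leukopenia, 6.3% vs. 9.5%) are typically tested with a 2×2 chi-square test. The sketch below is a generic implementation for illustration only (the counts in the test are hypothetical), not the authors' statistical code.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (df = 1, no continuity correction)
    for a 2x2 table [[a, b], [c, d]]; returns (statistic, p-value).
    For df = 1 the survival function reduces to erfc(sqrt(x/2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p
```

For small expected cell counts, the Fisher exact test named in the methods would be used instead.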
The time from symptom onset to discharge or death was not significantly different between patients with sCoV infection and COVID-19. However, patients admitted with COVID-19 had a longer hospital stay than patients with sCoV. Rates of intensive care unit (ICU) admissions were similar between both groups; however, more patients with sCoV were successfully extubated and successfully discharged from the ICU than patients with COVID-19. The inpatient case fatality rate was significantly higher in patients with COVID-19 compared with patients with sCoV infection. In the unconditional logistic regression model with generalized estimating equations, patients with COVID-19 presented a significantly increased risk of death compared to patients with sCoV infection (adjusted Odds Ratio [aOR] 3.86, Confidence Interval 1.99–7.49; p < 0.001) (Table 4 ). We performed three sensitivity analyses. First, using an automated variable selection procedure, we performed a backward stepwise (likelihood ratio) logistic regression to compare our variable selection model based on current evidence of known risk factors associated with viral respiratory infection severity with an automated variable selection model. Covariates with the greatest P-value were progressively removed until only covariates with a P-value less than 0.10 remained in a block with significant improvement of fit compared to the previous block. In this model, COVID-19 remained as a significant predictor of death compared with sCoV infection (aOR 3.42 [1.76–6.63]; p < 0.001). Second, we adjusted the regression model with a propensity score that was calculated by saving the predicted probabilities of a logistic regression with COVID-19 or sCoV infection as the dependent variable and age and sex as independent variables, then adjusted the backward selection regression model by including the predicted probabilities as a covariate.
The backward selection regression model was also performed with the logit of the predicted probabilities as a covariate. Lastly, given the lack of a standardized protocol regarding when to order a respiratory multiplex panel by PCR within the Integrated Healthcare System, there is an inherent selection bias towards patients with more severe sCoV infection, as physicians tend to order this panel for patients with severe respiratory infections where a precise microbiologic diagnosis is more important. Thus, we performed a subgroup analysis with a model that only included patients admitted to the ICU. Again, COVID-19 carried a significantly greater risk of death compared to sCoV infection (aOR 5.42 [2.08–14.08]; p = 0.001) (Table 4 ).
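The propensity-score step described above (regress group membership on covariates, then carry the predicted probabilities, or their logit, into the outcome model as a covariate) can be sketched minimally as follows. This is an illustration under our own assumptions (plain gradient-ascent logistic regression on a hypothetical toy covariate), not the authors' actual modeling code.

```python
import numpy as np

def propensity_scores(X, group, n_iter=500, lr=0.5):
    """Fit logistic regression of group membership (1 = COVID-19,
    0 = sCoV) on covariates X (e.g., age, sex) by gradient ascent;
    the fitted probabilities are the propensity scores."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (group - p) / len(X)  # log-likelihood gradient
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def logit(p):
    """Logit transform of the propensity scores, the alternative
    covariate used in the sensitivity analysis."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.log(p / (1 - p))
```

Either `propensity_scores(...)` or `logit(propensity_scores(...))` would then be appended as an extra column in the outcome regression.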
Discussion

This retrospective cohort study examined the characteristics and clinical outcomes of hospitalized patients with sCoV infection compared to patients with COVID-19. Patients with COVID-19 presented a higher case fatality rate and an almost fourfold increased risk of death than patients with sCoV. Interestingly, the rates of ICU admission and IMV use were not significantly different. However, more patients with sCoV were extubated and were more likely discharged from the ICU than patients with COVID-19. Seasonal coronaviruses are usually associated with mild upper respiratory illness in adults and are not a considerable public health burden [ 16 ]. However, elderly individuals and immunocompromised hosts can sometimes develop life-threatening bronchiolitis, pneumonia, and even neurological infection (hCoV-OC43) [ 2 ]. In one study of community-acquired pneumonia requiring hospitalization among U.S. adults, the incidence of coronaviruses in individuals 80 years of age or older was similar to that of Streptococcus pneumoniae [ 23 ]. In addition, previous studies have linked common respiratory viruses, including sCoV, with COPD exacerbations, asthma exacerbations, and worsening cardiovascular disease [ 24 – 27 ]. In our cohort, many patients with sCoV were initially admitted for exacerbation of a pre-existing condition, namely heart failure exacerbation and COPD or asthma exacerbation, and were later found to have a sCoV infection; coronaviruses were likely responsible for disease aggravation, as demonstrated by the significantly higher proportions of patients with sCoV infection and underlying cardiovascular disease, obstructive pulmonary disease, and immunodeficiency in comparison to patients with COVID-19. In contrast, most patients with SARS-CoV-2 infection were admitted primarily due to COVID-19 and its complications.
The clinical spectrum of hospitalized patients with SARS-CoV-2 infection has been mainly compared to SARS, MERS, and other pandemic viruses [ 28 , 29 ]; nevertheless, our data shows significant differences with these viruses and important similarities with hospitalized patients with sCoV infection. For instance, although all coronaviruses can affect persons in all age groups, hospitalized patients with COVID-19 and sCoV infection were found to be older (median age 69 and 74 years, respectively). In contrast, previous series reported younger populations affected by SARS and MERS (median age 39 and 56 years, respectively) [ 30 – 35 ]. COVID-19 and MERS affected more male patients, while sCoV and SARS affected predominately female patients. Overall, SARS series reported fewer patients with pre-existing underlying conditions (10 to 30%) [ 30 – 32 ], while in MERS series, 50 to 96% of patients were reported to have at least one underlying condition [ 33 – 35 ]. Similar to MERS series, more than 80% of hospitalized patients with sCoV and COVID-19 had two or more underlying comorbidities in our cohorts. For COVID-19, sCoV, and MERS, the most common presenting symptoms included fever, cough, and shortness of breath, while in SARS series, fever and cough were more prominent relative to shortness of breath [ 30 – 35 ]. Leukopenia on admission was less common in our cohort of patients with sCoV (6.3%) and COVID-19 (9.5%) compared to previous MERS (14–42%) and SARS (25–35%) series [ 34 , 35 ], whereas lymphopenia rates were similar in patients with sCoV (71.6%), COVID-19 (78.9%), and SARS (68–85%) in comparison to MERS (34%) [ 35 ]. As expected, rates of bilateral or multifocal infiltrates at admission were overall higher in patients with COVID-19 (61.6%), SARS (29–45%), and MERS (26–80.3%) than in patients with sCoV infection (30.5%) [ 30 – 34 ]. 
The rates of ICU admission among patients with sCoV (35.3%) and COVID-19 (32.1%) in our cohorts were higher than in SARS series (20–26%) but lower than in MERS series (78–89%) [ 30 – 33 , 35 ]. Overall, the rates of IMV were higher in MERS series (24.5–80%), followed by our cohort of patients with COVID-19 (19.5%), SARS series (13.8–21%), and our cohort of patients with sCoV infection (14.2%) [ 30 – 35 ]. Case fatality rates were higher in series of hospitalized patients with MERS (20.4–65%), followed by our cohort of hospitalized patients with COVID-19 (34.7%), SARS series (3.6–13.6%), and our cohort of hospitalized patients with sCoV infection (11.6%) [ 30 – 35 ]. Considering all patients, including outpatients and inpatients, the estimated case-fatality rate of COVID-19 is around 1–3%, 9.5–15% for SARS, and 34.4% for MERS. The overall case-fatality rate for seasonal coronaviruses is not well described [ 28 , 29 ]. However, using data from the Underlying Cause of Death tool in the CDC Wide-ranging ONline Data for Epidemiologic Research (CDC WONDER) Online Database and the National Respiratory and Enteric Virus Surveillance System (NREVSS), we estimated a rough case fatality rate of 0.27% (108 deaths from unspecified coronavirus illness reported between the years 2014–2017 in the CDC WONDER Online Database and 39,588 cases of HCoV reported to the NREVSS during the same period) [ 5 , 36 ]. Compared with respiratory pathogens other than coronaviruses, COVID-19 shares some similarities but also has a unique disease spectrum. In a study by Shah et al., similarly to our results, most comorbidities, medications, symptoms, vital signs, laboratories, treatments, and outcomes did not differ between patients with and without COVID-19. However, patients with COVID-19 were more likely to be admitted to the hospital (79% vs. 56%, p = 0.014), have more extended hospitalizations (median 10.7 days vs. 4.7 days, p < 0.001), and develop ARDS (23% vs.
3%, p < 0.001), and were unlikely to have co-existent viral infections compared with patients with an acute respiratory illness other than COVID-19 [ 11 ]. Furthermore, Spiezia et al. showed that patients with COVID-19 pneumonia had a significantly shorter clot formation time and higher maximum clot firmness (P < 0.01 and P < 0.05, respectively) than patients with non-COVID-19 pneumonia [ 12 ]. In a systematic review that compared COVID-19 to influenza, comorbidities such as cardiovascular diseases, diabetes, and obesity were significantly more frequent in COVID-19 patients. In contrast, pulmonary diseases and immunocompromised conditions were significantly more common in influenza patients, similar to our population with sCoV infection. Neurologic symptoms and diarrhea were statistically more frequent in COVID-19 patients than in influenza patients, reminiscent of our cohort of COVID-19 patients. Ground-glass opacities and a peripheral distribution were more common in COVID-19 patients than in influenza patients, where consolidations and linear opacities were described instead. Similarly, our patient population with COVID-19 also most commonly presented diffuse opacities with bilateral distribution compared with patients with sCoV infection. Lastly, COVID-19 patients were found to have significantly worse outcomes than influenza patients: they were more often transferred to the intensive care unit and had a higher rate of mortality [ 13 ]. The severity of COVID-19 compared to influenza was demonstrated again in a study by Talbot et al., where patients with COVID-19 showed greater severity and more complications, including more ICU admissions (aOR 5.3, 95% CI 11.6–20.3), ventilator use (aOR 15.6, 95% CI 10.7–22.8), seven additional days of hospital stay in those discharged alive, and death during hospitalization (aOR 19.8, 95% CI 12.0–32.7) [ 14 ]. 
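The rough seasonal-coronavirus case-fatality estimate quoted above is simply the quotient of the two reported counts. As a quick arithmetic sketch (using exactly the death and case counts cited in the text; note that the resulting proportion of about 0.0027 corresponds to about 0.27%):

```python
# Rough case-fatality estimate for seasonal coronaviruses, using the two
# counts quoted in the text: 108 deaths from unspecified coronavirus illness
# (CDC WONDER, 2014-2017) over 39,588 HCoV cases reported to NREVSS in the
# same period.
deaths = 108
cases = 39_588

cfr = deaths / cases                 # proportion of reported cases that died
print(f"proportion: {cfr:.4f}")      # as a fraction
print(f"percent:    {cfr:.2%}")      # as a percentage
```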
With the expansion of SARS-CoV-2 worldwide, the emergence of new, more transmissible variants [ 37 , 38 ], and the variable effectiveness of current vaccines against those variants [ 39 ], there is little hope for eliminating the virus from the human population. Unlike SARS-CoV and MERS-CoV, which were locally contained, SARS-CoV-2 will likely transition to endemicity and continued circulation with the other sCoVs [ 16 ]. Seasonal coronaviruses have annual circulation peaks in the winter months in the U.S., and individual species show variable circulation from year to year [ 5 ]. Recent data from the NREVSS showed that during the 2019–20 winter season, HCoV-HKU1 was the most common sCoV circulating in the U.S., followed by HCoV-NL63. In comparison, during the 2020–21 winter season, HCoV-OC43 was the most common sCoV circulating in the U.S., again followed by HCoV-NL63 [ 40 ]. In our cohort, which encompasses nine years, the most common sCoV isolated was HCoV-OC43, followed by HCoV-HKU1. Although it is not clear whether COVID-19 will become a chronic seasonal disease, numerous epidemiological studies and models have explored the relationship between COVID-19 transmission and meteorological factors. These models have shown that the infectivity of SARS-CoV-2 and the mortality of COVID-19 are greater in colder climates and that COVID-19 seasonality is more pronounced at higher latitudes, where larger seasonal amplitudes of environmental indicators are observed [ 15 , 41 ], supporting the circulation of SARS-CoV-2 as a seasonal respiratory pathogen. This study has several limitations. As mentioned before, one of the most significant limitations is the selection bias associated with the inpatient use of the respiratory multiplex panel by PCR. Since its availability and up to the writing of this manuscript, there has been no formal protocol in place within the Integrated Health System regarding when to order this test. Physicians can order the panel at their discretion. 
As a consequence, there may be a selection bias toward patients with more severe disease, with less severe cases omitted. We tried to address this issue with a sensitivity analysis including only critically ill patients. Another significant limitation is that the data of the COVID-19 population analyzed in this study were obtained during the initial wild-type (Wuhan-Hu-1) phase in the United States and before the emergence of the variants of concern that later replaced the wild-type virus, namely Alpha, Delta, and Omicron, which have been shown to have different biological, epidemiological, and clinical characteristics [ 42 , 43 ]. This was a retrospective cohort study, and clinical data were collected through electronic medical records and manual chart review; therefore, a degree of inter-rater variability is expected. In addition, the present study was observational and included populations of patients distributed at different points in time; thus, unknown risk factors and bias might have been unequally distributed between the two groups in the analysis. The subjects with COVID-19 included for analysis encompass a series of consecutively admitted patients early in the pandemic, before the use of steroids as the standard of care, the development of standardized, evidence-based management guidelines, and the widespread availability of COVID-19 vaccines, which have been shown to have a significant impact on morbidity and mortality. On the other hand, the cohort of subjects with sCoV infection included patients from a period of 9 years, during which progress in medical knowledge and patient care is expected; hence, the crude case-fatality ratio must be interpreted with caution. Finally, the analyzed population was limited to one Integrated-Delivery Health system in the Chicago metropolitan area and may have limited external generalizability.
Conclusions In conclusion, the clinical spectrum of hospitalized patients with COVID-19 is more similar to that of SARS and MERS in terms of illness severity and case-fatality rate than to that of hospitalized patients with sCoV infection. However, the demographics and baseline characteristics of patients hospitalized with COVID-19 and sCoV infection are more similar, both affecting older populations with many underlying conditions, making it difficult to distinguish the two entities solely on a clinical basis. Thus, should SARS-CoV-2 transition into an endemic virus after the pandemic, clinical findings alone may not help confirm or exclude the diagnosis of COVID-19 during high acute respiratory illness seasons. With the availability of specific COVID-19 therapies and infection prevention protocols, the respiratory multiplex panel by PCR that includes SARS-CoV-2, in conjunction with local epidemiological data, may be a valuable tool to assist clinicians with management decisions.
Background Unlike SARS-CoV and MERS-CoV, SARS-CoV-2 has the potential to become a recurrent seasonal infection; hence, it is essential to compare the clinical spectrum of COVID-19 to that of the existing endemic coronaviruses. We conducted a retrospective cohort study of hospitalized patients with seasonal coronavirus (sCoV) infection and COVID-19 to compare their clinical characteristics and outcomes. Methods A total of 190 patients hospitalized with any documented respiratory tract infection and a positive respiratory viral panel for sCoV from January 1, 2011, to March 31, 2020, were included. Those patients were compared with 190 hospitalized adult patients with molecularly confirmed symptomatic COVID-19 admitted from March 1, 2020, to May 25, 2020. Results Among the 190 patients with sCoV infection, Human Coronavirus OC43 was the most common coronavirus, accounting for 47.4% of the cases. When comparing demographics and baseline characteristics, both groups were of similar age (sCoV: 74 years vs. COVID-19: 69 years) and presented similar proportions of two or more comorbidities (sCoV: 85.8% vs. COVID-19: 81.6%). More patients with COVID-19 presented with severe disease (78.4% vs. 67.9%) and sepsis (36.3% vs. 20.5%) and developed ARDS (15.8% vs. 2.6%) compared to patients with sCoV infection. Patients with COVID-19 had an almost fourfold increased risk of in-hospital death compared with patients with sCoV infection (OR 3.86, CI 1.99–7.49; p < .001). Conclusion Hospitalized patients with COVID-19 had similar demographics and baseline characteristics to hospitalized patients with sCoV infection; however, patients with COVID-19 presented with higher disease severity and had a higher case-fatality rate and an increased risk of death than patients with sCoV infection. Clinical findings alone may not help confirm or exclude the diagnosis of COVID-19 during high acute respiratory illness seasons. 
The respiratory multiplex panel by PCR that includes SARS-CoV-2 in conjunction with local epidemiological data may be a valuable tool to assist clinicians with management decisions.
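For readers who want to reproduce this kind of comparison, a minimal sketch of an unadjusted odds ratio with a Wald 95% CI follows. The 2×2 counts are only approximate reconstructions from the reported case-fatality rates (34.7% and 11.6% of 190 patients each) and are illustrative assumptions; the reported OR of 3.86 was presumably computed from the exact, possibly adjusted, data, so the numbers will not match exactly.

```python
import math

# Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
# Counts below are approximate reconstructions from the reported
# case-fatality rates (34.7% vs. 11.6% of 190 patients each), so this
# sketch will not exactly reproduce the paper's OR of 3.86.
def odds_ratio(a, b, c, d, z=1.96):
    """a, b = deaths/survivors in group 1; c, d = deaths/survivors in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

covid_deaths, covid_alive = 66, 124   # ~34.7% of 190 (approximation)
scov_deaths, scov_alive = 22, 168     # ~11.6% of 190 (approximation)
or_, lo, hi = odds_ratio(covid_deaths, covid_alive, scov_deaths, scov_alive)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```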
Abbreviations AKI: Acute kidney injury; ARDS: Acute respiratory distress syndrome; CDC WONDER: CDC Wide-ranging ONline Data for Epidemiologic Research; COVID-19: Coronavirus disease 2019; DNR/DNI: Do-not-resuscitate/do-not-intubate; HCoV: Human Coronavirus; IQR: Interquartile range; IMV: Invasive mechanical ventilation; LTCF: Long-term care facility; MERS-CoV: Middle East respiratory syndrome coronavirus; NREVSS: National Respiratory and Enteric Virus Surveillance System; qSOFA: Quick Sequential Organ Failure Assessment; sCoV: Seasonal coronavirus; SARS: Severe acute respiratory syndrome. Acknowledgements None. Author contributions GRN: Conceptualization, project administration, data curation, writing—original draft, methodology, formal analysis. GE: Project administration, data curation, writing—original draft, writing—review and editing. TD, QZ, EH, BP1, MAYB, DPTG, CWC, BP2, TIR, VPTG: Data curation, writing—review and editing. DSBS: Data curation, formal analysis, writing—review and editing. JS: Project administration, supervision, writing—original draft, writing—review and editing. All authors have reviewed and approved the manuscript (and any substantially modified version that involves the author’s contribution to the study) and have agreed both to be personally accountable for the author’s own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. All authors read and approved the final manuscript. Funding There has been no financial support for this work. Availability of data and materials The data and materials used to support the findings of this study are available from the corresponding author upon reasonable request. The local IRB committee prohibits the release of the dataset without protocol amendments. Declarations Ethics approval and consent to participate The study was approved by the Institutional Review Board of AMITA Health System (2021-0180-02). 
Ascension Ethics waived the requirement for informed consent, given that the probability and magnitude of harm or discomfort anticipated in this research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests, and hence the study involves no more than minimal risk to participants. Ascension Ethics allowed access to data through chart review. Patient data were anonymized before use. Consent for publication Not applicable. Competing interests The authors of this manuscript have no conflicts of interest to disclose.
CC BY
BMC Infect Dis. 2022 Jul 15; 22:618
PMC10003871
36902924
1. Introduction Since the 1980s, with the rapid development of many Chinese cities, many buildings with a frame bottom floor and masonry upper floors, called bottom frame structures, have been built. Bottom frame structures are used as buildings on both sides of the street, with the lower floors used for commercial purposes and the upper floors for work and housing [ 1 ]. Compared to frame structures, bottom frame structures can save 20–25% of construction costs [ 2 ]. In general, bottom frame structures are well adapted to the current stage of economic development in China. There are similarities and differences between the bottom frame structure and the European pilotis system. In the pilotis system, the superstructure is usually a shear wall structure, which is grouted and reinforced so that the shear wall can withstand shear forces. Research on bottom frame structures began early worldwide, with Mantel [ 3 ] arguing that a flexible bottom-floor construction could improve the structure’s seismic performance. As a result, many researchers have conducted experimental and finite element studies on the seismic performance of bottom frame structures. Gao Xiaowang [ 4 , 5 ] conducted seismic tests on bottom frame structure models at 1/2 and 1/3 scale. Liang Xingwen [ 6 ] carried out a pseudo-dynamic seismic response test of a 1/2 scale model of a double bottom frame. Zheng Shansuo [ 7 ] performed three simulated shaking table tests with a 1/6 scale model. All these tests provided a detailed summary of the load-carrying capacity, seismic performance, and damage mechanism of the bottom frame structure under seismic action. As there are many factors affecting the seismic performance of the bottom frame structure, numerical methods are of great importance for studying its performance under seismic conditions. 
Li Qi [ 8 ] conducted a dynamic time-history analysis of a two-story bottom frame structure and investigated its elastic–plastic response under different seismic effects using the finite element program CANNY. Chen Jun [ 1 ] and Song Linbo [ 9 ] carried out pushover analysis and elastoplastic time-history analysis of the bottom frame structure system by building finite element models, obtaining the capacity curve and the elastoplastic response of the structure under seismic action, respectively. The abovementioned experimental and numerical methods have obtained the damage patterns, load-carrying capacities, and the laws of the continuous collapse process for various types of bottom frame structures under seismic conditions. The characteristics of these experimental and numerical studies can be summarized as follows. (1) The load-carrying capacity of bottom frame structures obtained from these studies is usually the capacity corresponding to the ultimate working condition, and the reference point for the design and construction of bottom frame structures is likewise based on that capacity. (2) The ultimate working state of a structure has the inherent property of uncertainty/randomness, so design methods based on the ultimate working state can hardly estimate a structure’s working capacity accurately for various structural and loading conditions; this has further led to empirical and statistical approaches to structural analysis and design. (3) Data obtained from experimental and numerical studies, such as experimentally measured strains and the strain energy density of the finite element model, are not fully utilized. Therefore, it is impossible to accurately estimate the seismic load capacity of the bottom frame structure based on existing theories and methods. 
The final state of a structure contains large random variations and empirical errors. Therefore, it is impossible to accurately predict a structure’s load-carrying capacity using the ultimate working state as a foothold. Based on this understanding, existing structural analyses do not attempt to reveal the general laws of the working of various structures. Experimental and numerical studies have formed a fixed paradigm [ 10 ]. In such a paradigm, the adverse effects of uncertainty in the load-carrying capacity of structures were reduced, resulting in outstanding engineering achievements. However, uncertainty in load-carrying capacity has become a bottleneck in current structural engineering research, and new theories are needed to reveal the laws embedded in the working of structures. In Zhou’s view, Newton’s and Hooke’s laws reveal the transient laws of structural working in the elastic phase, but no existing theory or law reveals the evolution of structures from the elastic–plastic phase to the damage phase. The general laws of structural work may be contained in the experimental strain and displacement data, but new theories and methods are needed to model them and find the laws. Zhou [ 11 , 12 ] has developed a structural stressing state theory and proposed a corresponding analysis method to break through the above bottlenecks. The structural stressing state theory treats the failure of a structure as an evolutionary process, which can be characterized by modeling the displacement and strain data during the loading process. The elastic–plastic branch points and the starting point of failure are then identified by detecting the abrupt changes in the evolution of the structure’s stressing state. 
In recent years, the modeling of the stressing state of dozens of structures of different materials and conditions has revealed defined elastoplastic branch points and failure points, including steel box girder bridges [ 13 ], arch supports [ 14 ], steel-tube-restrained concrete arches [ 15 ], steel frames [ 11 ], steel nodes [ 16 ] and members [ 17 ], steel tube [ 18 ] and spiral reinforced concrete [ 19 , 20 ] columns, reinforced masonry shear walls [ 21 ], and concrete airport pavement [ 22 ]. This study proposes a method for modeling the measured strain data from shaking tables of bottom frame structures based on the structural stressing state theory. The measured strain data from the shaking table of the substructure is modeled as a generalized strain energy density (GSED), and the stressing state modes (matrices or vectors) and characteristic parameters of the substructure are established based on the GSED, which are called stressing state characteristic pairs. A mutation determination criterion is applied to determine the location of the mutation points. The mutation points reveal the failure starting point and the elastic–plastic branch point during the seismic damage of the bottom frame structure, and the load corresponding to the failure starting point is defined as the structural failure load.
3. Structural Stressing State Theory and Methods 3.1. Concept and Physical Law in Structural Stressing State Theory The structural stressing state manifests the structural response at a given load level. Structural response data, such as strains and displacements, can be modeled to describe the structural stressing state. The numerical mode (matrix or vector) formed by the response data is called a structural stressing state mode, with both shape and magnitude characteristics, so that a parameter can characterize a structural stressing state mode. Together, the stressing state mode and its characteristic parameter are called the stressing state characteristic pair. According to the natural law of quantitative change to qualitative change of a system, the stressing state evolution of a structure with increasing load presents a mutation around a certain load level. This mutation feature is general for various structures under individual loading cases, so it can be called the structural failure law. The law reflects the general and essential working features of structures: (a) various structures, structural members, and specimens (for measuring material strength) undergoing a complete loading process certainly embody stressing state mutations at specific loads; (b) the stressing state mutations define the failure starting point and the elastic–plastic branch (EPB) point in the structural failure process. Both characteristic points provide physical-law-based references for improving and updating the existing design codes governed by empirical and statistical judgments. In a sense, the structural stressing state theory and the structural failure law could update the foothold (structural ultimate/peak states) of present structural analysis and design; that is, they could address the classic issue of uncertainty in structural load-bearing capacity and structural design. 
At present, structural stressing state analysis generally follows this procedure: (1) model the experimental data to obtain the basic variables expressing stressing state modes and characteristic parameters; for instance, this study transforms the tested strains into generalized strain energy density (GSED) values to describe the structural stressing state; (2) build the stressing state characteristic pair of the structure, that is, the stressing state mode and the parameter characterizing the mode; (3) detect the mutation points in the curves of characteristic parameter evolution by applying the criterion, and then verify the mutation characteristic in the evolution of the stressing state mode; (4) redefine/update the structural failure load and define the EPB load according to the structural stressing state mutation features, and provide them as references for updating and improving existing structural designs. 3.2. Modeling of Structural Stressing State The experimental strains can be modeled as the state variables to express the structural stressing state. However, the directionality of strains makes it challenging to form the stressing state mode (the vector or matrix of strains) and the parameter characterizing the mode. Therefore, a standard method is to model the strain as the scalar quantity $E_{ij} = \frac{1}{2}\varepsilon_{ij}^{2}$, normalized as $E'_{ij} = E_{ij}/E_{\max}$ (1), where $E_{ij}$ is the GSED value of the $i$-th point at the $j$-th load ($F_j$); $E'_{ij}$ is the normalized value of $E_{ij}$; $E_{\max}$ is the maximum value among the $E_{ij}$ (the load step $j$ = 1, 2, ..., $n$); and $\varepsilon_{ij}$ is the strain at the $i$-th point. Thus, the stressing state mode can be expressed by GSED values according to the analytical intentions. Of course, GSED values can be further modeled as other forms of state variables to express structural stressing state characteristic pairs. For the bottom frame model, the stressing state submodes, i.e., the matrices or vectors formed by GSED values or other variables, can be composed referring to the locations of the strain gauges. 
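The GSED transformation and normalization described in Section 3.2 can be sketched in a few lines of Python; the strain array below is a hypothetical illustration, not the test data:

```python
# Sketch of the GSED modeling step described above: tested strains
# (rows = measuring points, columns = load steps) are transformed into
# scalar GSED values E_ij = 0.5 * eps_ij**2 and normalized by the maximum
# GSED over the whole loading process. The strain values are hypothetical.
strains = [
    [10.0, 25.0, 60.0, 150.0],   # point 1, strain at four load steps
    [ 5.0, 12.0, 30.0,  80.0],   # point 2
]

gsed = [[0.5 * eps**2 for eps in row] for row in strains]    # E_ij
e_max = max(max(row) for row in gsed)                        # E_max
gsed_norm = [[e / e_max for e in row] for row in gsed]       # normalized E'_ij

# The stressing state mode at load step j is the column [row[j] for row in
# gsed_norm]; a simple characteristic parameter is its sum over all points.
E_j = [sum(row[j] for row in gsed_norm) for j in range(len(strains[0]))]
print(E_j)
```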
For instance, submodes can be built for individual members, columns, masonry walls, the 1st floor, the 2nd floor, or the frame and wall. All the submodes can be combined into the stressing state mode of the whole structure. The characteristic parameter can be the sum of the GSED-based elements in the mode. 3.3. Detection of Structural Stressing State Mutation Structural stressing state characteristic pairs will present the mutation feature at a certain load level, according to the natural law of quantitative change and qualitative change. Here, the Mann–Kendall (M–K) criterion [ 24 , 25 ] is applied to detect the mutation point in the characteristic parameter-load ( E-F ) curve. The operative steps of the M–K criterion are as follows. For the numerical sequence { E ( i )} (the load step i = 1, 2, ..., n ), a statistical quantity $d_k$ at the $k$-th load step is defined as $d_k = \sum_{i=1}^{k} m_i$ with $m_i = \sum_{j=1}^{i-1} [\,E(i) > E(j)\,]$ (2), where $m_i$ is the cumulative number of samples, adding one to the present value for each $j$-th comparison that satisfies the inequality on the right side. The mean value and variance of the statistical quantity $d_k$ are $\mathrm{E}[d_k] = k(k-1)/4$ and $\mathrm{Var}[d_k] = k(k-1)(2k+5)/72$ (3). Then, a new statistical quantity $UF_k$ is defined by $UF_k = (d_k - \mathrm{E}[d_k])/\sqrt{\mathrm{Var}[d_k]}$ (4), and the $UF_k$-F curve can be plotted. For the inverse sequence of { E ( j )} (the load step j = n , n − 1, ..., 1), the same steps, Equations (2)–(4), are carried out with the sign of the statistic reversed to derive the $UB_k$-F curve. Finally, the intersection of the $UF_k$-F and $UB_k$-F curves defines the characteristic point of the E-F curve, that is, the mutation point of the structural stressing state.
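The sequential M–K computation above can be sketched in pure Python (the UB curve is taken as the sign-reversed statistic of the reversed sequence, the usual convention for this test; the input sequence here is hypothetical):

```python
import math

def uf_curve(seq):
    """Forward sequential Mann-Kendall statistic UF_k for a sequence."""
    n = len(seq)
    uf = [0.0]                  # UF_1 is taken as 0
    d = 0                       # d_k: cumulative count of "ascending" pairs
    for k in range(2, n + 1):
        # m_k: number of earlier samples smaller than the k-th sample
        d += sum(1 for j in range(k - 1) if seq[k - 1] > seq[j])
        mean = k * (k - 1) / 4.0
        var = k * (k - 1) * (2 * k + 5) / 72.0
        uf.append((d - mean) / math.sqrt(var))
    return uf

def mann_kendall(seq):
    """Return (UF, UB); their crossing approximates the mutation point."""
    uf = uf_curve(seq)
    ub = [-u for u in reversed(uf_curve(list(reversed(seq))))]
    return uf, ub

# Hypothetical characteristic-parameter sequence with an abrupt change:
E = [1.0, 1.1, 1.2, 1.3, 1.4, 3.0, 5.2, 8.1, 12.5, 19.0]
uf, ub = mann_kendall(E)
cross = min(range(len(E)), key=lambda k: abs(uf[k] - ub[k]))
print(cross)
```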
2.4. The Experimental Results Table 6 , Table 7 and Table 8 list the inter-story displacement angles for each level of seismic acceleration used in the shaking table tests to describe the evolution of the failure mode of the bottom frame model. It should be noted that first-hand accurate data on the inter-story displacement angles for each level of acceleration were not available for this study, and the inter-story displacement angles for the three types of seismic waves presented in the tables are approximations extracted from the charts in the test report for this test.
5. Discussion The above stressing state modeling study of the bottom frame structure reveals the EPB and FS points in the failure process by establishing the stressing state mode with characteristic parameters, and further verification can be observed in the experimental phenomena listed in Table 5 . At 0.22 g, new cracks appeared close to the location of the window bottom, and the earlier cracks propagated and ran all the way through, implying that some limited local failure began to affect the structure’s normal elastic working state. Therefore, this point was characterized as the EPB point, and 0.22 g was called the EPB load. From 0.22 g to 0.40 g, the cracks under the window bottom propagated and formed a small triangular failure area; cross cracks appeared at the upper part of the side beam, and oblique cracks appeared in the masonry wall close to the pedestal of the bottom frame. The local failure quickly propagated, presenting the structural elastoplastic working state; that is, the structure worked in a state of plastic deformation accumulation, which the structural design requirements cannot allow. At 0.40 g, new cracks appeared under the window bottom, and the previous ones propagated, further extending the triangular failure area. The oblique cracks in the masonry wall close to the pedestal of the bottom frame developed, together with new cracks. The structural stressing state would mutate into another form, lose the normal working state, and start its failure. The load of 0.40 g was therefore defined as the structural failure load in structural stressing state theory. It should be stated again that 0.40 g was the structural failure starting point and the embodiment of the structural failure law. Furthermore, the EPB point could be a principle derived from the structural failure law, which might be taken as a general design principle for structures. 
So far, we can summarize that the bottom frame structure indeed presents stressing state mutation behavior at a certain seismic intensity, complying with the structural failure law, or the natural law of quantitative change to qualitative change. In other words, when the structural stressing state quantitatively develops to a certain extent, it will qualitatively mutate and present a different profile (stressing state mode) from the previous one, which the M–K criterion can detect. Combining the results of the M–K criterion and the mutation characteristics of the Δ E j -a j curve, the FS point and the EPB point can be detected. The EPB point provides a physical-law-based reference for improving the design of bottom frame structures or other structures, and might be taken as a design principle. For comparison’s sake, the structural stressing state feature provides a new foothold complying with the natural law for structural analysis and design, different from the foothold of the structural ultimate working state, which is the existing standard for structural analysis and design. The two footholds have an essential difference: one is definite and general; the other is uncertain and specific. 
In a sense, structural analysis and design have been anticipating and pursuing the former, but the foothold on the structural ultimate state would lead to the belief that there was a physical law for structural bearing capacity. This belief may have been broken by discovering the starting point of the structure’s failure process: the specific embodiment of the natural law in the structural working process. Thus, structural analysis and design could be mainly governed by the definite and general structural working law (structural failure law), that is, the structural failure starting point and the EPB point with the attribute of certainty, rather than the structural failure ending point (structural ultimate state) with uncertainty/randomness.
6. Conclusions At present, the application of structural stressing state theory to various structures has significance in both science and engineering: the scientific significance is to achieve specific scientific discoveries in the working process of a structure or a type of structure based on the natural law of quantitative change to qualitative change of a system, that is, to reveal the general and definite working law of structures unseen in existing structural analysis; the engineering significance is that it addresses the classic issue of the uncertainty of structural bearing capacity and the inconsistent design criteria of various structures. In this study, structural stressing state theory is first applied to reveal the seismic working law of the bottom frame structure, from which the following conclusions can be drawn. (1) The GSED values transformed from the experimental strain data can express the structure’s stressing state mode and characteristic parameters under seismic action. (2) The M–K criterion and the Δ E j -a j curve can find two mutation points of the stressing state: the starting point of the structural damage process and the elastoplastic branch point during the normal operation of the structure. (3) The seismic capacity of the bottom frame structure should be determined at the failure starting point of the structural damage process, and the corresponding seismic intensity can be referred to as the structural damage load. (4) The EPB point can be used as a direct reference for the design of the structure, as a design criterion extracted from the laws of nature, or the structural damage law. (5) The method based on structural stressing state theory can eliminate the typical problem of inherent randomness in the ultimate state, which can lead to explicit design criteria for the seismic load capacity of the structure. 
In addition, this study has developed a method for modeling experimental seismic strain data and analyzing the characteristics of structural seismic stressing states, enriching and developing structural stressing state theory and facilitating its further application.
These authors contributed equally to this work. As a classic issue, structural seismic bearing capacity could not be accurately predicted since it was based on a structural ultimate state with inherent uncertainty. This has led to few research efforts to discover structures’ general and definite working laws from their experimental data. This study reveals the seismic working law of a bottom frame structure from its shaking table strain data by applying structural stressing state theory: (1) The tested strains are transformed into generalized strain energy density (GSED) values. (2) A method is proposed to express the stressing state mode and the corresponding characteristic parameter. (3) According to the natural law of quantitative and qualitative change, the Mann–Kendall criterion detects the mutation feature in the evolution of the characteristic parameter versus seismic intensity. Moreover, it is verified that the stressing state mode also presents the corresponding mutation feature, which reveals the starting point in the seismic failure process of the bottom frame structure. (4) The Mann–Kendall criterion distinguishes the elastic–plastic branch (EPB) feature in the bottom frame structure’s normal working process, which could be taken as a design reference. This study presents a new theoretical basis for determining the bottom frame structure’s seismic working law and updating the design code. Meanwhile, this study opens up the application of seismic strain data in structural analysis.
2. The Shaking Table Test of the Bottom Frame Model 2.1. The Bottom Frame Model The experimental model, at a 1/5 scale ratio, was designed with reference to an actual 4-story street-facing bottom frame building, based on the Code for Seismic Design of Buildings of P.R. China GB50011-2010 [ 23 ]. The bottom frame was reinforced concrete, and the upper three stories were masonry structures. The configuration of the structural model is listed in Table 1 , and the floor plan is shown in Figure 1 . Section 2.3 below shows a photograph of the bottom frame model ( Figure 2 ). Directly applying amplitude-modulated seismic waves to a 1/5 scale model of the bottom frame structure makes it difficult to reproduce the non-linear response of the structure, and the results are distorted by resonance. Therefore, based on similarity theory, the three seismic waves and the geometrical and material parameters of the model were converted in this study, as shown in Table 2 . As long as the similarity theory is sound, the results of the 1/5 scale shaking table tests and the corresponding modeling and study results are equally reliable. In particular, the model tests based on similarity theory found that the Eigen-periods of the model were in the ratio of 1/2.23 to those of the actual structure. Therefore, by determining the seismic intensity corresponding to the bottom frame model's failure point, the actual structure's seismic eigen-periods can be calculated from this ratio. Artificial mass is added to the model to simulate the self-weight and the various dead and live loads. For the total mass of the real structure: M_T = M_beam + M_column + M_wall + M_live. For the total mass of the model: m_T = M_T × mass similarity ratio. For the mass of the model members: m = M_member × volume similarity ratio of the model member. For the total mass of the artificial weights: m_w = m_T − m. Table 3 lists the mass parameters of the individual stories and the artificial weights. 2.2.
The Experimental Plan Table 4 shows the working parameters of the shaking table and the seismic input used to test the bottom frame structure model. Since different seismic inputs considerably affect the experimental output, the seismic input for the shaking table test was selected from typical and prominent seismic records, as well as artificial seismic waves consistent with the design response spectrum in the statistical sense. This study selected the El-Centro (El), Taft (Tf), and Wolong (Wl) seismic records according to practical shaking table experience. In order to reflect the possible seismic cases, the test input the three seismic records over the same seismic period (30 s), respectively. The horizontal seismic excitation was applied along the weakest direction. According to Specification 5.1.2 in China Code GB50011-2010, the magnitudes of the Taft wave in the three directions were set as X:Y:Z = 1:0.85:0.65. Because the bottom frame model's mass did not exceed the shaking table's limit, the similarity ratio of the input seismic accelerations was set to 1. The seismic input scheme is shown in Table 5 , in which WNS means the white-noise sweep. Table 5 also presents the failure profile and a description of the bottom frame structure; the corresponding explanation is given in Section 4.3 . 2.3. The Experimental Measurement Figure 2 shows the layout of accelerometers and displacement meters on each floor according to China Code GB50011-2010. Twelve accelerometers were set to measure the accelerations along three directions (X, Y, Z). Two horizontal accelerometers and one vertical accelerometer were placed at individual points on the top floor and the ground floor. Two horizontal accelerometers were placed at the individual points A on the 1st, 2nd, and 3rd floors.
Twelve displacement meters were set to measure the displacements along three directions (X, Y, Z): two horizontal displacement meters were set at the individual points C on the top and bottom of the frame column as well as on the second, third, and fourth floors; on the 4th floor, a vertical displacement meter was set at Point B and a lateral displacement meter at Point D to verify the structural torsional response. Figure 3 shows the layout of the 32 strain gauges for measuring strains according to China Code GB50011-2010. In addition, three cameras were arranged to picture the crack propagation on the three sides. Unfortunately, only points 2~7 output strain values throughout the testing process. However, using these limited strains, structural stressing state theory and method can still present the essential stressing state features of the bottom frame structure. 4. The Stressing State Analysis of the Bottom Frame Model 4.1. The Seismic Stressing State Modeling The strain values of a point at individual moments in each time history can be modeled as GSED values by Equation (1), in which E_ij(t) and ε_ij(t) are the GSED value and the strain value at the i-th measured point, at moment t and under the j-th seismic intensity (ground acceleration magnitude) a_j, respectively. Since each seismic loading case includes the El Centro, Taft, and Wolong waves, with the El Centro and Wolong waves dominating the structural response, E_ij is defined as a measure of the stressing state of a measured point under each seismic condition. In order to make a difference calculation, the seismic response window for the modeling in this study is chosen to be 30 s. Correspondingly, the GSED value over the whole history can be obtained by accumulating E_ij(t) over the time history, in which E_ij is the GSED value at the i-th measured point during the seismic history T under the mean acceleration a_j of the j-th earthquake intensity.
In this test of the bottom frame structure model, only measured points 2~7 recorded strains over the entire seismic process; the other 26 points obtained only part of the strains. For conventional structural analysis, the strains at a limited number of points could not reflect the working features of the whole structure. However, structural stressing state analysis can effectively reflect the working features of the whole structure using the strains at some specific points, as shown in Figure 4 below. Moreover, the results obtained by selecting different measuring points for structural stressing state analysis are almost identical, because structural stressing state analysis is based on the structural failure law. Here, it is assumed that the strains at measured points 2~7 can represent the stressing state of the structure, and the stressing state mode M_j under the j-th seismic intensity a_j can be built as the vector of the GSED values E_ij at these points, in which superscript "+" denotes strains in the positive seismic direction and superscript "−" denotes strains in the negative seismic direction. Correspondingly, the stressing state characteristic parameter E_j can be taken as the sum of the normalized GSED values, E_j = Σ_i E_ij / E_j,max, where E_j,max is the maximum among the E_ij. Similarly, the stressing state submodes for the 1st and 2nd floors can be built as vectors, and the corresponding characteristic parameters can be written in the same form. This study mainly concerns the evolution of the stressing state characteristic pair with the seismic intensities, in order to find the failure starting point of the bottom frame structure. 4.2. The Stressing State Mutation Feature Structural stressing state theory characterizes the evolution and mutation of the structural stressing state by establishing the stressing state mode and characteristic parameters. As a result, there is a great deal of flexibility in how stressing state characteristic pairs are established. For example, Figure 4 shows the curves of several state variables at individual seismic intensities.
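A minimal numerical sketch of the GSED modelling and the characteristic parameter described above. The GSED form e = 0.5·E·ε² is an assumption based on common stressing-state formulations (the paper's Equation (1) is not reproduced in the text), and the strain histories below are synthetic, not the measured data:

```python
import numpy as np

# Sketch of the stressing-state modelling described in Section 4.1. The GSED
# form e = 0.5 * E_mod * eps**2 is an assumption (Equation (1) is not
# reproduced in the text); the strain histories are synthetic.

def gsed(strains, E_mod=1.0):
    """GSED value e_ij(t) at each moment for one measured point."""
    return 0.5 * E_mod * np.asarray(strains) ** 2

def characteristic_parameter(E_ij):
    """Normalized sum of the per-point GSED values at one intensity a_j."""
    E_ij = np.asarray(E_ij, dtype=float)
    return E_ij.sum() / E_ij.max()

# Six measured points (points 2~7), 30 s histories sampled at 100 Hz:
rng = np.random.default_rng(0)
strains = rng.normal(scale=1e-4, size=(6, 3000))
E_ij = gsed(strains).sum(axis=1)        # accumulate GSED over the history T
M_j = E_ij                              # stressing-state mode vector M_j
E_j = characteristic_parameter(E_ij)    # characteristic parameter E_j
```

The normalized sum (sum divided by the maximum per-point value) is one common choice of characteristic parameter in stressing-state studies; the paper's exact definition may differ.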
It can be seen that the largest state variable occurs at different measured points for different seismic intensities, which indicates that the state variable at a single measurement point is not representative of the working characteristics of the whole structure. This is because a single state variable only models the local state information of a particular measurement point and cannot reflect the global working state of the whole structure. In other words, studying a single state variable alone cannot reveal the failure characteristics of the structure under seismic conditions. 4.3. The Stressing State Features of the Whole Structure Figure 5 shows the E_j-a_j curves that characterize the overall response of the substructure model. The characteristic points P and Q are defined as the elastoplastic branch (EPB) point and the substructure's failure starting (FS) point under shaking table conditions, respectively. As shown in Figure 5 , the seismic intensities/moments of the EPB and FS points are 0.4 g/14.8 s and 0.3 g/7.4 s, respectively, and the mutations around the EPB and FS points are more evident in the ΔE_j-a_j curves. The ΔE_j-a_j curve is Z-shaped: it decreases monotonically before the EPB point, increases monotonically from the EPB point to the FS point, and then turns to a decreasing trend. The stressing state mode M_j-a_j curves corresponding to the E_j-a_j curve also show distinctive sharp points and mutations near the failure characteristic points, as shown in Figure 6 a,b. The horizontal coordinate of Figure 6 a represents the seismic intensity, and the curves represent the different measurement points; the curves show turning and cusp features near the characteristic points P and Q. The horizontal coordinate of Figure 6 b represents the different measurement points in space, and the curves represent different seismic intensities; the curves show the mode's leap near the characteristic points.
These phenomena show that modeling the structure's stressing state leads to numerical modes with significant mutation around the characteristic points, further validating the modeling method and the reasonableness of the characteristic points. 4.4. The Stressing State Features of Two Floors Figure 7 a,b show the characteristic parameter curves, that is, the E_j-a_j curves, that characterize the stressing states of the two floors of the bottom frame structure model. Combined with the results of the M–K criterion and the ΔE_j-a_j curve, it can be determined that the EPB and FS points of the 1st and 2nd floors of the bottom frame structure are almost identical to the characteristic points of the whole structure, corresponding to seismic intensities/moments of 0.4 g/14.8 s and 0.3 g/7.4 s. This indicates that, although the bottom frame structure is characterized by the flexibility of its bottom story, the masonry part of the structure still shows good overall working performance under the input seismic action. In particular, the ΔE_j-a_j curve shows more pronounced turning and cusp characteristics near the EPB and FS points of the bottom frame structure. Moreover, the highest and sharpest points of the ΔE_j-a_j curve are found near the FS point, which further demonstrates the importance of the FS point in the seismic design of the bottom frame structure.
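The M–K criterion used above to locate mutation points can be sketched as follows. This is a standard sequential Mann–Kendall (UF/UB) implementation applied to synthetic series, not the paper's measured data:

```python
import numpy as np

# Minimal sketch of the sequential Mann-Kendall (UF/UB) mutation test used to
# locate stressing-state mutation points; the series below are synthetic.

def sequential_mk(x):
    """Sequential Mann-Kendall statistic UF_k for a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    uf = np.zeros(n)
    s = 0.0
    for k in range(1, n):
        s += np.sum(x[k] > x[:k])                  # ascending pairs so far
        m = k + 1                                  # points considered
        mean = m * (m - 1) / 4.0
        var = m * (m - 1) * (2 * m + 5) / 72.0
        uf[k] = (s - mean) / np.sqrt(var)
    return uf

def mutation_point(x):
    """Index where the UF and UB curves cross (candidate mutation point)."""
    uf = sequential_mk(x)
    ub = -sequential_mk(np.asarray(x)[::-1])[::-1]
    d = uf - ub
    cross = np.where(np.diff(np.sign(d)) != 0)[0]
    return int(cross[0]) + 1 if cross.size else None
```

The crossing of the UF and UB curves marks a candidate mutation point; in practice it is read together with the confidence bounds (e.g. ±1.96 at the 5% level) before being accepted.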
Acknowledgments Thanks to Rui Zhe for help in revising this paper's language and for her excellent insights into concrete and masonry structures, which improved the paper. Author Contributions Conceptualization, L.Z. and R.L.; methodology, R.L. and B.L.; formal analysis, Z.S.; investigation, R.L. and J.K.; resources, R.L. and J.K.; data curation, Z.S.; writing—original draft preparation, Z.S.; writing—review and editing, G.Z.; visualization, J.K. and B.L.; supervision, Z.S. All authors have read and agreed to the published version of the manuscript. Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement The data presented in this study are available on request from the corresponding author. Conflicts of Interest The authors declare no conflict of interest.
CC BY
Materials (Basel). 2023 Feb 22; 16(5):1809
PMC10034578
36934759
Introduction The success of evolutionary game theory (EGT) since Maynard Smith and Price published their classic 1973 Nature paper ‘The logic of animal conflict’ [ 1 ] is well known. In the span of just a few years, it became one of the main modelling methods in the study of phenotypic evolution. It has stimulated numerous naturalists who endeavoured to use it to interpret their data. Now a genuine theory, i.e. a field of mathematical research that has enriched the other branches of game theory, it is taught in many academic programmes in theoretical biology, evolutionary biology and animal behaviour. When a method is almost instantly used, discussed and applied by several practitioners, its success is seen as unsurprising, obvious. The theory was so ‘right’, so interesting, that one comes to think that it had to spread. One only needs to celebrate the initiators for coining such a useful approach; the rest, as they say, is history. But there was nothing obvious in the rapid success of EGT. With hindsight, it is surprising that naturalists found in mathematical models and simplified computer simulations a stimulus for novel reinterpretations of their data, or for starting new work. And also remarkable that such diverse theoreticians—mathematicians, physicists, economists and biologists—decided to use these models, creating a field of theoretical research. Here, we have interviewed a cast of contributors to ascertain how they became informed of the opportunities of EGT for their own work. Our intent is to pay attention both to networks of information, and to EGT's burgeoning effect on research programmes, theoretical and mathematical. In reconstructing these developments, we pay much attention to the effective influence of John Maynard Smith, who remained, for the span of a decade or more, the main contributor, the main popularizer, and, to use a term he employed in another context, the main marriage-broker among theoreticians and naturalists. 
But we also hope to go beyond the appreciation of the contributions of a single researcher, which have rightly been celebrated (e.g. [ 2 – 7 ]). EGT can be seen as a success because a community took shape that studied and used it: here, we put its emergence under closer focus. Since one of us (G.A.P.) was involved in these developments, we will draw, often extensively, on his recollections and subjective appreciation on developments and events; for this reason, G.A.P. is referred to throughout as ‘I’, ‘me’, etc. in this article.
A method for behavioural ecology: a case study Both Brockmann's and Riechert's pathways testify to the importance of collaborative ventures in this first flurry of EGT application by fieldworkers. To analyse one's data in EGT terms, collaborating with colleagues better versed in mathematical analysis was often critical. Brockmann's collaboration in Oxford with Richard Dawkins and the then fledgling theoretical biologist Alan Grafen is particularly instructive of the lessons gained with these new tools. Studying the nesting behaviour of golden digger wasps in North America during her PhD, Brockmann investigated joint provisioning between wasps. Female wasps nest in underground burrows, which they usually dig and provision solitarily, but occasionally two females occupy the same burrow and fight whenever they meet; ultimately only one lays an egg in the shared nest, which benefits from the work of both provisioners. Brockmann interpreted this as a possible example of the evolution of social behaviour: females cooperate in establishing resources that are later monopolized by the winner. This situation offered a direct example of a primitively social behaviour leading to the evolution of eusociality, then the main ‘obsession’ of Hymenopteran sociobiology [ 77 ]. At the time of her PhD, Brockmann lacked a clear theoretical or modelling framework for analysing her data in such a way. She tried to determine the payoffs, but lacked a way forward. Reading Dawkins' Selfish gene [ 66 ] on the plane to England in 1977 for her Oxford sabbatical, she realized it contained several ideas that might apply to the unpublished chapter on joint provisioning in her dissertation. She learnt more about EGT in conversations with Dawkins during her Oxford sabbatical, and they collaborated on a joint analysis of her data, which we detail below, enlisting the recently graduated Alan Grafen, who had decided to pursue a further degree in economics instead of biology. 
Dawkins wanted to give Grafen a project that would keep him in his field. The Sphex study provided the lure. Their process of model building and data interpretation was collaborative: Brockmann provided the empirical insight and data for calculating payoffs, Grafen formulated the models and performed calculations, and Dawkins and Brockmann wrote the manuscript [ 78 ]. As an editor of Animal Behaviour , Dawkins had developed his own views on how to write a scientific paper. Rather than giving a rag-bag of ‘Methods’ encompassing several experiments followed by a similarly unstructured list of ‘Results’, he encouraged authors to write their hypothesis first, their experimental protocols, their conclusion, before turning to the next hypothesis and experiment [ 79 ]. The Sphex collaboration gave him an opportunity to apply his editorial recommendations. The paper, processed in record time by Maynard Smith for the Journal of Theoretical Biology, explains very clearly how they initially applied Brockmann's interpretation and formulated it in EGT terms, before rejecting it in favour of an alternative model with a different assumption. Their line of attack was as follows [ 78 ]. Assuming that burrows are more successful when two females collaborate, they distinguished between two different strategies for a wasp: (i) digging and founding a burrow alone, and (ii) actively joining a founder's burrow and contributing to provisioning it, in the prospect of gaining control of the burrow later. For these strategies to coexist as a mixed ESS, both should achieve equal payoffs at a frequency-dependent equilibrium. After analysis, they had to discount this model: founders did approximately twice as well as joiners. There was obviously little benefit in joining a nest once another female had begun provisioning it. Brockmann's original intuition of making it a model of social behaviour had failed (for her views on joint nesting as a preadaptation to social life, see [ 80 ]).
To account for her data, they reconsidered the role of proximate factors, especially the wasp's ability to assess the volume of larval food material in a burrow. They had assumed that a wasp knew as much as the scientist observing it: when entering a burrow, it could assess whether the burrow was already being provisioned by another female. Dropping this assumption, they considered the possibility that wasps provision burrows without ‘knowing’ whether they are provisioned or not. Then, as they note, ‘sharing’ a burrow is a regrettable consequence of having entered and provisioning a burrow. Their revised model worked for one of two populations (New Hampshire), but not the other (Michigan) [ 78 ]. Brockmann et al .'s study [ 78 ] is one of many that demonstrate how optimality theory (which includes EGT) can be employed to test hypotheses about adaptation. It is not a procedure for demonstrating that a trait is optimal, which is an assumption of the method; rather, should observations match model predictions, it suggests that the researcher may have correctly identified the selective forces operative in shaping the trait [ 81 ]. In Brockmann et al .'s case, EGT effectively changed their interpretation of field data. Accurately formulated model assumptions and predictions could be compared with empirical evidence, allowing field researchers to accept (or reject) their hypotheses. This study was praised by researchers anxious to raise standards of empirical tests of optimality theory in evolutionary biology [ 82 ]. However, the paper also showed why EGT and optimality theory were not magic keys applying to any population. A model applying well in one population did not necessarily work in another. For their non-fitting population (Michigan), Brockmann et al . limited themselves to conjecturing the presence of gene flow from other populations. 
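The frequency-dependent logic of the digger/joiner analysis can be illustrated with a toy model. The payoff functions and all numbers below are invented for illustration; they are not Brockmann et al.'s fitted values:

```python
# Toy frequency-dependent payoff model in the spirit of the digger/joiner
# analysis discussed above. Payoff functions and numbers are invented for
# illustration; they are not the fitted values of the study.

def payoff_dig(q):
    # Founders do slightly better when joiners are common (toy assumption);
    # q is the frequency of 'join' in the population.
    return 1.0 + 0.5 * q

def payoff_join(q):
    # Joining pays less the more common it is (toy assumption).
    return 2.0 - 2.0 * q

def mixed_equilibrium(f, g, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection for the frequency q* at which the two payoffs are equal."""
    assert (f(lo) - g(lo)) * (f(hi) - g(hi)) < 0, "need a sign change"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if (f(mid) - g(mid)) * (f(lo) - g(lo)) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

q_star = mixed_equilibrium(payoff_dig, payoff_join)  # 0.4 for these payoffs
```

At q* both strategies earn equal payoffs, which is exactly the equilibrium condition the wasp data were tested against.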
Interestingly, like Brockmann et al ., Hammerstein & Riechert [ 74 ] had mixed success in their long-term comparative study of the spider A. aperta living in different environments. They found a close fit with EGT predictions for one ecotype (a desert grassland population), but not for a second (living in a more favourable riparian habitat). They suggested that gene flow prevented this second population from completely adapting to its local environment. This explanation is plausible: Riechert and Maynard Smith showed that the two ecotypes differ genetically [ 83 ], and that there is indeed evidence of high gene flow in the second population [ 84 ]. This extensive line of work is a helpful reminder that, while apparently circumventing information on the underlying genetics, the empirical success of predictions based on EGT critically depends upon the opportunities, and constraints, of genetics (see §§8 and 10).
One contribution of 18 to a theme issue ‘ Half a century of evolutionary games: a synthesis of theory, application and future directions ’. Though the first attempts to introduce game theory into evolutionary biology failed, new formalism by Maynard Smith and Price in 1973 had almost instant success. We use information supplied by early workers to analyse how and why evolutionary game theory (EGT) spread so rapidly in its earliest years. EGT was a major tool for the rapidly expanding discipline of behavioural ecology in the 1970s; each catalysed the other. The first models were applied to animal contests, and early workers sought to improve their biological reality to compare predictions with observations. Furthermore, it was quickly realized that EGT provided a general evolutionary modelling method; not only was it swiftly applied to diverse phenotypic adaptations in evolutionary biology, it also attracted researchers from other disciplines such as mathematics and economics, for which game theory was first devised. Lastly, we pay attention to exchanges with population geneticists, considering tensions between the two modelling methods, as well as efforts to bring them closer. This article is part of the theme issue ‘Half a century of evolutionary games: a synthesis of theory, application and future directions’.
Evolutionary game theory: first steps The reception given to the theory of games developed by von Neumann and Morgenstern [ 8 ] in the mid-1940s provides a useful starting point. Surprising as it may seem, and as abundantly documented by historians, game theory met with only lukewarm interest among its intended audience of economists (e.g. [ 9 , 10 ]). The first researchers to make extensive use of game theory were applied mathematicians working in the new institutes of Cold War Science, especially the RAND Corporation, founded in 1948 as an advisory committee in research and development for the U.S. Armed Forces. Thus, game theory's initial success did not lie in fuelling the interest of economists, but in its ability to offer tools to other communities of investigators. One of the most instructive works on this history is Paul Erickson's The world the game theorists made [ 11 ], which reconstructs the circulation of game theory in several scientific communities over the course of the Cold War. The Cold War provided the major context for both the motivation and the funding for research on game theory. Applied mathematicians found in it a set of convenient optimizing methods for their modelling decisions in situations of uncertainty. Then, from the mid-1950s onwards, directly stimulated by the immediate context of the Cold War, researchers in social and political sciences adopted these methods to study conflicts and their resolutions. A fascinating chapter in Erickson's book [ 11 ] concerns the way evolutionary biologists adopted game theory: it was not an instant success. Erickson draws a sharp contrast between the theory of games as promoted by the first generation of population biologists in the United States in the 1960s, such as Lewontin in 1961 [ 12 ], and Slobodkin in 1964 [ 13 ], and the evolutionary theory of games fashioned by Hamilton [ 14 ] and Maynard Smith and Price [ 1 ] a few years later in the UK.
Both groups used game theory, but the former used it mainly as an analogy, which proved to be less fruitful than they initially hoped. Thus understood, game theory invaded evolutionary biology in two different waves, stimulated by very different theoretical aims, and meeting very different fates. The main difference between these approaches was the scale at which selection was assumed to act. In the first wave, populations were pictured as having strategies against the environment. Lewontin's paradigmatic 1961 game theory paper pictured genetic polymorphism as a randomizing strategy played against a changing environment. This analogy was interesting, even striking, but it failed to generate any research programme. Further, the environment can hardly be envisaged as a strategic player. Even Lewontin came to doubt that it could be a useful tool in evolution [ 11 , 15 ]. A more compelling example was given by R. A. Fisher in 1958 [ 16 ], who suggested that genetic polymorphism could represent a mixed strategy in an evolutionary game against predators. This suggestion was never adopted, and only served, in later accounts, as a forerunner to the evolutionary theory of games. By contrast, as admirably recapitulated by Erickson [ 11 ], the second wave of evolutionary game theorists, Hamilton, Maynard Smith and Price, made it a modelling method tailored for phenotypic selection: they applied game theory in terms of the behaviour of individuals . There were several examples where researchers, especially at the start of the behavioural ecology era in the late 1960s and early 1970s, were considering cases where individual fitness depended on both one's own action and the actions of others in the same population (e.g. see [ 17 ]). Some of us clearly felt the need for an approach studying selection pressures occurring simultaneously on the same individuals. For example, the requirement for such a formalism was stated explicitly in a letter to me by Robert Trivers (R.
L. Trivers 1971, personal communication to G.A.P.; see [ 18 ]): Maynard Smith and Price's [ 1 ] central breakthrough was to propose just that—a technique for analysis. They envisaged animals as players adopting strategies in an evolutionary game and sought an evolutionarily stable strategy (ESS), i.e. a strategy that, when played by the population, could not be invaded by any rare mutant strategy. While simple optimization was inadequate, their two ESS conditions permitted a form of competitive optimization suitable for analysis of inter-individual conflicts: thus EGT and behavioural ecology grew together rapidly and were synergistic, each both necessary for and simultaneously catalysing the spread of the other. John Maynard Smith, George Price and the evolutionary theory of games Maynard Smith's contribution has sometimes been downplayed, in suggestions that his role was limited to disseminating and popularizing a method invented by more creative minds, first among them W. D. Hamilton and G. R. Price, in the late 1960s. A highly distinguished, recently deceased ecologist once told me of his feeling that Maynard Smith's notable talent consisted of his sharp clarity in developing and making use of insights. It is not our intention here to challenge this appreciation by reviewing Maynard Smith's numerous creative contributions throughout his career. We limit ourselves to demonstrating how the growth of EGT as a modelling method in the 1970s was simply inseparable from Maynard Smith's inputs. Maynard Smith was not only the co-author of the 1973 paper [ 1 ] that founded EGT. Over the decade that followed its publication, he remained the main force for its growth, through both his scientific works and his ability to attract and stimulate talents. Without Maynard Smith, similar modelling methods for studying frequency-dependent selection at the phenotypic level would almost certainly have been developed: game-like approaches in newly emerging behavioural ecology (e.g. 
[ 19 , 20 ]), sex ratio theory [ 14 , 21 ] and anisogamy evolution [ 22 ] show that researchers working on interacting phenotypes needed such a method. But Maynard Smith's contribution had been precisely this: he and Price had gone beyond tackling a single problem (the evolution of animal conflicts) to generate an analytical method that could be applied to any phenotypic situation involving frequency-dependent selection. Much has been written on the collaboration between Maynard Smith and Price that generated ESS formalism [ 15 , 23 – 25 ]. Prompted by Hamilton's papers on the evolution of altruism, Price became interested in evolutionary theory and set out to construct a method for modelling the evolution of altruistic traits. In parallel, he investigated how strategies limiting damage in conflicts could evolve, resulting in a long manuscript, ‘Antlers, intraspecific combat and altruism’, submitted to Nature . Maynard Smith refereed ‘Antlers’, and wrote a favourable report suggesting cuts, and Price put his manuscript aside. Stimulated, Maynard Smith began taking an interest in the subject. In a sabbatical at the University of Chicago in autumn 1970, he mentioned to students his problems in convincing Price to publish his results (M. Slatkin 2011, personal communication to J.B.G.). Maynard Smith eventually published a few pages on the method in a popular book On evolution before the publication of their joint paper; the acknowledgements attributed the credit of the idea to ‘Dr. George Price, now working in the Galton Laboratory at University College London. Unfortunately, Dr Price is better at having ideas than at publishing them’ [ 26 , pp. vii–viii]. The long delay between Price's original submission and the publication of the joint paper deserves comment. Price wanted to improve his manuscript, met some problems in computer simulations (his main problem was to find strategies resisting small perturbations) and gradually set the project aside. 
This reflected a wider pattern characterizing Price's brief career as a theoretical biologist. After writing a text, Price could quickly lose interest in it and turn to something else. A grant application he wrote in 1969 reflects the tremendous diversity of his research interests, from altruism to sexual selection (G. R. Price, ‘Proposal to the Science Research Council’, Supplementary Details of Intended Research: On group selection, human evolution’, GRPP 84116; see [ 23 ]). But this boundless curiosity, and the intensity he brought to any problem under his consideration, had as a reverse side an ability to become detached from a problem once he had worked on it. This was indeed the only reservation Hamilton made when writing a report on Price's application: ‘there seem[ed] just a possibility that he might lose interest in the work halfway through, not care to publish results, not heed biological advice as to what were reasonable models, or some such thing’ (Hamilton to P. H. Williams, Secretary of the Biological Sciences Committee, SRC, 2 May 1969, GRPP 1 84116). With hindsight, Hamilton's intuition was remarkably prescient. A case in point is Price's dealings with his major methodological contribution, the equation that now bears his name. This equation [ 27 ] has been justly celebrated (e.g. [ 28 ]) and has proved to be a very powerful guide for framing problems in evolutionary theory (see [ 23 ]). However, tellingly, by 1973 Price was already disappointed with his own contribution. The Galton Laboratory was then producing reams of electrophoretic data on enzyme variation in humans. Now interested in the problems raised by enzyme variation, Price turned his interests to statistical tests of neutrality (e.g. [ 29 ]). 
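The equation at issue here, in its standard covariance form as usually stated in the later literature (reproduced from general knowledge, not quoted from the source), is:

```latex
% Price equation in covariance form: the change in the population mean of a
% trait z, where w_i is the fitness of entity i, \bar{w} the mean fitness,
% and \Delta z_i the transmission bias between parent and offspring.
\Delta \bar{z} \;=\; \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}
\;+\; \frac{\operatorname{E}\!\left( w_i \, \Delta z_i \right)}{\bar{w}}
```

The covariance term bundles the strength of selection together with the population's trait distribution, which is one way of reading the limitation Price described to Hamilton.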
His equation turned out to be of little help, and Price mentioned his disillusionment to Hamilton: the equation was less useful than he thought, since it did not distinguish between selection pressure and population properties (Price to Hamilton, 13 August 1973, WDHP, Z1 X 83). The subsequent history of the Price equation would deserve a separate paper; the equation was not an instant success. In his autobiography, Hamilton has explained how he managed to get Price's note published in Nature [ 30 ]; he himself used Price's method in his papers, in his lectures at UCL and, later, at the University of Michigan, but to limited immediate effect—few students seemed to appreciate it. By the late 1970s, Hamilton was wondering if finding usefulness in that approach reflected some mental twist peculiar to Price and himself (Hamilton to Jon Seger, 12 February 1981, WDHP, Z1 X 63; on Hamilton and Price's collaboration, see [ 31 ]). It was only in the early 1980s, when Seger, then a PhD student at Harvard, made use of it to model coefficients of relatedness in kin selection research, that the Price equation took life as a modelling tool [ 32 ]. So if the fate of EGT had rested solely upon Price's shoulders, would the ‘Antlers’ paper have been shortened to the point of being almost unusable (just as with his terse 1970 note in Nature introducing his equation) or would he have lost interest in it, just as he did for almost all evolutionary subjects to which he applied his talents? Asking this question allows us to better appreciate Maynard Smith's contribution. It is certainly not for nothing that he was the first author of the ‘Logic of animal conflict’ [ 1 ]. He solved computer problems that frustrated Price (see below), extended the analysis to the ‘War of Attrition’ and wrote the paper. ‘The logic of animal conflict’ investigated two models of contests. The main model was a computer simulation involving five strategies played against each other over a number of moves. 
This model later provided the basis for the simplified, one-move ‘Hawk–Dove’ game. In the second model, the War of Attrition, both opponents continue displaying or fighting until one retreats. The essential difference is that, in Hawk–Dove, costs (e.g. a serious injury) are discrete and sustained by only one opponent when both play Hawk, whereas in War of Attrition costs increase continuously for both opponents during the contest until one gives up. The main aim was to demonstrate that ‘Retaliator’, a strategy of limited aggression, can arise through individual selection; the formulation in the published version reflected a compromise between Maynard Smith's continuous advocacy of individual selection (versus group benefit), and Price's long-held view that ‘possibly many adaptations that appear to be group-benefitting and not individual-benefitting will turn out on deeper analysis to be both individual- and group-benefitting’ (Price, ‘Proposal to the Science Research Council’, GRPP 84116, op. cit. above; see [ 33 ] in the present issue for a more detailed discussion of the 1973 paper [ 1 ]). More than this, Maynard Smith made use of the method. While Price's intelligence can be compared with a bushfire, moving from one field to another, Maynard Smith, who could be similarly versatile in his interests, decided to put EGT to work. In 1974, he developed a more extensive analysis of animal conflicts [ 34 ] and began work on the theory of asymmetrical contests (see §4). Around 1975, with Eric Charnov and Jim Bull, he applied EGT to the evolution of hermaphroditism (see §9). By 1977, inspired by unpublished work started by his colleague Paul Harvey, he used EGT to model problems of parental investment, such as when it can pay to desert one's reproductive partner [ 35 ]. A vast variety of problems in trait evolution were thus amenable to the simple phenotypic EGT modelling approach. 
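The one-move Hawk–Dove game described above is simple enough to state concretely. The following is a minimal sketch (the function names and the values V = 2, C = 10 are ours, chosen purely for illustration): with resource value V and injury cost C, the standard payoffs are (V − C)/2 for Hawk against Hawk, V for Hawk against Dove, 0 for Dove against Hawk and V/2 for Dove against Dove; when C > V, the ESS is to play Hawk with probability V/C, at which point Hawk and Dove earn equal expected payoffs.

```python
# Hedged sketch of the one-move Hawk-Dove game (not the authors' original
# five-strategy simulation).  V = resource value, C = cost of serious injury.
def hawk_dove_payoffs(V, C):
    """Payoff to the row strategy against the column strategy."""
    return {
        ("H", "H"): (V - C) / 2,  # escalated fight: win or be injured, 50/50
        ("H", "D"): V,            # Dove retreats, Hawk takes the resource
        ("D", "H"): 0.0,          # Dove gets nothing against Hawk
        ("D", "D"): V / 2,        # display only; resource shared on average
    }

def mixed_ess(V, C):
    """When C > V, the ESS is to play Hawk with probability p = V/C."""
    assert C > V > 0
    return V / C

def expected_payoff(strategy, p, payoffs):
    """Payoff of `strategy` against a population playing Hawk with prob p."""
    return p * payoffs[(strategy, "H")] + (1 - p) * payoffs[(strategy, "D")]

V, C = 2.0, 10.0
pay = hawk_dove_payoffs(V, C)
p = mixed_ess(V, C)
eh = expected_payoff("H", p, pay)
ed = expected_payoff("D", p, pay)
print(p, round(eh, 10) == round(ed, 10))
```

At p = V/C the two pure strategies do equally well against the population, so neither can spread at the other's expense; this equal-payoff property is what makes the mixed strategy evolutionarily stable.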
By 1979, Maynard Smith was able to enumerate eight different problems that had already been treated with game theory [ 36 ], including inter-species competition, animal dispersal, intra-familial conflict over parental care, resource allocation in plants, hermaphroditism and the evolution of anisogamy. Three years later, his monograph Evolution and the theory of games [ 37 ] not only provided a general review of the field, but also contained many new developments. It succeeded Maynard Smith and Price's 1973 paper [ 1 ] as the main reference on the subject. From a method limited to studying animal contests, Maynard Smith had made EGT a general framework for studying selection on phenotypes. Discussing Maynard Smith's theoretical contributions would deserve a separate treatment. For instance, it would be of interest to examine his definitions of ESS and how he revised them in view of later development of stability analysis. Our aim here is rather to discuss his other effect on the growth of this field, as a powerful catalyser and disseminator. The mechanisms of his influence were subtle, and even paradoxical. In the early 1970s, Maynard Smith was the head of the School of Biological Sciences at the University of Sussex. He and his colleagues Paul Harvey, Brian and Deborah Charlesworth and Timothy Clutton-Brock constituted the nucleus of a very active group in theoretical and empirical biology, with a vibrant social life, vividly described by Marek Kohn [ 38 ]. In the 1970s, this group became a hotspot for theoretical biologists. However, Maynard Smith did not found a school of game theory at Sussex, or a centre of research entirely devoted to the study of animal conflicts. He showed limited interest in attracting funds for his work [ 2 ], and visitors were expected to find their own financial support. He even avoided accepting students intending to work specifically on game theory. 
When Michael Rose asked him to act as his advisor while working on theoretical biology, Maynard Smith declined and asked him to work with his colleague Brian Charlesworth [ 39 ]. Researchers on sabbatical at Sussex in those years do remember an extremely stimulating environment, but not a frantic hub of game theory. Why then do we argue that Maynard Smith exerted a pivotal and continuous influence over the growth of EGT? First of all, because of his talks and conferences. One can almost track the diffusion of game theory into evolutionary biology by following his talks; he became, for a time, both the main model-maker and the itinerant popularizer of the methodology. Maynard Smith was a very charismatic orator. ‘I had read his scientific papers before attending the conference’, Michael Rose remembered, ‘but they had not prepared me for his verbal powers. John could captivate an audience of scientists like Elvis Presley singing to a Las Vegas crowd’ [ 39 , p. 5]. It is a testimony to Maynard Smith's unusual ability to capture the attention of his audiences and convince them of the fertility of his area of study that three mathematicians we contacted for this paper acknowledged that their interest in EGT was a direct outcome of attending a Maynard Smith lecture. Tim Bishop, perhaps the first student in mathematics to devote a PhD dissertation to EGT, attended a UK mathematical genetics conference in early 1975 where Maynard Smith described his result on the mixed ESS for War of Attrition contests. His informal proof was enough to convince Bishop's advisor, the Sheffield mathematician Chris Cannings, that a more formal treatment was needed, setting the subject for Bishop's PhD dissertation. Both Bishop and his advisor went on to work extensively on the War of Attrition. Two other prominent examples of mathematical converts are W. G. Hines and Peter Taylor, who also made major contributions to ESS theory in the 1970s and 1980s. 
They both first heard of EGT when Maynard Smith lectured at a conference of the Canadian Society of Mathematics in 1975 and were sufficiently impressed to turn to the field (see §7). Last, but not least, we should mention Maynard Smith's ability to orientate the field through his refereeing work for science journals. Being one of the few biologists able to assess modelling work in the United Kingdom, he was frequently asked to review manuscripts on theoretical biology. With his rising reputation, he became even more central: in game theory, most papers were directed to him, either by journals for reviews or by researchers for comments. He could thus relate results obtained by researchers from different schools and helped information circulate between them. Several theoreticians have told us of their relief on reading his reviews, which clarified, to the author's benefit, the problem and the main points of papers sometimes lost in a mass of complex algebra. But reviewing papers also helped Maynard Smith to follow EGT developments, presenting him with new ideas, stimulating him to think about new questions, and opening potential collaborative ventures with other scientists. It is to these other actors that we now turn our focus. We consider who used the methods developed by Maynard Smith and Price, what their motivations were, and how their work affected the growth of the field.

Turning to theory, making models more realistic

Let us begin this survey with a personal example. In my PhD at the University of Bristol, I had made a general investigation of sexual selection in dung flies. Males gather around fresh cattle droppings to mate with and then guard gravid females as they lay their eggs in the dung. Over the course of this work, I observed males fighting for females [ 40 , 41 ]. 
Soon after, during the 2 years before publication of Maynard Smith and Price's 1973 paper [ 1 ], I had started work on a theoretical analysis of animal contests, based on individual selection and on the notion that contestants assessed asymmetries between them. This paper was in draft when I read Maynard Smith and Price's paper and its synopsis [ 26 ] in 1973. Depressed by the news (it is never great to learn that one of the best theoretical biologists in the country has just published a paper on the very same problem), I nonetheless noticed significant differences in our emphases. Maynard Smith and Price had assumed symmetry; the two contestants were equal in all respects. In my approach, I had emphasized that contestants were not equal. I had observed numerous fights between male dung flies, where asymmetries between contestants were generally obvious (e.g. ‘owner’–‘attacker’, larger–smaller). In my view, models needed to include this major feature. I distinguished between two main kinds of payoff-related asymmetries between contestants. The first concerns fighting ability, which I called ‘resource-holding power’ (RHP; thinking of male flies keeping hold on a female against competitors): animals differ in relative strength, which should affect contest outcome. The second asymmetry concerns the value of the resource ( V ): a food item, mating opportunity, or a territory cannot be assumed to be of equal value for all contestants. In essence, my view was that fighting functioned to assess these relative RHPs and V s, which determined relative fitness payoffs, and thus how long each contestant could ‘afford’ to fight. This analysis resulted in an ‘assessor rule’ for contest outcomes, relating to the benefit/cost ratios of the two opponents. Maynard Smith reviewed the paper, found it interesting, and investigated a different case: a situation where asymmetry between contestants was not related to payoffs. 
He showed that when fighting can lead to dangerous injury, a purely arbitrary (i.e. payoff-uncorrelated) asymmetry between otherwise symmetric contestants could be used to define a ‘peaceful’ (i.e. non-escalatory) solution. He published this result in the same issue of Journal of Theoretical Biology in 1974 [ 34 ], immediately before my paper on assessment strategies [ 42 ]. Maynard Smith invited me to Sussex for discussions in July 1974, and we corresponded on the topic for two years. A more substantial account has been given elsewhere [ 41 ] of our subsequent collaboration, which applied ESS logic to contests in which contestants have asymmetric costs and benefits of fighting (i.e. payoff-related asymmetries) [ 43 ]. Although at that time I had limited mathematical expertise and was unable to contribute significantly to the mathematical developments, I sent several letters to Maynard Smith during 1974 suggesting various lines of enquiry, a few of which were followed up in our joint paper, published in Animal Behaviour in 1976 [ 43 ]. Theoretical papers on contests by other authors quickly followed (e.g. [ 44 – 48 ]). One of the major developments of this area was the sequential assessment game constructed by Magnus Enquist and Olof Leimar [ 49 ]. This model develops the notion that RHP assessment is not immediate, but improves during the contest: animals increase their information about their relative RHPs during successive bouts. Although it could be claimed that some of its essence had been foreshadowed earlier [ 42 , 43 , 50 ], their very plausible analysis led to detailed and specific predictions amenable to quantitative tests. Indeed, Enquist later tested them empirically (with success) by devising contests in aquaria between males of the cichlid fish Nannacara anomala [ 51 ]. I can draw some similarities between Enquist's attitude and my own to Maynard Smith and Price's 1973 approach. 
We were both attracted by EGT's potential to develop a valid theory of animal contests in terms of individual selection, and both had to become more mathematically proficient to manipulate ESS methods. While a graduate student at the University of Stockholm in the mid-1970s, Enquist had also discovered Maynard Smith's book On evolution [ 26 ]. Enthusiastic, he started a PhD on animal contest theory (for which I later became external examiner). There was no mathematical expertise on biological issues among students of animal behaviour at Stockholm at that time, so he had to train himself in mathematical biology. He asked Olof Leimar, then a PhD student in theoretical physics, for assistance; Leimar became so interested in EGT that he too switched his PhD to the subject. Similarly, following Maynard Smith's advice, I taught myself some basic skills in calculus to use ESS methods. However, Enquist and I also reacted to what we perceived as gross simplifications in the first models, which, in our view, lacked biological realism. I have mentioned my unease about Maynard Smith and Price's assumption of symmetry, which conflicted with my intuition derived from dung fly contests. Similarly, Enquist was extremely critical of the Hawk–Dove game. An amateur naturalist since boyhood, he felt that the Hawk–Dove game did not capture how animals fight. It was only with time that he came to appreciate its value as a guide for clarifying thinking. What we wish to emphasize here is that researchers can be attracted to a method because of its perceived deficiencies: they then feel they have something to offer. Consultation of Maynard Smith's and Price's papers held at the British Library shows in retrospect that Enquist and I were in good company in questioning the realism of the early Hawk–Dove models. It transpires that Maynard Smith and Price had themselves been worried about the relevance of game theory to the analysis of conflicts in real animals. 
Price was anxious to get his facts right and searched for appropriate references in the empirical literature (G. R. Price to V. Geist, 24 March 1974, JMSP). Similarly, Maynard Smith consulted his ethologist friends for advice, but with limited success. By their own admission, his former colleagues at the University of Sussex were unimpressed (P. Slater 2011, personal communication to J.B.G.): there seemed to be a huge gap between the complex behaviours studied by ethologists and the theoretical analyses using simplified strategies. But for others, this gap was a stimulus to embrace theoretical biology. Models and further observations have challenged the basic statement Price wanted to demonstrate with game theory, that natural selection would usually lead to peaceful settlements of contests. Animals do fight, sometimes at great cost to themselves (e.g. see [ 52 – 54 ]). Furthermore, 50 years later, some theoretical results are still challenging biological intuition. Maynard Smith's 1974 demonstration that contestants could use ‘uncorrelated’ asymmetries to peacefully settle conflicts [ 34 ] is the kind of result that leaves me ambivalent, as a theoretical biologist. My natural history intuition tells me that this is unlikely, and I think Dan Rubenstein and I managed to show that it cannot occur in a War of Attrition [ 50 ] when individuals can accurately assess payoffs (see also [ 48 , 55 ]). But as a theoretician, I cannot disagree with the formal proof that it may apply in Hawk–Dove situations.

Evolutionary game theory and the rise of behavioural ecology

While ethology had mostly been preoccupied with describing an animal's behaviour patterns, behavioural repertoires and the internal system of ‘drives’, or internal states evoking them [ 56 ], the new science of behavioural ecology focused on adaptive value and represented a major change in approach (see Stuhrmann's detailed account [ 57 ], and also [ 58 , 59 ]). 
Group selection interpretations of adaptation, often implicit, were pervasive in ethology and ecology up to the late 1960s and beyond, until George Williams' famous critique favouring individual selection in 1966 [ 60 ], after which debate continued (e.g. [ 61 , 62 ]). From very early on, students of behavioural ecology saw EGT as relevant to their data, and, more broadly, as giving direction to their fieldwork (e.g. see [ 63 ]). More than that, it exemplified the research programme of studying behaviours as adaptations fashioned by natural selection. The theory was rapidly popularized. Although E. O. Wilson's Sociobiology: the new synthesis [ 64 ] in 1975 only included Maynard Smith and Price's 1973 result as a hypothesis on ritualized aggression, behavioural ecologists based in the UK laid stress on optimality and ESS approaches. No specific chapter was devoted to it in the first edition (1978) of Krebs and Davies' Behavioural ecology: an evolutionary approach [ 65 ], but the cover shows a hawk chasing doves and the Hawk–Dove payoff matrix, and the editors stressed that the method underlay (or should underlie) arguments in several chapters, including sex ratio, lek behaviours, ritualized conflicts and animal distributions. Richard Dawkins in The selfish gene [ 66 ] in 1976 promoted EGT as one of the major developments in twentieth century science. In accord with the enthusiasm of the times, I (in the second edition of Behavioural ecology [ 67 ]) praised it as the major recent development in evolutionary theory. In a nutshell, EGT found its place among the three major theories available in behaviour studies, alongside inclusive fitness and (frequency-independent) optimization theory. Two major places for the spread of new theories among students of animal behaviour were certainly the universities of Oxford and Cambridge in the United Kingdom [ 57 ]. 
Documenting the flux of information to and from these places sheds much light on the rapid growth of behavioural ecology [ 57 ]. With a strong basis of trained ethologists and population ecologists, both held vibrant seminar series to which leading researchers, such as Trivers and Maynard Smith, came to present their recent work. Students there were informed of upcoming work and were able to develop research strategies accordingly. Thus at Oxford, Nick Davies designed ingenious experiments on the speckled wood butterfly, testing the ‘owner win’ rule of animal contests [ 68 ]. While remaining the main stronghold of the more ethological approach, with its close focus on proximate factors, Cambridge also became an important centre of exchange. From 1975 to 1980, Patrick Bateson assembled a Sociobiology Research Group at King's College, which allowed considerable exchange and collaborative ventures between students of animal behaviour [ 69 ]. For example, Tim Clutton-Brock (later at the University of Sussex) and his co-workers were studying red deer fights, and discussing them in terms of EGT predictions [ 70 , 71 ]. The King's Sociobiology Group was certainly important for me. Based at Liverpool, I was (relatively) isolated from the main lines of ongoing research in nascent behavioural ecology, and depended upon correspondence, reviews and visits to follow progress. A one-year stay at Cambridge (1978–1979) was an opportunity to start new collaborations. I collaborated with Dan Rubenstein on modelling assessment of asymmetries in contests [ 50 ], and analysed data on struggles between male dung flies for females with the statistician Elizabeth A. Thompson. 
Although the distribution of dung fly contest lengths seemed roughly consistent with a symmetric war of attrition, it became clear that they were more likely to be asymmetric conflicts in which the outcome favoured the ‘owner’ [ 72 ], as Hrefna Sigurjónsdóttir (then my PhD student) and I later demonstrated [ 73 ]. Let us consider in more detail the examples of two US researchers, Jane Brockmann and Susan Riechert, whose career paths shed light on the circulation of information to researchers in North America. Both trained at the University of Wisconsin in the late 1960s to early 1970s, where Brockmann studied the behaviour of golden digger wasps, Sphex ichneumoneus , while Riechert specialized in the ecology of the spider Agelenopsis aperta . Both bodies of work initially had a strong ethological bent; they used quantitative ethological methods such as analysis of behaviour sequences. Only in later phases of their work did they reinterpret their data in the light of EGT, providing some of the best empirical applications of this approach (see §6). Personal interactions mattered in the circulation of the approach; Brockmann's and Riechert's forays into EGT depended crucially upon such interactions. After her PhD, Brockmann undertook a sabbatical in Oxford in 1977–1978. She had planned to work with David McFarland, then the leading expert on quantitative methods in ethology; McFarland being absent, she instead collaborated with Richard Dawkins, who had promoted EGT in The selfish gene . They jointly used ESS methods to reanalyse her observations on digger wasps (see §6). Later, Brockmann put Riechert, who was making a transition from ecology to behaviour, in contact with John Maynard Smith to reanalyse her own data on spider contests. 
Through Brockmann's intercession, Riechert launched an influential collaboration with Maynard Smith's former student, Peter Hammerstein, which led to a major attempt at measuring payoffs in natural populations, over a field study spanning decades (see enlightening accounts of this important work in [ 74 – 76 ]).

Beyond behavioural ecology: interdisciplinary collaborations

Maynard Smith and Price's [ 1 ] paper had developed EGT as a technique for modelling animal contests. For some years, contests were indeed one of the main areas of its application. However, researchers quickly realized that EGT had much broader applications. Research went in two directions. The first, applying it to other biological problems, was remarkably fruitful. Although several studies had foreshadowed EGT, having a simple formalism energized its rapid application to a burgeoning variety of adaptations (see §3). The second direction involved exploring the mathematical underpinnings of the method. This aim attracted a significant number of applied mathematicians. Theoretical developments in this field were thus published not only in such journals as Animal Behaviour or Journal of Theoretical Biology , but also in outlets such as Advances in Applied Probability . What did EGT have to offer to mathematicians? Although it seemed simple, ESS theory was rich in hidden complexities. Its simplicity misled the first researchers who used it. A well-known example is Maynard Smith and Price's analysis of their own game, which was questioned by geneticists from the University of Birmingham, who showed that a new ESS could be found if the Maynard Smith–Price matrix was restricted to a subset of strategies [ 85 ]. Happy with the main results, which were consistent with their general interpretation of animal contests, Maynard Smith and Price did not perceive that their own simulations were richer than initially planned. 
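The matrix analyses at issue here rest on Maynard Smith's two ESS conditions: strategy I is an ESS if, for every alternative strategy J, either E(I,I) > E(J,I), or E(I,I) = E(J,I) and E(I,J) > E(J,J). These conditions can be checked mechanically for any payoff matrix; a minimal sketch (the function name and the Hawk–Dove numbers are ours, for illustration only):

```python
# Sketch of Maynard Smith's two ESS conditions for an n x n payoff matrix A,
# where A[i][j] is the payoff to strategy i when playing against strategy j.
def is_ess(A, i):
    """Pure strategy i is an ESS iff, for every mutant j != i, either
    E(i,i) > E(j,i), or E(i,i) == E(j,i) and E(i,j) > E(j,j)."""
    n = len(A)
    for j in range(n):
        if j == i:
            continue
        if A[i][i] > A[j][i]:
            continue                      # first (strict) condition holds
        if A[i][i] == A[j][i] and A[i][j] > A[j][j]:
            continue                      # second (stability) condition holds
        return False
    return True

# Illustrative Hawk-Dove matrix with V = 4 > C = 2: Hawk is a pure ESS,
# Dove is not (a rare Hawk mutant would invade a Dove population).
V, C = 4, 2
A = [[(V - C) / 2, V],      # row 0: Hawk vs (Hawk, Dove)
     [0,           V / 2]]  # row 1: Dove vs (Hawk, Dove)
print([is_ess(A, i) for i in range(2)])  # -> [True, False]
```

Checking each pure strategy this way is straightforward; as the text goes on to note, the harder questions (whether any ESS exists in a larger matrix, and how many can coexist) required the deeper analysis begun by Haigh.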
More fundamental problems appeared when Maynard Smith and his collaborators tried to delineate the method more clearly. One of the two mathematicians drawn to EGT by Maynard Smith's talk in Canada (see §2), W. G. Hines, was initially deceived by ESS's apparent simplicity. As a statistician, he initially believed that EGT was sufficiently well established for him to build methods for estimating payoff matrices from field data. He soon realized that clarifying its mathematical foundations was still a work in progress. As he later commented: ‘it seemed to me that a field of study motivated by a wish to understand issues in theoretical biology became replaced by interest in an enjoyable diversity of questions that arose from that wish, but which gained academic lives of their own’ (Gordon Hines 2011, personal communication to J.B.G.). It was not clear, for example, whether a system must have an ESS, or how many ESSs could coexist for a given system. Nor was it clear how to find all the ESSs. And the very meaning of an ESS was obscure. For instance, was a mixed ESS a property of the population, with individuals playing possibly very different strategies, or a given set of strategies played by each individual in a population? The first investigations on these issues began in Sussex. John Haigh, Maynard Smith's applied probability colleague at the University of Sussex, studied the mathematical properties of ESS in m × m matrices ( m being larger than 2), showing that it was possible that no ESS existed, and deriving methods for finding all ESSs in a matrix [ 86 ]. Also, would a population actually converge to an ESS (e.g. see [ 87 ])? In the decade that followed, the mathematical study of evolutionary stability became a field of inquiry of its own (see summary by [ 88 ]). A major contribution was made by Peter Taylor, the other mathematician converted by Maynard Smith during this talk in Canada. 
Interested in foundational issues, Taylor was struck by an apparent anarchy: Maynard Smith and his collaborators had built different models for each situation, but was there any real mathematical unity behind them (Peter Taylor 2011, personal communication to J.B.G.)? He started to compare different ESS methods, such as Maynard Smith's and similar methods in sex ratio evolution (see §9), striving to make them a coherent whole. Taylor's solution was to incorporate EGT into the mathematical theory of dynamical systems. Here, a strategy's payoff is essentially proportional to the growth rate of those who adopt it in a population. This work uncovered major similarities with other areas of biological research. Following Taylor & Jonker's equation for game dynamics [ 89 ], Schuster & Sigmund [ 90 ] showed how the same basic replicator equation applied to four different research areas (population genetics, ecology, animal behaviour and prebiotic evolution), thus integrating EGT into the realm of ‘replicator dynamics’. What we wish to emphasize is that, just as EGT went beyond the theory of animal contests, it emerged as much more than a theory for behavioural ecology. It was a genuinely interdisciplinary field, to which mathematically orientated researchers from varied disciplines contributed. The exchanges with economists were more subtle, as expertly discussed by Grüne-Yanoff [ 91 ]. There were unquestionably interactions between economists and behavioural ecologists in the 1970s and early 1980s. The first major dialogue, and first EGT conference, was organized in Bielefeld in November 1978 by Peter Hammerstein, a young mathematician working as a theoretical biologist at the Institute of Mathematical Economics (see [ 76 ]). In addition to Maynard Smith and the game theorist Reinhard Selten (Nobel Memorial Prize for Economics 1994), delegates included several behavioural ecologists, such as Nick Davies, Richard Dawkins, Alan Grafen, John Krebs and myself. 
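Taylor and Jonker's game dynamics, mentioned above, make the link to dynamical systems concrete: each strategy's frequency grows at a rate proportional to its payoff advantage over the population average, dx_i/dt = x_i[(Ax)_i − x·Ax]. A rough sketch (our own crude Euler integration, with arbitrary step size and iteration count), applied to the Hawk–Dove game with V < C:

```python
# Sketch of replicator dynamics in the sense of Taylor & Jonker:
#   dx_i/dt = x_i * ((A x)_i - x . A x)
# integrated crudely with fixed Euler steps (dt and step count are arbitrary).
def replicator_step(x, A, dt=0.01):
    n = len(x)
    fitness = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    mean = sum(x[i] * fitness[i] for i in range(n))  # population mean payoff
    return [x[i] + dt * x[i] * (fitness[i] - mean) for i in range(n)]

V, C = 2.0, 10.0
A = [[(V - C) / 2, V],      # Hawk-Dove payoffs with V < C
     [0.0,         V / 2]]
x = [0.9, 0.1]              # start from a population of mostly Hawks
for _ in range(20000):
    x = replicator_step(x, A)
print(round(x[0], 3))
```

Starting from mostly Hawks, the Hawk frequency settles at the interior equilibrium V/C, the same mixed ESS delivered by the static analysis; this correspondence between the static and dynamic views is part of what the replicator framework helped clarify.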
Certainly a major event in EGT, the Bielefeld meeting was followed by conferences at Queen's University, Ontario, in 1982 and again in Bielefeld in 1985. Discerning the effects of conferences is a difficult undertaking. The 1978 Bielefeld conference was especially important for me: my first international conference, it enabled me to discuss my (then in press) arms race model [ 92 ] with Dawkins and Krebs, who were also working on arms races in evolution [ 93 ]. My model yielded no ESS, and Selten outlined how his ‘trembling hand’ theorem could be used to stabilize it, which I later used [ 94 ]. Though I do not remember the conference as a major basis for collaborations, a notable effect was certainly visible on the organizer, Peter Hammerstein, who cleverly enlisted Selten and Maynard Smith as advisors for his PhD. However, the short-term effects of these interdisciplinary encounters should not be over-emphasized. To my recollection (and Grüne-Yanoff makes the same general point [ 91 ]), we (the biologists) found in EGT a convenient way of framing issues, but were mostly unaware of game theory in economics. Suffice it to quote Maynard Smith's response as to why he never cited Nash: ‘Who is Nash?’ (see [ 11 ]). For their part, economists using game theory sometimes perceived EGT as redundant: that it could offer a genuinely different and rewarding approach was not immediately apparent, though links grew later and continue (e.g. [ 95 ]). Economists discussing EGT spent much effort (sometimes justifiably) relating biologists' discoveries to previous treatments by economists. EGT's main effects on economics were probably felt well after the early 1980s, possibly in the wake of interest generated by Axelrod and Hamilton's celebrated computer simulations of the Tit-for-Tat effect, which turned economists' attention to the issue of equilibrium selection [ 96 ]. Before this, it is difficult to pinpoint major collaborations or effects. 
This was precisely why Maynard Smith was impressed by Selten; according to Hammerstein, he was the first economist who did not try to demonstrate to him that EGT was just another way of doing classic game theory, but understood that it had different aims [ 76 ]. The field's uptake by mathematical modellers trained in mathematics, economics and physics meant a sharp rise in mathematical standards. Maynard Smith often remained, for these mathematicians, the biologist to be consulted over the plausibility of a model's assumptions. However, he sometimes struggled with the increasingly technical developments of this literature (Maynard Smith to Eshel, 3 December 1980, JMSP, Add. MS 86597 A), and to the mathematician Christopher Zeeman he admitted as much (Maynard Smith to E. C. Zeeman, ca 1978, JMSP, Add. MS 86749).

Evolutionary game theory and population genetics: controversy

Maynard Smith's success in raising EGT's profile represents an interesting puzzle. As a theoretical population geneticist, he was sufficiently influential in his field to attract interest among his colleagues. However, population geneticists' contributions to EGT were perhaps more limited than expected: why did so few contribute to EGT in the late 1970s and early 1980s? One reason may well be lack of interest. Population geneticists were then busy with the many problems raised by molecular data, available through the spread of electrophoretic methods. Detecting unambiguous signals of natural selection at the molecular level was very difficult (e.g. see [ 97 ]). Phenotypic selection seemed of less pressing concern. An example is Motoo Kimura, arguably the leading theoretical population geneticist at the time. Kimura reacted enthusiastically to Maynard Smith and Price's [ 1 ] paper, which he read with ‘absorbing interest’, and sent congratulations for an ‘outstanding achievement’ (Kimura to Maynard Smith, 20 November 1973, JMSP 86726). 
Kimura was not particularly prone to over-emphasis about matters unrelated to neutrality—his words can be taken at face value. Had this paper been published 10 years earlier, Kimura might have wanted to work more on the subject, as he was always on the lookout for interesting biological problems. But after 1968, he was focused on developing and defending his ‘neutral mutation–random drift’ hypothesis on molecular evolution; it left little room for explorations in other fields. Further, EGT seemed in part redundant in view of well-researched areas in population genetics. For example, the eminent Nottingham population geneticist Bryan Clarke expressed the view to me that we already had frequency-dependent selection (a concept he had had a major part in developing [ 98 ]), so why did we need ESS? Another objection was that EGT gave only a simplified understanding. Using Alan Grafen's provocative turn of phrase, EGT/optimality procedures are based on a ‘phenotypic gambit’ [ 99 ]: they study adaptations ‘as if there were a haploid locus at which each distinct strategy was represented by a distinct allele, as if [relative payoffs] gave the number of offspring for each allele, and as if enough mutation occurred to allow each strategy the opportunity to invade’. These simplifying assumptions expel complexities arising through diploid genetic machinery [ 79 , pp. 63–64]. Understandably, it became controversial among professional experts dealing with these complications. R. C. Lewontin provided a compelling example of this complex reception. As mentioned (§2), in 1961 [ 12 ] he had used game theory to tackle one of the major population genetics problems of the day—polymorphism as an adaptation to changing environments. 
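The ‘phenotypic gambit’ is easy to make literal. The sketch below (a minimal illustration, not taken from any of the papers discussed; the payoff values and the selection baseline are our own illustrative choices) treats the Hawk and Dove strategies of Maynard Smith's canonical game as alleles at a single haploid locus, with allele fitness equal to a baseline plus expected payoff. Allele frequencies then converge to the mixed ESS p* = V/C from any starting point:

```python
import numpy as np

# Maynard Smith's Hawk-Dove game; V (resource value) and C (injury cost)
# are illustrative choices, giving the mixed ESS p* = V/C = 0.5.
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],       # row 0: Hawk vs (Hawk, Dove)
              [0.0,         V / 2]])  # row 1: Dove vs (Hawk, Dove)

def haploid_dynamics(p_hawk, generations=500, baseline=5.0):
    """The 'phenotypic gambit' taken literally: one haploid locus, one
    allele per strategy, allele fitness = baseline + expected payoff."""
    p = np.array([p_hawk, 1.0 - p_hawk])
    for _ in range(generations):
        w = baseline + A @ p           # expected fitness of each allele
        p = p * w / (p @ w)            # discrete-time selection
    return p[0]

print(haploid_dynamics(0.01), haploid_dynamics(0.99))  # both approach V/C = 0.5
```

The complexities Grafen's gambit sets aside (diploidy, dominance, linkage) are exactly those that made the approach controversial among population geneticists.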
By the mid-1960s, he had lost faith in it, as part of his disillusionment with optimality methods, used extensively in ecology, for their lack of dynamical sufficiency: optimality only indicates the local optimum for a population, not whether a given population reaches that equilibrium. Lewontin also noted that fits between model predictions and given observed traits can be mere coincidence [ 100 ]. In the early 1970s, Maynard Smith informed Lewontin of the new potential of game theory, now refashioned as ESS theory. Lewontin knew Maynard Smith from symposia on theoretical biology organized by C. D. Waddington at the Villa Serbelloni, Lake Como in 1966–1967. High-powered intellects with very broad interests, superb speakers and debaters, well versed in Marxist views on science and society, they became friends. Lewontin invited Maynard Smith to the University of Chicago, and in 1972 Lewontin spent a term at the University of Sussex, where he wrote chapters of his book The genetic basis of evolutionary change [ 101 ]. Perhaps less known is that he worked with Maynard Smith on basics of ESS theory. In retrospect, this is not as surprising as it may seem. Closer in spirit to the framework of population genetics than simple optimization methods, ESS concerns analysis of stability against rare mutant strategies arising in a population. According to reports Lewontin wrote to his sponsor, the Atomic Energy Commission, he was able to prove during his stay that a mixed strategy ESS always exists when two pure strategies occur at equilibrium (Lewontin, ‘A study of mathematical models of mutation and selection in multi-locus systems’, AEC contract no. AT(11-1)-1437). Since this work remains unpublished, we do not know exactly what Lewontin demonstrated and must content ourselves with the statement that he worked on such problems. The same reports mention further investigations on game theory.
In 1975–1976, a visitor from Israel in Lewontin's laboratory at Harvard, Ilan Eshel, worked on ESS stability criteria (on Eshel, see §10). Eshel was attempting to derive an exact genetic basis for the principle of equalization of parental investment between male and female offspring. Of particular interest to Lewontin was that Eshel's results from ESS analysis ‘rarely’ corresponded to stable equilibria in genetic systems (Lewontin, AEC contract, no. E(11-1)-2472). From then on, Lewontin must have concluded that EGT models gave unreliable conclusions and made his concerns explicit in his review [ 102 ] of Maynard Smith's book Evolution and the theory of games , and in his review [ 103 ] of Maynard Smith's textbook Evolutionary genetics . These criticisms reflected a professional inclination among geneticists. Once one is used to modelling genes, it is not easy to return to the level of phenotypes. Studying genes understandably gives evolutionists the feeling that genes are the right level for investigating evolutionary problems, and population genetic formalism helps in comparing the effects of various forces (natural selection, mutation, migration and drift) on a given study system. EGT seemed restricted to the study of selection. Since phenomena other than selection could change gene frequencies in populations and even drive them to fixation, ESS formalism could seem misleadingly restrictive (see below). For dealing with phenotypes, population geneticists thus preferred honing their own methods, either through the distinguished empirical approaches of ecological genetics, or through the updated methods of quantitative genetics, which was being rejuvenated for detecting selection on continuous characters [ 104 ]. Maynard Smith was not shaken by these objections, which he instead treated as a research question, and worked on whether EGT and population genetics gave convergent results [ 105 ].
He followed the work of Eshel, who, collaborating with Marcus Feldman, investigated conditions under which the two methods give comparable results (e.g. [ 106 ]). On the other hand, he refused to make genetics the sole acceptable formal framework in evolutionary theory. To Bengt Bengtsson, a theoretical population geneticist with reservations about the methodology, he admitted frankly (Maynard Smith to Bengtsson, 27 October 1985, JMSP, Add. MS 86604):

Behavioural ecologists like myself responded to the attack on EGT and optimality by suggesting that the different interests of the two disciplines led both to make unrealistic simplifications, but in opposite directions [ 67 ]. ESS theorists sacrificed genetic rigour to consider more complex strategy sets. But population genetics modellers themselves constrained the expansion of strategic possibilities in the interest of analytical tractability: their models were of limited use to those of us working on complex sets of behaviours.

Evolutionary game theory and population genetics: modelling sex ratio

In contrast with the controversy mentioned in the previous section, some theoretical biologists found it convenient to use both approaches in their mathematical endeavours. The study of sex ratio and sex allocation phenomena offers a remarkable example of such joint pursuits. A history of this field would require a paper of its own, dealing with ESS history before the ESS label. This would begin with the evolution of the 1 : 1 sex ratio, originating with Darwin [ 107 ], Carl Düsing [ 108 ] and several others (e.g. [ 109 ]; see [ 110 ] in the present issue for a detailed history). Sex ratio evolution was revisited, notably by Richard Shaw in the mid-1950s, author of the celebrated Shaw–Mohler equation (see his autobiography [ 111 ]), and later extended, initially by W. D. Hamilton [ 14 ], to cases where the assumptions underlying the 1 : 1 ratio do not hold (see [ 110 , 112 – 114 ]).
Here we restrict our attention to Eric Charnov, who adopted ESS methods in the 1970s in his study of sex allocation theory, drawing on recollections he shared with us. Charnov trained as an ecologist at the University of Washington, Seattle in the late 1960s and early 1970s. As a graduate student, Charnov fell under the spell of V. C. Wynne-Edwards' interpretation of trait evolution through group selection [ 115 ]. In Charnov's recollections, Wynne-Edwards asked broad, far-ranging questions going to the heart of biology, and group selection's explanatory power seemed impressive. Then, in 1971, Charnov attended a course given by a former student of David Lack, Gordon Orians, who framed his lectures against Wynne-Edwards' explanations. After digesting these criticisms and reorganizing his thinking accordingly, Charnov concluded that the questions remained, even if Wynne-Edwards' answers were wrong. ‘Many things, like alarm calls, became puzzles to be thought about. Unsolved puzzles, partially solved puzzles’ (E. Charnov 2012, personal communication to J.B.G.). Eager to address these on firm ground, Charnov turned himself into a theoretical ecologist. After taking several graduate-level classes in economics and optimization methods (operations research), he set out to apply fitness optimization ideas to the study of animal behaviour and life-history evolution. Stimulated by the optimal diet models of R. H. MacArthur, Eric Pianka and John Emlen, he first set to work on the theory of optimal foraging, striving to make models amenable to quantitative tests [ 116 ]. In summer 1974, then a professor at the University of Utah, Charnov turned his attention from optimal foraging to sex ratio, because the former proved more difficult for estimating tradeoffs and testing predictions. ‘Sex allocation was a [life history] theory that made sometimes surprising predictions and could be tested because we could know the tradeoffs, at least well enough.
I was focused on realistic, testable theory from the beginning; and getting data’ (E. Charnov 2022, personal communication to G.A.P.). This shift in research direction was catalysed by a book manuscript on the subject by an empiricist colleague. That manuscript, still unpublished, explained the Shaw–Mohler equation for sex ratio [ 21 ]. Building on Fisher's verbal discussion, Richard Shaw and his colleague Dawson Mohler had shown how to model sex ratio by tracking an autosomal gene affecting sex ratio through the offspring and grand-offspring generations. This led to the surprising result that at the 1 : 1 equilibrium all variants were of equal fitness. In other words, an equal sex ratio is an evolutionarily stable (population) strategy. Although many genetic variants coding for biased sex ratios can coexist in the population, the population equilibrium is 1 : 1. The Shaw–Mohler result is a remarkable example of the patterns that can emerge at the phenotypic level. In the second half of the 1970s, Charnov made it the basis for extensive investigations, asking how it would apply to cases, such as simultaneous and sequential hermaphroditism, not considered by Shaw and Mohler. It is interesting to note that Charnov used two different methods. On the one hand, he used phenotypic methods. For instance, he rederived Shaw and Mohler's result in the more complex demographic setting of an age-structured population with overlapping generations (the Shaw–Mohler model was designed for non-overlapping generations) [ 117 ]. After a stay in the UK in summer 1975, Charnov, his PhD student Jim Bull, and Maynard Smith published a joint paper, which presented a new quantitative theory for hermaphroditism [ 118 ]. Based on their independent derivations, the paper was written by Maynard Smith and featured an ESS model. ‘His argument was cumbersome, but correct’, Charnov granted (E. Charnov 2022, personal communication to G.A.P.).
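Shaw and Mohler's bookkeeping can be compressed into a standard one-line argument (our reconstruction of the textbook version, not their original algebra). With population sex ratio r (proportion of sons) and a rare variant producing a proportion s of sons, counting grand-offspring through daughters and sons gives a relative fitness

```latex
W(s, r) \;\propto\; \frac{1 - s}{1 - r} + \frac{s}{r},
\qquad
\frac{\partial W}{\partial s} \;=\; \frac{1}{r} - \frac{1}{1 - r}.
```

The derivative is positive for r < 1/2 and negative for r > 1/2, so any biased population can be invaded by variants pushing the ratio back towards 1 : 1; at r = 1/2 the derivative vanishes for every s, which is precisely the equal-fitness-of-all-variants property described above.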
He then turned to the theory of sequential hermaphroditism and applied it to empirical data. The best data available were provided by marine ecologists. Charnov thus used extensive data on life-history parameters in a pandalid shrimp (a protandrous hermaphrodite) to investigate the timing of sex change [ 119 ]. But his stay in the UK had helped Charnov to extend his range of modelling tools. At the University of Sussex, he interacted with Brian Charlesworth, an expert at anchoring theoretical developments in firm population genetics foundations. Charlesworth taught Charnov how to analyse stability in population genetics equations, by investigating the effect of introducing into a population a mutant at a single locus. Back in Utah, Charnov decided to expand his modelling toolkit and retrained in population genetics, analysing scores of examples, including in sex allocation theory. There are good reasons for not neglecting genetics. Among many animals, sex determination depends on genetic mechanisms, such as homogamety–heterogamety. Running computer simulations with Bull, Charnov investigated how mutants modifying sex determination affected the population polymorphism, and, in turn, led to new sex-determining systems. It was in this context that they appreciated the power of Shaw and Mohler's result. In his own words, ‘when Bull and I did our simulations, we were very confused. Every time we set a starting frequency the genotype frequencies changed for, maybe, 6 generations and then just stopped. And for the same genotype system, where it stopped depended completely on the starting frequency. It took us a while to realize that the equilibrium genotype frequencies depended entirely upon starting frequencies, BUT EVERY PHENOTYPIC EQUILIBRIUM WAS A POPULATION SEX RATIO OF 1/2. I had never encountered a dynamical model like that’ (E. Charnov 2022, personal communication to G.A.P.).
(Unaware of all previous sex ratio theory, I had encountered exactly the same problem in 1967 during my PhD when running computer simulations on sex ratio evolution; see [ 17 ]). By the late 1970s, Charnov decided to summarize this effort by investigating various patterns of genetic sex determination, showing in each case how to recover the Shaw–Mohler equation as a guiding principle [ 120 ]. In his view, although population genetics methods (invasion dynamics) were seen as more rigorous, and although they were necessary in the investigation of complex patterns where the fitness function cannot readily be specified, such as haplodiploid inheritance, the Shaw–Mohler method was often much easier for obtaining the phenotypic answers he sought, and it helped focus on the quantities of interest. In conversation with us, Charnov recollected that Lewontin served as referee for his tenure application. While Lewontin praised Charnov's work for putting sex allocation theory on a secure population genetic basis, Charnov tended to view this work as derivative rather than foundational. The genetic methods had mainly confirmed the phenotypic answers.

Ilan Eshel, the timescale of the evolutionary process and conclusive comments

Although there was controversy, these discussions between theoretical biologists and population geneticists should not be framed too exclusively as confrontation. Showing convergences (or differences) between EGT and population genetics models was a widely shared preoccupation within the small community of EGT theoreticians in the early 1980s. For instance, Hines determined that population means tended to the ESS (when possible) for the single-locus multi-allele case with additive inheritance [ 121 ], before investigating ESS under more complex genotypic maps with Bishop (e.g. [ 122 ]). This should be no surprise.
Comparison between rival modelling methods is commonplace in the history of science, and forms a significant part of assimilating ideas and gaining confidence in new methods. A prominent example is given by the history of calculus in the eighteenth century, split between continental methods inspired by Leibniz's calculus, and geometric methods in England in continuity with Newton's approach in the Principia (see [ 123 ]). A different line of attack was Ilan Eshel's sustained attempt at framing population genetics and phenotypic approaches as alternative (but compatible) perspectives focused on distinct processes. Trained as a mathematician in Israel, Eshel obtained a PhD at Stanford for work on the advantages of recombination in a constant environment. He became an important member of the school advocating mathematically ‘exact’ approaches in population genetics theory [ 124 ]. However, he also considered the issues tackled by Hamilton, Maynard Smith and the broader field of phenotypic evolution to be the major problems in evolutionary theory. One of his long-standing interests concerned the behaviour of herds facing predators. Inspired by Hamilton's ‘selfish herd’ theory [ 125 ], Eshel used EGT to investigate how shared common interests between predators and the strongest or fastest individuals in a group of prey species may result in those prey helping the predators to locate the weakest or slowest prey in the herd [ 126 ]. As a theoretician, Eshel thus faced a major contradiction. Phenotypic problems were the issues of evolutionary importance, but conclusions drawn from purely phenotypic approaches, or from simple one-locus two-allele models, were unlikely to hold under more complex genetic situations; conversely, exact population genetic models (being usually limited to two-locus theory) were unlikely to apply to the many important adaptations whose genetic basis was unknown. 
Building on two decades of work on these issues, Eshel's solution to this conundrum was to draw a distinction between two processes of evolution, which he called short-term and long-term views of evolution (e.g. [ 127 – 129 ]). Eshel's scheme is reminiscent of Sewall Wright's shifting balance theory of adaptive evolution. Both rely on a process of ‘trial and error’ circumventing limitations of the process of gene frequency change. Wright famously declared that, to evolve, a population should not be under the sole control of natural selection; in his evocative shifting landscapes of gene frequencies, selection leads a population to a single peak, possibly suboptimal, and exhausts variation necessary for further evolution. According to Wright, another process, based on differentiation of the species into partially isolated subpopulations, is required for exploring the full adaptive landscape, for reaching the highest peaks and for retaining variation [ 130 ]. In Eshel's scheme, the ability of populations to reach a phenotypic optimum (or, in frequency-dependent selection, an ESS) is bounded by the complications of the genetic machinery in systems with diploid inheritance, especially epistasis and recombination (see [ 131 ] for a general history of the problem). However, Eshel argued that gene frequency change under a fixed set of genotypes does not offer a full description of the process of adaptive evolution. It represents only one scale of evolution, which he called short-term evolution. A longer-term view includes the continuous supply of new mutations. When advantageous mutations are rare relative to the time required to reach equilibrium, new mutations occur away from the stable equilibria of the short-term process (when they exist) and reset the process of gene frequency change towards new states (to a new stable equilibrium, a new cycle or a state of chaos). 
Long-term evolution proceeds by an infinite sequence of similar transitions, from one fixed set of genotypes to another fixed set of genotypes, each of them being subject to the episodes of short-term evolution caused by the establishment of successful mutations. Eshel's work and, similarly, that of Hammerstein [ 132 ] focused on demonstrating that, in the long-term process, a multi-locus genetic system under frequency-dependent selection can approach an ESS (when it exists). To many, this effectively solved the conundrum. Both approaches were correct for their respective purposes. These were just different purposes, representing different views on the evolutionary process and how to study it mathematically. However, empirically, as Eshel admitted, long-term evolution does not guarantee that any population has reached an ESS. For instance, the supply of newly arising mutations may have been deficient, either because of insufficient time or accidental allele loss (for asymmetric contests, see [ 133 ]). Thus, his scheme might mostly have reassured only those of us who were already comfortable with ESS methods, happy to focus on the phenotypes and ready to let organisms deal with their own genetic problems. How theoreticians consider unification attempts such as Eshel's scheme is an open question. Grafen, who has developed his own major long-term programme to formalize NeoDarwinism [ 134 ], once observed that few biologists seem concerned by foundational issues. ‘It may ... be the case that fashion has somewhat turned against ‘high theory’, and favours more low-tech, more empirical work that lacks the taint of master narrative’ [ 134 , p. 63; 135 ]. As far as I can offer a tentative conclusion based on my own involvement, the years in which I was most involved in the study of animal conflicts—the early days of EGT—were years of ‘low theory’.
Certainly, then, EGT meant extending the full range of Darwinism to animal behaviour and many other areas, and this was of major theoretical significance to us. But the mathematics we learnt was pragmatic. Maynard Smith led us to formulate problems and seek solutions, to make models, to learn the basics of calculus, but also—and fundamentally—to gain confidence in ourselves and to trust our instincts.
Acknowledgements We express our very sincere gratitude to all the scientists contacted for this paper, who so generously shared their recollections; to the staff of the British Library, London, for kindly allowing us to consult their archives; and to John Welch, two anonymous reviewers and editor Jussi Lehtonen, whose comments helped us to much improve this paper. Data accessibility This article has no additional data. Authors' contributions J.-B.G.: conceptualization, writing—original draft, writing—review and editing; G.A.P.: conceptualization, writing—original draft, writing—review and editing. Both authors gave final approval for publication and agreed to be held accountable for the work performed herein. Conflict of interest declaration We declare we have no competing interests. Funding We received no funding for this study. Endnote 1 Abbreviations used in in-text manuscript references. JMSP: John Maynard Smith papers, Add. MS 86569–86840, British Library, London. GRPP: George R. Price papers, Add. MS 84115–84116, British Library, London. WDHP: William D. Hamilton papers, British Library, London.
CC BY
no
2024-01-15 23:43:51
Philos Trans R Soc Lond B Biol Sci.; 378(1876):20210493
oa_package/72/16/PMC10034578.tar.gz
PMC10184354
37189133
Introduction

The endospore-forming Bacillus cereus is a Gram-positive, food poisoning-associated pathogen, with the ability to cause severe gastrointestinal tract infections [ 1 , 2 ]. Moreover, there is a growing body of evidence supporting the association between B. cereus infections and a range of acute non-gastrointestinal tract diseases, such as sepsis and infections of the central nervous system, particularly in immunosuppressed patients and newborns [ 3 , 4 ]. The pathogenesis of B. cereus is mainly related to the heat- and gastric acid-stable emetic toxin cereulide, a cyclic dodecadepsipeptide causing vomiting and – in severe cases – organ failure [ 5 , 6 ], while the multicomponent pore-forming enterotoxins, such as non-hemolytic enterotoxin (Nhe) and hemolysin BL (Hbl), provoke a diarrheal syndrome [ 7 – 10 ]. The tripartite enterotoxin Nhe is present in almost all enteropathogenic B. cereus strains [ 1 , 7 , 11 , 12 ]; however, the exact mode of action of the Nhe toxin at the cellular level is still poorly understood. Several studies have shown that all three toxin components NheA, NheB and NheC are necessary for optimal pore formation and, finally, cell membrane leakage in vitro, which requires a specific concentration, ratio and binding order of the Nhe components at the target cell surface [ 13 – 15 ]. NheC has been suggested to be mandatory in the priming step; however, due to its low abundance and the lack of tailored analytical tools, its detection remains challenging [ 14 , 16 ]. Only a few proteomics studies have detected NheC in the exoproteome of B. cereus [ 17 , 18 ]. In addition, several other exoproteins, including proteases and membrane-damaging phospholipases, have been discussed recently as putative virulence factors in B. cereus pathogenicity [ 19 – 21 ]. The sphingomyelinase (SMase) of B. cereus has been shown to synergistically interact with Nhe as well as with Hbl, suggesting its contribution to the severity of the disease [ 22 , 23 ]. B.
cereus SMase, like other bacterial SMases, is able to hydrolyze sphingomyelin [ 24 ], thereby affecting the dynamics of membranes and the host immune system. Recently, it has been demonstrated that the toxicity of a given strain correlates with the quantity of secreted SMase, the B component of Nhe and proteolytic activity [ 25 ]. Nevertheless, how multicomponent enterotoxins are transported in the extracellular milieu and finally delivered to target host cells remains poorly understood. In the past years, naturally produced extracellular vesicles (EVs) from bacteria have gained considerable importance as a novel transport system of multiple virulence factors in host–pathogen interaction and pathogenesis. EVs represent spherical membrane-enclosing structures that are released as a conserved mechanism for cell-free inter- and intra-species cellular communications across all three domains of life [ 26 , 27 ]. Bacterial membrane vesicles have been extensively studied in Gram-negative bacteria [ 28 , 29 ]; however, recent studies have also demonstrated the production of EVs in Gram-positive bacteria [ 30 , 31 ]. Although the exact mechanisms of vesicle biogenesis and transport through the thick peptidoglycan layer of Gram-positive bacteria remain poorly understood, a possible mechanism has been described involving the activity of cell wall-degrading enzymes, which generate holes in the peptidoglycan layer and allow the release of EVs into the surroundings [ 32 – 35 ]. Bacteria-derived EVs are loaded with a large diversity of bioactive compounds, including proteins, nucleic acids, and virulence factors [ 30 , 36 ]. The cargo of EVs determines their biological functions, ranging from bacterial survival, biofilm formation, resistance to antibiotics, and host immune evasion and modulation, to infection [ 31 , 37 ]. EVs from pathogenic Gram-positive species carry a range of toxins and molecules that are involved in immune evasion [ 30 ].
In Staphylococcus aureus , EVs have been shown to deliver virulence-associated factors, causing cytotoxicity to host cells [ 38 , 39 ]. EVs associated with cytosolic pore-forming toxins of Streptococcus pneumoniae have been shown to bind complement proteins, thereby promoting pneumococcal evasion of complement-mediated opsonophagocytosis [ 40 ]. Pneumococcal vesicles are also able to induce protection against infection in vivo [ 41 ], while some studies revealed their contribution to inflammatory responses and tissue damage in hosts [ 42 , 43 ]. Likewise, B. anthracis -derived EVs contain biologically active anthrax toxin components that are toxic to macrophages and induce a protective response in immunized mice [ 44 ]. To elucidate the role of EVs in the pathogenesis of enteropathogenic B. cereus , we characterized their production and cargo content by using a proteomics approach and studied their interaction with human host cells in vitro. Our study provides evidence that B. cereus EVs are loaded with several virulence-associated factors, such as SMase, phospholipase C, and the multicomponent enterotoxin Nhe. We show that B. cereus EVs interact with intestinal epithelial cells via cholesterol-rich domains and dynamin-mediated endocytosis, leading to Nhe internalization and delayed cytotoxicity. Notably, SMase packaged in B. cereus vesicles complemented Nhe-induced hemolysis in vitro, highlighting the function of EVs as a vehicle for multiple virulence factors and their concerted actions on host cells. The identification of toxin-loaded EVs in B. cereus adds a new layer of complexity to our understanding of how multicomponent enterotoxins are assembled and how they affect host interaction and pathogenesis.
Methods

B. cereus strains and growth conditions

The enteropathogenic B. cereus strain NVH0075-95, isolated from vegetables after a large food poisoning outbreak in Norway [ 7 ], and its isogenic mutant strains Δsmase, ΔnheBC , and ΔnheBCΔsmase [ 23 ] were routinely grown in Lysogeny Broth (LB) or on LB agar plates at 30 °C.

Isolation of B. cereus EVs

EVs were isolated from B. cereus culture supernatants, as described previously with minor modifications [ 44 ]. Briefly, B. cereus strains were inoculated at an OD 600 of 0.05 in 50 mL LB medium and grown for 17 h at 30 °C under shaking (120 rpm). After removal of the bacterial cells at 3,000 × g and 4,000 × g for 15 min at 4 °C, the EV-containing supernatant was sterile filtered (0.45 μm cutoff) and centrifuged at 10,000 × g for 15 min at 4 °C to remove cellular debris. Subsequently, an Amicon® ultrafiltration system (100 kDa cutoff; Millipore, USA) was used to concentrate the EVs and remove soluble proteins (< 100 kDa) and supernatant. Finally, EVs were collected by ultracentrifugation at 125,000 × g for 1 h at 4 °C in a TLA-45 rotor (Optima TLX centrifuge; Beckman Coulter, USA). EV pellets were washed and resuspended in phosphate-buffered saline (PBS). Protein determination was performed using the DC Protein Assay (BioRad, Vienna, Austria) according to the manufacturer’s instructions. Proteins in the vesicle-free filtrate were precipitated with 10% ice-cold trichloroacetic acid (TCA) solution as reported [ 66 ]. The protein concentration of precipitated proteins was determined using the 2-D Quant Kit (GE Healthcare, USA), according to the manufacturer’s instructions. The characterization of EVs was performed according to the minimal information for studies of extracellular vesicles (MISEV) 2018 guidelines [ 70 ].
Mass spectrometry was used to determine the protein composition of EVs, whereas transmission electron microscopy (TEM) and nanoparticle tracking analysis (NTA) were utilized to visualize their characteristic lipid-bilayer structure and size, respectively. Protein, fatty acid, and polysaccharide ratios of EVs compared to the bacterium were determined by means of FTIR spectroscopy. Furthermore, FTIR spectroscopy was used to monitor the quality of EV preparations (see description in the section FTIR spectroscopy).

Size distribution using nanoparticle tracking analysis (NTA)

The effective diameter and size distribution of EVs were measured using a ZetaView × 30 TWIN Laser System 488/640 (Particle Metrix, Inning am Ammersee, Germany) as described [ 73 , 74 ]. For this purpose, EVs were diluted 1:1,000 in freshly sterile-filtered PBS and the instrument was calibrated using 100 nm polystyrene beads. Particle tracking analysis was performed in scatter mode with a 488 nm laser with the following settings: minimum brightness 30; minimum area 10; maximum brightness 255; maximum area 1000; temperature 25 °C; shutter of 70; sensitivity was adjusted to achieve the appropriate amount of traces, as suggested by the manufacturer.

Fourier transform infrared (FTIR) spectroscopy

Differences in the metabolic fingerprints between bacteria and EVs as well as the robustness of EV isolations were assessed by FTIR spectroscopy. To this end, EVs purified from six independent bacterial cultures were subjected to FTIR spectroscopy as described previously [ 75 ]. In brief, suspensions containing either EVs or bacterial cells were prepared and transferred to zinc selenide (ZnSe) optical microtiter plates (Bruker Optics GmbH, Ettlingen, Germany) and dried at 40 °C for 40 min.
FTIR spectra were recorded in transmission mode with the aid of an HTS-XT microplate adapter coupled to a Tensor 27 FTIR spectrometer (Bruker Optics GmbH, Germany) using the following parameters: 4000 to 500 cm −1 spectral range, 6 cm −1 spectral resolution, averaging of 32 interferograms with background subtraction for each spectrum. To compare FTIR spectra derived from bacterial cells and EVs, spectra were preprocessed using vector normalization, baseline correction and calculation of second derivatives over the whole spectral range using a second-order, 9-point Savitzky–Golay algorithm. Spectroscopic ratios of fatty acids (3020–2800 cm −1 ), proteins (1720–1500 cm −1 ) and polysaccharides (1200–900 cm −1 ) of bacterial cells versus EVs were calculated as described previously with minor modifications [ 75 ]. In brief, raw spectra were baseline corrected and smoothed using the Savitzky–Golay method (5 smoothing points, third-order polynomial), followed by total integration of the indicated areas, whereas the amide area was fitted with Lorentzian component bands before integration.

Transmission electron microscopy (TEM) analysis

For transmission electron microscopy (TEM) imaging, pelleted EVs were fixed in 3% neutral buffered glutaraldehyde (Merck Millipore, USA), pre-embedded in 1.5% agar and washed in Sorenson's phosphate buffer (pH 6.8; Morphisto, Vienna, Austria), as described previously [ 76 ]. After post-fixation in 1% osmium tetroxide (Electron Microscopy Sciences, Hatfield Township, PA, USA), samples were sequentially dehydrated in an ethanol series, soaked in propylene oxide and embedded in epoxy resin (Serva Electrophoresis GmbH, Heidelberg, Germany). Ultrathin sections (70 nm) were obtained with a Leica ultramicrotome (Leica Ultracut S, Vienna, Austria) and contrasted with alkaline lead citrate (Merck Millipore, USA) and methanolic uranyl acetate (Sigma-Aldrich, USA).
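The FTIR preprocessing described above maps directly onto standard signal-processing routines. The sketch below (Python with NumPy/SciPy; the function names and the synthetic spectrum are our own, and the vendor software may differ in detail, e.g. in baseline correction) reproduces the two computational steps: second-derivative spectra via a second-order, 9-point Savitzky–Golay filter after vector normalization, and integrated band ratios for the fatty acid, protein and polysaccharide windows:

```python
import numpy as np
from scipy.signal import savgol_filter

# Spectral windows from the text (cm^-1)
BANDS = {"fatty_acids": (2800, 3020),
         "proteins": (1500, 1720),
         "polysaccharides": (900, 1200)}

def second_derivative(spectrum):
    """Vector normalization followed by a second derivative computed
    with a second-order, 9-point Savitzky-Golay filter."""
    y = spectrum / np.linalg.norm(spectrum)
    return savgol_filter(y, window_length=9, polyorder=2, deriv=2)

def band_integral(wavenumbers, spectrum, lo, hi):
    """Trapezoidal integral over one window; abs() makes the result
    independent of ascending/descending wavenumber order."""
    m = (wavenumbers >= lo) & (wavenumbers <= hi)
    w, y = wavenumbers[m], spectrum[m]
    return abs(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w)))

def band_ratio(wavenumbers, spectrum, band, ref):
    """e.g. fatty acids vs proteins, as used to compare cells with EVs."""
    return (band_integral(wavenumbers, spectrum, *BANDS[band])
            / band_integral(wavenumbers, spectrum, *BANDS[ref]))

# Synthetic absorbance spectrum with one lipid and one amide band
wn = np.linspace(4000, 500, 3501)            # 1 cm^-1 steps, descending
spec = np.exp(-((wn - 2900) / 60) ** 2) + 2 * np.exp(-((wn - 1650) / 50) ** 2)
print(band_ratio(wn, spec, "fatty_acids", "proteins"))
```

A baseline-correction step (e.g. rubber-band correction) would precede the integration on real spectra; it is omitted here for brevity.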
Vesicle structures were visualized on a Zeiss EM 900 transmission electron microscope (Carl Zeiss Microscopy GmbH, Jena, Germany) equipped with a digital frame-transfer CCD camera (Tröndle TRS, Moorenweis, Germany).

Liquid chromatography–tandem mass spectrometry (LC–MS/MS)

Sample preparation and digestion by filter-aided sample preparation (FASP)

Isolated B. cereus EVs were prepared for proteomic analysis to screen for virulence factors as described previously [77]. Equal protein amounts of isolated B. cereus EVs were precipitated with 10% ice-cold TCA [78, 79]. Precipitated proteins were washed with ice-cold acetone, air-dried and re-suspended in 6 M urea, 2 M thiourea and 10 mM TRIS. In total, 30 μg of protein was mixed with 8 M urea in 50 mM TRIS and loaded onto an Amicon 10 kDa filter (2 × 20 min at 10,000 × g). The samples were reduced with 200 mM dithiothreitol (DTT) (37 °C, 30 min) and alkylated with 500 mM iodoacetamide (IAA) (37 °C, 30 min). After washing twice with 50 mM TRIS, digestion was performed with a Trypsin/LysC mix at a ratio of 1:25 (protease:protein) overnight at 37 °C. Digested peptides were recovered in 150 μl of 50 mM TRIS and acidified with 0.1% trifluoroacetic acid (TFA). Prior to LC–MS analysis, peptide extracts were desalted and cleaned using C18 spin columns (Pierce Biotechnology, USA) according to the manufacturer's protocol. The dried peptides were dissolved in 300 μl 0.1% TFA, of which 3 μl were injected into the LC–MS/MS system.

Mass spectrometry and data analysis

Peptides were separated and identified on a nano-HPLC Ultimate 3000 RSLC system (Dionex, USA) coupled to a high-resolution Q Exactive HF Orbitrap mass spectrometer (Thermo Fisher Scientific, USA). Raw spectra were subjected to database searches using Proteome Discoverer software 2.4.0.305 with the Sequest HT algorithm (Thermo Fisher Scientific, USA). The UniProt database for B.
cereus (taxonomy 1396, accessed on 11.5.2022) and a common contaminant database ( https://www.thegpm.org/crap/ ; accessed on 11.5.2022) were used to query the spectra. The following search parameters were applied: trypsin as the enzyme with a maximum of two allowed missed cleavages; 10 ppm precursor mass tolerance and 0.02 Da fragment mass tolerance. Oxidation/+15.995 Da (M) and deamidation/+0.984 Da (N, Q) were set as dynamic modifications; acetyl/+42.011 Da (protein N-terminus), Met-loss/−131.040 Da (M) and Met-loss+acetyl/−89.030 Da (M) as dynamic N-terminal modifications; and carbamidomethyl/+57.021 Da (C) as a fixed modification. Proteins were identified on the basis of at least two peptides and a strict false discovery rate target of 0.01 (1%) in all nodes of Proteome Discoverer 2.4.0.305 (Thermo Fisher Scientific). The overlap of protein identities in the biological replicates was used for further analysis. Signal peptide cleavage sites were predicted using SignalP v6.0 [80], subcellular locations were predicted using PsortB v3.0.3 ( https://www.psort.org/psortb/ ) and gene-term enrichment analysis was performed using KEGG with a false discovery rate (FDR) threshold of < 0.05.

Immunoblotting

Proteins from EVs and TCA-precipitated EV-free supernatants were separated on 12.5% SDS-PAGE gels and blotted onto a nitrocellulose membrane (BioRad Transblot SD Semi-Dry Transfer Cell, BioRad, Vienna, Austria). Subsequently, blotted proteins were labeled either with a mouse monoclonal anti-SMase antibody (0.5 μg/mL; 33 kDa; MAb 2A12) or with the mouse monoclonal antibodies anti-NheA IgG1 (1A8; 2.5 μg/mL), anti-NheB IgG1κ (1E11; 1.25 μg/mL) and anti-NheC IgM (3D6; 5 μg/mL) [81], followed by a peroxidase-conjugated goat anti-mouse IgG antibody (1:20,000) (Dianova, Hamburg, Germany). Generally, 5 μg of total protein was used for immunoblotting. In addition, 40 μg total protein of EV-free supernatants was tested for NheC.
Immunoreactive bands were visualized using SuperSignal West Pico Chemiluminescent Substrate (Thermo Fisher Scientific, USA) and scanned with a ChemiDoc MP Imaging System (BioRad, Vienna, Austria).

EV uptake assay

Uptake of EVs by human intestinal Caco2 epithelial cells was monitored using the self-quenching lipophilic dye octadecyl rhodamine B chloride (R18; Molecular Probes, Life Technologies, USA) as described [82, 83]. In brief, 5 μg of EV protein were stained with 1 mg/mL R18 for 1 h at room temperature, followed by two washing steps in 0.2 M NaCl at 125,000 × g for 1 h at 4 °C. Prior to vesicle uptake, 1 × 10⁴ Caco2 cells were incubated in MEM/10% FBS for 48 h in 96-well plates (Corning Inc., USA) at 37 °C in a 5% CO₂ atmosphere and were then incubated with R18-labeled EVs in 100 μL 0.2 M NaCl per well. To inhibit vesicle uptake, Caco2 cells were treated either with the cholesterol-sequestering agents Filipin III (10 μg/mL) and Imipramine (10 mM), or with the endocytosis inhibitors Dynasore (80 μM), Cytochalasin D (1 μg/mL), Chlorpromazine (15 μg/mL) and Amiloride (10 mM; all from Sigma-Aldrich). The substances were added 1 h prior to the addition of R18-labeled vesicles. Fluorescence was detected every 2 min over a total period of 90 min at 37 °C with a fluorescence reader (570/595 nm; SpectraMax M3, Molecular Devices, USA). EV uptake (%) was calculated after 90 min of treatment and normalized to the untreated control.

Cytotoxicity assay

Cytotoxicity of EVs was quantified using a cell culture assay based on Caco2 cells. Caco2 cells (2 × 10⁵ cells/mL) were incubated with 200 μg/mL EVs in MEM-Earle medium supplemented with 2% fetal calf serum (FCS; v/v) for 24 h at 37 °C in a 5% CO₂ atmosphere. The viability of the cells was measured using the Vita-Orange Cell Viability Reagent (Biotool, Switzerland) according to the manufacturer's protocol. The viability of treated cells was expressed as a percentage of untreated control cells.
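The two normalizations above, EV uptake as a percentage of the untreated control at 90 min and viability as a percentage of untreated cells, reduce to the same simple calculation. The sketch below uses invented fluorescence and absorbance values purely for illustration; none of the numbers are measured data.

```python
def percent_of_control(treated, control):
    """Express a treated-sample readout as a percentage of the untreated control."""
    return 100.0 * treated / control

# EV uptake: background-corrected R18 fluorescence at t = 90 min
# (hypothetical RFU values, chosen to mirror the ~82% Dynasore effect reported)
f_untreated = 1850.0                 # untreated control
f_dynasore = 333.0                   # inhibitor-treated cells
uptake_pct = percent_of_control(f_dynasore, f_untreated)
inhibition_pct = 100.0 - uptake_pct  # percentage reduction in uptake

# Cytotoxicity: viability readout of EV-treated vs. untreated cells
viability_pct = percent_of_control(0.46, 1.00)  # hypothetical absorbance ratio
```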
Super-resolution microscopy

Visualization of Nhe-containing extracellular vesicles

Extracellular vesicles (25 μg) were fixed with 4% paraformaldehyde (PFA) and incubated consecutively with the mouse monoclonal antibodies anti-NheA IgG1 (1A8; 12.5 μg/mL), anti-NheB IgG1κ (1E11; 12.5 μg/mL) and anti-NheC IgM (3D6; 17.5 μg/mL) for 1 h. After washing steps with PBS/5% BSA, Nhe proteins were labeled with the secondary antibodies goat anti-mouse IgG AlexaFluor®488 and goat anti-mouse IgM AlexaFluor®568 (4 μg/mL; Molecular Probes, Life Technologies, USA).

Delivery of Nhe-containing EVs to Caco2 cells

Caco2 cells were cultured in 8-well ibidi μ-slides (ibidi GmbH, Martinsried, Germany) and treated with 200 μg/mL of EVs for 2 h at 37 °C, 5% CO₂. After washing with PBS, cells were fixed with 4% PFA and incubated with a buffer solution containing 0.05% bovine serum albumin (BSA), 0.1% Triton X-100 and 0.025% Tween 20 in PBS. Cells were incubated consecutively with the mouse monoclonal antibodies anti-NheA IgG (1A8) and anti-NheC IgM (3D6), or anti-NheB IgG1κ (1E11) and anti-NheC IgM (3D6), for 1 h at a final concentration of 4 μg per well and were detected with the secondary antibodies goat anti-mouse IgG AlexaFluor®488 and goat anti-mouse IgM AlexaFluor®568 (2 μg per well each) (Molecular Probes, Life Technologies, USA). Nuclei were stained with 1.5 μM 4′,6-diamidino-2-phenylindole (DAPI; Sigma-Aldrich). Fluorescence 3D-SIM (structured illumination microscopy) images were acquired with a Zeiss LSM710 Elyra PS.1 microscope system equipped with an Andor iXon 897 (EMCCD) camera. Image processing was performed using the Zeiss ZEN 2012 software (Carl Zeiss Microscopy GmbH, Jena, Germany).

Hemolysis assay

Human erythrocytes from three different donors were isolated from leukocyte reduction system (LRS) chambers of a Trima Accel® automated blood collection system (Terumo BCT, USA) by Ficoll gradient centrifugation. Plasma and the mononuclear cell layer were removed and erythrocytes were washed twice with PBS (pH 7.4).
A quantitative hemolytic assay was performed as described earlier [82]. Briefly, 50 μL human erythrocytes (5 × 10⁸/mL) were incubated with an equal volume containing 12.5, 25 or 50 μg of EVs for 1 h at 37 °C. PBS and 5% (v/v) Triton X-100 served as negative and positive controls, respectively. After incubation, 100 μL ice-cold PBS was added and the samples were centrifuged at 400 × g for 15 min at 4 °C. Hemolytic activity in the supernatant was determined from the release of hemoglobin at 540 nm (SpectraMax M3, Molecular Devices, USA) and calculated as a percentage relative to the positive control.

Isolation and stimulation of human primary monocytes with EVs

Human peripheral blood mononuclear cells (PBMCs) from three different donors were isolated from leukocyte reduction system (LRS) chambers of a Trima Accel® automated blood collection system (Terumo BCT, USA) and cultured as previously described [84]. To characterize the pro-inflammatory potential of EVs, 200 μl of human primary monocytes (2.5 × 10⁶/mL) were stimulated with 100 μg/mL of EVs for 4 h in a 96-well plate at 37 °C in RPMI medium. Supernatants were collected and the concentration of TNF-α was quantified by ELISA (Merck Millipore, USA).

Statistical analysis of data

Statistical analyses of at least three independent biological replicates were performed with GraphPad Prism 8 software (GraphPad Software, Inc., USA). Statistical significance was calculated using Student's two-tailed unpaired t-test; for multiple comparisons, two-way ANOVA with Tukey's multiple comparisons test was applied. Statistical significance was concluded when the probability value (p value) was lower than 0.05.
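The hemolysis readout above (hemoglobin release as a percentage of the Triton X-100 positive control) and the two-tailed unpaired t-test can be sketched as follows. All A540 readings and replicate values are hypothetical placeholders, and SciPy stands in here for the GraphPad Prism analysis named in the Methods.

```python
import numpy as np
from scipy import stats

def hemolysis_pct(a_sample, a_neg, a_pos):
    """Hemoglobin release (A540) as % of the Triton X-100 positive control."""
    return 100.0 * (a_sample - a_neg) / (a_pos - a_neg)

# Hypothetical A540 readings (PBS = negative, 5% Triton X-100 = positive control)
a_pbs, a_triton = 0.05, 1.05
a_ev = {12.5: 0.15, 25: 0.30, 50: 0.55}  # EV dose in ug protein per reaction
pct = {dose: hemolysis_pct(a, a_pbs, a_triton) for dose, a in a_ev.items()}

# Two-tailed unpaired t-test on replicate % hemolysis values
# (illustrative numbers standing in for wild-type vs. mutant EV measurements)
wt = np.array([48.0, 52.0, 55.0])
mut = np.array([20.0, 24.0, 22.0])
t_stat, p_value = stats.ttest_ind(wt, mut)
significant = p_value < 0.05
```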
Results

B. cereus secretes EVs into the culture supernatant, containing the multicomponent enterotoxin Nhe and the membrane-active enzyme SMase

In recent years, extracellular prokaryotic membrane vesicles have been reported to play an important role in Gram-negative bacteria for cell–cell communication and the transport of virulence factors to host cells [30], but little is still known about their role in Gram-positive bacteria. To evaluate the production and secretion of spherical EVs from Gram-positive B. cereus, the enteropathogenic strain NVH0075-95 was grown in LB broth for 17 h at 30 °C and pelleted vesicles were visualized by transmission electron microscopy (TEM). TEM revealed intact spherical structures with a diameter of up to 200 nm, suggestive of extracellular membrane vesicles (Fig. 1B). The size distribution of the vesicles was confirmed by nanoparticle tracking analysis (NTA) measurements (Fig. 1A), revealing B. cereus EVs with a peak size ranging from 75 to 200 nm in diameter. Both complementary approaches highlighted the presence of B. cereus EVs. These results strongly support the notion that Gram-positive B. cereus actively produces and secretes EVs, heterogeneous in size, into the extracellular milieu during cell growth in vitro. Fourier transform infrared (FTIR) spectroscopy was employed to further characterize the B. cereus EVs and determine their lipid and protein content. Spectra of isolated EVs were recorded in the spectral range of 4000 to 500 cm⁻¹ and preprocessed spectra were subjected to chemometric analysis. In parallel, spectra were recorded from bacteria to determine the difference in the spectral profiles of EVs and bacteria (Fig. 1C–E). As revealed by subtraction analysis of the second-derivative spectra (Fig.
1G), the most prominent differences between EVs and bacteria were found in the protein region (amide I and amide II bands, 1700–1600 cm⁻¹ and 1600–1500 cm⁻¹, respectively), indicating changes in peptide backbones (Fig. 1G, H), and in the polysaccharide region (1200–900 cm⁻¹). The latter region includes functional groups of cell wall polysaccharides, phosphate-containing molecules and cell surface glycostructures (Fig. 1G, I). To gain further insights into the spectral differences between bacteria and their EVs, we calculated the ratios of proteins to polysaccharides and of fatty acids to proteins. Compared to bacterial cells, the ratio of proteins to polysaccharides was significantly higher in EVs (Fig. 1F), whereas the ratio of fatty acids to proteins was higher in bacteria than in EVs (Fig. 1J). These results indicate that EVs differ from bacteria in their protein and fatty acid composition, which is reflected in their characteristic spectral fingerprints. Overall, FTIR spectroscopy proved to be a suitable method for determining B. cereus-specific EV fingerprints, which could be used to monitor the quality of EV preparations. Since virulence factors form a large constituent of the protein content of bacteria-derived EVs [30, 39], we next characterized the cargo proteins of the B. cereus EVs by liquid chromatography–tandem mass spectrometry (LC–MS/MS) to screen for virulence factors. The majority of identified proteins (Additional file 3: Table 1) were predicted to be cytoplasmic (47%) or membrane- or cell wall-associated (5% and 13%, respectively), while 17% of the proteins were predicted to occur extracellularly (Fig. 2A, Additional file 4: Table 2). Based on SignalP analysis, 26% of the proteins in EVs contain a Sec signal peptide and are predicted to be secreted via the classical secretory pathway. In contrast, 68% of the proteins have no predicted signal peptide, indicating that the release of EVs appears to be an important process for the secretion of proteins.
Only 6% are predicted lipoproteins, which have a signal peptidase II cleavage site and primarily belong to the ATP-binding cassette (ABC) transporter substrate-binding proteins (Fig. 2B, Additional file 1: Figure S1, Additional file 5: Table 3). Pathway enrichment analysis of the identified proteins revealed various biological processes to be overrepresented in EVs, including metabolic pathways and the biosynthesis of secondary metabolites (Fig. 2C). Among the identified EV proteins, we found all three components of the enterotoxin complex Nhe, which plays a key role in B. cereus enteropathogenicity [1, 2], and SMase, which hydrolyzes sphingomyelin and has been reported to be a crucial factor for the toxicity of a given strain [25]. Moreover, several virulence-associated factors were identified in B. cereus-derived EVs (Fig. 2D), including penicillin-binding protein (PBP), adhesin, enolase, collagenase, an alcohol dehydrogenase-like protein, bacillolysin, a thiol-activated cytolysin and phospholipase C (PLC).

B. cereus EVs deliver the multicomponent enterotoxin Nhe to human intestinal Caco2 cells

Since the cytotoxic strain NVH0075-95 expresses the tripartite pore-forming toxin Nhe but lacks the hbl genes [11, 25], we further investigated the association of the Nhe enterotoxin with EVs. Secreted and purified vesicles from B. cereus NVH0075-95 were analyzed using highly specific monoclonal antibodies against each Nhe component. Western immunoblotting confirmed the presence of all three single components in EVs. Notably, NheC was highly concentrated in B. cereus EVs, while it could not be detected in the vesicle-free supernatant, even when eightfold amounts of protein were loaded. Both NheB and, predominantly, NheC were packaged in vesicles, whereas NheA was detected in vesicles but was mainly located in the vesicle-free supernatant (Fig. 3A). Three-dimensional structured illumination microscopy (3D-SIM) of B.
cereus EVs confirmed the co-localization of the enterotoxin components NheB and NheC (Fig. 3B), as well as of NheA and NheC (Additional file 2: Figure S2A). Given the important role of the host gastrointestinal tract in the pathogenesis of B. cereus [45], we next studied the delivery of B. cereus vesicle-associated Nhe components to human intestinal Caco2 cells, followed by cellular internalization, using 3D-SIM. The three Nhe subunits were detected in Caco2 cells as early as 2 h after treatment with B. cereus EVs (Fig. 3C, D). At 2 h post-stimulation with B. cereus vesicles, the morphology of Caco2 cells remained unchanged and no nuclear fragmentation was observed compared to untreated Caco2 cells (Fig. 3C, D; Additional file 2: Figure S2B). Moreover, 3D-SIM images at the single-cell level showed colocalization of the NheB and NheC components at the edges of Caco2 cells (Fig. 3E, inserts I, II). These findings are consistent with the hypothesis that the vesicle cargo containing Nhe components was internalized into intact Caco2 cells. To determine whether B. cereus-derived EVs, enriched with virulence factors (see Fig. 2D), induce cytotoxicity and thus play a critical role in microbial pathogenesis, Caco2 cells were treated with EVs for 24 h. At 24 h post-stimulation, B. cereus-derived EVs induced a cytotoxic effect, reducing cell viability to less than 50% (Fig. 3F). These results suggest that the delivery of EV cargo to Caco2 cells is a prerequisite for cytotoxicity.

Cellular uptake of B. cereus extracellular vesicles is mediated via cholesterol-rich microdomains and endocytosis

Since we demonstrated that B. cereus uses EVs to transport the tripartite enterotoxin Nhe into host cells, we next investigated whether B. cereus vesicles are able to enter human intestinal epithelial cells via membrane fusion to release their contents. To this end, EVs of B.
cereus were fluorescently labeled with the lipophilic dye octadecyl rhodamine B chloride (R18) and subsequently applied to intact Caco2 cells. R18 fluorescence is quenched at high concentrations in cell membranes and dequenched when diluted by fusion with the host cell membrane. Thus, an increase in R18 fluorescence directly correlates with the fusion of vesicles with the host cell membrane. Interaction of R18-labeled NVH0075-95 EVs with Caco2 cells showed a rapid and significant time-dependent increase in fluorescence, indicating membrane fusion of vesicles and host cells (Fig. 4A). In contrast, fluorescence did not increase in control samples containing only Caco2 cells or only R18-labeled vesicles. Moreover, confocal microscopy indicated the attachment and aggregation of NheB-loaded vesicles at the edges and surfaces of Caco2 cells (Fig. 4B), with a partial distribution of NheB along the cell membrane (Fig. 4B, insert). To identify the entry route of EVs, host cells were treated with R18-labeled vesicles in the presence of chemical inhibitors of cellular functions. The cholesterol-sequestering agent Filipin III, which binds cholesterol in cholesterol-rich microdomains of the cell membrane, significantly reduced the uptake of B. cereus vesicles by approx. 65% compared to internalization by untreated Caco2 cells (Fig. 4C, Additional file 2: Figure S2C). Moreover, EV uptake could be significantly blocked by the clathrin- and dynamin-mediated endocytosis inhibitors Chlorpromazine and Dynasore, respectively. In the presence of Chlorpromazine, EV internalization was significantly reduced by 29% compared to the control (Fig. 4C, Additional file 2: Figure S2D), whereas treatment with Dynasore caused the strongest decrease (by 82%) in EV uptake (Fig. 4C; Additional file 2: Figure S2D).
By comparison, Imipramine, a substance with surfactant properties known for blocking acidic sphingomyelinase [46], caused a slight but not significant decrease in vesicle internalization (by 28%) (Fig. 4C, Additional file 2: Figure S2D). Amiloride and Cytochalasin D, inhibitors of macropinocytosis and F-actin polymerization, respectively, showed no effect on B. cereus vesicle internalization by Caco2 cells (data not shown). Collectively, these results indicate that the uptake of B. cereus vesicles by Caco2 cells occurs through multiple pathways, but predominantly via cholesterol-rich domains and dynamin-mediated endocytosis.

Bioactive B. cereus vesicles induce hemolysis and elicit an inflammatory immune response in human blood cells

An important facet of prokaryotic EVs is their immunoreactivity, triggering pro-inflammatory immune responses that likely impact pathogenesis [40, 47]. In our model, B. cereus EVs triggered the secretion of the pro-inflammatory cytokine tumor necrosis factor-alpha (TNF-α) in human primary monocytes (Fig. 5A). TNF-α levels were highly upregulated at 4 h post-stimulation with B. cereus vesicles, indicating the onset of an inflammatory response. To evaluate whether toxin-containing EVs from B. cereus NVH0075-95 are biologically active, we assessed their hemolytic activity on human erythrocytes. B. cereus EVs caused hemolysis in a concentration-dependent manner (Fig. 5B). Next, we generated EVs from an NVH0075-95 NheBC null mutant (ΔnheBC) to investigate the contribution of Nhe to EV-induced hemolysis. The successful deletion of nheBC was confirmed by immunoblot analysis (Fig. 5C). Notably, the hemolytic activity of EVs from the isogenic ΔnheBC mutant was clearly reduced compared with EVs originating from the wild-type strain. However, the nheBC deletion did not completely abolish hemolysis (Fig. 5B), suggesting the involvement of other EV-derived virulence factors.
Since SMase, a host-damaging phospholipase shown to interact synergistically with Nhe in vitro and in vivo [23], was identified in NVH0075-95 EVs by proteome profiling (Fig. 2D), we next generated EVs from an NVH0075-95 Δsmase mutant and from a double knockout mutant of nheBC and smase (ΔsmaseΔnheBC). EVs derived from the Δsmase mutant showed decreased hemolytic activity to the same extent as ΔnheBC mutant vesicles. However, EVs originating from the double deletion mutant of smase and nheBC (ΔsmaseΔnheBC) showed no hemolytic activity at all (Fig. 5B), emphasizing the pivotal role of EVs as vehicles for the cooperative action of the enterotoxin Nhe and the phospholipase SMase in B. cereus pathogenicity.
Discussion

EVs secreted by clinically relevant bacteria are considered important mediators of host–pathogen interactions. In recent years, there has been growing evidence that pathogenic Gram-positive bacteria deliver multiple virulence factors via EVs in a protected manner to target host cells and thereby contribute to bacterial pathogenesis [30, 31]. The Gram-positive endospore-forming pathogen B. cereus secretes a wide variety of membrane-damaging toxins that could act together, or synergistically with each other and with other virulence factors, to enhance its cytotoxic potential [22, 23, 25]. However, the exact mechanisms by which B. cereus delivers toxins and virulence factors to the host, and by which these are taken up into target cells, have hitherto been unknown. Here we show that the enteropathogenic B. cereus NVH0075-95 secretes biologically active EVs, loaded with exotoxins and various virulence factors, which elicit an inflammatory immune response in host cells. In line with data from B. anthracis [44], a close relative of B. cereus [1, 2], B. cereus EVs revealed spherical structures with an average size of 150 nm, as confirmed by light scattering and TEM analysis. Similar sizes were detected in other Gram-positive bacteria [48, 49], suggesting a common size range of Gram-positive bacteria-derived EVs. Proteome profiling of B. cereus EVs revealed that proteins derived from the cytoplasm represent the most abundant EV component. Similar observations have been reported for other Gram-positive bacteria, such as B. anthracis, Enterococcus faecalis, S. aureus and S. pneumoniae [39, 41, 44, 47, 50, 51]. Since cytoplasmic proteins lack secretion signals and more than half of the proteins detected in B. cereus vesicles have no predicted signal peptide, it is reasonable to conclude that EVs represent a distinct secretory mechanism in B. cereus, as recently reported for S. aureus [51]. Several virulence-associated factors were identified in B.
cereus EVs, including the multicomponent enterotoxin Nhe as well as membrane-active enzymes such as the phospholipase PLC [52] and the sphingomyelin-degrading SMase [19, 53]. In addition, our proteomic screening approach revealed a penicillin-binding protein (PBP) critical for cell wall modification [44, 54], a tissue-degrading collagenase [52, 55], and an alcohol dehydrogenase-like protein recently described as a pathogenic biomarker involved in B. cereus virulence and survival against host innate defense [56]. Overall, these findings suggest a putative role of B. cereus EVs in the transfer of virulence factors to host cells. The tripartite pore-forming toxin Nhe requires the combined action of the three components NheA, NheB and NheC to induce lysis in vitro [13, 15]. Although Nhe is thought to be crucial for B. cereus pathogenicity, the mechanisms by which the toxin components are delivered to the host cell and assembled on host cell membranes are poorly understood. According to the current model of the mode of action of Nhe, binding of NheB and NheC occurs in solution before binding to the host cell membrane, although both NheB and NheC are capable of binding to cell membranes. The subsequent binding of NheA further induces pore formation and, finally, cell membrane leakage [13, 15]. NheC is assumed to be required in the priming step to induce maximum cytotoxicity [13, 14]. However, since NheC is present only in very low amounts in natural B. cereus culture supernatants, studies on the complex formation of Nhe components have so far only been performed with recombinant NheC in artificial systems [13–15]. It has been suggested that NheC in solution is almost 90% bound to NheB, which is necessary to induce but also to limit the toxic effect of Nhe [14]. These results from artificial systems are supported by our in vitro data using 3D-SIM microscopy, showing that NheB and NheC colocalize in B. cereus EVs as well as on host cell membranes.
The genes encoding the three Nhe components are organized in an operon that is transcribed polycistronically after PlcR activation [57]. Thus, the different levels of Nhe subunits usually found in supernatants of B. cereus cultures indicate posttranscriptional regulation and/or regulated secretion of the Nhe components. Secretion of premature Nhe components via the general secretory pathway has been described, but the exact mechanism of secretion is still unknown [58]. Using highly specific antibodies for protein detection, we demonstrated in our current work that NheC is strongly enriched in, and exclusively located in, B. cereus EVs as compared to the vesicle-free supernatant. NheB together with NheC was detected in B. cereus-derived EVs, while NheA was mainly detected in the vesicle-free supernatant, supporting its role in the final stage of pore formation rather than at initiation [13, 59]. Since our data show that Nhe components are enriched in EVs, it is tempting to speculate that B. cereus uses EVs as an export system to transport multicomponent toxins and virulence-associated factors simultaneously, and at specific concentrations, to host cells. Similarly, it has been reported that B. anthracis uses EVs for the transport of the anthrax toxin, which comprises one binding component and two active components [44, 49]. These findings, along with results from preliminary studies (data not shown) with a B. cereus strain producing Hbl, a further tripartite enterotoxin, foster the hypothesis that Gram-positive bacteria employ EVs as vehicles to deliver multicomponent toxins to host cells at defined concentrations and in a shielded manner.
Fusion of EVs with non-phagocytic cells via macropinocytosis, clathrin-mediated endocytosis (CME), caveolin-mediated endocytosis, and non-caveolin/non-clathrin-mediated endocytosis (using lipid rafts or direct membrane fusion) has been well described as a set of pathways for the uptake of outer membrane vesicles (OMVs) from Gram-negative bacteria [60]. In contrast, few data are available on the uptake mechanisms of EVs from Gram-positive bacteria by host cells. Transcytosis of probiotic B. subtilis EVs through Caco2 monolayers has recently been suggested as a possible transport route of EVs across the epithelium to the bloodstream and surrounding tissues and organs [61]. As the gastrointestinal tract is essential for the virulent life cycle of B. cereus, we used human intestinal epithelial Caco2 cells as an in vitro model to study the interaction of B. cereus-derived EVs with the host. Consistent with previous reports on S. aureus [38, 48], we showed that B. cereus EVs fuse with human intestinal epithelial Caco2 cell membranes via cholesterol-rich domains in a time-dependent manner, which was strongly blocked by Filipin III. In addition, the endocytosis inhibitor Dynasore largely blocked the uptake of B. cereus vesicles by Caco2 cells, implying that B. cereus utilizes dynamin-mediated endocytosis as an entry route. Some reduction of vesicle internalization was also observed with Chlorpromazine, an inhibitor of clathrin-dependent endocytosis. However, no effects were observed when blocking macropinocytosis or actin-dependent endocytosis. A recent study showed that S. aureus EVs are internalized by macrophages predominantly via the dynamin-mediated pathway, whereas no effect was observed with inhibitors of clathrin-, lipid raft- and actin-dependent endocytosis [62]. These findings highlight the differences in the architecture and composition of EVs derived from various bacterial species while indicating the conservation of host uptake mechanisms.
Both membrane fusion and endocytosis depend on the integrity of EVs, which allows the direct delivery of concentrated components into host cells, thereby enhancing cell damage and immunomodulation. Although the results presented here emphasize the importance of cholesterol-rich domains and dynamin in B. cereus EV uptake, it cannot be excluded that B. cereus EVs exploit diverse entry routes for cargo delivery to diverse host cells. Thus, further studies will be needed to fully decipher the mechanisms of interaction between B. cereus-derived EVs and host cells. Our data further demonstrated that, upon successful membrane fusion, the three components of Nhe were in close proximity to the cell membrane region of human epithelial Caco2 cells. This finding is in line with previous studies reporting that NheC contains a putative hydrophobic membrane-integrative region that is essential for binding to cell membranes [13, 14, 63]. In addition, the Hbl components L1 and B, which share 40% and 25% sequence identity with NheB and NheC, respectively [2], harbor putative transmembrane regions that facilitate binding to the cell membrane, leading to rapid cell lysis or to activation of the NLRP3 inflammasome in a manner dictated by the bioavailability and concentration of Hbl [64]. A similar effect might also apply to the assembly of the Nhe components. In addition to intact bacteria, uptake of bacterial EVs and their cargo by host cells can also lead to cytotoxicity, depending on the proteinaceous cargo the EVs contain. For instance, internalization of probiotic B. subtilis EVs by Caco2 cells did not affect cellular proliferation or viability [61], while pathogenic S. aureus EVs of different strains showed versatile cytotoxic potential towards host cells [47, 65]. The pathogenic potential of B. cereus strains also varies widely, ranging from strains that show no in vitro cytotoxic activity to strains that are highly cytotoxic [25, 66].
The use of bacterial supernatants obtained by low-speed centrifugation showed that B. cereus strain NVH0075-95 exerts potent toxic effects on human primary endothelial cells (HUVEC), in addition to Vero and human Caco2 epithelial cells. However, cell death occurred late, at 24 h post-stimulation, suggesting the loss of mitochondrial function rather than rapid pore formation [67]. Similarly, we demonstrated that NVH0075-95-derived EVs, enriched in various virulence-associated mediators, elicited a cytotoxic effect on human Caco2 cells 24 h after stimulation, underscoring the critical role of EVs in B. cereus pathogenicity. Despite the clinical importance of B. cereus in humans, little is known about the role of the immune system in host defense against this pathogen. Our data showed that B. cereus EVs are functionally active and able to induce hemolysis in human red blood cells in a concentration-dependent manner, which so far had only been shown on erythrocytes using recombinant Nhe components [68]. Furthermore, our study revealed that the presence of both Nhe and SMase in B. cereus EVs increased hemolysis compared to single-mutant vesicles, emphasizing the importance of the interplay of virulence factors for B. cereus pathogenicity. A synergistic interaction of Nhe and SMase has also been described for B. cereus virulence in vivo using an insect model [23]. Based on our study, it is tempting to speculate that, besides Nhe and SMase, multiple virulence factors packaged in B. cereus vesicles might act in concert to potentiate pathogenicity. Thus, the investigation of B. cereus vesicles might open new possibilities to study this synergism in more detail ex vivo as well as in vivo. Furthermore, we observed that B. cereus EVs interact with human monocytes, resulting in the systemic induction of TNF-α secretion.
TNF-α is an endogenous alarm signal that drives inflammatory responses during injury or infection, recruiting other immune cells to evoke an immune-stimulatory cascade. Similarly, increased secretion of TNF-α has been reported in antigen-presenting cells upon exposure to streptococcal EVs [ 40 , 43 , 69 ], suggesting an immunomodulatory effect. A recent study in S. aureus revealed that pore-forming toxins and lipoproteins associated with EVs induced NLRP3-dependent caspase-1 activation via K + efflux and TLR2 signaling, respectively, in human macrophages, which resulted in the cellular release of the pro-inflammatory cytokines IL-1β and IL-18 and, finally, pyroptosis [ 62 ]. Interestingly, it was reported that a recombinant Nhe complex drives activation of the NLRP3 inflammasome by targeting the plasma membrane of host cells [ 63 ]. This observation may indicate that B. cereus utilizes EVs as a targeted delivery system for both lipoproteins and functional toxins to induce NLRP3 activation, as recently shown for S. aureus [ 62 ]. However, further studies are needed to confirm this association.
Conclusion Though considerable progress has been made in the characterization of EVs from Gram-positive bacteria [ 30 , 31 ], their role in B. cereus pathogenicity remained elusive. The discovery of B. cereus vesicles provides the first insights into the protective transfer of B. cereus multicomponent toxins to host cells and their assembly at host cell membranes under physiological conditions. It thus opens up new possibilities for deciphering their molecular mechanisms of action. Our results demonstrate that EVs produced by B. cereus serve as a secretory pathway to deliver bacterial effector molecules to the host simultaneously and at defined concentrations, enabling their concerted and synergistic action on target cells. EVs derived from a wild-type strain and isogenic knockout mutants proved to be a valuable tool to fine-tune the EV protein cargo for studying synergistic interactions of pore-forming toxins and cell membrane active enzymes and also provide new opportunities to study immunomodulating functions of bacterial effectors delivered to the host by EVs.
Background Extracellular vesicles (EVs) from Gram-positive bacteria have gained considerable importance as a novel transport system for virulence factors in host–pathogen interactions. Bacillus cereus is a Gram-positive human pathogen causing gastrointestinal toxemia as well as local and systemic infections. The pathogenicity of enteropathogenic B. cereus has been linked to a collection of virulence factors and exotoxins. Nevertheless, the exact mechanism of virulence factor secretion and delivery to target cells is poorly understood. Results Here, we report the production and characterization of enterotoxin-associated EVs from the enteropathogenic B. cereus strain NVH0075-95 using a proteomics approach and study their interaction with human host cells in vitro. For the first time, comprehensive analyses of B. cereus EV proteins revealed virulence-associated factors, such as sphingomyelinase, phospholipase C, and the three-component enterotoxin Nhe. The detection of Nhe subunits was confirmed by immunoblotting, showing that the low-abundance subunit NheC was exclusively detected in EVs as compared to vesicle-free supernatant. Cholesterol-dependent fusion with the plasma membrane and predominantly dynamin-mediated endocytosis, assessed by confocal microscopy in intestinal epithelial Caco2 cells, represent entry routes for the delivery of Nhe components to host cells and ultimately led to delayed cytotoxicity. Furthermore, we show that B. cereus EVs elicit an inflammatory response in human monocytes and contribute to erythrocyte lysis via a cooperative interaction of the enterotoxin Nhe and sphingomyelinase. Conclusion Our results provide insights into the interaction of EVs from B. cereus with human host cells and add a new layer of complexity to our understanding of multicomponent enterotoxin assembly, offering new opportunities to decipher molecular processes involved in disease development. 
Supplementary Information The online version contains supplementary material available at 10.1186/s12964-023-01132-1.
Limitation of this study Currently, there are no minimal reporting guidelines for the isolation of bacterial extracellular vesicles, in contrast to the minimal information for studies of eukaryotic extracellular vesicles defined in 2018 by a position paper of the International Society for Extracellular Vesicles (MISEV2018) [ 70 ]. In the present study, we thus employed a commonly used differential centrifugation approach for the isolation of B. cereus EVs. After the removal of bacterial cells and cell debris (including insoluble proteins), EVs were concentrated by filtration using a cutoff of 100 kDa to ensure that proteins < 100 kDa, possibly present in the supernatant due to cell lysis, were not enriched in the vesicle fraction. Finally, EVs were pelleted by ultracentrifugation to remove the supernatant as well as the remaining soluble proteins. Alternatively, tangential flow filtration, size-exclusion chromatography (SEC), or density gradient centrifugation (DGC) can be considered as further purification steps to obtain high-purity EVs [ 51 , 71 ]. However, each of the aforementioned methods for EV purification has its pros and cons, and further systematic studies are needed to define the most suitable protocols for the isolation of EVs for a given bacterial species and the respective purpose of EV production. For instance, Hong et al. [ 71 ] showed that purification of crude E. coli EVs by SEC or DGC resulted in the removal of only a small number of potentially contaminating proteins. Furthermore, it has been reported that more stringent purification methods can deplete important EV components, for instance LPS in E. coli-derived EVs [ 72 ]. Thus, we opted for differential centrifugation to isolate crude EV preparations and did not use additional methods for further EV purification. However, it cannot be completely ruled out that the vesicles in our current work are still contaminated with aggregated proteins. 
Therefore, we characterized single vesicles by applying different approaches, such as TEM, FTIR, and NTA, to meet the general criteria of the MISEV2018 guidelines. In addition, the data from our proteomic studies revealed the presence of flotillin in B. cereus EVs (Additional file 3 and Fig. 2 C). Flotillin is mentioned in MISEV2018 category 2b [ 70 ] as a marker to demonstrate the EV nature and the purity level of an EV preparation. Since the main purpose of our proteomics approach was to screen for potential virulence factors in EVs, we performed only qualitative proteome analysis using two biological replicates. Thus, further quantitative proteomic studies of EVs as well as vesicle-free supernatants are needed to fully decipher the proteinaceous cargo of B. cereus EVs, including detailed quantitative information on toxins and other virulence-related factors. However, such studies require a substantially different proteomic approach, using specific labeling techniques (e.g., iTRAQ) or specific label-free techniques (e.g., Sequential Window Acquisition of all Theoretical Mass Spectra (SWATH) mass spectrometry). Therefore, they are beyond the scope of our current work, which focuses on the biological activity of B. cereus EVs.
Abbreviations ATP-binding cassette Human colon carcinoma cell line 2 Clathrin-mediated endocytosis Dithiothreitol Extracellular vesicles Filter aided sample preparation Fetal calf serum False discovery rate Fourier transform infrared spectroscopy Hemolysin BL Iodoacetamide Lysogeny broth Liquid chromatography–tandem mass spectrometry Leukocyte reduction system Minimal information for studies of extracellular vesicles Non-hemolytic enterotoxin Nanoparticle tracking analysis Outer membrane vesicles Peripheral blood mononuclear cells Penicillin binding protein Phosphate-buffered saline Paraformaldehyde Phospholipase C Octadecyl rhodamine B chloride Sphingomyelinase Trichloroacetic acid Transmission electron microscopy Trifluoroacetic acid Tumor necrosis factor-alpha Acknowledgements We thank Richard Dietrich (Department of Veterinary Sciences, Faculty of Veterinary Medicine, Ludwig-Maximilians-University, Munich, Germany) for the generous gift of the antibodies. The excellent technical support by Tatjana Svoboda (Institute of Microbiology, University of Veterinary Medicine, Vienna, Austria) as well as the valuable support of Waldtraud Tschulenk and Ingrid Walter (Institute of Morphology, University of Veterinary Medicine) with TEM is gratefully acknowledged. Likewise, the authors are thankful to Endre Kiss (Core Facility Multimodal Imaging, Faculty of Chemistry, University of Vienna) for skillful technical assistance with confocal and SIM imaging. This research was supported using resources of the VetCore Facility (Proteomics, ELMI) of the University of Veterinary Medicine, Vienna. This work was supported by the Core Facility Multimodal Imaging, University of Vienna, Faculty of Chemistry, member of the VLSI. Authors’ contributions TB, MES conceived and designed the study; TB, AD, MK, GF performed the experiments and analyzed the data; AD drafted parts of the manuscript; TB and MES wrote the manuscript. All authors reviewed and approved the manuscript. 
Funding Open Access funding for this article was provided by the University of Veterinary Medicine Vienna (Vetmeduni Vienna). The work of TB and MK was supported by the grant IGF Project 18677 N. The IGF Project 18677 N of the FEI was supported via AiF within the program for promoting the Industrial Collective Research (IGF) of the German Ministry of Economic Affairs and Energy. The work of AD was supported by the Vienna Anniversary Foundation for Higher Education (H-409332/2021). Availability of data and materials The datasets used and/or analyzed in the present study are available from the corresponding author on reasonable request. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE [ 85 ] partner repository with the dataset identifier PXD041561. Declarations Ethics approval and consent to participate Studies with human blood were approved by the ethics committee of the Medical University Vienna (ECS2177/2013) and written informed consent was obtained from all healthy donors. Competing interests The authors declare no competing interests.
Cell Commun Signal. 2023 May 15; 21:112
PMC10229423
37198474
Methods Cell lines K562 (CCL-243), 293T (CRL-3216), HeLa (CCL-2), A375 (CRL-1619), A2058 (CRL-11147), SH4 (CRL-7724), MDA-MB-435S (HTB-129), SK-MEL-5 (HTB-70), SK-MEL-30 (HTB-63) and THP1 (TIB-202) cell lines were obtained from the American Type Culture Collection (ATCC). UACC-62, UACC-257 and LOX-IMVI cells were obtained from the Frederick Cancer Division of Cancer Treatment and Diagnosis (DCTD) Tumor Cell Line Repository. All cell lines were re-authenticated by STR profiling at ATCC before submission of the manuscript and compared to ATCC and Cellosaurus (ExPASy) STR profiles in 2020, with the exception of THP1 (TIB-202) and U937 (CRL-1593.2), which were purchased from ATCC for the experiments. Cell lines from the PRISM collection were obtained from The PRISM Lab (Broad Institute) and were not further re-authenticated. MDA-MB-435S cells were previously assumed to be ductal carcinoma cells, but recent gene expression analysis assigned them to the melanoma lineage (ATCC). Cell culture and cell growth assays Cell line stocks were routinely maintained in DMEM (HeLa, 293T, K562, A375, A2058, SK-MEL-5, MDA-MB-435S) containing 1 mM sodium pyruvate (Thermo Fisher Scientific) with 25 mM glucose, 10% FBS (Thermo Fisher Scientific), 50 μg ml −1 uridine (Sigma), 4 mM l -glutamine and 100 U ml −1 penicillin–streptomycin (Thermo Fisher Scientific); or in RPMI (SH4, UACC-62, UACC-257, SK-MEL-30, LOX-IMVI, THP1) with 11.1 mM glucose, 10% FBS (Thermo Fisher Scientific), 2 mM l -glutamine and 100 U ml −1 penicillin–streptomycin (Thermo Fisher Scientific) under 5% CO 2 at 37 °C. All growth assays, metabolomics, screens and bioenergetics experiments were performed in medium containing dialysed FBS. 
For growth experiments, an equal number of cells was counted, washed in PBS and resuspended in no-glucose DMEM (Thermo Fisher Scientific) or no-glucose RPMI (Teknova) supplemented with 10% dialysed FBS (Thermo Fisher Scientific), 100 U ml −1 penicillin–streptomycin (Thermo Fisher Scientific) and 5–10 mM of glucose, galactose, uridine or mannose (all from Sigma) dissolved in water, or with an equal volume of water alone. For RNA and other nucleoside complementation assays, 0.5 mg ml −1 purified RNA from Torula yeast (Sigma) or the selected nucleosides (Sigma) were weighed and directly resuspended in DMEM. In all cases, cells were counted with a Vi-Cell Counter (Beckman) after 3 to 5 d of growth and only live cells were considered. Cell viability in glucose and galactose was determined using the same Vi-Cell Counter assay. Measurements were taken from distinct samples. Open reading frame screen For ORF screening, K562 cells were infected with a lentiviral-carried ORFeome v8.1 library 2 (Genome Perturbation Platform, Broad Institute) containing 17,255 ORFs mapping to 12,548 genes, in duplicate. Cells were infected at a multiplicity of infection of 0.3 and at 500 cells per ORF in the presence of 10 μg ml −1 polybrene (Millipore). After 72 h, cells were transferred to culture medium containing 2 μg ml −1 puromycin (Thermo Fisher Scientific) and incubated for an additional 48 h. On the day of the screen, cells were plated in screening medium containing no-glucose DMEM supplemented with 10% dialysed FBS, 1 mM sodium pyruvate (Thermo Fisher Scientific), 50 μg ml −1 uridine (Sigma) and 100 U ml −1 penicillin–streptomycin (Thermo Fisher Scientific) and 25 mM of either glucose or galactose (Sigma) at a concentration of 10 5 cells per ml and with 500 cells per ORF. Cells were passaged every 3 d and 500 cells per ORF were harvested after 0, 9 and 21 d of growth. 
Total genomic DNA was isolated from cells using a NucleoSpin Blood kit (Clontech) according to the manufacturer’s recommendations. Barcode sequencing, mapping and read count were performed by the Genome Perturbation Platform (Broad Institute). For screen analysis, log 2 (normalized read counts) were used, and P values were calculated using a two-sided t -test. The presence of lentiviral recombination within the ORFeome library was not tested and, as such, genes that dropped out should be considered with caution, as these may represent unnatural proteins 42 . Stable gene over-expression cDNAs corresponding to GFP , UPP1 -FLAG and UPP2 -FLAG were cloned in pWPI/Neo (Addgene). Lentiviruses were produced according to Addgene’s protocol. Twenty-four hours after infection, cells were selected with 0.5 mg ml −1 G418 (Thermo Fisher Scientific) for 48 h. Polyacrylamide gel electrophoresis and immunoblotting Cells grown in routine medium were harvested, washed in PBS and lysed for 5 min on ice in RIPA buffer (25 mM Tris pH 7.5, 150 mM NaCl, 0.1% SDS, 0.1% sodium deoxycholate, 1% NP40 analogue, 1× protease inhibitor (Cell Signaling) and a 1:500 dilution of Universal Nuclease (Thermo Fisher Scientific)). Protein concentration was determined from total cell lysates using a DC protein assay (Bio-Rad). Gel electrophoresis was done on Novex Tris-Glycine gels (Thermo Fisher Scientific) before transfer using the Trans-Blot Turbo blotting system and nitrocellulose membranes (Bio-Rad). All immunoblotting was performed in Intercept Protein blocking buffer (Li-cor) or in 5% milk powder in TBST (TBS + 0.1% Tween-20). Washes were done in TBST. Specific primary antibodies were diluted at a concentration of 1:100–1:5,000 in blocking buffer. Fluorescent-coupled secondary antibodies were diluted at a ratio of 1:10,000 in blocking buffer. Membranes were imaged with an Odyssey CLx analyzer (Li-cor with Image Studio Lite v4.0) or by chemiluminescence. 
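The screen scoring mentioned above reduces to log2-normalizing read counts and comparing conditions with a two-sided t-test. A minimal sketch, assuming reads-per-million normalization with a pseudocount; the helper names `log2_norm` and `welch_t` are illustrative rather than from the original pipeline, and the t-distribution p-value lookup is omitted:

```python
import math

def log2_norm(counts, pseudocount=1):
    # Reads-per-million normalization followed by log2, per sample;
    # `pseudocount` avoids log2(0) for unobserved barcodes.
    total = sum(counts)
    return [math.log2(c / total * 1e6 + pseudocount) for c in counts]

def welch_t(a, b):
    # Two-sided Welch t statistic for one ORF's normalized scores in
    # two conditions (e.g. glucose vs. galactose replicates).
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
```

In practice the statistic would be converted to a p-value with a t-distribution CDF (for example via scipy.stats), which is left out here to keep the sketch dependency-free.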
The following antibodies were used: FLAG M2 (Sigma, F1804), Actin (Abcam, ab8227), TUBB (Thermo, MA5-16308), UPP1 (Sigma, SAB1402388), MITF (Sigma, HPA003259), TYR (Santa Cruz, sc-20035), MLANA (CST, 64718), HK2 (CST, 28675), GPI (CST, 94068), ALDOA (CST, 8060), TKT (CST, 64414), RPE (Proteintech, 12168-2-AP), PGM2 (Proteintech, 11022-1-AP), UCK2 (Proteintech, 10511-1-AP), TYMS (Proteintech, 15047-1-AP), S6 ribosomal protein (Santa Cruz, sc-74459) and phospho-S6 (Santa Cruz, sc-293144). Two commercially available antibodies to UPP2 were tested (Sigma, SAB4301661; Abcam, ab153861), but no specific band could be detected. PRISM screen A six-well plate containing a mixture of 482 barcoded adherent cancer cell lines (PR500) 13 grown in RPMI (Life Technologies, 11835055) containing 10% FBS was prepared by The PRISM Lab (Broad Institute), seeded at a density of 200 cells per cell line. On day 0, the culture medium was replaced with no-glucose RPMI medium (Life Technologies, 11879020) containing 10% dialysed FBS and 100 U ml −1 penicillin–streptomycin and supplemented with 10 mM of either glucose or uridine ( n = 3 replicate wells each). The medium was replaced with fresh medium on days 3 and 5. On day 6, all wells reached confluency and cells were lysed. Lysates were denatured (95 °C), and total DNA from all replicate wells was PCR amplified using KAPA polymerase and primers containing Illumina flow cell-binding sequences. PCR products were confirmed to show single-band amplification using gel electrophoresis, pooled, purified using the Zymo Select-a-Size DNA Clean & Concentrator kit, quantified using a Qubit 3 Fluorometer, and then sequenced via HiSeq (50 cycles, single read, library concentration of 10 pM with 25% PhiX spike-in) as previously described 43 . 
Barcode abundance was determined from sequencing, and unexpectedly low counts (for example, from sequencing noise) were filtered out from individual replicates so as not to unintentionally depress cell line counts in the collapsed data. Replicates were then mean-collapsed, and log fold change and growth rate metrics were calculated according to equations ( 1 ) and ( 2 ): log fold change = log 2 ( n u / n g ) (1) and growth rate = log 2 ( n f / n 0 ) / t (2), where n u and n g are counts from the uridine- and glucose-supplemented conditions, respectively, n 0 and n f are counts from the initial and final timepoints, respectively, and t is the assay length in days. Data analysis and correlation analysis were performed by The PRISM Lab following a published workflow 13 . RNA extraction, reverse transcription and qPCR qPCR was performed using TaqMan assays (Thermo Fisher Scientific). RNA was extracted from total cells grown in routine media with a RNeasy kit (Qiagen) and digested with DNase I before reverse transcription with murine leukaemia virus reverse transcriptase using random primers (Promega); qPCR was run on a CFX96 machine (Bio-Rad) using the following TaqMan assays: Hs01066247_m1 (human UPP1 ), Mm00447676_m1 (mouse Upp1 ), Mm01331071_m1 (mouse Upp2 ), Hs01117294_m1 (human MITF ), Hs01075618_m1 (human UCK1 ), Hs00989900_m1 (human UCK2 ), Mm00550050_m1 (mouse Hmgcs2 ), Hs00427620_m1 (human TBP ), Mm00782638_s1 (mouse Rplp2 ), Mm00434228_m1 (mouse Il-1B ) and Mm01277042_m1 (mouse Tbp ). An assay for human UPP2 (Hs00542789_m1) was tested but no amplification could be detected. Human PBMC and mouse BMDM data were normalized to TBP , and mouse liver data were normalized to Rplp2 , both using the ΔΔCt method. qPCR primers for ChIP are described below. Chromatin immunoprecipitation MDA-MB-435S cells were washed once with PBS and fixed with 1% formaldehyde in PBS for 15 min at room temperature. Fixation was stopped by adding glycine (final concentration of 0.2 M) for 5 min at room temperature. Cells were harvested by scraping with ice-cold PBS. 
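The PRISM fold-change and growth-rate metrics described above can be sketched directly from the variable definitions (n_u, n_g for the condition counts; n_0, n_f and t for the growth-rate window). This assumes the standard log2-ratio forms implied by those definitions and omits the pseudocounts a production pipeline would typically add:

```python
import math

def log_fold_change(n_u, n_g):
    # Log2 ratio of uridine- to glucose-condition barcode counts
    # for one cell line (mean-collapsed across replicates).
    return math.log2(n_u / n_g)

def growth_rate(n_0, n_f, t):
    # Population doublings per day between the initial (n_0) and
    # final (n_f) timepoints of an assay lasting t days.
    return math.log2(n_f / n_0) / t
```

For example, a line whose barcode count rises from 100 to 800 over a 3-day assay has a growth rate of one doubling per day.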
Cell pellets were resuspended in SDS lysis buffer (50 mM Tris-HCl, pH 8.1, 10 mM EDTA, 1% SDS, protease inhibitor (Pierce Protease Inhibitor, EDTA-Free (Thermo Fisher Scientific))), incubated for 10 min at 4 °C, and sonicated to generate DNA fragments (around 500 base pairs) with a Qsonica Q800R2 system. Samples were centrifuged to remove debris and diluted tenfold in immunoprecipitation dilution buffer (16.7 mM Tris-HCl, pH 8.1, 1.2 mM EDTA, 0.01% SDS, 1.1% Triton-X100, 167 mM NaCl, protease inhibitor). Chromatin (~50 μg) was pre-cleared with normal rabbit IgG (EMD Millipore) and protein A/G beads (Protein A/G UltraLink Resin (Thermo Fisher Scientific)) in low-salt buffer (20 mM Tris-HCl, pH 8.1, 2 mM EDTA, 0.1% SDS, 150 mM NaCl, protease inhibitor) containing 0.25 mg ml −1 salmon sperm DNA and 0.25 mg ml −1 BSA for 2 h at 4 °C. Pre-cleared chromatin was incubated with 5 μl of anti-MITF (D5G7V (Cell Signaling Technology)) or 5 μg of normal rabbit IgG overnight at 4 °C (~1:10 vol:weight dilution). Samples were incubated with protein A/G beads for another 2 h at 4 °C. Immune complexes were washed sequentially twice with low-salt buffer, twice with high-salt buffer (20 mM Tris-HCl, pH 8.1, 2 mM EDTA, 0.1% SDS, 500 mM NaCl, protease inhibitor), LiCl buffer (250 mM LiCl, 1% NP40, 1% sodium deoxycholate, 1 mM EDTA, 10 mM Tris-HCl, pH 8.1, protease inhibitor) and twice with Tris-EDTA. After washes, immune complexes were eluted from beads twice with elution buffer (1% SDS, 10 mM dithiothreitol, 0.1 M NaHCO 3 ) for 15 min at room temperature. Samples were de-crosslinked by overnight incubation at 65 °C and treated with proteinase K (Qiagen) for 1 h at 56 °C. DNA was purified with QIAquick PCR purification kit (Qiagen). 
qPCR using KAPA SYBR FAST One-Step RT–qPCR Kit Universal (KAPA Biosystems) was performed to check MITF enrichments using the following primers: UPP1 -TSS (5′-TGACCTTGGGTTAGTCCTAGA-3′) and (5′-AGCAGCCAGTTCTGTTACTC-3′); UPP1 —3.5 kb (5′-AGCAACCTGGGAAAGTGATG-3′) and (5′-CGCCAACTCTCACTCATCATATAG-3′); TYR promoter (5′-GTGGGATACGAGCCAATTCGAAAG-3′) and (5′-TCCCACCTCCAGCATCAAACACTT-3′); ACTB gene body (5′-CATCCTCACCCTGAAGTACCC-3′) and (5′-TAGAAGGTGTGGTGCCAGATT-3′) Gene-specific CRISPR–Cas9 clone knockouts To generate UPP1 KO single-cell clones in MDA-MB-435S and UACC-257 cells, a sgRNA targeting UPP1 (TTGGATTTAAAAGTCTGACG) was ordered as complementary oligonucleotides (Integrated DNA Technologies) and cloned in pLentiCRISPRv2 (Addgene). Purified DNA was co-transfected with a GFP-expressing plasmid in the cell lines of interest using Lipofectamine 2000 (Thermo Fisher Scientific). After 48 h, cells were sorted using an MoFlo Astrios EQ Cell sorter and individual cells were seeded in a 96-well plate containing routine culture media for clone isolation. UPP1 depletion in single-cell clones was assessed by protein immunoblotting using antibodies to UPP1. The region corresponding to the sgRNA targeting site in the UPP1 gene was sequenced in MDA-MB-435S using TGGGAGCAACAGGGGTTAAG and TCAAGCATTTGTGGGTTGGTC primers and showed a homozygous 1-bp deletion in clone 1, heterozygous 4-bp and 9-bp deletions in clone 2, and heterozygous 1-bp and 2-bp insertions in clone 3. The 9-bp deletion in clone 2 is expected to produce a truncated protein (hypomorphic allele). To deplete the expression of ALDOA , GPI , HK2 , PGM2 , TKT , RPE , UCK1 , UCK2 and TYMS , two sgRNAs were cloned into pLENTICRISPRv2. UPP1 -expressing K562 cells were transduced with lentiviruses carrying these sgRNAs, selected with puromycin and the pooled population was analysed after 7–10 d. 
sgRNA sequences were: ALDOA_sg1 AATGGCGAGACTACCACCCA; ALDOA_sg2 AGGATGACACCCCCAATGCA; GPI_sg1 TGGGAGGACGCTACTCGCTG; GPI_sg2 TGACCCTCAACACCAACCAT; HK2_sg1 CATCAAGGAGAACAAAGGCG; HK2_sg2 TTACTTTCACCCAAAGCACA; PGM2_sg1 TGATTCTAGGAGCGTGAACA; PGM2_sg2 AATCCCCTGACTGATAAATG; TKT_sg1 GAAACAAGCTTTCACCGACG; TKT_sg2 CCTGCCCAGCTACAAAGTTG; RPE_sg1 ATATCTATCTGATTAGCCCA; RPE_sg2 CCCCAGAGTCTAGCATCCGG; UCK1_sg1 TGTGTCACAAAATCATAGGT; UCK1_sg2 CCGCTCACCCCTATCAGGAA; UCK2_sg1 TCTGCTCCGAGGTAAGGACA; UCK2_sg2 TACTGTCTATCCCGCAGACG; TYMS_sg1 TTCCAAGGGAGTGAAAATCT; TYMS_sg2 ATGTGCGCTTGGAATCCAAG. siRNA treatment UACC-257 and MDA-MB-435S cells were transfected with a non-targeting siRNA (N-001206-14-05) or an siRNA targeting MITF (M-008674-0005; Dharmacon) using Lipofectamine RNAiMAX according to the manufacturer’s instruction. Cells were analysed 72 h after transfection and robust MITF knock-down was confirmed by qPCR. Immune cell isolation and differentiation Human THP1 cultured cell lines were differentiated in routine medium containing 100 nM PMA (Sigma). After 2 d, the medium was changed for medium containing 100 ng ml −1 LPS (O111:B4, Sigma, L4391) or 1 mg ml −1 Torula yeast RNA (Sigma) and incubated for two additional days. Mouse BMDMs were extracted from hips, femurs and tibias of three 13-week-old C57BL/6J male mice and plated in DMEM supplemented with 50 ng ml −1 M-CSF (ImmunoTools, 12343115), 10% heat-inactivated FBS, 1% penicillin–streptomycin and 1% HEPES. After 3 d, the medium was replenished with M-CSF-supplemented DMEM. On day 6, cells were detached, counted and replated at 2 × 10 6 ml −1 per well of a six-well plate. Three hours after plating, cells were further treated with 0.1 μg μl −1 LPS O111:B4 (Sigma L4391), 1 mg ml −1 RNA (Sigma R6625) or 5 μg ml −1 R848 (Invivogen tlrl-r848) for 24 h. Cells treated with the IKK inhibitor BMS-345541 (Merck 401480) were pre-treated with 5 μM BMS-345541 for 1.5 h and then polarized with R848 and BMS-345541 for 24 h. 
Human PBMCs were isolated from buffy coats of blood donors from a local transfusion centre. Buffy coats were centrifuged on a Lymphoprep (Stemcell, 07851) gradient followed by CD14 + purification with CD14 microbeads (Miltenyi, 130050201), according to the manufacturer’s instructions. Isolated CD14 + cells were plated in RPMI medium supplemented with 50 ng ml −1 M-CSF (ImmunoTools, 11343113), 10% heat-inactivated FBS, 1% penicillin–streptomycin and 1% HEPES. After 3 d, the medium was replenished with M-CSF-supplemented DMEM. On day 6, cells were detached, counted and replated at 1.5–2 × 10 6 ml −1 per well of a six-well plate. PBMC polarization was performed as with BMDMs. Genome-wide CRISPR–Cas9 screening A secondary genome-wide CRISPR–Cas9 screen was performed using K562 cells expressing UPP1 -FLAG and a lentiviral-carried Brunello library (Genome Perturbation Platform, Broad Institute) containing 76,441 sgRNAs 44 , in duplicate. Cells were infected at a multiplicity of infection of 0.3 and at 500 cells per sgRNA in the presence of 10 μg ml −1 polybrene (Millipore). After 24 h, cells were transferred to culture medium containing 2 μg ml −1 puromycin (Thermo Fisher Scientific) and incubated for an additional 48 h. On day 7, the cells were plated in no-glucose DMEM containing 10% dialysed FBS and 100 U ml −1 penicillin–streptomycin and supplemented with 10 mM of either glucose or uridine at a concentration of 10 5 cells per ml and with 1,000 cells per sgRNA. Cells were passaged every 3 d for 2 weeks and, on day 21, 1,000 cells per sgRNA were harvested. DNA isolation was performed as for the ORFeome screen. CRISPR screen analysis was performed using a normalized z -score approach in which raw sgRNA read counts were normalized to reads per million and then log 2 transformed using the following formula: log 2 ((reads from an individual sgRNA / total reads in the sample) × 10 6 + 1) 45 . The log 2 (fold change) of each sgRNA was determined relative to the pre-swap control. 
For each gene in each replicate, the mean log 2 (fold change) in the abundance of all four sgRNAs was calculated. Genes with low expression (log 2 (fragments per kilobase of transcript per million mapped reads) < 0) according to publicly available K562 RNA-sequencing data (sample GSM854403 in Gene Expression Omnibus series GSE34740 ) were removed. log 2 (fold changes) were averaged by taking the mean across replicates. For each treatment, a null distribution was defined by the 3,726 genes with lowest expression. To score each gene within each treatment, its mean log 2 (fold change) across replicates was z -score transformed, using the statistics of the null distribution defined as above. Metabolite profiling (steady state) For steady-state metabolomics of glycolytic and PPP intermediates, an equal number of cells expressing GFP or UPP1 -FLAG were washed in PBS and pre-incubated for 24 h in no-glucose DMEM supplemented with 10% dialysed FBS (Thermo Fisher Scientific), 100 U ml −1 penicillin–streptomycin (Thermo Fisher Scientific) and 5 mM of glucose, galactose or uridine (all from Sigma) dissolved in water, or with an equal volume of water alone. Cells were then re-counted and 2 × 10 6 cells were seeded in fresh medium of the same formulation and incubated for two additional hours before metabolite extraction. Cells were pelleted and immediately extracted with 80% methanol, lyophilized and resuspended in 60% acetonitrile for intracellular LC–MS analysis. 13 C 5 -uridine tracer on cultured cells For tracer analysis on cultured cells, an equal number of cells expressing GFP or UPP1 -FLAG were washed in PBS and pre-incubated in no-glucose DMEM or RPMI medium supplemented with 10% dialysed FBS (Thermo Fisher Scientific), 100 U ml −1 penicillin–streptomycin (Thermo Fisher Scientific) and 5 mM unlabelled uridine (all from Sigma) dissolved in water. 
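The CRISPR screen normalization and gene-level z-scoring described above can be sketched as follows; `rpm_log2` and `z_scores` are illustrative names, and in practice the null distribution would be the mean log2 fold changes of the 3,726 lowest-expressed genes:

```python
import math

def rpm_log2(count, total_reads):
    # Per-sgRNA normalization: log2(reads-per-million + 1).
    return math.log2(count / total_reads * 1e6 + 1)

def z_scores(gene_lfc, null_lfc):
    # z-transform each gene's mean log2 fold change using the mean and
    # (sample) standard deviation of a null distribution built from
    # lowly expressed genes.
    mu = sum(null_lfc) / len(null_lfc)
    sd = math.sqrt(sum((x - mu) ** 2 for x in null_lfc) / (len(null_lfc) - 1))
    return {gene: (lfc - mu) / sd for gene, lfc in gene_lfc.items()}
```

A gene whose fold change sits at the centre of the null distribution scores z = 0; positive z-scores indicate enrichment relative to the null.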
After 24 h, the medium was changed for the same medium with the exception that 13 C-labelled uridine ([1′,2′,3′,4′,5′- 13 C 5 ] uridine, NUC-034, Omicron Biochemicals) was used. Cells were incubated for five additional hours before metabolite extraction. Cells were then harvested, the medium was removed and saved, and cellular pellets were resuspended in a 9:1 ratio (75% acetonitrile; 25% methanol:water) extraction mixture, spun at 20,000 g for 10 min, and the supernatant was transferred to a glass sample vial for LC–MS analysis. Animal experiments All animal experiments in this paper were approved by the Massachusetts General Hospital, the University of Massachusetts Institutional Animal Care and Use Committee, or the Swiss Cantonal authorities, and all relevant ethical regulations were followed. All animals used were male C57BL/6J mice purchased from The Jackson Laboratory, aged 8–13 weeks. All cages were provided with food and water ad libitum. Food and water were monitored daily and replenished as needed, and cages were changed weekly. A standard light–dark cycle of 12-h light exposure was used. Animals were housed at 2–5 per cage. The temperature was 21° ± 1 °C with 55% ± 10% humidity. 13 C 5 -uridine tracer in mice For in vivo tracing analysis, 8- to 12-week-old C57BL/6J male mice were fasted overnight or fed ad libitum and injected intraperitoneally with 0.2 M 13 C-labelled uridine diluted in PBS to 0.4 g per kg body weight. After 30 min, blood and livers were collected from the mice under isoflurane anaesthesia. Liver was flash frozen in liquid nitrogen before subsequent analysis, and blood was collected in EDTA plasma tubes, spun and plasma was stored for further analysis. For plasma metabolite analysis, 117 μl of acetonitrile and 20 μl of LC–MS-grade water was added to 30 μl of plasma, the mixture was vortexed and left on ice for 10 min. 
The samples were then spun at 21,000 g for 20 min, and 100 μl of the supernatant was transferred to a glass sample vial for downstream LC–MS analysis. Intracellular LC–MS analysis For labelled and unlabelled LC–MS analysis of intracellular metabolites, 5 μl of sample was loaded on a ZIC-pHILIC column (Millipore). Buffer A was 20 mM ammonium carbonate, pH 9.6 and buffer B was acetonitrile. For each run, the total flow rate was 0.15 ml min −1 and the samples were loaded at 80% B. The gradient was held at 80% B for 0.5 min, then ramped to 20% B over the next 20 min, held at 20% B for 0.8 min, ramped to 80% B over 0.2 min, then held at 80% B for 7.5 min for re-equilibration. Mass spectra were continuously acquired on a Thermo Q-Exactive Plus run in polarity switching mode with a scan range of 70–1,000 m/z and a resolving power of 70,000 (at 200 m/z ). Data were acquired using Xcalibur (v.4.1.31.9, Thermo Fisher). Data were analysed using TraceFinder (v.4.1, Thermo Fisher) and Progenesis (v.2.3.6275.47961) software, and labelled data were manually corrected for natural isotope abundance. Media/plasma LC–MS analysis Media and plasma samples were subjected to the following LC–MS analysis: 10 μl of sample was loaded on a BEH Amide column (Waters). Buffer A was 20 mM ammonium acetate, 0.25% ammonium hydroxide, 5% acetonitrile, pH 9.0, while buffer B was acetonitrile. Samples were loaded on the column and the gradient began at 85% B, 0.22 ml min −1 , held for 0.5 min, then ramped to 35% B over 8.5 min, then ramped to 2% B over 2 min, held for 1 min, then ramped to 85% B over 1.5 min and held for 1.1 min. The flow rate was then increased to 0.42 ml min −1 and held for 3 min for re-equilibration. Mass spectra were collected on a Thermo Q-Exactive Plus run in polarity switching mode with a scan range of 70–1,000 m/z and a resolving power of 70,000 (at 200 m/z ). Data were acquired using Xcalibur (v.4.1.31.9, Thermo Fisher). 
Data were analysed using TraceFinder (v.4.1, Thermo Fisher) and Progenesis (v.2.3.6275.47961) software, and labelled data were manually corrected for natural isotope abundance.

Oxygen consumption and extracellular acidification rates by Seahorse XF analyzer

Approximately 1.25 × 10 5 K562 cells were plated on a Seahorse plate in Seahorse XF DMEM medium (Agilent) supplemented with 10 mM glucose, galactose, mannose or uridine, or with an equal volume of water alone, and 4 mM glutamine (Thermo Fisher Scientific). FBS was omitted. Oxygen consumption and ECARs were simultaneously recorded by a Seahorse XFe96 analyzer (Agilent) using the Mito Stress Test protocol, in which cells were sequentially perturbed by 2 μM oligomycin, 1 μM CCCP and 0.5 μM antimycin (Sigma). Data were analysed using the Seahorse Wave Desktop Software (v.2.6.3, Agilent). Data were not corrected for carbonic acid derived from respiratory CO 2 .

Lactate determination

Lactate secretion in the culture medium was determined using a glycolysis cell-based assay kit (Cayman Chemical). An equal number of K562 cells expressing GFP or UPP1 -FLAG were washed in PBS and pre-incubated for 24 h in no-glucose DMEM medium supplemented with 10% dialysed FBS (Thermo Fisher Scientific), 100 U ml −1 penicillin–streptomycin (Thermo Fisher Scientific) and 5 mM glucose, galactose or uridine (all from Sigma) dissolved in water, or with an equal volume of water alone. Cells were then re-counted and seeded in fresh medium of the same formulation and incubated for three additional hours. Cells were then spun down and lactate concentration was determined on the supernatants (spent media).

Gene Ontology analysis

Gene Ontology (GO) analysis was performed using GOrilla with default settings and using a ranked gene list as input 46 . Only GO terms comprising fewer than 500 genes and scoring at FDR < 0.001 with a minimum of two genes were considered significant and are displayed in the figures.
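The GO significance filter just described (terms with fewer than 500 genes, FDR < 0.001 and at least two contributing genes) amounts to a simple predicate over term records. The sketch below illustrates it with hypothetical field names; this is not GOrilla's actual output format.

```python
# Filter GO terms by the stated criteria: <500 genes in the term,
# FDR < 0.001, and a minimum of two contributing genes.
# Dictionary keys are illustrative assumptions, not GOrilla's schema.

def significant_terms(terms):
    return [t for t in terms
            if t["term_size"] < 500
            and t["fdr"] < 0.001
            and t["n_genes"] >= 2]

terms = [
    {"name": "glycolytic process",      "term_size": 106,  "fdr": 1e-6, "n_genes": 8},
    {"name": "metabolic process",       "term_size": 9000, "fdr": 1e-9, "n_genes": 40},  # too broad
    {"name": "pentose-phosphate shunt", "term_size": 31,   "fdr": 5e-4, "n_genes": 3},
    {"name": "ion transport",           "term_size": 120,  "fdr": 0.02, "n_genes": 5},   # not significant
]

kept = significant_terms(terms)
print([t["name"] for t in kept])  # only the first and third terms pass
```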
The complete unfiltered data can be found in Supplementary Table 1 .

Gene-specific cDNA cloning and expression

cDNAs of interest were custom designed (Genewiz or IDT) and cloned into pWPI-Neo or pLV-lenti-puro using BamHI and SpeI (New England Biolabs).

Statistics and reproducibility

All data are expressed as the mean ± s.e.m., with the exception of oxygraphic data that are expressed as the mean ± s.d. All reported sample sizes ( n ) represent biological replicate plates or a different mouse. All attempts at replication were successful. All Student’s t -tests were two sided. Statistical tests were performed using Microsoft Excel and GraphPad Prism 9.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
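The summary statistic reported above, mean ± s.e.m., is the sample mean with the standard deviation divided by the square root of the number of replicates. A minimal helper, using only the standard library:

```python
import statistics
from math import sqrt

def mean_sem(values):
    """Return (mean, standard error of the mean) for replicate values."""
    n = len(values)
    return statistics.mean(values), statistics.stdev(values) / sqrt(n)

# Hypothetical replicate measurements (n = 4 plates).
m, sem = mean_sem([10.2, 9.8, 10.5, 10.1])
print(f"{m:.2f} +/- {sem:.2f}")
```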
Glucose is vital for life, serving as both a source of energy and carbon building block for growth. When glucose is limiting, alternative nutrients must be harnessed. To identify mechanisms by which cells can tolerate complete loss of glucose, we performed nutrient-sensitized genome-wide genetic screens and a PRISM growth assay across 482 cancer cell lines. We report that catabolism of uridine from the medium enables the growth of cells in the complete absence of glucose. While previous studies have shown that uridine can be salvaged to support pyrimidine synthesis in the setting of mitochondrial oxidative phosphorylation deficiency 1 , our work demonstrates that the ribose moiety of uridine or RNA can be salvaged to fulfil energy requirements via a pathway based on: (1) the phosphorolytic cleavage of uridine by uridine phosphorylase UPP1/UPP2 into uracil and ribose-1-phosphate (R1P), (2) the conversion of uridine-derived R1P into fructose-6-P and glyceraldehyde-3-P by the non-oxidative branch of the pentose phosphate pathway and (3) their glycolytic utilization to fuel ATP production, biosynthesis and gluconeogenesis. Capacity for glycolysis from uridine-derived ribose appears widespread, and we confirm its activity in cancer lineages, primary macrophages and mice in vivo. An interesting property of this pathway is that R1P enters downstream of the initial, highly regulated steps of glucose transport and upper glycolysis. We anticipate that ‘uridine bypass’ of upper glycolysis could be important in the context of disease and even exploited for therapeutic purposes. In this study, Skinner, Blanco-Fernández et al. show that uridine can be salvaged through the non-oxidative branch of the pentose phosphate pathway to feed glycolysis in conditions of glucose scarcity.
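The route from uridine-derived R1P into glycolysis described above relies on the textbook stoichiometry of the non-oxidative pentose phosphate pathway, in which three pentose phosphates are rearranged into two fructose-6-P and one glyceraldehyde-3-P. A quick carbon-balance check of that step:

```python
# Carbon balance of the non-oxidative PPP rearrangement used here:
# 3 x ribose-5-P (C5) -> 2 x fructose-6-P (C6) + 1 x glyceraldehyde-3-P (C3)
PENTOSE_C, HEXOSE_C, TRIOSE_C = 5, 6, 3

carbons_in = 3 * PENTOSE_C
carbons_out = 2 * HEXOSE_C + 1 * TRIOSE_C
assert carbons_in == carbons_out  # all 15 carbons are conserved
print(carbons_in, carbons_out)
```

Because both products (F6P and G3P) are glycolytic intermediates, every ribose carbon liberated from uridine can, in principle, reach lower glycolysis.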
Main

We sought to identify new genes and pathways that might serve as alternative sources of energy when glucose is limiting. We transduced K562 cells with a library comprising 17,255 barcoded open reading frames (ORFs) 2 and compared proliferation in medium containing glucose and galactose, a poor substrate for glycolysis (Fig. 1a ). We used Dulbecco’s modified Eagle’s medium (DMEM) that contained glutamine, as well as pyruvate and uridine, on which oxidative phosphorylation (OXPHOS)-deficient cells are dependent 1 , 3 . After 21 d, we harvested cells and sequenced barcodes using next-generation sequencing (Extended Data Fig. 1a and Supplementary Table 1 ). The mitochondrial pyruvate dehydrogenase kinases 1–4 (encoded by PDK1 – PDK4 ) are repressors of oxidative metabolism, and all four isoforms were depleted in galactose (Fig. 1b ). Unexpectedly, we found striking enrichment in galactose for ORFs encoding UPP1 and UPP2, two paralogous uridine phosphorylases catalysing the phosphate-dependent catabolism of uridine into R1P and uracil (Fig. 1b,c and Extended Data Fig. 1b,c ). To validate the screen, we stably expressed UPP1 and UPP2 ORFs in K562 cells and observed a significant gain in proliferation in galactose medium (Fig. 1d ). This gain was dependent on uridine being present in the medium, while expression of UPP1/UPP2 , or addition of uridine, had no effect in glucose-containing medium. Importantly, we found that UPP1 -expressing cells also efficiently proliferated in medium containing uridine in the complete absence of glucose or galactose (‘sugar-free’), while control cells were unable to proliferate (Fig. 1e and Extended Data Fig. 1d ). The ability of UPP1 cells to grow in sugar-free medium strictly depended on uridine, and none of the other seven nucleoside precursors of nucleic acids could substitute for uridine (Fig. 1f ). Uridine-derived nucleotides are building blocks for RNA (Fig.
1g ), and RNA is an unstable molecule, sensitive to cellular and secreted RNases. We tested if RNA-derived uridine could support growth in a UPP1 -dependent manner and supplemented glucose-free medium with purified yeast RNA. The intracellular abundance of all four ribonucleosides accumulated following addition of RNA to the medium, with significantly lower uridine levels in UPP1 -expressing cells, suggesting UPP1 -mediated catabolism (Fig. 1h ). Accordingly, UPP1 -expressing K562 cells proliferated in sugar-free medium supplemented with RNA (Fig. 1i ). We conclude that elevated uridine phosphorylase activity confers the ability to grow in medium containing uridine or RNA, in the complete absence of glucose. We next addressed the mechanism of how uridine supports the growth of UPP1 -expressing cells. Previous studies have noted the beneficial effect of uridine in the absence of glucose and proposed mechanisms that include the salvage of uridine for nucleotide synthesis and its role in glycosylation 4 – 8 . Others reported the beneficial role of uridine phosphorylase in maintaining ATP levels and viability during glucose restriction in the brain 9 – 11 . To further investigate the molecular mechanism of uridine-supported proliferation, we performed a secondary genome-wide CRISPR–Cas9 depletion screen using K562 cells expressing UPP1 -FLAG grown on glucose or uridine (Fig. 2a,b and Extended Data Fig. 2a ). We found that, although most essential gene sets were shared between glucose and uridine conditions, three major classes of genes were differentially essential in uridine as compared to glucose (Fig. 2b , Extended Data Fig. 2b and Supplementary Table 1 ): (1) As expected from pyrimidine salvage from uridine, all three enzymes involved in de novo pyrimidine synthesis (encoded by CAD , DHODH and UMPS ) were essential in glucose but dispensable in uridine. 
(2) Genes central to the non-oxidative branch of the pentose phosphate pathway (non-oxPPP; PGM2 , TKT , RPE ) showed high essentiality in uridine. Among them PGM2 , which encodes an enzyme that converts ribose-1-P to ribose-5-P and connects the UPP1/UPP2 reaction to the PPP, was highly essential in uridine, but almost fully dispensable in glucose. Accordingly, uridine-grown cells were particularly sensitive to depletion of PGM2 , TKT and RPE , or to TKT inhibition, while they were insensitive to the de novo pyrimidine synthesis inhibitor brequinar (Fig. 2c and Extended Data Fig. 3a,b ). In contrast, genes of the oxidative branch of the PPP ( G6PD , PGLS , PGD ) did not score differentially between glucose and uridine. (3) As expected from their essentiality in glucose-limited conditions 3 , 12 , genes encoding the mitochondrial respiratory chain were generally more essential in uridine, although to a lesser extent compared to the non-oxPPP, perhaps due to the low energy supply in the absence of glucose. In contrast to the previously proposed mechanisms 4 – 8 , ablation of genes involved in uridine salvage for nucleotide synthesis ( UCK1/UCK2 , TYMS ) or in glycosylation had no effect on the growth of cells in uridine when compared to glucose (Fig. 2c , Extended Data Fig. 3b,c and Supplementary Table 1 ). Central enzymes of glycolysis were essential both in glucose and in uridine, indicating that a functional glycolytic pathway is required for survival with uridine alone. However, our comparative analysis revealed that several upper glycolytic enzymes (encoded by ALDOA , GPI and HK2 ) were dispensable in uridine, and only essential in glucose (Fig. 2b,c and Extended Data Fig. 3b ). Not all steps of upper glycolysis scored in either condition, potentially due to the multiple genes with overlapping functions encoding glycolytic enzymes, a common limitation in single gene-targeting screens.
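Both the ORF and CRISPR screens above reduce to comparing per-gene barcode abundance between two conditions. A minimal log2 fold-change sketch with hypothetical counts; real screen pipelines additionally normalize for sequencing depth and aggregate several barcodes or guides per gene.

```python
from math import log2

def log2_enrichment(end_counts, start_counts, pseudocount=1.0):
    """Per-gene log2 fold change of barcode counts (deliberately
    simplified: no depth normalization, one barcode per gene)."""
    return {gene: log2((end_counts.get(gene, 0) + pseudocount)
                       / (start_counts[gene] + pseudocount))
            for gene in start_counts}

# Hypothetical counts: UPP1 barcodes enrich in galactose,
# PDK1 barcodes deplete, and a control barcode stays flat.
start = {"UPP1": 100, "PDK1": 100, "CTRL": 100}
end   = {"UPP1": 800, "PDK1": 10,  "CTRL": 95}
scores = log2_enrichment(end, start)
print({g: round(s, 2) for g, s in scores.items()})
```

With these toy numbers, UPP1 scores strongly positive and PDK1 strongly negative, mirroring the enrichment/depletion pattern reported for the galactose ORF screen.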
Nevertheless, genes found to be dispensable in uridine included all steps upstream of fructose-6-P (F6P) and/or glyceraldehyde-3-P (G3P), which connect the non-oxPPP to glycolysis, pointing to a key role for these two metabolites in supporting proliferation on uridine. The essentiality of the non-oxPPP, with the dispensability of upper glycolysis in uridine (Fig. 2b,c ), prompted us to hypothesize that the ribose moiety of uridine can enter glycolysis and serve as a substrate for biosynthesis and energy production. Lactate secretion and glycolytic utilization of uridine, however, were excluded in earlier work 4 – 8 . Nonetheless, given the importance of PPP enzymes and the dispensability of upper glycolysis, we reinvestigated this possibility and measured lactate secretion in uridine-grown cells. Strikingly, we found that UPP1 -expressing cells grown in uridine secreted high amounts of lactate (Fig. 2d ). Accordingly, we found using liquid chromatography–mass spectrometry (LC–MS) that uridine restored the steady-state abundance of most central carbon metabolites detected in the absence of glucose, strongly suggesting some degree of lower glycolysis activity from uridine (Extended Data Fig. 4a ). To directly test if uridine-derived ribose could serve as a substrate for glycolysis, we designed a tracer experiment using isotopically labelled uridine with five ribose carbons ( 13 C 5 -uridine) and LC–MS (Fig. 2e ). UPP1 -expressing cells avidly incorporated 13 C 5 -uridine, as seen by the presence of 13 C in all the intracellular intermediates of the PPP and glycolysis analysed, including ribose-phosphate, upper and lower glycolytic intermediates and lactate, while control cells showed very little label incorporation. Tricarboxylic acid (TCA) cycle intermediates, among them citrate, were also partially labelled (mostly M + 2), indicating potential incorporation of carbon from glycolysis via pyruvate.
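Label incorporation from a 13 C 5 tracer is typically summarized as fractional enrichment over the mass isotopomer distribution (M+0 … M+n). A minimal sketch with hypothetical intensities; this is an illustration of the standard calculation, not the exact analysis used in the paper.

```python
def fractional_enrichment(intensities):
    """Fraction of labelled carbon from M+0..M+n isotopologue signals.

    intensities[i] is the signal for the M+i isotopologue; n = len - 1
    is the number of carbons that can carry label.
    """
    n = len(intensities) - 1
    total = sum(intensities)
    weighted = sum(i * x for i, x in enumerate(intensities))
    return weighted / (n * total)

# Lactate has 3 carbons; from 13C5-uridine it appears mostly as M+3
# in UPP1-expressing cells. Intensities below are hypothetical.
upp1_lactate = [20.0, 5.0, 10.0, 65.0]   # M+0 .. M+3
ctrl_lactate = [97.0, 1.0, 1.0, 1.0]
print(round(fractional_enrichment(upp1_lactate), 3))
print(round(fractional_enrichment(ctrl_lactate), 3))
```

In this toy example the UPP1 sample shows high fractional enrichment while the control is near the unlabelled baseline, echoing the contrast described above.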
To determine whether this labelling pattern extends in vivo, we next injected overnight fasted mice intraperitoneally with a 13 C 5 -uridine tracer and measured incorporation in the liver and in circulating metabolites after 30 min. As in cell lines, we found 13 C incorporation in ribose-phosphate and glycolysis in 13 C 5 -uridine-treated animals (Fig. 2f and Extended Data Fig. 4b–e ). Incorporation efficiency was smaller than in cell culture, as expected from low-dose 13 C 5 -uridine injection, shorter treatment time and competition with other endogenous substrates in vivo, including unlabelled uridine. 13 C 5 -uridine incorporation also occurred in fed animals, albeit to a lesser extent, and expression of liver Upp1 and Upp2 did not change with feeding (Extended Data Fig. 4b–f ). We also found modest but significant incorporation of uridine-derived 13 C in glucose, indicating gluconeogenesis from uridine-derived carbons (Fig. 2f and Extended Data Fig. 4c,d ). Together, our results indicate that in cell lines and in animals in vivo, uridine catabolism provides ribose for the PPP, and that the non-oxPPP and the glycolytic pathway communicate via F6P and G3P to replenish glycolysis thus entirely bypassing the requirement for glucose in supporting lower glycolysis, biosynthesis and energy production in sugar-free medium (Fig. 2g ). We next sought to determine whether any human cell lines exhibit a latent ability to use uridine-derived ribose to grow on uridine when glucose is absent without the need for over-expression. We screened 482 pooled barcoded adherent cancer cell lines spanning 22 solid tumour lineages from the PRISM collection 13 in medium containing 10 mM glucose or uridine, in the absence of any supplemental sugar (Fig. 3a , Extended Data Fig. 5 and Supplementary Table 1 ). Cells from the melanoma and the glioma lineages grew remarkably well in uridine as compared to the other lineages, whereas Ewing sarcoma cells grew significantly less well (Fig. 3b ). 
Cell lines from the PRISM collection have been extensively characterized at a molecular level 14 , so we searched for genomic factors that correlate with the ability to grow on uridine (Supplementary Table 1 ). Genome wide, the top-scoring transcript, protein and genomic copy number variant was UPP1 (Fig. 3c–e ), in strong agreement with our ORF screen (Fig. 1b ). Expression of UPP1 across the CCLE collection was the highest in cell lines of skin origin (Extended Data Fig. 6a,b ), where high uridine phosphorylase enzyme activity has been documented 15 , and tended to be lowest in the bone lineage. UPP2 was almost never expressed in the CCLE collection (average transcripts per million (TPM) < 1; Extended Data Fig. 6a ). In agreement with these results, we confirmed significant, UPP1 -dependent, proliferation and uridine catabolism in melanoma cells grown in sugar-free medium supplemented with uridine or RNA (Fig. 3f–h and Extended Data Fig. 6c–e ). We conclude that the endogenous expression of UPP1 is necessary and sufficient to support the growth of cancer cells on uridine. We next investigated the factors that promote UPP1 expression and growth on uridine by integrating our results with CCLE data to prioritize transcription factors, which highlighted MITF as a strong candidate in melanoma cells, both at the protein and the transcript level (Fig. 3c,d and Extended Data Fig. 6a,b ). We found that MITF over-expression promoted UPP1 expression and uridine growth (Extended Data Fig. 7a,b ), while endogenous MITF binding was detected in the transcription start site (TSS) and the promoter (−3.5 kb from the TSS) of UPP1 in a large-scale chromatin immunoprecipitation (ChIP) study 16 , which we experimentally validated (Extended Data Fig. 7c,d ). Accordingly, siRNA-mediated depletion of MITF decreased UPP1 expression in melanoma cells (Extended Data Fig. 7e ). 
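The biomarker search above ranks genomic features by how well they track growth on uridine across cell lines. A minimal Pearson correlation sketch with hypothetical per-cell-line values; the numbers and gene panel are illustrative, not the PRISM/CCLE analysis itself.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values per cell line: growth score on uridine vs
# transcript level (log2 TPM) of each candidate gene.
growth = [0.9, 0.8, 0.1, 0.2, 0.7]
expression = {
    "UPP1":  [5.1, 4.8, 0.5, 1.0, 4.0],
    "GAPDH": [8.0, 8.2, 7.9, 8.1, 8.0],
}
ranked = sorted(expression, key=lambda g: pearson(expression[g], growth),
                reverse=True)
print(ranked[0])  # UPP1 tracks growth on uridine best in this toy set
```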
Our solid tumour PRISM cancer cell collection did not include cells of the immune lineage, where UPP1 is expressed at high levels 17 , 18 , so we asked whether immune cells exhibit the capacity to metabolize ribose from uridine either at baseline or in a transcriptionally regulated manner. In the human monocytic THP1 cell line, in macrophage colony-stimulating factor (M-CSF)-matured peripheral blood mononuclear cells (PBMCs), and in primary mouse bone marrow-derived macrophages (BMDMs), we found that differentiation into macrophages and/or further polarization with immunostimulatory molecules increased UPP1 expression (Fig. 3i–k and Extended Data Fig. 8a,b ). In contrast, expression of pyrimidine salvage genes ( UCK1/UCK2 ) and 13 C 5 -uridine incorporation into UMP were not affected, and even decreased, during this process (Extended Data Fig. 8c,d ). Among the immunostimulatory molecules, RNA enhanced UPP1 expression, suggesting the existence of a feed-forward loop, where RNA (and conceivably RNA-containing pathogens and debris) may trigger UPP1 expression and uridine salvage for building blocks and energy production. Supporting this idea, stimulation of PBMCs and BMDMs with a TLR7/TLR8 agonist (R848) led to a significant, IκB kinase (IKK)-dependent, increase in UPP1 transcription in BMDMs (Fig. 3j,k and Extended Data Fig. 8e ). Label incorporation from uridine ribose was also strongly increased in citrate and lactate after differentiation of THP1 and after BMDM stimulation with R848, while it was not further increased in M-CSF-matured PBMCs, possibly due to high baseline capacity for uridine catabolism in these cells (Fig. 3l and Extended Data Fig. 8f,g ). Together, our results indicate that macrophages have the capacity to use uridine-derived ribose for glycolysis, and that UPP1 expression and uridine catabolism can sharply increase during cellular differentiation and in response to immunostimulating molecules, with cell type and species differences.
We next sought to determine whether glycolysis from uridine is under acute regulation in the same way as from glucose. Active OXPHOS tends to keep glucose uptake and glycolysis at lower levels, while acute inhibition of OXPHOS leads to an immediate and strong increase in glucose-supported glycolysis, as evidenced by a robust increase in the extracellular acidification rate (ECAR) following oligomycin treatment (Fig. 4a,b ). Strikingly, we found no ECAR stimulation by OXPHOS inhibitors, no difference in 13 C 5 -uridine incorporation following antimycin blockage of the electron transport chain, and no increase in uridine import in OXPHOS-inhibited UPP1 -expressing cells grown on uridine (Fig. 4b,c and Extended Data Fig. 9a,b ). Because glycolysis from both uridine and glucose share a common pathway from G3P (Fig. 2g ), differential regulation of glycolysis following OXPHOS inhibition must occur in the upper part of the pathway. Consistent with this notion, we observed no stimulation of ECAR in mannose-grown cells, a sugar connected to glycolysis by F6P (Extended Data Fig. 9c ). We conclude that substrates such as uridine can enter glycolysis in a constitutive way, in contrast to glucose, by bypassing regulatory steps of upper glycolysis such as glucose transport and initial phosphorylation. In line with this, we next performed a competition experiment to evaluate if the presence of glucose affects the incorporation of uridine in cells. Incorporation of uridine in lactate was notably not affected by competition with glucose in our experimental conditions, despite the presence of a large molar excess of glucose (Fig. 4c ). Therefore, and in agreement with a bypass of regulatory steps of upper glycolysis, uridine can be incorporated into cells even when lactate production from glucose is saturated, suggesting constitutive import and catabolism. Cells with severe OXPHOS dysfunction classically have to be grown on glucose, and uridine must be supplemented 1 . 
The traditional explanation has been that glucose is required to support glycolytic ATP production as OXPHOS is debilitated, and that uridine supplementation is required for pyrimidine salvage given that de novo pyrimidine synthesis via DHODH requires coupling to a functional electron transport chain 1 , 3 (Extended Data Fig. 9d ). Having observed energy harvesting from uridine, we finally tested whether uridine-derived ribose could also benefit OXPHOS-inhibited cells in the absence of glucose. We found a significant UPP1 -dependent rescue of viability in galactose-grown cells treated with antimycin A (Fig. 4d ), now revealing that supplemental uridine benefits mitochondrial dysfunction in two ways: (1) pyrimidine salvage when de novo pyrimidine synthesis is impossible, and (2) energy production in UPP1 -expressing cells. For decades it has been known that cells with mitochondrial deficiencies are dependent on uridine to support pyrimidine synthesis given the dependence of de novo pyrimidine synthesis on DHODH, whose activity is coupled to the electron transport chain 1 . Although it has been documented, it is less appreciated that uridine supplementation can support cell growth in the absence of glucose 4 – 10 . Here, we show that, in addition to nucleotide synthesis, uridine can serve as a substrate for energy production, biosynthesis and gluconeogenesis. Mechanistically, we show that glycolysis from uridine-derived ribose is initiated with the phosphorolytic cleavage of uridine by UPP1/UPP2, followed by shuttling of its ribose moiety through the non-oxPPP and glycolysis, hence supporting not only nucleotide metabolism but also energy production or gluconeogenesis in the absence of glucose (Fig. 2g ). By comparing uridine to other nucleosides and using similar tracer experiments to ours, Wice et al. 7 observed incorporation of uridine-derived carbons in most cellular fractions in mammalian cell culture and in chicken embryos.
However, they did not detect pyruvate and lactate labelling from uridine, and concluded that uridine does not participate in glycolysis, but rather is required for nucleotide synthesis, and proposed that energy is derived exclusively from glutamine in the absence of glucose 6 , 7 . Loffler et al. and Linker et al. reached the same conclusion 4 , 8 . Our observations based on a genome-wide CRISPR–Cas9 screen and metabolic tracers (Fig. 2 ) agree with previous observations that cells can proliferate in sugar-free medium if uridine is provided, and that uridine is crucial for nucleotide synthesis, but differ mechanistically on the role of glycolysis in this condition, as we were able to identify a significant amount of labelling in glycolytic intermediates and secreted lactate, as well as a high ECAR, all consistent with glycolytic ATP production from uridine. It has previously been reported that uridine protects cortical neurons and immunostimulated astrocytes from glucose deprivation-induced cell death, in a way related to ATP, and it was hypothesized that uridine could serve as an ATP source 9 . Our genetic perturbation and tracer studies are consistent with this hypothesis. The capacity to harvest energy and building blocks from uridine appears to be widespread. Here, we report very high capacity for uridine-derived ribose catabolism in melanoma and glioma cell lines (Fig. 3a–h ), in primary human and mouse macrophages (Fig. 3i–l ), and we also detect labelling patterns from uridine-derived ribose in the liver and the whole organism in vivo (Fig. 2f and Extended Data Fig. 4c–e ). Our gain-of-function and loss-of-function studies suggest that tissues expressing UPP1/UPP2 will have capacity for glycolysis from uridine-derived ribose. Based on gene expression atlases 18 , 19 , we predict uridine may be a meaningful source of energy in blood cells, lung, brain and kidney, as well as in certain cancers.
Uridine is the most abundant and soluble nucleoside in circulation 20 and it is possible that uridine may serve as an alternative energy source in these tissues, or for immune and cancer metabolism, similar to what has been proposed for other sugars and nucleosides 21 – 23 . It is notable that the strongest human metabolic quantitative trait locus for circulating uridine corresponds to UPP1 (ref. 24 ), while uridine phosphorylase activity is the main determinant of circulating uridine in mice 25 . A fascinating aspect of glycolysis from uridine is its apparent absence of regulation, at least at shorter timescales. The ability of uridine to serve as a constitutive input into glycolysis might have clinical implications for human diseases, as uridine is present at high levels in foods such as milk and beer 26 , 27 , and previous in vivo studies have shown that a uridine-rich diet leads to glycogen accumulation, gluconeogenesis, fatty liver and pre-diabetes in mice 28 , 29 . We now report that glycolysis from uridine lacks at least two checkpoints: (1) it is not controlled by OXPHOS (Fig. 4a–c ), and (2) it occurs even when lactate production from glucose is evidently saturated (Fig. 4c ), or after food intake in vivo (Extended Data Fig. 4d,e ). Although glycolysis from uridine appears to occur at a slower pace than from glucose, we speculate that constitutive fuelling of glycolysis and gluconeogenesis from a uridine-rich diet may contribute to human conditions such as fatty liver disease and diabetes. Such a ‘uridine bypass’ is conceivable because glycolytic control is concentrated in the upper pathway, for example at glucose transport 30 , which we show is bypassed by uridine, and because the non-oxPPP and glycolysis are connected by F6P and G3P (Fig. 2g ). This ability of uridine to bypass upper glycolysis may be beneficial in certain cases.
For example, disorders of upper glycolysis, notably GLUT1 deficiency syndrome 31 , may benefit from uridine therapy and from induction of UPP1/UPP2 expression. At longer timescales, UPP1 expression and capacity for ribose catabolism from uridine appear to be determined by cellular differentiation and further activation by extracellular signals. Here we focused on the monocytic lineage and found that (1) in THP1 cells, UPP1 expression and activity sharply increased during differentiation and polarization, (2) high baseline rates of glycolysis from uridine are observed in M-CSF-matured PBMCs and (3) treatment with immunostimulating molecules acutely promotes both UPP1 expression and uridine catabolism in BMDMs (Fig. 3i–l ). Although we did not investigate whether uridine is required for macrophage activation, we noticed that all the agonists tested ultimately lead to nuclear factor kappa B (NF-κB) activation, which binds in the UPP1 promoter 17 , 32 . It is thus likely that NF-κB may serve as a transcription factor for UPP1 . Supporting this assertion, we found that blocking NF-κB signalling with upstream IKK inhibitors abolished R848-induced Upp1 expression (Extended Data Fig. 8e ). Uridine phosphorylase and ribose salvage by UPP1 appear to lie downstream of a number of signalling pathways with potential relevance to disease. We have demonstrated that uridine breakdown is promoted by MITF, a transcription factor associated with melanoma progression, which we show binds upstream of UPP1 to promote its expression (Extended Data Fig. 7 ). In an accompanying study, Nwosu, Ward et al. demonstrate that UPP1 expression is under the control of KRAS–MAPK signalling 33 . It is notable that both MITF and NF-κB can act downstream of KRAS–MAPK 34 – 38 and that some pancreatic cell lines with high uridine phosphorylase activity highlighted by Nwosu, Ward et al. 33 (unpublished data) also scored in our PRISM screen (Supplementary Table 1 ).
Finally, we found that RNA in the medium can replace glucose to promote cellular proliferation (Fig. 1i and Extended Data Fig. 6e ). RNA is a highly abundant molecule, ranging from 4% of the dry weight of a mammalian cell to 20% of a bacterium 39 . Recycling of ribosomes through ribophagy, for example, plays an important role in supporting viability during starvation 40 . Cells of our immune system also ingest large quantities of RNA during phagocytosis, and we experimentally showed that the expression of UPP1 increases with macrophage activation (Fig. 3i–k ), including when cells are stimulated with RNA itself, suggesting the existence of a positive feedback loop. Uridine seems to be the only constituent of RNA that can be efficiently used for energy production, at least in K562 cells (Fig. 1h ). Whereas the salvage of RNA to provide building blocks during starvation has long been appreciated for nucleotide synthesis, to our knowledge, its contribution to energy metabolism has not been considered in the past, except for some fungi that can grow on minimal media with RNA as their sole carbon source 41 . We speculate that, similar to glycogen and starch, RNA itself may constitute a large store of energy in the form of a polymer, and that it may be used for energy storage and to support cellular function during starvation, or during processes associated with high energy costs such as the immune response.

Supplementary information

Source data
Extended data

Extended data is available for this paper at 10.1038/s42255-023-00774-2.

Supplementary information

The online version contains supplementary material available at 10.1038/s42255-023-00774-2.

Acknowledgements

The authors thank T. Ast (Broad Institute), P. Broz (University of Lausanne), O. Goldberger (Massachusetts General Hospital), S. Luther (University of Lausanne), M. Miranda (Massachusetts General Hospital), M. Rebsamen (University of Lausanne), M. Ronan (Broad Institute), D. Rosenberg (Broad Institute), R. Sharma (Massachusetts General Hospital) and T.L. To (Broad Institute) for their help and for sharing reagents. This work was supported by National Institutes of Health grants R35GM122455 (to V.K.M.), F32GM133047 (to O.S.S.), DK115881 (to R.P.G.), R01AR043369-24 (to D.E.F.), P01CA163222-07 (to D.E.F.), K99/R00 GM124296 (to H.S.), an SNF Project Grant 310030_200796 (to A.A.J.), a grant from the Dr. Miriam and Sheldon Adelson Medical Research Foundation (to D.E.F.) and a J. Bolyai Research Scholarship of the Hungarian Academy of Sciences and a grant from the National Research, Development and Innovation Office OTKA FK138696 (to L.V.K.). V.K.M. is an Investigator of the Howard Hughes Medical Institute.

Author contributions

O.S.S., J.B.-F., L.J.-C., A.K., L.V.K., H.S., R.P.G. and A.A.J. performed the experiments; M.G.R. and J.A.R. supervised L.J.-C.; D.E.F. supervised A.K. and L.V.K.; A.A.J. supervised J.B.-F.; V.K.M. supervised O.S.S., and H.S., R.P.G. and A.A.J. until independence; A.A.J. and V.K.M. designed the project; A.A.J. and V.K.M. wrote the manuscript with input from all authors.

Peer review

Peer review information: Nature Metabolism thanks the anonymous reviewers for their contribution to the peer review of this work. Primary Handling Editor: Alfredo Giménez-Cassina, in collaboration with the Nature Metabolism team.

Funding

Open access funding provided by University of Lausanne.
Data availability

All data generated or analysed during this study are included in the article and its Supplementary Information . Results of the ORFeome, the CRISPR–Cas9 and the PRISM screens are available in Supplementary Table 1 . Data from the Cancer Cell Line Encyclopedia are available at https://depmap.org/portal/ . Source data are provided with this paper.

Competing interests

V.K.M. is a paid scientific advisor to 5AM Ventures. O.S.S. was a paid consultant for Proteinaceous Inc. D.E.F. has a financial interest in Soltego, a company developing salt-inducible kinase inhibitors for topical skin-darkening treatments that might be used for a broad set of human applications. The interests of D.E.F. were reviewed and are managed by Massachusetts General Hospital and Partners HealthCare in accordance with their conflict-of-interest policies. The remaining authors declare no competing interests.
CC BY
no
2024-01-15 23:35:06
Nat Metab. 2023 May 17; 5(5):765-776
oa_package/05/40/PMC10229423.tar.gz
PMC10233181
37264348
Background Performing physical activity is vital for maintaining good health, as it plays a large role in preventing cardiovascular diseases and reducing mortality rates [ 1 – 4 ]. For this reason, it is imperative that we maintain our physical health by participating in physical activities throughout the day and avoiding sedentary lifestyles. Nowadays, teens are often reported to follow an inactive lifestyle [ 5 ]. Children and adolescents in the United Arab Emirates (UAE) do not reach the recommended amount of physical activity per day [ 6 ]. Physical activity report cards of children and youths in the UAE showed that only 25% of adolescents (aged 13–17 years) participated in physical education classes despite the governmental requirements for a minimum number of lessons per week. Moreover, fewer than half of the adolescents met the recommended guidelines for screen time, and sedentary time further increased with age [ 7 ]. Directly monitoring activity levels allows people to get an idea of how much or how little they are doing and may help them adjust their level of activity to improve or maintain their health [ 8 , 9 ]. Furthermore, measuring activity levels can also be used to survey the overall health of a population, which can be helpful in designing a health program for the benefit of the population or in testing the effectiveness of a health program [ 10 , 11 ]. Generally, people struggle to incorporate physical activities into their daily routines, as they have to adjust them according to their work and/or school schedules. In some cases, certain jobs allow people to be more active because of the nature of the work. Another aspect that can affect activity level is the amount of leisure time available after work and school during the week [ 12 – 14 ]. 
The worldwide lockdown imposed during the COVID-19 pandemic had a great negative impact on many people's daily physical activities, contributing markedly to the increasingly sedentary lifestyle adopted across the globe [ 15 – 19 ]. To counteract these negative effects, health policies need to be implemented. According to the revised policies in the UAE for the Sharjah Emirate, there are three official days off for the weekend effective from January 2022. This can result in two different scenarios: some people may like to spend their time participating in more physical activities, such as playing sports or outdoor games, while others may prefer to spend time watching television or participating in home-based activities. Given the latter option, it is important to monitor activity during these days to ensure that activity levels do not drop significantly for teens and young adults with the introduction of the extra weekend day. A fairly recent systematic review by Sharara et al. (2018) provides evidence for the lack of physical activity and the prevalence of obesity in Arab countries [ 20 ]. To collect data for a large population, a self-reported questionnaire is the easiest method to administer, collect, and analyze. Each method has its advantages and disadvantages; therefore, the decision regarding which measure to use should be based on the purpose of the research, the sample size, and the availability of resources [ 21 ]. The Arab Teens Lifestyle Study (ATLS) questionnaire has been commonly used since 2011 to assess the physical activity levels, sedentary behavior, and dietary habits of teens and young adults in Arab countries. Al-Hazzaa et al. (2011) recommended the ATLS as a valid tool for estimating sedentary behavior and physical activities in individuals with a mean age of 16.1 ± 1.1 years [ 21 ]. Recently, this questionnaire was revised and renamed ATLS-2 [ 22 ]. 
The age group of the users of this questionnaire ranges from 14 to > 30 years [ 21 – 26 ]. A study by Al-Hazzaa et al. in 2011 [ 21 ] reported comparable estimates of self-reported physical activity levels with the ATLS and step counts measured by pedometers (r ≤ 0.30); however, the reported r value indicates only a weak correlation. Quantitative accelerometer-measured sedentary and physical activity data are therefore required to validate the self-reported measures of the ATLS-2 physical activity questionnaire. Previous studies have reported inconsistent findings for sedentary behavior and physical activity levels from self-reported questionnaires compared to accelerometer data [ 27 – 29 ]. Recent studies have used objective methods such as accelerometer devices to substantiate the validity and reliability of sedentary behavior and physical activity variables [ 30 , 31 ] when measured for a period of 7 days [ 32 ]. The use of activity monitors can also encourage participants to monitor their daily activities more carefully; therefore, these devices can be used to promote a healthy lifestyle for Arab teens [ 33 ]. Previously, the ActiGraph accelerometer has been used for physical activity assessment of young adults in the UAE [ 34 ]; however, as per one study, its reliability in detecting moderate to vigorous intensity activities was not specifically evaluated [ 35 ]. The Fibion is a new, validated triaxial thigh-worn accelerometer that has been used in several recent studies in the UAE [ 36 – 38 ]. No study has yet reported the validity of the self-reported ATLS-2 physical activity questionnaire against thigh-worn accelerometer-measured sedentary and physical activity levels. The aim of this study was to evaluate the concurrent validity of the self-reported sedentary and physical activity time of the ATLS-2 physical activity questionnaire compared to Fibion accelerometer-measured data in adolescents and young adults of the UAE. 
We hypothesized that self-reported sedentary and physical activity time of the ATLS-2 would show negligible/weak correlations with Fibion accelerometer-measured variables in adolescents and young adults.
Methods Study design and setting We used a cross-sectional design for this study on adolescents and young adults. Participants were recruited from the UAE's public and private schools and universities between January and August 2022. Participants One hundred and fifty physically active adolescents and young adults of both sexes, aged between 14 and 25 years, were enrolled from the UAE's public and private schools and universities. According to a literature review of 114 studies reporting the validity of self-reported questionnaires, 90% used a sample size ≥ 100. Moreover, the COSMIN guidelines recommend a sample size of more than 100 for validation studies [ 39 ]. Therefore, 150 participants were deemed sufficient for this study [ 40 ]. Though the term "teen" has been used in referring to the instrument, the developers of the ATLS questionnaire have applied it to participants aged from 14 years to their mid-twenties [ 21 – 26 ]. Participants in the present study were excluded if they had any of the following: musculoskeletal, rheumatic, cardiovascular, or systemic conditions, or any recent surgery affecting physical activity, sleep, and/or dietary patterns. Participants were recruited using advertisements placed on university/school notice boards, flyers, mobile applications (e.g., WhatsApp) and/or word of mouth. The Research Ethics Committee of the University of Sharjah reviewed the study proposal and approved it (REC-22-02-23-01-S). Participants and/or their parents read the information sheet, and informed consent was then obtained from the participants (for adults) or their parents/guardians (for adolescents) before enrollment in the study. Research instrument Anthropometric measurements were obtained using a stadiometer for height (Seca 213, Hamburg, Germany) and a body composition analyzer for weight (Tanita HD 318, Tanita, Tokyo, Japan). 
Sedentary behavior and physical activity were assessed using the ATLS-2 physical activity questionnaire ( https://lh-hsrc.pnu.edu.sa/wp-content/uploads/2018/11/ATLS-Questionnaire-E-Revised-2018-1-1.pdf ) and the Fibion device (Fibion Inc, Jyväskylä, Finland). The ATLS-2 enables collecting and analyzing important lifestyle information from Arab teenagers and young adults [ 40 ]. The ATLS-2 questionnaire is used by adolescents and young adults to self-report variables including anthropometric measures, physical activity (34 items), sedentary behavior (4 items), sleep duration (2 items), and dietary habits (10 items). However, the dietary questions were not included in the current study. The questionnaire is available in English and Arabic versions, and our participants were given the option to choose either version. Procedure The ATLS-2 questionnaire was filled out online by the participants via Google Forms before wearing the Fibion accelerometer. Sedentary and physical activity duration was assessed using the Fibion device affixed to the right anterior thigh. The participants were asked to wear the Fibion device for one week and to remove it during any water-based activity, since the device is not waterproof. The Fibion device was worn on the proximal third of the thigh as instructed by the official Fibion website. A non-allergenic adhesive tape was used to secure the device to the body. The Fibion device can precisely measure and analyze different intensities of physical activity when it is placed on the anterior mid-thigh [ 37 , 42 ]. Accelerometer data processing We used Fibion data processing techniques reported in earlier studies [ 20 , 22 ] to analyze physical activity time. Fibion data, as well as each participant's age, sex, weight, and height, were submitted to the manufacturer's website ( www.fibion.com/upload ). 
As a result, the web service provided explicit reports on the time, intensity, and energy expenditure of physical activity. Data from CSV files, including minute-by-minute and day-by-day data, were retrieved and analyzed. A customized data fixer tool from the Fibion manufacturer was used to exclude standard night-time (11 pm to 7 am) for all participants to prevent conflation of night-time data with sedentary and upright time data [ 37 ]. For the data to be considered valid, a minimum of 10 h (600 min) per day for 3 weekdays and 1 weekend day must be collected [ 37 ]. Only sedentary behavior and physical activity time per day was included for analysis, after accounting for the number of valid days of participation for each task. Among the 150 participants who were initially enrolled in the study, 19 were excluded due to invalid data or technical difficulties with the devices during data collection, and 131 were thus included in the analysis. To mitigate differences in wear time of the Fibion device between participants, all variables were normalized to 16 h of activities ([the time duration of each task × 16]/the total wear time) [ 29 , 43 ]. All the variables of interest (duration of walking, cycling, high intensity, moderate intensity and total activity), except for sitting/sedentary time (hours/16-hour day), extracted from the Fibion data were expressed as minutes/16-hour day. The ATLS-2 physical activity questionnaire data were obtained using Google Sheets associated with the Google Form customized for ATLS-2 data collection. Participants self-reported the duration of activities in minutes per (24-hour) day. Some tasks mentioned in the ATLS-2 questionnaire, such as self-defense sports, household work, and dancing activities, can fall into any of the three intensity types (light, moderate, and high). 
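The wear-day validity rule and the 16-hour normalization described above can be sketched as follows (a minimal illustration of our reading of the formula; the function names are ours and are not part of the Fibion tooling):

```python
def is_valid_day(wear_minutes):
    """A day counts as valid if at least 10 h (600 min) of wear time was recorded."""
    return wear_minutes >= 600

def normalize_to_16h(task_minutes, total_wear_hours):
    """Scale a task's daily duration to a 16-hour wear day:
    (task duration * 16) / total wear time."""
    return task_minutes * 16 / total_wear_hours

# Example: 45 min of walking recorded during 12 h of device wear
print(normalize_to_16h(45, 12))  # 60.0 min per 16-hour day
```

With this scaling, participants who wore the device for different lengths of time can be compared on a common 16-hour basis.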
As the accelerometer devices cannot record the contextual factors of activity (e.g., self-defense, household, or dancing activity) or upper limb activities, the intensity of activity was considered while calculating the total activity time from the Fibion output. For the ATLS-2 total activity time, the sum of walking, running/jogging, cycling, moderate intensity, high intensity, self-defense sport, household work and dancing activities was used. Statistical analysis Shapiro-Wilk tests were used to determine whether the data were normally distributed. Concurrent validity between the ATLS-2 and Fibion data was assessed using Spearman's rho correlation coefficients, as the data were not normally distributed. The criteria used for interpreting correlation coefficients are shown in Table 1 [ 36 , 44 ]. Bland-Altman plots and 95% limits of agreement were used to detect outliers and systematic/proportional bias. In the Bland-Altman plots, mean values were plotted against differences between the Fibion and ATLS-2 measures (i.e., Fibion accelerometer-measured time – self-reported ATLS-2 time for each task). These plots included 95% limits of agreement (mean ± [1.96 × SD], where mean and SD are the mean and standard deviation of the differences between the Fibion and ATLS-2 measures, respectively). In addition, proportional bias in the data was assessed using linear regression analyses, with the difference and mean scores of the two methods (ATLS-2 and Fibion accelerometer data) used as the dependent and independent variables, respectively [ 43 ]. A significance level of < 0.05 was set for all analyses. Statistical analyses were performed with IBM SPSS Statistics version 28 (IBM Corp., Armonk, NY, USA).
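The Bland-Altman computation described above (bias and 95% limits of agreement as mean ± 1.96 × SD of the paired differences) can be sketched in a few lines; the paired sitting-time values below are invented purely for illustration:

```python
from statistics import mean, stdev

def bland_altman(device, self_report):
    """Return the bias (mean difference) and 95% limits of agreement
    between two measurement methods, following Bland and Altman."""
    diffs = [d - s for d, s in zip(device, self_report)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired sitting times (hours/day): accelerometer vs. questionnaire
fibion = [9.5, 8.0, 10.2, 7.8, 9.0]
atls2 = [6.0, 7.5, 8.0, 5.5, 7.0]
bias, lower, upper = bland_altman(fibion, atls2)
print(round(bias, 2))  # a positive bias: self-report underestimates device time
```

Plotting the differences against the pairwise means, with these limits drawn as horizontal lines, reproduces the standard Bland-Altman plot; regressing the differences on the means gives the proportional-bias check.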
Results Among the 131 participants (age range: 14–25 years), 81% were non-Emirati Arabs (n = 106), 13% were Asians (n = 17), and 6% were Emiratis (n = 8). Other participant characteristics are summarized in Table 2 . All variables showed only negligible or weak, non-significant correlations between the ATLS-2 and Fibion measurements. Overall, self-reported time with the ATLS-2 underestimated accelerometer-measured time for sedentary behavior and physical activities in the population studied. Study participants' characteristics and the Spearman's rho correlation coefficients correlating variables of the Fibion and ATLS-2 are presented in Tables 2 and 3 , respectively. Most participants did not score any value for most activities (e.g., biking, dancing, and sports of various intensities) in the ATLS-2. The Bland-Altman plots with 95% limits of agreement are shown in Figs. 1 , 2 , 3 , 4 , 5 and 6 . Except for walking and moderate intensity activity time (regression model p value > 0.05; Figs. 2 and 5 ), a proportional/systematic bias was evident in the other plots (regression model p values < 0.05), with a tendency for difference scores to decrease as mean scores between the methods increased, as revealed by the regression lines.
Discussion This study investigated the concurrent validity of self-reported ATLS-2 time against Fibion accelerometer-measured time for various activities performed during the day, such as sitting, walking, cycling, and high intensity and moderate intensity activities. Overall, there were negligible or weak correlations between most variables measured by the two methods. The findings of the study were consistent with our hypothesis. Subjective self-reported measures of the ATLS-2 were not as accurate as the objectively measured sedentary and physical activity time parameters of the Fibion accelerometer, the reference standard used in our study. As the ATLS-2 is a self-reported measure, people often cannot recall exactly how long they participate in certain activities, which may lead to an underestimation or overestimation of values. For example, it may be difficult to estimate how many hours one sits during the day. Moreover, adolescents and young adults might underestimate or exaggerate their physical activity status when asked. These reasons can account for the discrepancies in the results between the ATLS-2 questionnaire and the Fibion device measurements. Specifically, the results showed a negligible correlation between the two methods for sitting/sedentary behavior (r = -0.04) and walking time (r = 0.05). The ATLS-2 seemed to underestimate sedentary time and other variables (Table 3 ). Moreover, the participants in our study were teenagers and young adults (aged 14–25 years) who were studying in schools/universities or were newly employed in jobs where they were required to sit for long periods of time. These participants may not be able to accurately estimate how long they have been sitting or doing other activities during the day, depending on the variations in their day-to-day tasks and recall bias. 
The ATLS-2 questionnaire included questions on time spent watching television, using the internet, and sleeping during weekdays and weekend days to record sedentary behavior. These items cannot possibly account for all the sedentary activities people participate in, as there are many other activities that can be considered sedentary, such as sitting with family or reading. This would further contribute to the underestimation of sitting/sedentary time. A systematic review by Lee et al. (2011) found comparable results for self-reported walking time with the International Physical Activity Questionnaire short form (IPAQ-SF) [ 46 ]. When the IPAQ-SF was compared to readings from different actometers, accelerometers and pedometers, walking time was underestimated by 28% at the level of smallest discrepancy. The problem this study highlights is that (brisk) walking frequently matches moderate intensity activity (walking: 3.3 metabolic equivalents [METs]; moderate intensity: 3-5.9 METs). Hence, accelerometers might classify brisk walking under moderate intensity activities. When compared to the accelerometers, the moderate intensity time calculated from the IPAQ-SF revealed weak relationships. Similar discrepancies are plausible when comparing ATLS-2 and Fibion accelerometer measurements. A study by Wang et al. (2013) found that moderate physical activity levels of Chinese youth were overestimated by 106% with the IPAQ-SF when compared to ActiGraph measurements [ 28 ]. Therefore, the results for self-reported moderate intensity activity differ from those of accelerometers, casting doubt on the validity of self-reported sedentary and physical activity measures with the IPAQ-SF and ATLS-2. In our study, a negligible correlation (r = -0.001) was found for high intensity activity time between the two methods. 
People usually participate in high-intensity sports for a short period of time, yet correlations between self-reported and accelerometer-measured time were still negligible. In a study by Al-Hazzaa et al. in 2011, similar correlation estimates were found between the pedometer and the ATLS-1 questionnaire [ 29 ]. The strength of correlation between the ATLS-1 and pedometer was weak (r = 0.338) for high-intensity ambulatory activities such as running and jogging, and negligible for non-ambulatory activities like bicycling (r = 0.135), household chores (r = 0.137), and weight training (r = 0.042). These results may not be directly comparable with our study, as they used a pedometer and the ATLS-1 whereas we used the Fibion accelerometer and the ATLS-2. Nonetheless, in agreement with our findings, the strength of correlation between self-reported ATLS-1 and pedometer values was found to be weak or negligible (Table 1). Strengths, methodological considerations, and limitations This is the first study to assess the concurrent validity of the ATLS-2 physical activity questionnaire (Arabic and English versions) against the validated Fibion accelerometer in the United Arab Emirates. For this purpose, we included a large cohort of 131 healthy adolescents and young adults with valid Fibion accelerometer data of ≥ 600 min/day for at least 4 out of 7 days, including three weekdays and one weekend day. Participants were asked to recall the duration of specific activities of daily living during a typical (usual) week, which could have resulted in recall bias confounding the findings. Recall bias is not the only limitation of such questions; accurately assigning a duration to one's activities can also be quite difficult. The ATLS-2 questionnaire does not include a specific question inquiring about the number of hours a teen/young adult remains sedentary/sitting during a day. 
Most of the participants did not report engaging in many of the physical activities mentioned in the ATLS-2, such as biking, dancing, and sports of various intensities. Thus, the physical activity of adolescents and young adults should be directly monitored with reliable and valid accelerometers or other similar devices, whenever possible, to obtain a better interpretation of their sedentary behavior and physical activity [ 47 ]. This study mainly included a combination of school and university students. Future studies could validate the ATLS-2 physical activity questionnaire with accelerometers exclusively in school children to see whether it yields more concordant findings between the two methods. Further revision of the ATLS-2 physical activity questionnaire is also recommended. As people in the United Arab Emirates live in a very dry and hot climate, most children/adolescents and young adults might not choose to do outdoor activities during the daytime. Moreover, most extra-curricular activities are not included in regular physical education classes for children. Thus, strikingly, most participants did not score any value for most activities in the ATLS-2. However, the Fibion accelerometer device captured the duration of activities of different intensities irrespective of the type of task (during study, work or leisure). Participants were asked to remove the Fibion during any water-based activities, which precluded comparison to the swimming time collected in the ATLS-2. We used the Fibion data fixer tool to exclude standard night-time (11 pm to 7 am) for all participants to prevent conflation of night-time data with sedentary and upright time data [ 37 ]. A few minutes of cycling data were recorded by the Fibion device for some participants even though those participants did not report cycling with the ATLS-2. 
Inaccurate classification of an activity may be caused by a device malfunction, such as a failure to detect signals or start-up issues, which may affect the recording and processing of data. The self-reported scores with the ATLS-2 questionnaire underestimated the duration of sitting/sedentary time and physical activities of different types/intensities, even though task durations were reported for a 24-hour day, compared to the corresponding Fibion data normalized to a 16-hour day. Comparing these two outcomes (self-reported time per 24-hour day vs. Fibion accelerometer data normalized to a 16-hour day) is therefore challenging, yet relevant, as Fibion data without normalization to a 16-hour day would be confounded by participants' device wear time. However, these differences (self-reported ATLS-2 vs. normalized Fibion data) were not expected to confound the correlation values reported in the study. Future recommendations The ATLS-2 questionnaire could add a distinct question about sitting time per day (in hours) to investigate this component more specifically. The number of physical activities included in the questionnaire is extensive, and future studies must note that not all tasks may be applicable to all adolescents and young adults. Further validation of the dietary component of the questionnaire is warranted. A waterproof wrap or sealed envelope enclosing the Fibion device could allow participants to take part in water-based activities without having to remove the device, thereby giving a chance to capture the relevant data.
Conclusion This study showed an overall poor correlation and low agreement between self-reported ATLS-2 and Fibion accelerometer-measured sedentary and physical activity time. Negligible to weak correlations were noted for sitting, walking, cycling, high and moderate intensity activity, and total activity time between the two methods. The ATLS-2 physical activity questionnaire may be used for collecting self-reported data on a large Arab (teen/young adult) population and in settings where accelerometers are unavailable; however, the chance of underestimation of sedentary and physical activity time with such self-reported physical activity questionnaires must be noted. We recommend including accelerometers or similar devices whenever possible to provide objective physical activity estimates.
Background Most young adults and adolescents in the United Arab Emirates (UAE) do not meet the internationally recommended physical activity levels per day. The Arab Teens Lifestyle Study (ATLS) physical activity questionnaire has been recommended for measuring self-reported physical activity of Arab adolescents and young adults (aged 14 years to the mid-twenties). The first version of the ATLS has been validated against accelerometers and pedometers (r ≤ 0.30). The revised version of the questionnaire (ATLS-2, 2021) needs further validation. The aim of this study was to validate the self-reported sedentary and physical activity time of the ATLS-2 (revised version) physical activity questionnaire against Fibion accelerometer-measured data. Methods In this cross-sectional study, 131 healthy adolescents and young adults (age 20.47 ± 2.16 [mean ± SD] years, range 14–25 years; body mass index 23.09 ± 4.45 kg/m 2 ) completed the ATLS-2 and wore the Fibion accelerometer for a maximum of 7 days. Participants (n = 131; 81% non-UAE Arabs (n = 106), 13% Asians (n = 17) and 6% Emiratis (n = 8)) with valid ATLS-2 data without missing scores and Fibion data of a minimum of 10 h/day for at least 3 weekdays and 1 weekend day were analyzed. Concurrent validity between the two methods was assessed with Spearman's rho correlations and Bland-Altman plots. Results The questionnaire underestimated sedentary and physical activity time compared to the accelerometer data. Only negligible to weak correlations (r ≤ 0.12; p > 0.05) were found for sitting, walking, cycling, moderate intensity activity, high intensity activity and total activity time. In addition, a proportional/systematic bias was evident in the plots for all but two (walking and moderate intensity activity time) of the outcome measures of interest. 
Conclusions Overall, self-reported ATLS-2 sedentary and physical activity time had low correlation and agreement with objective Fibion accelerometer measurements in adolescents and young adults in the UAE. Therefore, sedentary and physical activity assessment for these groups should not be limited to self-reported measures.
Acknowledgements The authors would like to thank Professor Hazzaa M. Al-Hazzaa for providing us the ATLS-2 questionnaire and granting permission to use it for our studies. Author contributions AA – conceptualization, design, supervision, data curation, data analysis, interpretation, drafting, revision; SAM, ZAZ, TME, HIA, FSJ, HYA – data collection, data curation, data analysis, interpretation, drafting, revision; TS, IMM, CH – interpretation, drafting, critical revision. All authors read and approved the final manuscript. AA led the writing of the paper and he is the guarantor. Funding The study was funded by the VC for Research and Graduate Studies Office, University of Sharjah, United Arab Emirates. Open access funding provided by Umeå University. Data availability Data are available from AA upon reasonable request. Declarations Ethics approval and consent to participate The Research Ethics Committee of the University of Sharjah reviewed the study proposal and approved it (REC-22-02-23-01-S). All study methods were performed in accordance with relevant regulations and guidelines. Participants and/or their parents read the information sheet and then informed consent was obtained from them (for adults) or their parents/guardians (for adolescents) before they were enrolled in the study. Consent for publication Not applicable. Competing interests The authors declare no competing interests. List of abbreviations ATLS: Arab Teens Lifestyle Study; ATLS-2: revised version of the Arab Teens Lifestyle Study questionnaire; IPAQ-SF: International Physical Activity Questionnaire Short Form; MET: metabolic equivalent; UAE: United Arab Emirates
CC BY
no
2024-01-15 23:35:09
BMC Public Health. 2023 Jun 1; 23:1045
oa_package/05/33/PMC10233181.tar.gz
PMC10258666
37305907
[email protected] One contribution of 14 to a theme issue ‘ Amphibian immunity: stress, disease and ecoimmunology ’. As a class of vertebrates, amphibians are at greater risk of declines or extinctions than any other vertebrate group, including birds and mammals. There are many threats, including habitat destruction, invasive species, overuse by humans, toxic chemicals and emerging diseases. Climate change, which brings unpredictable temperature changes and rainfall, constitutes an additional threat. The survival of amphibians depends on their immune defences functioning well under these combined threats. Here, we review the current state of knowledge of how amphibians respond to some natural stressors, including heat and desiccation stress, and the limited studies of immune defences under these stressful conditions. In general, the current studies suggest that desiccation and heat stress can activate the hypothalamus–pituitary–interrenal axis, with possible suppression of some innate and lymphocyte-mediated responses. Elevated temperatures can alter microbial communities in amphibian skin and gut, resulting in possible dysbiosis that fosters reduced resistance to pathogens. This article is part of the theme issue ‘Amphibian immunity: stress, disease and ecoimmunology’.
Amphibians responding to changing environments Amphibians are ancient creatures valued by all human societies. They play critical roles in aquatic and semiaquatic environments as important consumers or competitors of insects and as prey for other animals. They share a complex neuroendocrine system with other vertebrate species that enables them to thrive in a variety of environments (reviewed in [ 1 ]). Given their long evolutionary history, it is likely that some species are adapting to current climate changes, but there is a concern that some are unable to adapt quickly enough, leading to losses of biodiversity. The most recent technical report of the Intergovernmental Panel on Climate Change (IPCC, the United Nations body for assessing the science related to climate change) indicates with high confidence or very high confidence that species in all ecosystems have begun to shift their geographical ranges and alter the timing of seasonal events in response to a warming climate ( https://www.ipcc.ch/report/sixth-assessment-report-working-group-ii/ ) [ 2 ]. Many species of amphibians have wet skin with higher evaporative water loss than reptilian and mammalian skin and use the skin for both respiration and regulation of essential ion balance (reviewed in [ 3 ]). Thus, they are likely to be among the species most affected by climate change, with expectations that ranges for some species will contract until no suitable habitat remains, especially in tropical regions and the Amazon [ 4 , 5 ]. Using more than a decade of observations, Muths et al. [ 6 ] demonstrated that for temperate amphibian species, population dynamics were influenced by climate change, though responses were highly variable and context-dependent. 
In temperate regions of the midwestern USA (Minnesota and Wisconsin), evidence suggests that calling and breeding started earlier in some warm years compared with historical records dating back to 1895 [ 7 ], and some western habitats are becoming warmer and drier [ 8 ]. For example, Yellowstone National Park in the western USA has warmed significantly since 1980 [ 9 ]. Effects of warming include lower snowfall at high elevations, which leads to shorter amphibian habitat persistence, lower breeding success [ 10 ] and lower overwintering survival, especially of toads infected with the chytrid fungus Batrachochytrium dendrobatidis [ 11 ]. Soils there and elsewhere in the western USA have become drier, and these trends are expected to continue [ 12 ]. In a study of Florida, USA wetlands, Greenberg et al. [ 13 ] used 17 years of temperature, rainfall and water depth measurements to develop a model to forecast the water depths of ephemeral wetlands out to the year 2060. Their prediction was that only one of five amphibian species currently present would thrive under these conditions. Thus, it seems that, to survive, populations will need to continue to shift their ranges and evolve greater tolerance to warmer, drier conditions. Such range shifts may incur costs in terms of immune defences. Two studies of invasive amphibian species (Cuban treefrogs, Osteopilus septentrionalis , and cane toads, Rhinella marina ) expanding their ranges in the state of Florida, USA found that individuals at the leading edge showed diminished activity in one key measure of innate immunity in their plasma: bacterial killing activity (BKA) was decreased, suggesting poorer complement activity [ 14 , 15 ]. Invasive cane toads at the expanding edge of their range in Australia also showed somewhat poorer responses in measures of a cell-mediated lymphocyte response. 
Toads from older-established populations away from the invasion front had more circulating white blood cells and recruited more white blood cells into toe-webbing following injection with a plant lectin, phytohaemagglutinin (PHA), than toads from newer populations at the expanding invasion front [ 16 ]. Although the authors did not measure corticosterone in the Cuban treefrog study, they suggested in the discussion that frogs at the leading edge may have had elevated corticosterone because of increased metabolic needs associated with movement, which might have explained the reduced immune functions. In another study of invasive cane toads expanding their territories in Florida, this hypothesis did not seem to be supported: baseline levels of corticosterone did not differ between the northernmost populations at the leading edge and the more established southern populations. However, the invasive cane toads at the northern edge of their habitat range expansion in Florida had a poorer corticosterone response to short-term stress, whereas the warmer, more established southern populations mounted stronger corticosterone responses [ 15 ]. All of these studies suggest possible trade-offs between the need to support metabolism in marginal habitats and the need to support immune defences. Many amphibian species depend on precipitation-fed freshwater habitats [ 17 ], which are experiencing greater frequency and severity of droughts with climate change [ 18 ]. Given that the persistence of water in these habitats can vary from year to year, some amphibians have remarkable adaptations to sense declining water levels, accelerate larval development and escape the drying pond (reviewed in [ 19 , 20 ]). Although plasticity can increase the chances of surviving in variable environments, exposure to pond drying typically results in faster development at the cost of a smaller size at metamorphosis, leading to lower survival and fecundity (reviewed in [ 21 ]).
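The pond-drying dynamics described above can be caricatured with a simple daily water-balance model in which depth rises with rainfall and falls with temperature-driven evaporation. The sketch below is purely illustrative; the coefficients, input series and linear evaporation term are hypothetical and are not taken from any of the studies cited here.

```python
# Toy daily water balance for an ephemeral pond:
# depth(t+1) = depth(t) + rainfall - evaporation(temperature).
# All numbers here are hypothetical illustrations, not fitted values.

def simulate_pond(depth0, rainfall, temperature, evap_coeff=0.15):
    """Return daily pond depths (cm); the pond is 'dry' once depth hits 0."""
    depths = [depth0]
    for rain, temp in zip(rainfall, temperature):
        evap = evap_coeff * max(temp, 0.0)        # warmer days lose more water
        depths.append(max(depths[-1] + rain - evap, 0.0))
    return depths

# A warm, dry spell empties the pond far sooner than a cool, wet one.
warm_dry = simulate_pond(30.0, rainfall=[0.0] * 60, temperature=[30.0] * 60)
cool_wet = simulate_pond(30.0, rainfall=[2.0] * 60, temperature=[15.0] * 60)

hydroperiod = sum(1 for d in warm_dry if d > 0)   # days with standing water
print(hydroperiod)                                # the warm, dry pond dries in a week
print(cool_wet[-1] > warm_dry[-1])                # the cool, wet pond persists
```

Under these made-up parameters the warm, dry pond holds water for only a week, mirroring the shortened hydroperiods that force larvae to accelerate development at a cost to size at metamorphosis.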
Because the hypothalamus–pituitary–interrenal (HPI) axis orchestrates both the accelerated metamorphosis phenotype [ 22 ] and the drastic immune system changes that occur with metamorphosis [ 23 ], researchers have hypothesized that immune trade-offs are likely to occur (reviewed in [ 24 ]). Only a small number of amphibian species have been studied under shortened hydroperiod conditions (reviewed in [ 25 , 26 ]), and only a few studies have assessed immune responses. Specifically, shorter hydroperiods led to weaker cellular immune responses to PHA in wood frogs ( Rana sylvatica ) and northern leopard frogs ( Rana pipiens ) [ 27 , 28 ]. On the other hand, the New Mexico spadefoot toad ( Spea multiplicata ) did not display carry-over effects of pond drying on immune function [ 29 ]. In two species of leopard frogs ( R. pipiens and Rana sphenocephala ), carry-over effects of shorter hydroperiods also included changes in host-associated microbiota [ 30 ], shifting towards a lower capacity to inhibit pathogen growth [ 31 ]. Thus, the effects of pond drying on the development of specific immune defences in postmetamorphic amphibians have been studied in only a limited number of species, and further studies are needed. Effects of heat and dehydration as stressors on immunity in adult amphibians In this section we examine what is known about the effects of extreme heat and/or dehydration on the ability of adult amphibians to mount effective immune responses. It should be noted that amphibians are a very diverse class of animals with variable thermal tolerance limits [ 32 ], and the effects of extreme heat and desiccation on immune function have not been well studied. Most studies have been conducted with anuran species, and there are very few on urodeles or caecilians. Thermal performance can also vary among populations within a species [ 33 , 34 ].
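The idea of thermal tolerance limits can be made concrete with a thermal performance curve, in which performance peaks at an optimal temperature and falls to zero at the critical thermal minimum and maximum. The asymmetric shape below (a gradual rise followed by a steep decline towards CTmax) is one common modelling choice; every parameter value is hypothetical, chosen only to illustrate the shape.

```python
import math

# Sketch of a thermal performance curve (TPC): performance peaks near an
# 'optimal' temperature (t_opt) and drops to zero at the critical thermal
# minimum and maximum (ct_min, ct_max). Parameter values are hypothetical.

def performance(t, t_opt=25.0, ct_min=5.0, ct_max=33.0, sigma=7.0):
    if t <= ct_min or t >= ct_max:
        return 0.0                                           # beyond critical limits
    if t <= t_opt:
        return math.exp(-((t - t_opt) / (2 * sigma)) ** 2)   # gradual rise
    return 1.0 - ((t - t_opt) / (t_opt - ct_max)) ** 2       # steep decline

print(performance(25.0))                      # 1.0 at the optimum
print(performance(31.0) < performance(19.0))  # decline past t_opt is steeper
```

The asymmetry captures the point made throughout this section: a few degrees of warming above the optimum costs an ectotherm far more performance than the same shift below it.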
Species differ in their capacity to resist evaporative water loss, and hylid frogs with higher desiccation resistance were predicted to be able to tolerate a higher range of temperatures [ 35 ]. As further evidence that species differ greatly in their responses to desiccation, a study of five species of Brazilian toads from differing habitats showed that larger species had higher rates of water uptake but lower resistance to water loss [ 36 ]. Furthermore, thermal performance curves (i.e. performance peaks at some 'optimal' temperature and mortality occurs at upper and lower limits) (reviewed in [ 37 ]) vary by age and life stage [ 38 ]. Lertzman-Lepofsky et al . [ 39 ] emphasized the importance of considering both elevated temperature stress and evaporative water loss as risks for reaching the physiological limits of amphibians as the Earth warms. For example, using biophysical models based on empirical hydrothermal performance curves, Greenberg & Palen [ 40 ] demonstrated that both thermal and hydration physiology need to be considered when estimating climate change effects on amphibians. Behavioural changes that allow amphibians to move to warmer microhabitats could enable them to avoid chytridiomycosis caused by the chytrid fungi B. dendrobatidis and Batrachochytrium salamandrivorans [ 41 , 42 ]. However, in a natural setting in Belgium, salamanders in the field rarely achieved the temperature needed to resist infection by B. salamandrivorans [ 42 ]. Thus, the need of some amphibians to remain in cool, wet environments precluded their ability to avoid disease. On a positive note, some recent publications suggest that amphibians have both behavioural and physiological plasticity that may enable them to adjust and adapt to changing thermal conditions ([ 43 ], reviewed in [ 44 ]). An example may be found in plethodontid salamanders in the Appalachian Mountains of the USA.
A recent study showed that six of fifteen species examined underwent significant reductions in body size over the last 55 years in response to increasing temperatures, especially at southern latitudes with hotter, drier conditions. Possible mechanisms for the reduced body size include poorer foraging success under suboptimal conditions, resulting in reduced overall growth [ 45 ]. Another example of decreased body size over many decades was found in a study of frogs in museum collections from Borneo that were linked with climate records spanning more than 100 years. One conclusion of this study was that frogs were larger under wet conditions than under dry conditions at cool temperatures, suggesting that when resources were limited at colder temperatures, body size was reduced [ 46 ]. Effects of desiccation independent of heat stress on immunity in adult amphibians Free-living amphibians experience many weather-related changes throughout their lives, and the release of glucocorticoid hormones owing to activation of the HPI axis is thought to be important for energy balance during stressful and non-stressful conditions [ 47 ]. The main glucocorticoid hormone in amphibians is corticosterone, and the main mineralocorticoid is aldosterone. Both hormones are involved in normal development, energy mobilization and osmoregulation, and both can inhibit lymphocyte proliferation and induce apoptosis of lymphocytes in tadpoles and adult frogs (reviewed in [ 1 ]). There are limited studies of the effects of desiccation alone on amphibians, but many studies have documented that the adaptations used to survive dehydration can be energetically costly (i.e. increased heart rate and cardiac contractility), taxing on cardiovascular tissues (i.e. blood hyperosmolality, hypovolaemia and hyperviscosity), and can trigger the release of reactive oxygen species (reviewed in [ 48 ]).
In terms of immune function under desiccation conditions, several studies of invasive species at invasion fronts in arid climates have demonstrated changes in immune functions of the dispersing populations. The expanding populations of the guttural toad, Sclerophrys gutturalis , in South Africa showed poorer hydration and apparently higher BKA under field conditions [ 49 ]. However, in ornate forest toads ( Rhinella ornata ), natural desiccation resulted in elevated corticosterone (81 and 282 ng ml −1 when dehydrated by 10 and 20%, respectively). Under these stressful conditions, the numbers of circulating lymphocytes were reduced while the numbers of circulating neutrophils were increased, suggesting a possible effect of the stressful conditions on immune parameters [ 50 ]. In crab-eating frogs ( Fejervarya cancrivora ), which inhabit mangrove swamps and marshes in Southeast Asia, dehydration increased both aldosterone and corticosterone levels (approx. 20–30 pmol ml −1 aldosterone, approx. 50–85 pmol ml −1 corticosterone) [ 51 ]. Dehydration by lack of access to water also increased aldosterone in cane toads ( R. marina ) (40 pmol ml −1 in plasma) [ 52 ]. These documented elevations in osmoregulatory hormone levels (aldosterone and corticosterone) are likely protective during periods of dehydration, but whether these hormonal changes are immunomodulatory depends on their duration and magnitude (reviewed in [ 53 ]). Effects of heat stress on immunity in adult amphibians Because amphibians are ectotherms, their metabolism increases with temperature [ 54 ], resulting in greater energetic demands that could exceed available resources. Amphibians can respond to extreme heat through behavioural changes such as seeking cooler areas underground or underwater.
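The temperature dependence of ectotherm metabolism noted above is often summarized with the Q10 coefficient, the factor by which a physiological rate increases per 10°C of warming: rate(T2) = rate(T1) × Q10^((T2 − T1)/10). The sketch below uses a hypothetical Q10 of 2 and arbitrary rate units; real values vary by species and temperature range.

```python
# Q10 temperature scaling of a metabolic rate. The Q10 value and the
# baseline rate are hypothetical, chosen only to illustrate how quickly
# energetic demand grows as habitats warm.

def metabolic_rate(rate_t1, t1, t2, q10=2.0):
    """Scale a rate measured at temperature t1 to temperature t2 using Q10."""
    return rate_t1 * q10 ** ((t2 - t1) / 10.0)

baseline = 1.0                                   # arbitrary units at 20°C
warmer = metabolic_rate(baseline, 20.0, 30.0)    # demand after +10°C of warming
print(warmer)                                    # a Q10 of 2 doubles demand per 10°C
```

With a Q10 of 2, a 10°C warmer habitat roughly doubles energetic demand, which frames the trade-off discussed next: energy spent meeting metabolic needs near thermal limits may not be available for immune defence.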
Hypothetically, if high metabolic costs are accrued at upper thermal limits, less energy may be available to mount effective immune responses, although physiological trade-offs may prioritize immune function in certain contexts. Further, various immune functions likely have distinct but related thermal performance curves [ 55 , 56 ]. Hotter conditions can also trigger physiological changes mediated by the HPI axis, given that glucocorticoids and metabolism are generally thought to positively covary (reviewed in [ 57 ]). For example, exogenous glucocorticoids increased metabolic rates in one study of red-legged salamanders ( Plethodon shermani ) [ 58 ]. An example of a species that has a relatively high critical thermal maximum (CT max ; [ 32 ]) is the invasive cane toad ( R. marina ) in Australia. At the extreme end of their range in the Northern Territory of Australia, cane toads showed increased corticosterone in blood and urine under conditions of heat stress [ 59 – 62 ]. In a study of cane toads in a setting in which the temperature naturally climbed to very high daytime values (shade temperatures exceeded 40°C), corticosterone levels were highest at the hottest time of day. Increased glucocorticoids were followed by increased evaporative water loss, suggesting cooling due to water loss across the skin; evaporative water loss is linked to elevated temperatures as a means to cool the skin (reviewed in [ 63 ]). Daily peak corticosterone levels in the cane toads averaged nearly 120 ng ml −1 . When temperatures dropped each day, the toads moved to water sources and rehydrated. If prevented from reaching water sources, the toads died [ 59 ]. Thus, elevated glucocorticoids in this setting appear to be an adaptive response that increases evaporative water loss to cool the toads, with the lost water replaced later, permitting survival under these very harsh conditions.
However, these levels of corticosterone would seem to be incompatible with lymphocyte viability [ 64 , 65 ]. Glucocorticoids likely play a role in water-seeking behaviour, as seen in guttural toads ( S. gutturalis ) when corticosterone levels were artificially increased [ 66 ]. This may be an example of a trade-off in which immune responsiveness is temporarily depressed to permit survival. The microbial communities of the skin and gut of amphibians are also critical for their health and survival (reviewed in [ 67 , 68 ]). Depletion of skin microbes by antibiotic treatment can increase pathogen susceptibility [ 69 , 70 ]. Thus, when thinking about the effects of heat on immunity, it is important to consider the effects of temperature changes on the microbiome. Two studies of red-backed salamanders ( Plethodon cinereus ) suggest that elevated temperatures (20–21°C, within the survival range) altered the microbial communities of the skin and the gut [ 71 , 72 ]. For the gut microbes, the elevated temperature reduced microbial diversity, leading to a reduced capacity to digest food and an increase in a potentially pathogenic bacterial group [ 71 ]. Elevated temperature also reduced the diversity of the microbial skin community, and the diversity was further reduced when the animals were exposed to the pathogenic chytrid fungus B. dendrobatidis [ 72 ]. Because the soil environment is a natural source of amphibian microbial communities from which the host likely selects a subset [ 73 , 74 ], changes in soil temperatures would also affect the availability of protective commensal microbes. Thus, temperature shifts toward the warmer end of the environmental tolerance range may adversely affect the microbial skin communities and resistance to disease. Effects of natural stressors on tadpoles Free-living tadpoles experience conditions determined by the aquatic environment in which they hatch from eggs.
They have no escape until metamorphosis, and thus at this life stage they must also adapt by either behavioural or physiological mechanisms. However, their endocrine and immunological systems are still developing (reviewed in [ 23 ]). Here we discuss some examples of natural stressors and their effects on tadpole immune responses. Effects of oxygen or food deprivation on aquatic larval stages Aquatic larval stages could be exposed to lower levels of dissolved oxygen and food when temperatures increase. Both limited oxygen in the water and limited food sources may compromise tadpole investment in immunity. Some, but not all, species are capable of increasing oxygen uptake through aerial respiration or gulping air at the water surface [ 75 ]. This response can come at the cost of energy expended swimming to the surface as well as increased predation risk. There are many examples in the literature demonstrating the immunosuppressive effects of oxygen deprivation in fish [ 76 ], though we could find none on larval amphibian immunity. One study simulated future climate conditions for developing Polypedates cruciger (common hourglass frog) tadpoles by increasing CO 2 and thereby decreasing pH, which resulted in lower white blood cell counts in circulation (relative to red blood cells); however, oxygen levels were not measured in this study [ 77 ]. Although amphibian larvae may have a higher tolerance to hypoxia than fish, more research is warranted here, given the potential for warming and excess nutrients to increase hypoxia risk in certain ecosystems.
In a study of short-term food deprivation in western spadefoot toads ( Spea hammondii ), Crespi & Denver [ 78 ] showed that food deprivation of premetamorphic (Gosner stage 31) or prometamorphic (Gosner stage 36) tadpoles elevated whole-body corticosterone to a level that would be incompatible with circulating lymphocytes [ 64 ], whereas postmetamorphic juveniles (nine months postmetamorphosis) decreased the release of corticosterone following food deprivation and instead reduced their activity. Lymphocytes lost by tadpoles would be replaced when food resources return, because lymphocyte populations expand in waves during larval development [ 79 , 80 ]. The decreased activity in postmetamorphic toads was thought to be a strategy to conserve energy until food became available again, but a secondary benefit is that corticosterone was not elevated, so lymphocyte activity would not be affected in the juvenile toads at a time when lymphocyte populations are rapidly expanding [ 79 – 82 ]. Effects of heat stress on aquatic larval stages In general, there are very few published studies of the effects of heat stress on immune defences of tadpoles. However, some recent studies have examined the effects of elevated temperatures on immune cells in the blood of metamorphosing tadpoles and on the larval microbiomes. In the study of elevated CO 2 , pH and white blood cell counts cited above [ 77 ], the authors also examined the effects of temperatures elevated by 3 or 5°C (from 29 to 32 or 34°C) on survival and blood cell numbers in developing P. cruciger (common hourglass frog) tadpoles at the conclusion of metamorphosis. Both elevated temperatures reduced survival, and all the tadpoles at 34°C died before reaching metamorphosis. The tadpoles at 32°C experienced high mortality after metamorphosis in comparison with controls.
At the conclusion of metamorphosis, the tadpoles at 32°C also had reduced numbers of total white blood cells relative to red blood cells, and increased proportions of lymphocytes, monocytes and neutrophils among the white blood cells, in comparison with control frogs. This study suggests that elevated temperatures in this tropical frog added further stress to the haemopoietic cell compartment during the critical period of metamorphosis, when lymphocyte numbers in the thymus and spleen are reduced by the glucocorticoid- and thyroid hormone-driven events of metamorphosis (reviewed in [ 23 ]). This reduction in immune cells likely made the newly metamorphosed froglets highly vulnerable to infection, and in this study many died immediately after metamorphosis. The microbial communities that inhabit the gut and skin of larval amphibians are different from those of adults of the same species [ 83 , 84 ]. Elevated temperatures can alter those communities at the larval stages. For example, leopard frog tadpoles ( R. pipiens ) raised at an elevated temperature of 28°C (slightly above the preferred temperature range of 20–25°C) [ 85 ] had strikingly different microbial communities from those raised at 18°C, and the warm tadpoles had a greater abundance of members of the potentially pathogenic genus Mycobacterium ([ 86 ], reviewed in [ 87 ]). A shift in the gut microbiome was also detected within a very short time (1–4 days) in tadpoles of green frogs ( Rana clamitans ) and American bullfrogs ( Rana catesbeiana ) when the acclimation temperature of 24°C was shifted to 29°C [ 88 ]. Additional studies of green frog tadpoles showed that depleting the gut microbiome by rearing in sterile water reduced thermal tolerance and survival in comparison with tadpoles raised in water containing microorganisms [ 89 ].
Another, more naturalistic, study of the gut microbiota of tadpoles of a bromeliad-specialist frog species found in Brazil ( Ololygon perpusilla ) showed that elevated temperatures (about 6°C above ambient) led to significant changes in the gut microbiome that were characterized as dysbiosis. The tadpoles were in competition with other invertebrates, including mosquito larvae. The warming temperatures altered the environmental bacterial community and the arthropod community such that bacterial communities in the tadpole gut changed, resulting in stunted tadpole growth [ 90 ]. All of these studies show that temperature can have a dramatic effect on the gut microbial community necessary for food digestion, adaptation to temperature changes, and survival. Amphibian larvae are susceptible to infection by trematodes (mainly of the Ribeiroia and Echinostoma genera) and ranaviruses (family Iridoviridae). For example, the Pacific treefrog ( Pseudacris regilla ) is infected by cercariae of the trematode Ribeiroia ondatrae shed by the intermediate-host snail. Several studies suggest that warmer temperatures have differing effects on the parasite and the host. A warm temperature of 26°C resulted in fewer cercariae surviving after being shed by the snail and fewer encysted parasites in the tadpoles, but this temperature accelerated development of the tadpoles. The authors attributed this greater resistance of the frog host to a possible shortening of the susceptible larval stages or to enhanced immunity, including development of more eosinophils and lymphocytes [ 91 ]. The temperature of 26°C is well within the temperature tolerance of the host tadpoles [ 92 ] but has adverse effects on snail survival [ 93 ]. While immunity was not measured, larvae of several species are more likely to die from ranavirus infection when exposed at warmer temperatures, though studies found considerable interspecific variation [ 94 , 95 ].
More research is needed to assess how antiviral responses vary with temperature to explain differences among species. Concluding remarks Amphibian species at all life stages continue to be vulnerable to population declines owing to multiple interacting factors such as habitat loss, environmental chemicals, invasive species, overuse by humans, and emerging diseases (reviewed in [ 96 ]). Among the newest threats is global climate change. Unpredictable temperatures and rainfall resulting from climate change will exacerbate the effects of the other factors. It is likely that some species will find ways to adapt and evolve, but other species in specialized niches or with small reproductive capacity may not adapt quickly enough. Studies of the effects of climate change on the immune system of amphibians are very limited, but they suggest that the stresses of extreme heat and drought will increase the vulnerability of amphibians to diseases such as chytridiomycosis and ranavirus outbreaks. One area mentioned in this review that would benefit from additional research is the effect of shorter hydroperiods on developing amphibians and the resulting consequences for the development of immunity. Another is the effect of reduced oxygen levels in aquatic environments, which forces tadpoles and adults to expend more energy to survive, and the consequences of hypoxia for immunity. Most of the studies we have been able to access were from the USA or elsewhere in North America. However, the effects of climate change on amphibians in tropical areas will likely be different, and more studies are needed from these vital tropical habitats. Most studies of heat stress on immunity in amphibians have also been conducted with anuran species (frogs and toads); more studies should be conducted on urodeles, which seem to be especially vulnerable to the chytrid pathogen B. salamandrivorans [ 42 , 97 , 98 ].
Not only do we need to better understand the effects of changing environments and a changing climate on amphibian immunity, but also future research should focus on ways to mitigate climate change impacts and prioritize vulnerable species.
Data accessibility This article has no additional data. Authors' contributions L.A.R.-S.: conceptualization, funding acquisition, project administration, writing—original draft, writing—review and editing; E.H.LS.: writing—review and editing. Both authors gave final approval for publication and agreed to be held accountable for the work performed herein. Conflict of interest declaration We have no competing interests. Funding Work from the Rollins-Smith laboratory was supported by the US National Science Foundation grant numbers DEB-1814520, IOS-2011291, IOS-2147467 and BII-2120084 and by the Strategic Environmental Research and Development Program (SERDP) of the Department of Defense (Project no. RC2638, C. Richards-Zawacki, principal investigator). The funding agencies had no direct involvement in the preparation of this review.
CC BY
no
2024-01-15 23:43:51
Philos Trans R Soc Lond B Biol Sci.; 378(1882):20220132
oa_package/e5/f7/PMC10258666.tar.gz
PMC10263403
37341226
Potential conflict of interest There is no conflict related to this article. Abstract Myocardial infarction with non-obstructive coronary arteries (MINOCA) is an intriguing clinical phenomenon of uncertain prognosis, characterized by evidence of myocardial infarction (MI) with normal or near-normal coronary arteries on angiography 1 . There are currently no management guidelines, and many patients are discharged without a determined etiology, which often means that optimal treatment is delayed. We report three MINOCA case studies covering the main cardiac pathophysiological causes, particularly epicardial, microvascular and non-ischemic, each leading to different treatment. The patients presented with acute chest pain, elevated troponin and no angiographically significant coronary disease. In this study, we analyze the etiology, clinical diagnosis and treatment of MINOCA in light of the relevant literature. MINOCA is considered a dynamic working diagnosis encompassing coronary, myocardial and non-coronary disorders. Prospective studies and registries are needed to improve patient care and outcomes.
Introduction Myocardial infarction with non-obstructive coronary arteries (MINOCA) is an intriguing clinical phenomenon with an uncertain prognosis, characterized by evidence of myocardial infarction (MI) with normal or near-normal coronary arteries on angiography. 1 There are currently no management guidelines, and many patients are discharged without a determined etiology, which often means that optimal treatment is delayed. We report three MINOCA case studies covering the main cardiac pathophysiological causes, particularly epicardial, microvascular and non-ischemic, each leading to different treatment. Case 1 A 63-year-old white woman presented with acute chest pain triggered by unexpected emotional stress. 2 Her medical history included mild hypertension. On admission, an electrocardiogram (ECG) showed sinus tachycardia with 5 mm ST-segment elevation in V2-V6, a pathological Q wave, and 2 mm ST-segment elevation in II, III and aVF. Laboratory results included an elevated troponin T level of 886.3 pg/mL (normal range 12.7-24.9 ng/mL) and NT-proBNP of 1434 pg/mL (normal < 125 pg/mL). A transthoracic echocardiogram (TTE) revealed an ejection fraction (EF) of 45% with hyperdynamic basal function and a dilated, akinetic apical lateral wall. Emergency coronary angiography showed no coronary stenosis ≥ 50% in any artery potentially related to infarction. However, left ventriculography revealed apical ballooning with akinesia. Takotsubo syndrome was suspected on the basis of the LV apical wall-motion abnormalities preceded by a stressful emotional trigger, the absence of culprit atherosclerotic coronary artery disease, new ECG abnormalities, a positive troponin test and markedly elevated NT-proBNP. On day 12, TTE showed no change in the apical ballooning and akinesia.
Troponin T levels were falling over time. The patient was discharged on metoprolol and ramipril. At 3 weeks, the apical wall-motion abnormalities had resolved and the LVEF had returned to normal at 59%. Newly apparent LV myocardial hypertrophy at the apex was consistent with apical hypertrophic cardiomyopathy (HCM). A contrast agent was administered, and no apical pouch or thrombus was found. The maximum apical LV wall thickness was 17 mm, and the interventricular septum measured 12 mm at end-diastole. The ECG showed repolarization changes and giant inverted T waves in the anterolateral leads. The patient was counselled about the diagnosis of apical HCM. She was asymptomatic at her 6- and 12-month follow-up examinations, which included TTE. Case 2 A 31-year-old man with no prior cardiac history presented with crushing substernal chest pain at rest and palpitations. His cardiac risk factors included tobacco abuse and a family history of myocardial infarction in his father at age 40. For the previous 5 years he had practised bodybuilding and used high doses of anabolic-androgenic steroids (AAS). On admission, the initial ECG showed pathological Q waves in II, III and aVF and 2 mm ST-segment elevation in leads II, III, aVF and V4-6 without reciprocal ST-segment depression in I and aVL ( Fig.2A ). There was biochemical evidence of myocardial injury: troponin T 1952 pg/mL, NT-proBNP 260.8 pg/mL. TTE revealed hypokinesia of the posterior and lateral walls with a normal EF. Based on these data, he was treated as ST-elevation acute coronary syndrome. However, coronary angiography revealed no significant coronary artery lesions ( Fig.2B ); nevertheless, a provocative intracoronary acetylcholine test for suspected vasomotor dysfunction was positive. A diffuse spasm pattern was found.
The diagnostic workup of the patient's current symptoms included cardiovascular magnetic resonance imaging. No pathological changes were found. He was discharged on calcium channel blockers and nitrates and was cautioned regarding AAS use. He was asymptomatic at his 6- and 12-month follow-up examinations. Case 3 A 45-year-old hypertensive, overweight man presented to the emergency department with chest pain at rest and palpitations 1 hour after symptom onset. The admission ECG demonstrated ST-segment elevation in leads I, aVL and V2-V6. Bedside transthoracic echocardiography showed hypokinesia of the anterolateral walls of the left ventricle. The patient was taken for emergency coronary angiography because of ongoing severe chest pain and a high-sensitivity troponin T level elevated to 585.0 ng/mL. There was no coronary stenosis ≥ 50%. However, there were multiple ectasias of the left coronary artery, notably a small saccular aneurysm in segment 11 and a fusiform aneurysm in segment 13; a large fusiform aneurysm with contrast stasis in segments 6-7 of the left anterior descending artery; and ectasia in segment 1 of the right coronary artery ( Figura 3 ). The patient underwent surgical myocardial revascularization of the left anterior descending and circumflex arteries 1 month after the ACS. Discussion Because MINOCA involves several pathophysiological mechanisms and various clinical presentations, management depends on the underlying cause. 3 The differential diagnosis includes myocarditis, coronary microvascular disease, pulmonary embolism, myocardial diseases such as Takotsubo, and a mismatch between myocardial oxygen supply and demand (type 2 MI). 4 Despite contemporary position statements from the ESC and the AHA, there is wide variability in how patients with suspected MINOCA are evaluated.
The current consensus excludes myocarditis and Takotsubo syndrome from the final diagnosis of MINOCA. 5 - 7 We report 3 clinical cases of patients with acute chest pain, troponin elevation and no angiographically significant coronary disease. Although elevated troponin levels reflect cardiomyocyte injury with release of this intracellular protein into the blood, the process is not disease-specific and may result from ischemic or non-ischemic mechanisms. 6 In the first case, apical HCM was masked by MINOCA, since the presentation fulfilled the following criteria: elevated cardiac troponin; symptoms of myocardial ischemia, with new ischemic ECG changes and a new regional wall-motion abnormality on TTE; and no coronary stenosis ≥ 50% in any artery potentially related to infarction. There was no specific alternative diagnosis for the clinical presentation, such as sepsis, pulmonary embolism or myocarditis. However, MINOCA is an initial working diagnosis, and appropriate cardiac imaging is crucial. The apical akinesia and dilation in the absence of obstructive coronary artery disease were considered signs of stress-induced (Takotsubo) cardiomyopathy, whereas the apical hypertrophy found on follow-up TTE was an apical variant of hypertrophic cardiomyopathy. CMR can identify the underlying cause in up to 87% of patients with MINOCA. 8 Late gadolinium enhancement in the subendocardium may indicate an ischemic cause, whereas a subepicardial location may point to cardiomyopathy or myocarditis, and the absence of relevant late gadolinium enhancement, together with edema and the specific associated wall-motion abnormalities, is a feature of Takotsubo syndrome. 8 , 9 Beta-blocker therapy in these patients may be useful to achieve adrenergic blockade, and other conventional heart failure therapies may be applied.
1 No segundo caso, a possível patogênese do infarto relacionado ao AAS inclui espasmo da artéria coronária e/ou trombose temporária. O abuso de AAS deve ser evitado. Os antagonistas do cálcio são centrais no controle dos espasmos das artérias coronárias e são fortemente recomendados como medicamentos de primeira linha. 10 Ao contrário dos betabloqueadores, pois eles prosperam o espasmo deixando a vasoconstrição mediada por alfa sem oposição pela vasodilatação mediada por beta. 11 No terceiro caso, em pacientes com SCA por ectasia da artéria coronária, a ênfase é a restauração do fluxo. A intervenção coronária percutânea de um vaso culpado aneurismático/ectático teve menor sucesso do procedimento e maior incidência de no-refluxo e embolização distal. 12 A ressecção cirúrgica é considerada a terapia de primeira linha para EAC envolvendo o tronco da coronária esquerda, múltiplo ou gigante ( >20 mm, ou > 4× diâmetro do vaso de referência) aneurismas. 13 , 14 Conclusão MINOCA é considerado um diagnóstico de trabalho dinâmico, incluindo distúrbios coronários, miocárdicos e não coronários. Estudos prospectivos e registros são necessários para melhorar o atendimento e o resultado do paciente.
Arq Bras Cardiol. 2023 May 26; 120(6):e20220705
Introduction There is broad agreement that 15 000 years before the present (BP), almost all humans lived in small mobile foraging bands. By 5000 BP, the first states had arisen in Mesopotamia and Egypt. The intervening 10 000 years saw a transition from egalitarian societies to stratification involving elite and commoner classes, where the elites controlled access to land, inherited their privileges and often enjoyed vastly better living standards than commoners. Contemporary mobile foragers tend to operate in bands of a few dozen people who make seasonal rounds within a traditional territory. Band sizes vary with ecological and technological conditions [ 1 ]. Social norms favour food sharing and oppose self-aggrandizement (see [ 2 , pp. 46–51], and sources cited there). Anthropologists have offered several reasons for the prevalence of egalitarianism among mobile foragers: (i) the production technology is simple; (ii) natural resources are available to everyone; (iii) personal asset accumulation is limited; (iv) food storage is limited; (v) technology for violence is widely accessible; (vi) teamwork can be useful for hunting; and (vii) food sharing mitigates individual risks associated with bad luck or injuries. Prehistoric mobile foragers probably had similar characteristics and thus would also have been highly egalitarian. Sedentary foraging became important in southwest Asia during 15 000–13 000 BP and developed in many other regions in the early Holocene. We have examined the reasons for this transition elsewhere ([ 3 ; 4 , ch. 4]). Ethnography establishes that sedentary foragers generally have much larger group sizes than mobile foragers, higher population relative to natural productivity, more food storage and more stratification [ 5 – 7 , pp. 40–44; 1 , pp. 171–172]. Kelly [ 1 , p. 
104] remarks that such societies tend to exhibit ‘social hierarchies and hereditary leadership, political dominance, gender inequality, and unequal access to resources’, although these features are not universal. Pristine transitions to agriculture occurred in 8–10 regions of the world, with the earliest of these dating roughly to the Pleistocene–Holocene climate boundary around 11 600 BP [ 4 , ch. 5; 8 ]. Agricultural productivity gradually rose through learning by doing and domestication. Such societies had much larger regional populations and settlement sizes than sedentary foragers [ 9 ]. Inequality was relatively modest for early labour-limited farming economies, but it increased dramatically as these economies became more land-limited [ 10 ]. Chapter 6 of our recent book on economic prehistory [ 4 ] presents a detailed formal model showing how early inequality could have emerged. One goal here is to describe this model in a verbal way that will be accessible to non-economists. The other main goal is to compare our theoretical framework with two other frameworks in the literature. We regard the three approaches as complementary. Although there are some areas of overlap, each approach helps to account for phenomena that are ignored or downplayed by the other two. Section 2 describes an archaeological synthesis that centres on the Holocene climate, the emergence of concentrated and reliable resource patches, and the transmission of material assets like land and animal herds from parents to children. For convenience, we call this the ‘Holocene environment household inheritance’ theory, or HEHI. Section 3 summarizes a different theory, which focuses on the distribution of a population across a region, with varying assumptions about the ability of early arrivers to exclude late arrivers. We call this the ‘ideal free ideal despotic’ theory, or IFID. Section 4 describes our framework, which we call the ‘insider outsider elite commoner’ theory, or IOEC. 
In our causal system, climate, geography and technology determine aggregate regional population through long-run Malthusian dynamics. In the short run, migration among individual sites determines local populations, local property rights and an associated pattern of inequality. Section 5 adds some caveats involving kinship and warfare. Section 6 compares our IOEC theory with the HEHI and IFID theories from §§2 and 3. Section 7 is concerned with empirical issues. We briefly describe recent archaeological research on the origins of inequality in several regions. These findings offer partial support for our IOEC theory. Section 8 offers concluding thoughts.
Conclusion The IOEC theory we have developed in this paper and in Dow & Reed [ 4 , ch. 6] has the following causal structure. (a) Improvements in climate and/or technology raise regional productivity and lead to long-run aggregate population growth through Malthusian dynamics. (b) Regional population growth leads to higher local population densities at individual sites, causing an initial closure of the best sites (insider–outsider inequality) and the subsequent stratification of the best sites (elite–commoner inequality). (c) Over time, the extension of insider and elite property rights to lower-quality sites leads to a contraction of the commons, with the lowest-quality sites remaining open. (d) Thus, productivity growth due to improved climate or technology yields greater regional inequality, more inequality at individual stratified sites and worsening absolute poverty for commoners. We provide formal economic reasoning for each stage in this process. Related theoretical frameworks follow similar trajectories but leave out some elements of this story. The HEHI approach in §2 goes from the Holocene to economic defensibility of good sites, to the increased importance of material assets in agriculture and pastoralism, and then to greater inequality among households or individuals. However, it ignores group property rights and provides no causal explanation for insider–outsider inequality or elite–commoner inequality. The IFID approach in §3 has a similar property rights sequence to IOEC, going from open sites to closed sites to stratified sites, but it is less explicit about the reasons for these transitions and their consequences for inequality. The core of our theory involves local population density and the critical mass of insiders per unit of land area that would be sufficient to exclude outsiders. The main way in which this critical mass can be reached is through Malthusian population growth at the regional level.
But other mechanisms could lead to local population growth at some sites. For example, people may respond to negative environmental shocks by migrating to refuge sites that are buffered from the shock, or they may respond to threats of warfare by migrating to easily defended sites, or they may migrate toward newly profitable trade routes. Any of these processes could yield local IO or EC inequality even without population growth at the regional level. Cultural factors such as shifts in religious beliefs might also make individual sites more attractive, with migration again generating population agglomeration and greater local inequality. We understand that non-economists may be uncomfortable with some of the simplifying assumptions used in our IOEC theory: for instance, the binary distinction we draw between open and closed sites, and our use of a particular population threshold to switch from one to the other. However, other theories draw similar distinctions between sites that are economically defensible and those that are not, or between ideal free and ideal despotic distributions. We invite sceptics to construct their own models using their preferred assumptions. But in the meantime, IOEC has several advantages. It is explicit about causality, it offers a unified explanation for a wide range of phenomena, and it has a rich assortment of empirical implications. HEHI, IFID and IOEC share a focus on changing environmental conditions and control over valuable resource locations as important drivers of early inequality. Other theories put the focus elsewhere. For example, some archaeologists maintain that most or all societies include aggrandizers who try to promote their own welfare at the possible expense of others, and that, in periods of stress (environmental, technological, demographic or the like), dominant individuals seize greater control over resources or shape social institutions in ways favourable to stratification [ 16 , 38 ]. 
Relatedly, some writers emphasize efforts by aggrandizers to gain direct control over the labour time of others rather than control over physical assets such as land [ 39 , 40 ]. Others emphasize the ability of individuals or groups to manipulate social norms or cultural beliefs in ways that enhance their own privileges [ 41 ]. We do not dismiss theoretical stories of this kind, but they are less explicit about causal mechanisms than the approaches we discussed earlier, and they seem more difficult to test. None of the cases in §7 provides comprehensive support for the IOEC theory in §4, but we hope these cases suggest the general plausibility of our story. One lesson we derive from these cases is that empirical researchers often define the starting point for inequality in a region using the initial appearance of stratification (e.g. evidence for chiefdoms). We think this tends to understate the antiquity of inequality by neglecting a potentially prolonged insider–outsider stage during which technological innovation and population growth were leading to the closure of good sites, resulting in greater inequality across but not within sites. Better tests of the IOEC model would require a panel of individual sites within a region, ranked according to their natural productivity and observed at various points in time. We would want skeletal evidence on nutrition or health that could be used to compute means and variances for individual welfare at each site and date. Contextual information on the dynamics of regional climate, technology and population would be valuable. In a perfect world, it would help to have information for each site and date on local population size, the prevailing property rights system (open, closed or stratified), and inheritance rules governing individual membership in insider or elite groups. Such datasets would permit a more systematic evaluation of the IOEC framework.
One contribution of 20 to a theme issue ‘ Evolutionary ecology of inequality ’. We examine three recent frameworks that attempt to explain early inequality. One explanation involves the emergence of dense and predictable resource patches in the Holocene, together with differential asset accumulation and inheritance by individuals or households. In this view, agriculture and pastoralism led to greater inequality because farmland and animal herds were readily inherited. Another explanation involves the distinction between ideal free and ideal despotic population distributions, together with factors that could trigger a transition from the former to the latter. We offer a third framework based on economic concepts. In our view, inequality initially arose across locations ( insider–outsider inequality ) and reflected geographical differences in resource endowments at those locations. As population densities increased, the barriers to individual migration across locations included fewer kinship linkages and the use of force by insiders to exclude outsiders. These barriers became important with the transition from mobile to sedentary foraging and predate agriculture. Insider–outsider inequality was followed by stratification within settlements ( elite–commoner inequality ), which arose at still higher population densities. We see these three theoretical approaches as distinct but complementary. While they overlap, each emphasizes some phenomena and processes ignored by the other two.
The Holocene environment and household inheritance The HEHI theory of early inequality originated with Borgerhoff Mulder et al . [ 11 ], who distinguished three kinds of wealth (embodied, material and relational) that can be passed from parents to children. Embodied wealth refers to individual characteristics like body weight or grip strength, material wealth refers to physical assets like land or cattle, and relational wealth refers to social assets like the number of one's exchange partners. When parents reliably transmit such wealth to their children, random shocks to the wealth levels of households in one generation yield persistent inequality across households in subsequent generations. These authors define four economic systems: hunter–gatherer, horticultural, agricultural and pastoral. Hunter–gatherers use wild plant and animal foods, while the other three use domesticated plants or animals. Horticultural societies differ from agricultural societies in three ways: they are labour-limited rather than land-limited, they do not have land markets, and they do not use ploughs. Agriculturalists rely mainly on plants while pastoralists rely mainly on animals. The four economic systems and the three wealth types yield 12 possible combinations [ 11 , p. 685]. For each combination, the authors give an estimate of the relative importance of a specific wealth type in a specific economic system based on the opinions of expert ethnographers for 21 small-scale societies. They also estimate the heritability of each wealth type in each economic system based on regression results for parent–child pairs in the same societies. Weighting the wealth types by their relative importance provides an aggregate heritability estimate for overall wealth in each economic system. A similar weighting exercise generates an aggregate Gini coefficient for inequality within each economic system. 
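The weighting exercise described above can be illustrated with a minimal sketch. All numbers below are hypothetical placeholders, not the ethnographic estimates of Borgerhoff Mulder et al.; the point is only the mechanics of weighting per-type heritability and Gini estimates by the relative importance of each wealth type within an economic system.

```python
# Toy sketch of the HEHI weighting exercise (hypothetical numbers).
# Relative importance of wealth types in an illustrative agricultural
# system; weights sum to 1.
importance = {"embodied": 0.2, "material": 0.6, "relational": 0.2}

# Illustrative per-type estimates (placeholders, not published values).
heritability = {"embodied": 0.2, "material": 0.6, "relational": 0.3}
gini = {"embodied": 0.15, "material": 0.5, "relational": 0.25}

def weighted_aggregate(per_type, weights):
    """Importance-weighted average across wealth types."""
    return sum(weights[k] * per_type[k] for k in weights)

agg_h = weighted_aggregate(heritability, importance)  # aggregate heritability
agg_g = weighted_aggregate(gini, importance)          # aggregate Gini
```

Because material wealth receives the largest weight in this hypothetical agricultural system, the aggregate estimates are pulled toward the material-wealth values, mirroring finding (i) below.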
The main findings are that: (i) material wealth is especially important in agricultural and pastoral societies; (ii) material wealth is more readily inherited than embodied or relational wealth in agricultural and pastoral societies; and (iii) both agriculturalists and pastoralists have substantially greater inequality than hunter–gatherers or horticulturalists (with the latter two tending to be similar). This approach is folded into a larger synthesis by Mattison et al . [ 12 ] and Smith et al . [ 13 ], who stress three preconditions for persistent institutionalized inequality: climate stability, economic defensibility and intergenerational wealth transmission. During the Pleistocene, large and frequent climate shocks favoured high mobility and encouraged risk mitigation through norms of sharing. The transition from the Pleistocene to the Holocene led to a more stable climate and sometimes provided dense, reliable and spatially concentrated resource patches. These patches facilitated sedentism and were often worth defending. ‘[T]he ability to defend resources likely depend[ed] not only on steep resource gradients but also on group size’ [ 13 , p. 12], where greater sedentism tended to promote larger group sizes. These factors led to enhanced roles for material wealth, individual property rights and differential wealth accumulation. In some regions, such developments were accompanied by plant and animal domestication. The result was intergenerational wealth transmission and persistent institutionalized inequality, as in the framework of Borgerhoff Mulder et al . [ 11 ]. Ideal free and ideal despotic distributions The IFID framework for investigating inequality in small-scale societies focuses on the distribution of a regional population across sites or habitats. The central concepts are ‘ideal free distributions’ (IFDs) and ‘ideal despotic distributions’ (IDDs). Early work by archaeologists included Kennett et al . 
[ 14 ], Kennett & Winterhalder [ 15 ], Kennett et al . [ 16 ] and Shennan [ 17 , 18 ]. For general discussions, see Codding & Bird [ 19 ] and Weitzel & Codding [ 20 ]. In such models, each agent seeks a site with maximum suitability , which is defined to be biological fitness or a closely related variable like food intake. Suitability does not depend solely on the environmental features of a site such as elevation, watershed size or soil quality. Holding natural resources and technology constant, suitability also depends on the number of agents using the site, and it declines (at least eventually) as more agents arrive. For an IFD, any agent can use any site, and the agents at a site all achieve the same suitability. Equilibrium requires that no agent wants to change sites. Thus, the regional population must be distributed so that all occupied sites have equal suitability, while the unoccupied sites have lower suitability levels. The outcome is therefore egalitarian both across and within sites. For an IDD, early arrivers can defend claims to the best sites, and their individual suitability levels are not reduced by later arrivers who occupy less desirable sites. Social barriers to entry at the most valuable sites allow agents to achieve higher suitability than they would have at IFD population levels. Conversely, agents at inferior sites have lower suitability than they would have at IFD population levels. Agents may accept subordination to dominant agents within a site because this is better than the alternative of exit to an inferior site. Thus, with IDD, the suitability of agents can be unequal both across and within sites. Some writers distinguish between negative and positive despotism [ 20 ]. In the former case, the early occupants at a site drive away newcomers. In the latter case, the early occupants extend concessions to newcomers and allow them to stay in subordinate roles so long as they provide labour services. 
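The ideal free benchmark described above lends itself to a simple numerical sketch. The declining suitability form quality / (1 + occupants) and all numbers are hypothetical illustrations, not drawn from the literature discussed here; the sketch only shows how myopic site choice equalizes suitability across occupied sites.

```python
# Minimal sketch of an ideal free distribution (IFD): agents arrive one at
# a time and join whichever site offers the highest suitability to the
# next entrant. Suitability declines with occupancy (hypothetical form).

def ideal_free_distribution(qualities, n_agents):
    """Return occupants per site after n_agents settle myopically."""
    occupants = [0] * len(qualities)
    for _ in range(n_agents):
        options = [q / (1 + occupants[i]) for i, q in enumerate(qualities)]
        occupants[options.index(max(options))] += 1
    return occupants

qualities = [10.0, 6.0, 3.0]   # best, middle and worst site
occ = ideal_free_distribution(qualities, 12)
# Better sites attract more agents, and realized suitability (quality
# divided by occupants) is approximately equalized across occupied sites.
realized = [q / n for q, n in zip(qualities, occ)]
```

An ideal despotic distribution would instead cap entry at the best sites once early arrivers can defend them, leaving later arrivers with strictly lower suitability at the remaining sites.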
Subordinates may find the concessions offered by despots attractive in comparison with the alternative of moving to the next best site. Weitzel & Codding discuss the costs and benefits of defending a site (negative despotism) and the costs to a despot of allowing people to settle at a site (positive despotism). They conclude their discussion of these issues with a comment that ‘understanding how these trade-offs articulate within an [ideal distribution model] to produce varied outcomes remains under-explored’ [ 20 , p. 352]. Key empirical questions in this literature include the sequence in which individual sites within a region become occupied and how population is distributed across sites at a given point in time. Researchers often attempt to link these observations to underlying environmental factors that determine the qualities of the sites. Less attention is usually given to measures of inequality involving the fitness, nutrition or health of agents, either across or within sites. To use the IFID framework as a theory about the origins of inequality, one would need to identify factors that could trigger the transition from an IFD to an IDD (either positive or negative). Empirical researchers concerned with this issue tend to highlight rising population density as a likely trigger. Although this is plausible, one then needs a theory where regional population is endogenous, by which we mean that regional population is causally determined by factors internal to the theory. One also needs data regarding the changes in suitability levels for individuals or groups. We will return to these matters in §6. Insiders, outsiders, elites and commoners Here we summarize our theory about the origins of inequality. The theory can be applied equally well to sedentary foragers and early farmers. The underlying mathematical framework is described at length elsewhere [ 4 , ch. 6; 21 ]. We preface this discussion with some methodological remarks. 
When archaeologists and anthropologists engage in formal modelling, they often use agent-based simulations. Economists also use simulation, but they have a long tradition of constructing models by adopting a few crisp assumptions and deriving results analytically. Our approach is in the latter tradition. In this context, simplifying assumptions are vital, both to clarify the key causal pathways and for analytic tractability. To take some examples: in the discussion below, we assume that (i) it is costless to migrate among the sites within a region but impossible to move to a site outside the region; (ii) there is a local population threshold below which outsiders cannot be prevented from entering a site, and above which they can be excluded; (iii) insiders can deter entry through threats of violence that have no opportunity cost in terms of food production. We do not argue that these assumptions are accurate descriptions of reality, but we do argue that they provide a useful starting point for analysis, and that they have interesting implications. Models of this kind can always be made more realistic by adding complexity to one's assumptions. However, several points should be borne in mind. First, models sometimes give surprising or counterintuitive results, and it is easier to understand the reasons for such results when the assumptions are kept simple. Second, it is rarely useful to start a formal analysis with a complex set of assumptions. It is much better to start simple and then explore the implications of changing one assumption at a time. Third, additional realism does not always lead to different implications. Introducing certain factors that were previously ignored could well reinforce the predictions from the original model. Fourth, simple models more often generate unambiguous predictions and thus are more open to empirical testing. 
Finally, if the predictions of a simple model line up well with known empirical patterns, further complication serves no real purpose. This is especially true if the simple model is also powerful in the sense that it provides a unified explanation for diverse phenomena. In this spirit, we describe the basics of the IOEC model. Consider a region bounded by mountains, deserts, oceans or long distances from other inhabited regions. Migration between regions is negligible. A region has many production sites, where we use the term site in an economic sense to mean an area within which agents use labour and land to produce food. This differs from the archaeological meaning of a site as a location at which data are gathered. We use the terms site and territory interchangeably, but ‘territory’ has the connotation that the land area involved is relatively large. When a sizeable population resides permanently in a geographically compact area, we sometimes use the term settlement instead of site, but this concept does not play any distinct theoretical role. Time periods are the length of one human generation (about 20 years). Within a period, the aggregate regional population is exogenous, but the agents can move among sites subject to constraints described below, so the local populations are endogenous. Each adult agent chooses a site, produces food, has children, and dies. Children become adults in the next period. Events in a single period are the short run . Events unfolding over multiple periods are the long run . Short-run equilibrium At each individual site, food output is determined by region-wide climate, resources and technology; the quality of the site; and the amount of labour used for food production (equal to the adult population at the site). Variations in site quality reflect variations in local geographical factors such as terrain, ecosystems, soil fertility and access to fresh water. 
Labour exhibits diminishing returns owing to the fixed land input at a site (e.g. doubling labour input results in less than double the food output). The average product of labour is food output per unit of labour input. Diminishing returns imply that food per person falls when the local population rises. When the local population is low enough, anyone can enter the site and produce food there. In this case, the agents at the site share food equally and each receives the average product of labour. There is a threshold for local population density ( d ) such that the existing occupants at a site, called the insiders , can block further entry. Groups of this size can reliably detect entry and carry out reprisals. For example, the insiders might cooperate to kill or drive away the outsiders. Threats of reprisal are credible and therefore potential entrants are deterred. Because deterrence succeeds, in equilibrium, there is no need to carry out the threatened actions, so the exclusion of outsiders has no opportunity cost in terms of food output. We assume insider groups of size d can overcome any coordination or free rider issues connected with the defence of a site and they share food equally among themselves. We think of d as involving the deterrence of individual unrelated outsiders. Cases involving groups of outsiders, or kinship links between insiders and outsiders, will be discussed briefly in §5. Our framework assumes that potential entrants are intercepted at the boundary of a site and that the agents already occupying the site share its resources in an egalitarian way. We do not consider situations where the first household to arrive at a site can force the next household to accept land of lower quality at the same site. Although such situations do sometimes occur at sites in the archaeological sense, we ignore unequal access to resources within an insider group. 
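The production side sketched above can be written down concretely. The functional form output = quality × labour^0.5 is a hypothetical choice that exhibits the diminishing returns assumed in the text, and the threshold value of d is illustrative; neither is specified in the original model description at this level of detail.

```python
# Toy production side of the IOEC model (hypothetical functional form).

def output(quality, labour, alpha=0.5):
    """Food output at a site with a fixed land input; alpha < 1 gives
    diminishing returns to labour."""
    return quality * labour ** alpha

def average_product(quality, labour, alpha=0.5):
    """Food per person at an open site where output is shared equally."""
    return output(quality, labour, alpha) / labour

# Doubling labour less than doubles output, so food per person falls as
# the local population rises.
assert output(8.0, 16) < 2 * output(8.0, 8)
assert average_product(8.0, 16) < average_product(8.0, 8)

# A site is open while its population is below the exclusion threshold d,
# and can be closed by its insiders once the population reaches d.
def is_closed(local_population, d=12):
    return local_population >= d
```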
When choosing a production site in the short run, agents can freely enter any site having a local population below d . Such sites are called open . We define the commons to be the set of all open sites within a region. Agents cannot enter a site having a local population at or above d unless the insiders allow this. We call such sites closed . The defining feature of a region is a low cost of travel among sites. This implies that all sites in the commons must have roughly the same average product in equilibrium because if any significant differences in average product existed, the agents would move from places with low average product to places with high average product. We use w to denote the equilibrium food income per person in the commons. Note, however, that equality of average products only holds for the subset of sites that are open. Sites that are closed, and therefore not in the commons, will have average products above w . At closed sites, insiders may choose to hire some outsiders to work on their land. If they do, they need to offer workers a food income equal to w because this can always be obtained in the commons. We call this the wage . When insiders hire some outsiders, we say that the site is stratified , we refer to the insiders as an elite , and we refer to the hired workers as commoners . Our analysis would be identical if instead of receiving a wage, the commoners paid a rent (again in food units) for the right to work on elite land and consumed their food output net of this rent. In our modelling, the elite at a given site are a cohesive group. We do not consider competition among factions within an elite, as might arise in large-scale societies. Whether insiders hire outsiders depends on the marginal product of labour, defined to be the extra output that results from a small increase in labour input. 
When the marginal product of labour is less than w , the insiders at a closed site do not hire any outsiders because the latter add less to food output than their cost in wage payments. If the marginal product of labour is greater than w , the insiders will hire some outsiders, and employment of commoners expands until the marginal product of labour falls to the level w . This framework yields the following results for short-run equilibrium. (i) If the regional population is low enough, all sites are in the commons because the local population is below d at all sites. Better sites have higher populations. (ii) If the regional population has an intermediate value, the best sites are closed but no sites are stratified. Lower-quality sites are in the commons. The closed sites have populations equal to the exclusion threshold d , while open sites have fewer people than this. (iii) If the regional population is high enough, the best sites are stratified, sites of intermediate quality are closed but not stratified, and the worst sites stay open. Among stratified sites, better sites have more commoners, but all stratified sites have elites of size d . Long-run equilibrium The aggregate regional population adjusts through Malthusian dynamics. For simplicity, we assume parthenogenesis (all agents are female). Adults who have higher food incomes have more surviving children. There is a level of food income y * at which an adult has one surviving child. In a long-run equilibrium, the regional population settles at a stationary size N * where the average product of labour for the region is y *. An improved climate or technology yields a higher regional population N * in long-run equilibrium but the average food income y * stays unchanged. Note that y * involves aggregate food output and aggregate population for the region. Individual closed sites will have a range of local average products depending on land quality. 
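The hiring rule at a closed site can be sketched as follows, taking the commons wage w as given rather than solving for the full regional equilibrium. The production form and all numbers are hypothetical; the sketch shows only the stopping rule stated above: an elite of size d hires commoners while one more worker adds more food (the marginal product) than the wage w that worker could earn in the commons.

```python
# Sketch of the hiring decision at a closed site (hypothetical numbers).

def output(quality, labour, alpha=0.5):
    """Food output with diminishing returns to labour (illustrative form)."""
    return quality * labour ** alpha

def commoners_hired(quality, d, w, alpha=0.5, max_workers=1000):
    """Hire until the marginal product of one more worker falls below w."""
    n = 0
    while n < max_workers:
        marginal = (output(quality, d + n + 1, alpha)
                    - output(quality, d + n, alpha))
        if marginal < w:
            break
        n += 1
    return n

d, w = 12, 0.5
hired = commoners_hired(quality=10.0, d=d, w=w)
site_is_stratified = hired > 0
# Elite members split the output net of the wage bill, so elite income
# exceeds the commoner wage w whenever the site is stratified.
elite_income = (output(10.0, d + hired) - w * hired) / d
```

Lowering w in this sketch raises the number of commoners hired, which matches the statement below that a falling commons wage induces stratification at the best sites.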
When there is some inequality across or within sites, the agents with high food incomes produce more children than are needed to replace themselves demographically, while the agents with low food incomes produce fewer children than are needed for replacement. Hence, a stable class structure requires some downward mobility where a subset of the children of insider or elite parents become commoners in each period. The remaining children inherit insider or elite status at the sites of their parents. Commoner parents always have commoner children. Implications Region-wide productivity depends on region-wide climate and technology. If either of these factors improves, the short-run effect is a higher food income per person with an unchanged regional population. The long-run effect is a higher regional population with the same food per person as before the change in climate or technology. As with other Malthusian models, exogenous shocks (positive or negative) are absorbed in the long run through changes in population rather than changes in the average welfare of the individual agents. Our theory combines endogenous population with endogenous property rights. A better climate or technical progress raises population densities in the long run for standard Malthusian reasons, but a higher regional population means that more sites achieve the minimum threshold for the exclusion of outsiders. Thus, the commons shrinks and the average quality of the sites in the commons declines. This depresses the food income of the agents in the commons. If regional productivity becomes sufficiently high, the falling wage induces stratification at the best sites. Consider a region in which productivity is rising over time, for either natural or technical reasons. A crucial prediction of the model is that we should observe a sequence of stages within the region. 
At a low productivity level, implying low regional population, all sites are open, and equality prevails both within and across sites. At an intermediate productivity level, which gives an intermediate regional population, high-quality sites become closed, but no sites are stratified, so we should see continued equality within each site but inequality across sites, where agents at better closed sites are better off. We call this insider–outsider inequality. At a high productivity level and a resulting high regional population, high-quality sites become stratified, which leads to inequality both within and across sites, where agents who control better sites are again better off. We call this elite–commoner inequality. Another implication of our story is that commoners become worse off in absolute terms as regional productivity increases. With more of the good sites closed, those who remain in the commons work on lower-quality land, food per person in the commons falls, and thus the wage paid to commoners at stratified sites also falls. Because the land rents enjoyed by insiders and elites are simultaneously rising owing to rising regional productivity, inequality rises both for the region as a whole and within the individual stratified sites.

Kinship and warfare

The parameter d in §4 is the critical mass of insiders required to deter individual unrelated outsiders. Insider or elite groups may sometimes be larger than this for two reasons.

Kinship

Insiders or elites may be willing to share land with outsiders related by biology or marriage, even if this leads to less food per person among the insiders. For example, kin from other locations might face adverse environmental conditions and seek refuge with relatives. One can think of land sharing in this instance as a form of insurance. Cases of this sort are less likely when settlements are large because members of large communities tend to marry endogamously, and thus have fewer kinship linkages with outsiders [ 22 ].
Warfare

Unrelated outsiders do not necessarily arrive one at a time. If insiders perceive a serious threat of attack from an organized group of outsiders, they may wish to expand the size of their own group. The economic trade-off is less food per person in peacetime versus a higher probability of winning in wartime. This leads to a theory of warfare over land among egalitarian groups [ 4 , ch. 7; 23 ]. In stratified societies, elites could grant elite status to victorious warriors who administer conquered lands [ 4 , ch. 8]. The idea that kinship and warfare considerations affect settlement sizes is not uniquely ours. However, we want to point out that the IOEC framework can be extended to handle these considerations when appropriate (for details, see the citations in the two preceding paragraphs).

Comparisons of theories

This section compares our IOEC theory from §4 with HEHI from §2 and IFID from §3. While IOEC overlaps to some degree with these two alternative theories, in each case it provides explanations for phenomena that the other theories do not. On the other hand, each of the alternative theories focuses on some phenomena that are omitted from IOEC, at least in its current form. Thus, the three theories are best viewed as complements to one another.

Insider outsider elite commoner versus Holocene environment and household inheritance

Within the HEHI framework, the transition from the Pleistocene to the Holocene was important because it led to greater climate stability, as well as the widespread availability of dense and predictable resource patches. We agree about the importance of these factors but do not model them formally. In our framework, we treat climate improvement as a factor that enhances productivity, yielding more food output from fixed inputs of labour and land. In the long run, this productivity effect increases regional population, which influences property rights and inequality.
In IOEC, improvements in climate and improvements in technology both tend to promote inequality because both are sources of long-run productivity growth. HEHI also emphasizes that some sites are better than others, and that it matters whether agents find it feasible or desirable to defend the best sites. We agree with this point and model it explicitly. In IOEC, the prevalence of diminishing returns implies that insiders want to maintain control over good land whenever they can, and the feasibility of maintaining control depends on the density of insiders per unit of land. Insiders may either block outsiders from entering a site or allow them to enter as subordinates who supply labour but do not control land, depending on the quality of the site. HEHI focuses on inequalities across individuals or households, while IOEC does not. Our theory is about inequality across groups: either insiders versus outsiders, or elites versus commoners. We are therefore concerned with the emergence of what we would call structural inequality, and we ignore individual variation within these classes. Conversely, HEHI ignores structural inequality among classes. Note, however, that we do not limit attention to stratified societies involving elites and commoners. We also explain the emergence of inequality across sites in situations where each individual site remains internally egalitarian. We expect that for any given region, insider–outsider inequality will precede stratification chronologically. A central difference between HEHI and IOEC involves property rights and inheritance. HEHI tends to see property rights as being held by individuals or households, and it is therefore concerned with the transfer of asset ownership from parents to children within a household. We tend to see property rights over land as being established and maintained by groups, and in IOEC, land is collectively rather than individually owned. 
Specifically, we think of land as being held by corporate descent groups, with individuals inheriting membership in such groups rather than directly inheriting private property rights over land parcels. In some applications, however, we do consider the possibility that individual members of an elite could have private rights to land and hire commoners to work on their individual estates. HEHI and IOEC also emphasize different causal channels. IOEC focuses on the long-run effect of productivity-related variables such as climate, geography and technology on aggregate regional population. We model how the resulting regional population will be distributed across sites in the short run, which enables us to endogenize the property rights prevailing at each site in the region. We can then explain how improvements in climate or technology give rise to greater inequality through the mediating effects of population and property rights. HEHI focuses on causal channels running from technology (hunter–gatherer, horticulture, agriculture and pastoralism) to the relative importance of different wealth types (embodied, material and relational), and from the heritability of each wealth type to the degree of individual inequality in a society using a specific technology. Our theory does not rely on a technological classification system of this kind, and it applies equally well to foragers and farmers. On the other hand, we have only a single asset that can be inherited (land). For reasons of data availability, HEHI authors tend to study hunter–gatherer societies that are located toward the egalitarian side of the foraging spectrum [ 11 , 24 , 25 ]. Sedentary hunter–gatherers with high levels of inequality, up to and including class stratification, are known both archaeologically and historically. Classic examples include societies along the northwest coast of North America [ 26 , 27 ]. Other examples include the Calusa and the Chumash. 
Such societies were often based on concentrated aquatic resources. HEHI writers note the importance of these societies, but they are no longer extant and cannot be used to estimate the relative importance or heritability of wealth types. Even so, for our purposes, it is best to disaggregate mobile and sedentary hunter–gatherers, because the former rarely have persistent institutionalized inequality while the latter sometimes do. Given suitable archaeological data, one can use the IOEC framework to explain the varying degrees of inequality exhibited by prehistoric sedentary foragers. Some writers in the HEHI literature stress the difference between labour-limited and land-limited farming [ 10 ]. This is one aspect of the definitional distinction between horticulture and agriculture drawn by Borgerhoff Mulder et al. [ 11 ], who find that horticulture (labour-limited) has a relatively low level of inequality, while agriculture (land-limited) exhibits a much higher level of inequality. From an IOEC perspective, this correlation can be explained by the fact that Malthusian population growth with fixed land resources will shift a region in a more land-limited direction over time, and this trend will coincide with rising inequality.

Insider outsider elite commoner versus ideal free ideal despotic

Similarities between our theory and IFID are easy to see. To take one example, our agents maximize food consumption, but this is linked to an adult's surviving offspring, so in effect our agents maximize fitness. The latter is often called 'suitability' in the IFID literature. Thus, IOEC and IFID adopt parallel assumptions about agent motivation. Another similarity involves the distinction between open and closed sites. When all sites have local population densities below our threshold d, all sites are open. In this case, our IOEC concept of short-run equilibrium is identical to the concept of an IFD used by IFID.
Once a site reaches this population threshold, further entry is blocked. This generates an IDD among closed sites as in IFID, where insiders who control better sites have more food per capita. Specifically, our concept of insider–outsider inequality corresponds to the idea of negative despotism (exclusion of newcomers) in IFID. We also identify conditions under which commoners will accept subordinate positions in relation to an elite who control access to a site. The concept of elite–commoner inequality in our theory corresponds to the concept of positive despotism (concessions to newcomers) in IFID. In some cases, our model of stratification might be interpreted as a House society where the elite at a site are linked by kinship while the commoners are unrelated or more distantly related [ 28 ]. It might also be interpreted as a system of patron–client relationships [ 29 , pp. 325–327; 30 ]. The IFID literature frequently focuses on the question of whether an individual site will be occupied, and it links such occupation patterns to rankings of site qualities, where population growth tends to bring lower-quality sites into use. By contrast, our IOEC models typically have the feature that all sites are occupied, but those with very low quality have very few occupants. Hence, in its current form, the IOEC model is not well suited to the task of explaining whether a specific site is occupied. In principle, however, the IOEC model could be modified by adopting a different specification for the food technology, where the number of agents at low-quality sites is zero in equilibrium. With this modification, we would get the usual IFID prediction that as the regional population grows, lower-quality sites will successively come into use.
Another distinction between the two frameworks is that IFID frequently emphasizes the role of Allee effects, in which suitability increases as the initial agents arrive at a site, reaches a maximum and then decreases as more agents arrive. The initial interval with increasing returns could arise from productivity gains associated with teamwork or a division of labour, where these gains decline or are exhausted at sufficiently large scales and diminishing returns to labour due to a fixed land input eventually dominate. In our current IOEC models, we use a simpler production technology, where diminishing returns to labour prevail regardless of the number of agents at a site. However, one could modify our technological assumptions to include an initial interval in which the average product of labour rises, with a falling average product thereafter. Such models are often used in economics and do not pose any problem in principle, although this would complicate the formal analysis. The main effect would be to create discontinuities where individual sites could jump from zero to positive populations in response to changes in aggregate regional population. We would not observe sites located on the rising part of the average product curve because such equilibria would be unstable. The central issue for IFID in the present context involves the causal factors that trigger a shift from an IFD with equality to an IDD with inequality. Empirical researchers using the IFID framework generally cite population growth as a key factor (see the discussion of the Channel Islands, Neolithic Europe, Polynesia and the Maya in §7). We agree about the causal importance of population, and our parameter d for the density of insiders at a site constitutes the dividing line between ideal free and ideal despotic distributions. The long-run Malthusian component of the IOEC model provides a causal link running from climate and technology to aggregate regional population. 
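The modification just discussed, an initial interval of increasing returns, can be sketched with an average-product curve of our own choosing (rising at first through teamwork gains, then falling as diminishing returns to fixed land dominate); the functional form below is purely illustrative, not the authors' technology.

```python
import math

# Illustrative Allee-type technology (our own functional form): average
# product AP(n) = q * n * exp(-c * n) rises with group size n, peaks at
# n = 1/c, then falls. Stable equilibria lie on the falling branch;
# sites on the rising branch would be unstable, matching the point made
# in the text.

def average_product(n, q=1.0, c=0.1):
    return q * n * math.exp(-c * n)

peak = 1 / 0.1  # group size with maximum suitability
print(average_product(5) < average_product(peak))   # True: rising branch
print(average_product(30) < average_product(peak))  # True: falling branch
```

With such a curve, a site's equilibrium population can jump discontinuously from zero to a point on the falling branch as regional population grows, which is the complication noted above.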
The short-run component of the model determines the regional population levels that will be associated with (a) open access, (b) insider–outsider inequality and (c) elite–commoner inequality. Our theory generates predictions about when a shift from (a) to (b) will occur, which can be interpreted in the IFID framework as a transition from an IFD to a distribution with negative despotism. IOEC also generates predictions about when a shift from (b) to (c) will occur, which can be interpreted in the IFID framework as a transition from negative to positive despotism. In the first case, sites are internally egalitarian, and in the second case, they are internally stratified. An important IOEC prediction is that open, closed and stratified sites can coexist in the same region, where low-quality sites are open, intermediate sites are closed but unstratified, and high-quality sites are both closed and stratified. To put the same idea into IFID terminology, the region can exhibit an IFD among one subset of sites, negative despotism among another subset, and positive despotism among a third subset. IOEC also yields predictions about when a site of given quality will transition from one property rights regime to another. One advantage of our theory in comparison with IFID is that it gives clear mathematical predictions about not only the distribution of population across sites, but also the distribution of food income across agents. Accordingly, IOEC provides a more direct foundation for the study of inequality. It also addresses various economic linkages among productivity, inequality and inefficiency. We show that improvements in climate or technology, while raising productivity, simultaneously impoverish commoners. The reason is that property rights are endogenous and the commons shrinks as productivity increases. 
As this process unfolds, the economy becomes inefficient in the sense that aggregate regional food output falls below its theoretical maximum (it would be possible to raise total output by transferring labour from poor sites in the commons to good sites that are stratified). These effects are easier to see with IOEC than with HEHI or IFID.

Regional cases

This section surveys some evidence bearing on our IOEC theory from §4. These examples are meant only to illustrate how our theory could be applied to empirical cases. For brevity, we devote one paragraph to each case and omit many details of interest to experts.

Western North America

Using a sample of 157 foraging societies, Codding et al. [ 31 ] find that larger local groups are more likely to claim ownership rights to resource locations. This relationship is strong for foragers focused on terrestrial plants and aquatic food resources, but not for those focused on hunting. Their ownership variable roughly corresponds to our concepts of open access, closed access and stratification. The results are consistent with our prediction that, other things being equal, local groups with larger sizes are more likely to reach the critical mass needed to exclude outsiders from a site. Related work by Smith & Codding [ 32 ] shows that hierarchy is associated with control over concentrated aquatic resources. From our standpoint, this shows that such resources can support elite–commoner inequality in sedentary foraging societies.

The Channel Islands

Jazwa et al. [ 33 ] study the transition from IFD to IDD on Santa Rosa Island, which they associate with the emergence of chiefdoms around 1300 BP. However, their fig. 6 [ 33 , p. 51] shows rising rates of cribra orbitalia and periosteal lesions over time, with declining stature for men, during 2200–1000 BP. This suggests gradually deteriorating standards of living for much of the population well before visible stratification.
We propose that technical innovations (single-piece fishhooks by 2500 BP and plank canoes by 1500 BP) supported rising regional populations, leading to sequential closure of higher-quality sites throughout 2200–1300 BP. Such site closures would have caused worsening poverty among outsiders (insider–outsider inequality) well before overt stratification emerged around 1300 BP or later (elite–commoner inequality). On a larger geographical scale, insider–outsider inequality can be inferred from differences across islands in the rate of cribra orbitalia [ 4 , p. 267; 34 ]. This marker for anaemia was more frequent on islands with poorer resources, even though movement among islands would not have been physically difficult.

Neolithic Europe

Shennan [ 17 , 18 , 35 ] describes the emergence of inequality among the first farmers to settle in central Europe. Initial settlements seem to have been relatively egalitarian with broad individual mobility. As population increased, favourable locations were filled in, with early arrivals maintaining control over the locations settled first. Cemeteries became common, suggesting claims to ancestral territory. Evidence for an insider–outsider stage includes variation across sites in house sizes, tools and domestic animal bones. Increasingly, it was mainly women who moved, suggesting the formation of patrilineal corporate groups. This is consistent with our expectation that higher local populations lead to inherited membership for insider groups and the exclusion of outsiders. Eventually, populations became high enough to yield stratification at the best sites, as indicated by differences in house sizes and grave goods within settlements.

Southeast Asia

Fochesato et al. [ 36 ] examine the trajectory of inequality in the Upper Mun Valley in Thailand from the arrival of Neolithic rice farmers around 2000 BC to the formation of early states around 500 AD. They use grave goods to compute Gini coefficients for several sites in the region at various points in time.
The Ginis for all Neolithic sites are relatively low, while those for Bronze Age sites are higher (except one outlier at the end of this period), and similarly for Iron Age sites. At one key site, there is no evidence for inequality in the Neolithic. The authors highlight three surges in inequality within sites. The first two (at one Neolithic site ca 1800 BC and one Bronze Age site during 1100–800 BC) seem related to trade monopolization, were associated with clear elite–commoner divisions, and were temporary. The third (at multiple late Iron Age sites) was connected to aridity, a shift from rainfed dryland farming to irrigated wet fields, and increasing regional population. The higher inequality was permanent, and early states developed relatively rapidly. The authors discuss only inequality within sites rather than across sites, so we do not know if or when there was any transition from open to closed sites during the 1000 years of Neolithic farming. However, technological innovation, population growth and the increasingly sharp stratification at late Iron Age sites are consistent with our theory in §4.

Polynesia

At historic contact, island chains in Polynesia showed strong cross-sectional correlations among the productivity of natural resources, population density and the degree of inequality [ 4 , pp. 264–265]. This pattern conforms to our expectations based on the theory in §4. Archaeological evidence indicates that the initial settlement of Fiji and West Polynesia around 3000 BP was followed by a lengthy period of population growth, as we expect from Malthusian adjustment toward a long-run equilibrium. Kennett & Winterhalder [ 15 , p. 92] remark that in this context, ‘competition for land would have been an important factor in the emergence of social hierarchies, but direct archaeological evidence for these hierarchies is meager until about 1000 BP’.
Nevertheless, there is evidence for corporate group formation and inter-group conflict in Fiji and West Polynesia during 1500–1000 BP, such as hilltop settlements and fortifications. We can infer from the clear threat of warfare that insider–outsider inequality existed in this period [ 4 , ch. 7; 23 ] and was followed by elite–commoner inequality, starting around 1000 BP.

The Maya

Prufer et al. [ 37 ] examine the transition from an IFD to an IDD for the Classic Period Maya. The area of Uxbenka initially had a small population, perhaps around 40 people. As one expects from Malthusian dynamics under favourable environmental conditions, population growth followed. The authors suggest that a core area was settled first, followed by a periphery, with open access keeping the agents equally well off. However, over time the early sites, which were also the larger sites, developed lineal kinship organization, and agents in the periphery had lower status based on descent. Prufer et al. believe individuals in the periphery did not migrate to take advantage of better opportunities in the core because kinship led to locational stickiness (people did not want to leave their own close kin or accept distant kin from other locations). The authors propose that this was sufficient for a transition from IFD to IDD (or, in our terms, from open to closed sites). There is no evidence of stratification during 300 BC to 200 AD, but by around 200 AD public works, public architecture and landscape modification indicate stratification. At this point, the population had risen to about 500. Commoners appear to have had reasonably good outside options (outlying areas had arable land and reliable water supplies), but the authors suggest that peripheral areas were impoverished relative to the elite core.
During 400–800 AD, the elite made fewer concessions to non-elites, which we interpret as falling wages for commoners owing to a rising regional population and the diminishing quantity and quality of sites in the commons.
Acknowledgements

We thank Robert Kelly, Stephen Shennan, Eric Alden Smith, and three anonymous referees for helpful comments on earlier drafts. They are not responsible for our errors or opinions.

Data accessibility

The verbal discussion in our paper relies upon results obtained from a formal mathematical model that has been published in the Journal of Political Economy [ 21 ] and in our book ‘Economic prehistory: six transitions that shaped the world’ (Cambridge University Press, 2022) [ 4 ]. Full citations to both sources are given in the reference list for the present paper.

Authors' contributions

G.K.D.: conceptualization, investigation, methodology, writing—original draft, writing—review and editing; C.G.R.: conceptualization, investigation, methodology, writing—original draft, writing—review and editing. Both authors gave final approval for publication and agreed to be held accountable for the work performed herein.

Conflict of interest declaration

We declare we have no competing interests.

Funding

We received no funding for this study.
Philos Trans R Soc Lond B Biol Sci.; 378(1883):20220293
Introduction

Animals must gather, process and act on information about their surroundings to forage efficiently. Rather than relying solely on private information obtained through direct sampling of the environment, animals can use information provided by other individuals (often inadvertently) to reduce sampling costs and increase foraging efficiency [ 1 – 4 ]. One simple form of this social learning is local enhancement, which is when an area becomes more attractive because of the presence and/or behaviour of other individuals [ 5 – 7 ]. If the focal individual visits the location after other individuals have moved on, this is called delayed local enhancement [ 5 , 6 ]. Several hypotheses have been proposed for the expected phylogenetic distribution of reliance on social information. The first and most popular hypothesis (social intelligence) is that social species evolve to take advantage of the greater amount of social information available to them and are thus expected to rely more on social information than solitary species [ 8 , 9 ]. A second, non-mutually exclusive hypothesis (predation risk) suggests that species at greater risk of predation rely more on social information because the costs of using individual information are increased [ 5 ]. However, the use of past social information could trade off against maintaining group cohesion, and leaving the group might have high fitness consequences [ 10 ]. Therefore, we propose a third hypothesis, the ‘group cohesion trade-off’ hypothesis: social species rely less on certain types of social information when maintaining group cohesion is important. An ideal system to test these hypotheses is a group of closely related species differing in social behaviour and predation risk.
After the last ice age, marine threespine sticklebacks ( Gasterosteus aculeatus ) colonized multiple lakes in British Columbia (BC) and subsequently independently adapted to two distinct ecological niches, the ‘limnetic’ and ‘benthic’, in the process becoming reproductively isolated and thus separate species under the Biological Species Concept [ 11 , 12 ]. Limnetic fish spend most of their lives feeding together in groups (shoals) in the open-water pelagic zone, whereas benthic fish tend to be more solitary and spend most of their lives foraging in the densely vegetated littoral zone [ 13 , 14 ]. The increased propensity of limnetic fish to shoal and their longer spines and more extensive armour plating are probable adaptations to higher predation pressure as adults compared to benthic fish [ 14 , 15 ], which is supported by poorer survival under trout predation in pond experiments [ 16 ]. We examined delayed local enhancement in sticklebacks from three different lakes: two with an intact, reproductively isolated pair of limnetic and benthic species (intact ‘species-pairs’, Paxton and Priest Lakes [ 17 ]) and one that historically contained a reproductively isolated species-pair that collapsed into a hybrid swarm after the introduction of crayfish (Enos Lake [ 18 – 21 ]). Thus, we can test for replicated divergence in social information use in limnetic–benthic species-pairs. The ‘social intelligence’ hypothesis predicts that limnetic fish will use delayed local enhancement more than benthic fish. If current behavioural and morphological adaptations reflect predation pressure, then the ‘predation risk’ hypothesis also predicts that limnetic fish will use delayed local enhancement more than benthic fish. To further distinguish between these hypotheses, we assessed behavioural measures related to boldness as an indicator of a fish's assessment of predation risk.
The ‘group cohesion trade-off’ hypothesis makes opposite predictions to these two hypotheses; it predicts limnetic fish will use delayed local enhancement less than benthic fish.
Material and methods

Adult fish were wild caught in 2011 and 2012 from Enos Lake (Vancouver Island, BC) and Paxton and Priest Lakes (Texada Island, BC). Canada considers these fish to be endangered and so carefully controls collection, limiting sample sizes. Only fish that resembled limnetic or benthic fish in body shape were collected from Enos Lake [ 22 ]; we refer to them as limnetic-like and benthic-like. Fish were housed at Michigan State University at 15°C and a light : dark cycle that tracked natural changes in daylight in BC. We fed the fish defrosted brine shrimp ( Artemia sp.) and bloodworms ( Chironomus sp.) once per day (except where noted below). All experiments were conducted on non-reproductive fish 4–12 months after capture in 2012 and 2013, because reproductive state can influence social information use [ 23 ]. Demonstrators were female laboratory-reared fish of the same ecotype as the observer. Fish were not fed for 24 h prior to trials to increase their motivation. Our experimental procedure was based on those used previously with sticklebacks [ 5 , 7 ]. The experimental tank (110 l with a 76 × 31 cm footprint and 43 cm water depth, figure 1 a ) was divided into three equal-sized sections using clear acrylic dividers. The sections on the right and left housed demonstrator shoals (three fish each), which could swim freely. The middle section housed the observer. The observer was placed in a clear cylinder (10.5 cm diameter, 14.2 cm height) with an artificial plant refuge. An opaque divider (white corrugated plastic) was placed in the centre of the middle compartment to limit interaction between demonstrator shoals. The interior of the tank, including the floor, was lined with white corrugated plastic to eliminate reflections on the tank walls and prevent fish from seeing outside the tank. Feeders were placed in the centre of each demonstrator section against the front wall.
The feeders were 5 × 5 × 53.5 cm high columns with opaque sides and a transparent front, such that the demonstrators, but not the observer, could see the contents. A Canon VIXIA M40 HD camcorder mounted above connected to a monitor allowed live observation and video-recording. Each observer was tested once. To reduce potential stress during the experiment (e.g. owing to neophobia), the observer was first allowed to explore for approximately 40 min a tank identical to the experimental tank, but without dividers. At the start of each trial, two demonstrator shoals and the observer were placed in their compartments and given 10 min to acclimate. A 1 ml transfer pipette was then used to feed each feeder every 90 s for 10 min (six times total). Patch status was randomly assigned in a balanced way, with the ‘local enhancement’ side fed three bloodworms suspended in water and the ‘control’ side fed just water in which the bloodworms had been defrosted ( figure 1 a ). Demonstrators pecked at the transparent feeder front as the bloodworms sank to the bottom of the column, where they could be eaten through a 2.5 cm tall slot. This design prolonged the demonstration, making it a salient cue for the observer. One minute after the final feeding, the observer was visually isolated and the demonstrator shoals were removed. The feeders were removed, quickly cleaned and replaced. All dividers and the cylinder were then removed. Using JWatcher 1.0, we recorded the observer's latency to move and all transits into the centre, left and right sides for the next 5 min. We also recorded time spent in cover of the artificial plant (defined as any part of their body covered by the plant). If there was no movement for 10 min after release, the trial was ended. One hundred and thirty-one total trials were run; however, there were two mis-trials (experimenter error with trial set-up) and 20 trials in which the observer never moved once released, leaving 109 trials.
Consistent with previous studies (e.g. [5]), we quantified behaviour for the first 90 s after the observer started moving. Here we focus our analyses of fish choice behaviour on those fish that left the centre region (n = 73 fish). When we instead analysed the complete dataset, results were similar (see the electronic supplementary material). We analysed the proportion that first chose the local enhancement side with a binomial logistic regression model (using the glm function with the ‘logit’ link function in the stats package [24]). We calculated a preference score as (time on local enhancement side − time on control side)/(total time on either side). We analysed preference scores with a tobit regression model censored on the left (all time on the control side = −1) and right (all time on the local enhancement side = +1) (using the tobit function in the AER package [25]). We analysed latency to move and time hiding in the plant for the complete dataset with a linear model (using the lm function in the stats package [24]) after a square-root transformation to improve normality of residuals. Our models for intact species-pair lakes had fixed effects of lake (Paxton or Priest), species (limnetic or benthic) and their interaction. Our models contrasting intact species-pair lakes with Enos Lake, where the species have collapsed, had fixed effects of intact (yes = Paxton and Priest Lakes combined; no = Enos Lake), species (limnetic/limnetic-like or benthic/benthic-like) and their interaction. Analysis of deviance or ANOVA tables were generated using the Anova function from the car package [26]. All statistical analyses were done in R v. 4.0.2 [24]. Figures were drawn using the ggplot2 package [27].
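The preference-score formula and its censoring bounds can be illustrated as follows. This is a minimal Python sketch of the calculation described above, not the authors' R code; the function name and example times are ours.

```python
# Sketch of the preference-score calculation used in the analyses.
# Times are seconds within the 90 s observation window; values illustrative.

def preference_score(time_enhanced: float, time_control: float) -> float:
    """(time on local enhancement side - time on control side) /
    (total time on either side). Ranges from -1 (all time on the
    control side; left-censored in the tobit model) to +1 (all time
    on the local enhancement side; right-censored)."""
    total = time_enhanced + time_control
    if total == 0:
        raise ValueError("fish never entered either side")
    return (time_enhanced - time_control) / total

# A fish spending 60 s on the local enhancement side and 15 s on the control side:
print(preference_score(60, 15))   # 0.6

# The censoring bounds used in the tobit regression:
print(preference_score(90, 0))    # 1.0  (right-censored)
print(preference_score(0, 90))    # -1.0 (left-censored)
```

The censoring matters because fish that never left their initially chosen side pile up exactly at ±1, which an ordinary linear model would mishandle.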
Results

Are there species differences in use of social information?

First choice

We first asked whether species in both lakes with intact species-pairs (Paxton and Priest) differed in their propensity to first choose the local enhancement side. Benthic fish were more likely to first choose the local enhancement side than limnetic fish (χ²(1) = 5.41, p = 0.02; figure 1b). There was no significant lake × species interaction (χ²(1) = 0.52, p = 0.47), supporting parallel species differences across lakes. There was also no main effect of lake (χ²(1) = 0.23, p = 0.63). Next, we asked how limnetic and benthic fish from intact species-pair lakes compared to limnetic-like and benthic-like fish from Enos Lake. We found a significant intact × species interaction (χ²(1) = 4.21, p = 0.04), caused by limnetic-like and benthic-like fish from Enos Lake showing the opposite pattern to limnetic and benthic fish from intact species-pair lakes (figure 1b); in Enos Lake, limnetic-like fish tended to choose the local enhancement side more than benthic-like fish.

Preference score

We also calculated a preference score, which could be more nuanced than a fish's first choice. However, fish generally stayed in the region they initially chose (the first choice and preference scores were highly correlated: R = 0.89, t(71) = 16.58, p < 2.2 × 10⁻¹⁶). Indeed, the results of analyses using the preference score were parallel to those examining the initial choices made (figure 1c; tables 1 and 2).

Are there species differences in latency to move and hiding?

Latency to move

When we considered fish from both intact species-pair lakes (Paxton and Priest), we found a significant effect of species (F(1,63) = 5.33, p = 0.024; figure 2a), with limnetic fish taking longer to move once released compared to benthic fish. There was no significant lake × species interaction (F(1,63) = 0.21, p = 0.65), supporting parallel species differences across lakes.
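The reported correlation test can be cross-checked from the coefficient alone. This is a sketch assuming a standard Pearson correlation test against zero with df = n − 2; it is our illustration, not the authors' code.

```python
import math

# For a Pearson correlation R tested against zero with df = n - 2
# degrees of freedom, the test statistic is
#   t = R * sqrt(df) / sqrt(1 - R^2).
r, df = 0.89, 71   # n = 73 fish that left the centre region
t = r * math.sqrt(df) / math.sqrt(1 - r ** 2)
print(round(t, 2))
```

This gives t ≈ 16.45, agreeing with the reported t(71) = 16.58 within the rounding of R to two decimals, and confirming df = 71 matches the n = 73 fish analysed.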
There was also no significant lake effect (F(1,63) = 2.70, p = 0.11). When we compared fish from intact species-pair lakes to those in Enos Lake, we found a significant intact × species interaction (F(1,105) = 5.45, p = 0.022), caused by limnetic-like and benthic-like fish from Enos Lake showing the opposite pattern to limnetic and benthic fish from intact species-pair lakes (figure 2a); in Enos Lake, limnetic-like fish tended to move sooner than benthic-like fish.

Time hiding

We found no differences between species in intact species-pair lakes in the time fish spent hiding under the plant (F(1,63) = 0.56, p = 0.45; figure 2b), and no significant interaction between species and lake (F(1,63) = 0.09, p = 0.77) or main effect of lake (F(1,63) = 0.68, p = 0.41). There was also no difference between fish from intact species-pair lakes and those in Enos Lake (figure 2b; electronic supplementary material, table S1).
Discussion

We tested individual threespine stickleback fish for the ability to use the feeding behaviour of others to locate a food patch. In two lakes with reproductively isolated species adapted to different ecological niches, we found strong evidence for parallel evolution of species differences in social information use; benthic fish used past social information to locate a food patch whereas limnetic fish did not. These differences were not maintained, and seemingly reversed, in Enos Lake, where the two former species have been hybridizing after anthropogenic disturbance. Given that benthic fish from intact species-pair lakes used the social information whereas the limnetic fish did not, we can reject the hypothesis that sociality selected for increased reliance on this type of social information. In addition, despite using past social information more, benthic fish did not behave as if they were more risk averse than limnetic fish. In fact, limnetic fish took longer to move, suggesting risk aversion, at least when alone, as might be expected given other morphological and behavioural traits suggesting more intense predation pressure [14,15]. This allows us to reject the hypothesis that predation risk selected for use of delayed local enhancement information. Instead, our data suggest that limnetic fish from intact species-pair lakes rely on past social information less, supporting the ‘group cohesion trade-off’ hypothesis. Indeed, Odling-Smee et al. [28] found that food-limited limnetic fish from Paxton and Priest Lakes showed a preference for a shoal over food when offered each on opposite sides of a tank, suggesting that limnetic fish find being in a shoal especially important. It was somewhat surprising that many individuals (such as limnetic fish from intact species-pair lakes) did not show delayed local enhancement. Two independent studies previously demonstrated delayed local enhancement with UK stream-collected threespine sticklebacks [5,7].
However, these fish were unable to do the more complicated cognitive task of using social cues to determine food patch quality differences more subtle than presence versus absence (public information) [ 5 , 7 ]. Indeed, previous data from a number of different populations, including Paxton limnetic and benthic populations, suggest that unlike ninespine sticklebacks ( Pungitius pungitius ), threespine sticklebacks do not use public information [ 29 , 30 ]. Given previous findings of parallel evolution of better spatial learning in benthic fish compared to limnetic fish [ 28 , 31 ], it is possible that benthic fish are better at learning in general. Benthic sticklebacks from Paxton and Priest Lakes also have larger relative brain volumes than limnetic fish from these lakes [ 32 ], although brain size is a very imperfect proxy for cognitive ability [ 33 ]. Interestingly, these brain size differences are reversed in Enos Lake [ 32 ], similar to our current findings regarding delayed local enhancement. Sticklebacks are a model system for evolutionary biology; further study of their cognition will provide an opportunity to address long-standing questions regarding the evolution of cognition and the brain. Social interaction has been suggested to drive the evolution of intelligence and brain size [ 8 , 9 ], with supporting evidence coming from a comparative primate study associating group size with brain size [ 8 ]. However, this relationship is often not supported, leading to calls for more nuanced investigation of the relationship between sociality and social intelligence (e.g. birds [ 34 ], hyenas [ 35 ]). Indeed, rarely are actual cognitive abilities assessed in tests of the ‘social intelligence’ hypothesis (but see [ 36 – 38 ]). Given our results, we encourage testing whether various types of social learning are associated with sociality in a larger range of animal species, preferably comparing species with recent shifts in sociality. 
This approach will allow a more holistic test of the ‘social intelligence’ hypothesis.
Electronic supplementary material is available online at https://doi.org/10.6084/m9.figshare.c.6743102 .

Abstract

Individuals can reduce sampling costs and increase foraging efficiency by using information provided by others. One simple form of social information use is delayed local enhancement, or increased interest in a location because of the past presence of others. We tested for delayed local enhancement in two ecomorphs of stickleback fish, benthic and limnetic, from three different lakes with putative independent evolutionary origins. Two of these lakes have reproductively isolated ecomorphs (species-pairs), whereas in the third, a previously intact species-pair recently collapsed into a hybrid swarm. Benthic fish in both intact species-pair lakes were more likely to exhibit delayed local enhancement despite being more solitary than limnetic fish. Their behaviour and morphology suggest their current perceived risk and past evolutionary pressure from predation did not drive this difference. In the hybrid swarm lake, we found a reversal in patterns of social information use, with limnetic-looking fish showing delayed local enhancement rather than benthic-looking fish. Together, our results strongly support parallel differentiation of social learning differences in recently evolved fish species, although hybridization can apparently erode and possibly even reverse these differences.
Acknowledgements
W. Fetzner and J. Martinez assisted with data collection. Members of the Boughman laboratory gave valuable support. Members of Alison M. Bell's laboratory gave valuable feedback on an earlier manuscript version. We also thank two anonymous reviewers for their constructive comments that improved this manuscript.

Ethics
Research was conducted under permits from the Ministry of the Environment, BC and approval of the Institutional Animal Care and Use Committee of Michigan State University (reference numbers 04/10-044-00 and 07/12-129-99).

Data accessibility
Data and analysis code are available from the Dryad Digital Repository: https://doi.org/10.5061/dryad.8931zcrwb [39]. See also the electronic supplementary material [40].

Authors' contributions
J.K.: conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, supervision, validation, visualization, writing—original draft, writing—review and editing; W.L.: investigation, methodology, writing—review and editing; R.M.: investigation, methodology, writing—review and editing; S.M.: investigation, methodology, writing—review and editing; O.B.: investigation, methodology, writing—review and editing; J.W.B.: funding acquisition, methodology, resources, supervision, writing—review and editing. All authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Conflict of interest declaration
We declare we have no competing interests.

Funding
Research was supported by a grant to J.W.B. and J.K. by the BEACON Center for the Study of Evolution in Action (grant no. NSF DBI-0939454) and an NSF CAREER grant to J.W.B. (grant no. NSF DEB-0952659). J.K. was also supported by the USDA National Institute of Food and Agriculture Federal Appropriations under project PEN04768 and accession number 1026660.
CC BY
Biol Lett.; 19(7):20230208
Introduction

With increasing concerns about global sustainability, the transition from today's fossil-based economy toward a more sustainable biobased economy has received significant interest worldwide (Menon and Rao, 2012). Lignocellulosic biomass represents the Earth's largest reservoir of renewable resources, with an estimated 180 billion tons of production annually (Paul and Dutta, 2018). Because of its availability, wide distribution, renewability, low cost, as well as current underutilization, lignocellulosic biomass is considered a promising raw material for producing biofuels and value-added products (Wyman and Goodman, 1993, Mabee et al., 2011). Lignocellulosic biomass, such as wood or corn stover, is composed of three main components, namely cellulose (30–50%, w/w), hemicellulose (20–40%, w/w) and lignin (15–25%, w/w) (Chen, 2014, Champreda et al., 2019). Cellulose, a linear homopolymer of D-glucose linked by β-(1 → 4)-glycosidic bonds, represents the major component of the plant cell wall (30–50%, w/w, of the total dry matter). The cellulose chains tend to be organized in highly ordered crystalline structures, in which the polysaccharides are held together by a dense network of hydrogen bonds. In the plant cell wall, cellulose microfibrils are embedded in a network of hemicelluloses and lignin. Hemicelluloses comprise a diverse group of branched polysaccharides that consist, to various extents, of pentoses (D-xylose and L-arabinose), hexoses (D-mannose, D-glucose and D-galactose) and sugar acids. They may be acylated to various degrees with acetyl, feruloyl and/or p-coumaroyl groups. The most abundant hemicellulose types are glucuronoxylans, glucomannans and xyloglucans. A fraction of these hemicelluloses, often called recalcitrant hemicellulose, directly adheres to and coats cellulose microfibrils.
The remaining hemicellulose may be interlinked through ester and ether bonds and forms an intricate three-dimensional network around the cellulose skeleton ( Scheller and Ulvskov, 2010 ). Lignin is a heteropolymer of phenylpropanoid subunits (guaiacylpropane, syringylpropane and p -hydroxyphenylpropane) that are covalently coupled primarily through ether and carbon–carbon linkages. In addition, lignocellulosic biomass may contain minor amounts of pectin, proteins, lipids and minerals. The complex, inhomogeneous and dense arrangement of all the non-cellulose components creates a physical barrier around the cellulose skeleton. This arrangement, together with the crystalline nature of the cellulose itself, makes plant biomass resilient to biochemical degradation. Plant biomass recalcitrance has been identified as a significant hindrance to lignocellulose depolymerization in nature and in biorefining industries. Enzymatic depolymerization of cellulose to glucose provides an alternative for chemical depolymerization methods and is considered a crucial step in the environmentally sustainable conversion of plant biomass to sugars that can be fermented to value-added products, such as fuels, chemicals and foods ( Zhu et al. , 2016 ). Such depolymerization requires the pretreatment of the biomass to remove and/or remodel other plant cell wall components and to reduce cellulose crystallinity. Of note, enzymes used for depolymerization of cellulose and other components in the plant biomass may also be used to produce and refine cellulose-based novel materials, such as cellulose nanofibers ( de Aguiar et al. , 2020 , Yang et al. , 2020 ). However, protein engineering studies aimed at tailoring cellulases for this particular purpose are rare (e.g. Rahikainen et al. , 2019 ). 
Enzymatic hydrolysis of cellulose is challenging, even for pure cellulose, and requires the synergistic action of three types of hydrolytic enzymes: cellobiohydrolases (CBHs), endoglucanases (EGs) and β-glucosidases (BGLs) (Payne et al., 2015). As depicted in Fig. 1, EGs (EC 3.2.1.4) cleave internal β-(1 → 4)-glycosidic bonds randomly and are thought to act in the more amorphous regions of cellulose, where they generate new chain ends for CBHs (Lynd et al., 2002, Sweeney and Xu, 2012). CBHs are processive cellulases that act on the non-reducing (EC 3.2.1.91) or reducing (EC 3.2.1.176) ends of cellulose chains, releasing disaccharides (i.e. cellobiose). Of note, several CBHs are known to be capable of initial endo-binding, next to attacking chain ends (Ståhlberg et al., 1993, Kurašin and Väljamäe, 2011). Solubilized cellobiose and cello-oligosaccharides are converted to glucose by BGLs (EC 3.2.1.21) that act from the non-reducing end (Fig. 1). In 2005, Vaaje-Kolstad et al. described a new type of protein that significantly boosts the hydrolyzing ability of hydrolases that act on crystalline polysaccharides (Vaaje-Kolstad et al., 2005). In 2010 and 2011 (Vaaje-Kolstad et al., 2010, Forsberg et al., 2011), it was shown that these proteins, now called lytic polysaccharide monooxygenases (LPMOs; EC 1.14.99.54), enable aerobic microbes to cleave glycosidic bonds in the crystalline parts of cellulose through oxidation of the reducing (C1 carbon; EC 1.14.99.54) or non-reducing chain end (C4 carbon; EC 1.14.99.56) at the scissile bond, generating a D-gluconate or a 4-keto-D-glucose, respectively (Horn et al., 2012, Vu et al., 2014). Thus, LPMOs add an ‘endo’-type of activity to the enzyme mix that classical EGs likely cannot provide (Fig. 1), and their discovery has contributed to significant improvements in the efficiency of commercial cellulase cocktails (Costa et al., 2020).
Microorganisms, in particular plant biomass-decomposing soil bacteria and fungi, are the most important source of enzymes for plant biomass depolymerization (Himmel et al., 2010, Koeck et al., 2014). Filamentous fungi secrete multiple, often multi-domain, cellulases, whereas many anaerobic bacteria, as well as a few anaerobic fungi, produce a complex of cellulolytic enzymes associated in a structure referred to as the cellulosome (Bayer et al., 2004). In some anaerobic bacteria, individual cellulases are displayed on the microbial surface or are released in extracellular vesicles (Arntzen et al., 2017, La Rosa et al., 2022). Although both bacteria and fungi can produce cellulases, fungi are considered better candidates for cellulase production due to the efficiency of their enzymes and their capacity to produce large amounts of extracellular enzymes (Mondal et al., 2019). There are multiple reviews on cellulases and cellulase engineering. Fungal cellulases and their engineering have been reviewed by Payne et al. (2015) in an impressively comprehensive review that addresses many (general) aspects of cellulase structure and function in much detail. Other useful reviews focusing on engineering cellulases include Bommarius et al. (2014), Greene et al. (2015), Contreras et al. (2020a) and Zhang et al. (2021). In this short review, we discuss engineering strategies used for increasing the industrial performance of cellulases, presenting a few selected highlights from the past and recent advancements. After a summary of sequence-based cellulase families, we discuss cellulase properties that are considered industrially relevant and that may be the target of protein engineering endeavors. The industrial application of cellulases requires efficient production strains capable of producing optimized enzyme blends at low cost, as well as deep insight into the interplay between the various enzymes in such blends.
Although not the main focus of this review, these crucial aspects are also briefly discussed.

Cellulases

Cellulose-depolymerizing enzymes, as all enzymes, can be described by (and, consequently, classified based on) their amino acid sequence, three-dimensional fold and catalytic mechanism. Cellulases belong to the large class of glycoside hydrolases (GHs) in the manually curated Carbohydrate-Active enZymes database (CAZy) ( http://www.cazy.org/ ) (Drula et al., 2022), which classifies carbohydrate-active enzymes into families of structurally similar proteins. GHs are the primary drivers of enzymatic polysaccharide degradation in nature and comprise a vast collection of enzymes. CBHs are mainly found in GH families 6, 7, 9 and 48. The CBHs found in fungi generally belong to families GH6 and GH7, while bacterial CBHs are found in families GH6, GH9 and GH48 (Payne et al., 2015, CAZypedia, 2018, Drula et al., 2022). CBHs are processive enzymes whose catalytic site often has a tunnel-like topology (Fig. 2). This topology facilitates a catalytic process whereby a single cellulose chain slides along the catalytic center while cellobiose units are being released, starting either at the reducing end (by enzymes referred to as CBH I and occurring in families GH7 and GH48) or the non-reducing end (by enzymes referred to as CBH II and occurring in families GH6 and GH9) (Beckham et al., 2014). The main cellulolytic enzymes produced by filamentous fungi known for strong cellulolytic activity, such as Trichoderma reesei, are family GH7 CBHs, and these enzymes are particularly important in industrial fungal cellulase cocktails (Payne et al., 2015). Therefore, these CBHs have been primary targets for cellulase engineering (Taylor et al., 2018). One of the most studied CBHs is TrCel7A from T. reesei, a multi-modular enzyme with a GH7 catalytic domain (CD; Divne et al.
, 1994) that is connected to a Family 1 carbohydrate-binding module (CBM) by a flexible linker (Fig. 2). Interestingly, adding to the spectrum of possible engineering targets, this enzyme is heavily glycosylated (Fig. 2) and recent work has shown that these glycosylations may make specific contributions to cellulase efficiency by interacting with the substrate (Amore et al., 2017). EGs are found in many GH families, including GH5, GH6, GH7, GH8, GH9, GH10, GH12, GH26, GH44, GH45, GH48, GH51, GH74, GH124, GH131 and GH148 (Drula et al., 2022), with members of family GH5 and a well-known fungal GH7 EG, TrCel7B, being among the most studied. The active sites of EGs form an open cleft that can accommodate a single cellulose chain in an amorphous region and cleave it randomly along the chain (Davies and Henrissat, 1995, Lynd et al., 2002, Sidar et al., 2020). EGs can occur as single-domain or multi-modular enzymes. One of the major EGs produced by T. reesei, TrCel7B, accounting for 10–15% of secreted proteins during growth on cellulose, is composed of a GH7 domain and a CBM1 (Kleywegt et al., 1997). Importantly, although CBHs seem tailored to act on recalcitrant forms of cellulose, enabled by their processive nature (Horn et al., 2006, Beckham et al., 2014) (Fig. 2), EGs are not necessarily good and true cellulases. For example, the GH5 family includes enzymes that show minute activities on cellulose while being efficient in degrading less recalcitrant β-(1 → 4) glycans such as β-glucan and glucomannan. Thus, for judging the potential of described enzymes or the success of an engineering effort, it is crucial to consider which substrates were used for enzyme characterization. BGLs occur in families GH1, GH3, GH9 and GH30. In general, BGLs have a catalytic pocket (subsite −1), which accommodates the non-reducing-end glucose of the cello-oligosaccharide substrate.
BGLs are formally not cellulases but are crucial in cellulolytic enzyme cocktails not only to reach the end product, glucose, but also to alleviate the inhibition of cellulases by their oligomeric products.
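The enzyme classes and family assignments summarized above can be collected into a small lookup table. This is an illustrative Python sketch: the dictionary and helper function are ours, and the family lists simply restate the text rather than the full CAZy database.

```python
# Summary of the cellulolytic enzyme classes described above.
# EC numbers and GH families restate the text; not an exhaustive CAZy listing.

CELLULOLYTIC_ENZYMES = {
    "EG": {   # endoglucanases: random cleavage of internal beta-1,4 bonds
        "ec": ["3.2.1.4"],
        "gh_families": [5, 6, 7, 8, 9, 10, 12, 26, 44, 45, 48, 51, 74, 124, 131, 148],
    },
    "CBH": {  # cellobiohydrolases: processive, act from chain ends
        "ec": ["3.2.1.91", "3.2.1.176"],   # non-reducing / reducing end
        "gh_families": [6, 7, 9, 48],
    },
    "BGL": {  # beta-glucosidases: cello-oligosaccharides -> glucose
        "ec": ["3.2.1.21"],
        "gh_families": [1, 3, 9, 30],
    },
    "LPMO": { # lytic polysaccharide monooxygenases: oxidative cleavage
        "ec": ["1.14.99.54", "1.14.99.56"],  # C1 / C4 oxidation
        "gh_families": [],  # LPMOs sit in CAZy's Auxiliary Activity (AA)
                            # families, not in GH families
    },
}

def families_for(enzyme_class: str) -> list:
    """Return the GH families listed in the text for a given enzyme class."""
    return CELLULOLYTIC_ENZYMES[enzyme_class]["gh_families"]

print(families_for("CBH"))  # [6, 7, 9, 48]
```

Such a table makes the overlap visible at a glance, e.g. that families GH6, GH7, GH9 and GH48 harbour both EGs and CBHs, which is why substrate-based characterization (see above) is essential.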
Abstract

Lignocellulosic biomass is a renewable source of energy, chemicals and materials. Many applications of this resource require the depolymerization of one or more of its polymeric constituents. Efficient enzymatic depolymerization of cellulose to glucose by cellulases and accessory enzymes such as lytic polysaccharide monooxygenases is a prerequisite for economically viable exploitation of this biomass. Microbes produce a remarkably diverse range of cellulases, which consist of glycoside hydrolase (GH) catalytic domains and, although not in all cases, substrate-binding carbohydrate-binding modules (CBMs). As enzymes are a considerable cost factor, there is great interest in finding or engineering improved and robust cellulases, with higher activity and stability, easy expression, and minimal product inhibition. This review addresses relevant engineering targets for cellulases, discusses a few notable cellulase engineering studies of the past decades and provides an overview of recent work in the field.
Protein engineering of cellulolytic enzymes

Depending on their application, cellulases are either used individually (e.g. paper manufacturing, cellulose fiber upgrading or fruit juice extraction) or in a cocktail (e.g. biofuel industry, animal feed production and detergent industry). Accordingly, cellulases may be engineered for optimizing their activity individually or in combination with other cellulolytic enzymes. Even with the focus being on improved processing of lignocellulosic biomass, academic work on cellulase engineering generally entails a one-enzyme-at-a-time approach. There are numerous engineering targets for cellulases, some common to all enzymes, and some addressing specific challenges related to the processing and valorization of lignocellulosic biomass. As to the latter, enzyme costs are a major overall cost driver and enzymes are needed in amounts so large that lignocellulose biorefineries may need to include an on-site enzyme production facility. Clearly, there is a lot to gain from enzymes that are easier to produce, more active and more stable. Natural enzymes will usually not do the job, since process conditions (such as operating temperature and pH, and end-product concentration) often differ from the conditions in native environments. In addition, natural enzymes may not be sufficiently active, even at optimal conditions, to allow for economically viable bioprocesses. In short, engineering targets for cellulases, and the rationale behind them, are as follows:

- Improved catalytic activity, including, e.g. an optimized pH-activity profile. This is a general strategy that may give lower enzyme consumption and increased process efficiency.

- Improved thermal stability at relevant pH. This is a general strategy for lower enzyme consumption that also may allow running reactions at increased temperatures, which has several advantages, including reduced risks of microbial contamination.

- Reduced product inhibition and changed enzyme processivity. Product inhibition may be a problem at industrial high-concentration conditions and relates to the enzyme's affinity for glucose and short cello-oligomers. This affinity also affects the degree of enzyme processivity, a cellulase property that receives much attention because it is needed for efficiency on the more recalcitrant (crystalline) parts of cellulose while at the same time making enzymes slow, due to slow product release (Horn et al., 2006, Vuong and Wilson, 2009, Beckham et al., 2014, Kuusk et al., 2015, Vermaas et al., 2019, Olsen et al., 2020, Sørlie et al., 2020). Of note, many engineering efforts made to increase the activity of CBHs deal with manipulating substrate affinity, addressing the delicate balance between having affinity that is high enough to access the substrate and low enough to allow for efficient substrate and product release (see below for examples).

- Adding or removing CBMs. This may increase enzyme efficiency, although the expected increased substrate affinity may be a two-edged sword in high dry-matter processes (Várnai et al., 2013). Positive effects of CBMs on enzyme stability have also been described (Sidar et al., 2020).

- Changing the linkers that connect CBMs and catalytic domains. Nature employs many different linkers with varying degrees of flexibility, length and glycosylation and, although rational engineering of linker sequences seems not yet possible, it is clear that functional variation can be obtained by changing linkers (Payne et al., 2013, Papaleo et al., 2016, Amore et al., 2017, Sørensen and Kjaergaard, 2019).

- Changing glycosylation. As alluded to above, glycosylation may affect the efficiency of certain cellulases. Glycosylation can be engineered by removing or introducing glycosylation sites, by changing linkers (see above), and, in principle, also by changing the glycosylation machinery of the enzyme-producing microbe.

- Reducing enzyme adsorption to lignin. The loss of cellulase activity due to enzyme adsorption to the lignin present in pretreated lignocellulosic biomass is generally considered a problem that is perhaps related to the presence of CBMs (Kumar et al., 2012, Rahikainen et al., 2013, Kellock et al., 2017).

- Improved activity and stability in non-conventional media. This is relevant when using cellulases for refining of cellulose fibers for material applications (Ribeiro et al., 2019).

The methodologies used for cellulase engineering are no different from those used for other enzymes, including directed evolution-type approaches (Heinzelman et al., 2009, Bornscheuer et al., 2019), rational mutagenesis and all sorts of hybrid forms where (semi-)rational strategies are used to reduce the library sizes in screening-based efforts. Rational engineering is increasingly supported by successful computational methods, including artificial intelligence (Wijma et al., 2014, Khersonsky et al., 2018, Gado et al., 2021, Lu et al., 2022). Most importantly, screening for cellulase properties that are industrially relevant requires the use of true substrates. Such true substrates, like steam-pretreated corn stover, are insoluble and heterogeneous, which complicates both handling and product analysis. The use of such substrates may be particularly complicated in high-throughput screening-based approaches, where typically thousands of enzyme variants need to be assessed using automated pipetting, although microtiter plate-based screening methods have been described (e.g. Chundawat et al., 2008). Using realistic substrates will be easier when using lower-throughput approaches, for example, for screening a limited set of rationally designed mutants. For practical purposes, cellulase engineering studies may employ soluble cellulose variants such as carboxymethyl cellulose, or even short soluble chromogenic cello-oligomers.
While studies using such substrates may generate interesting results, their relevance for increasing the efficiency of industrial lignocellulose processing may be limited. Because of their superior efficiency and industrial use, fungal cellulases have been the targets of most published cellulase engineering studies. For the period before 2015, such studies are extensively reviewed in Payne et al. (2015). Recent reviews with more or less comprehensive overviews of more recent cellulase engineering studies date from 2020 (Contreras et al., 2020a) and 2021 (Zhang et al., 2021). These include tables that list, and briefly describe, individual studies published, primarily, in the period 2010–2020. In the sections below, we provide a brief overview of relatively recent engineering efforts and achievements. Additionally, a more comprehensive summary of recent engineering efforts can be found in Supplementary Table 1.

Engineering for improved enzyme activity

Cellobiohydrolases (CBHs)

Characterization of a growing number of CBHs has shed light on their detailed mechanism and led to the identification of key amino acid residues important for substrate binding, processivity and product dissociation (Payne et al., 2015). Depending on the substrate, there is a trade-off between stronger substrate/product binding and high processivity vs. weaker substrate binding, less processivity and faster substrate/product dissociation. Strong substrate binding brings a risk of formation of non-productive enzyme–substrate complexes, in which a cellulase is strongly, and more or less permanently, bound to the substrate without being able to cleave it (Horn et al., 2006, Igarashi et al., 2011, Kurašin and Väljamäe, 2011, Vermaas et al., 2019).
Thus, rather than focusing on the reactivity of the catalytic center itself, CBH engineering has focused on manipulating substrate and product binding, either by changing residues that interact with the substrate and/or by engineering loops that shape the catalytic cleft and/or tunnel and may affect the ease with which substrates are bound and products are released ( Von Ossowski et al. , 2003 , Payne et al. , 2015 ) ( Fig. 3 ). Individually mutated residues include the aromatic residues that typically line the substrate-binding clefts of processive cellulases (and chitinases), likely facilitating the ‘sliding’ of the substrate through the catalytic cleft or tunnel ( Varrot et al. , 2003 ) ( Fig. 3 ). Reported engineering efforts with CBHs often concern manipulation of the structure, length and/or flexibility of substrate-enclosing loops, e.g. by engineering disulfide bridges or by changing loop length or sequence composition ( Voutilainen et al. , 2010 , Sørensen et al. , 2017 , Taylor et al. , 2018 , Schiano-di-Cola et al. , 2019 ). Other engineering targets include specific substrate-binding residues at the entrance of the catalytic tunnel, and/or inside the catalytic tunnel ( Nakamura et al. , 2013 , Kari et al. , 2014 ). These approaches have shed light on structure–function relationships in CBHs (e.g. Nakamura et al. , 2013 , Schiano-di-Cola et al. , 2019 ), and have yielded enzymes with improved properties. For example, Voutilainen et al. (2010) obtained thermostable variants of Cel7A from Talaromyces emersonii by introducing disulfide bridges in loops that form, or are close to, the substrate-binding tunnel, and these mutants showed improved activity on Avicel at higher temperatures. Kari et al. (2014) showed that mutating Trp-38 in the −4 subsite of Tr Cel7A ( Fig. 3 ) lowers substrate affinity but increases the activity on Avicel 2-fold. Of note, mutation of Trp-40 at the tunnel entrance (−7 subsite, Fig.
3 ) reduced activity toward crystalline substrates ( Nakamura et al. , 2013 ). Much work on GH7 CBHs has addressed the roles of substrate-enclosing loops near the substrate entrance, in particular near the −4 subsite ( Sørensen et al. , 2017 , Taylor et al. , 2018 , Schiano-di-Cola et al. , 2019 ). Sørensen et al. (2017) showed that mutations in a loop covering subsite −4 (the B2 loop) reduced substrate affinity and increased activity on crystalline cellulose (Avicel) two-fold (not unlike the mutation of Trp-38, discussed above). Inspired by comparisons with endo-acting Tr Cel7B, Schiano-di-Cola et al . (2019) mutated tunnel-forming loops in Tr Cel7A and found that deletions in the B2 loop covering the −4 subsite had the largest effects. In contrast to the site-directed mutations described by Sørensen et al. (2017) , the loop deletions reduced activity on crystalline cellulose, leading to the conclusion that the B2 loop is a key determinant of Tr Cel7A’s function as a CBH. Interestingly, the loop deletions increased activity on amorphous cellulose, which illustrates the importance of the choice of substrate when evaluating enzyme performance. The abovementioned positive mutational effects on the efficiency of CBHs have been ascribed to increased dissociation rates, which, next to speeding up the catalytic cycle, may also affect the ability of the enzyme to dissociate when encountering an obstacle ( Kurašin and Väljamäe, 2011 ). Importantly, mutational effects will be substrate-dependent, since the rate-limiting step in catalysis will vary between hard-to-access crystalline substrates, where association-promoting strong affinity may be beneficial, and more easily accessible, less crystalline (amorphous) substrates, where the dissociation step may be rate-limiting, as beautifully shown in early work on chitinases ( Horn et al. , 2006 , Zakariassen et al. , 2010 ). 
Engineering studies demonstrating improved degradation of (heterogeneous) pretreated lignocellulosic substrates, which is a more relevant success parameter from an industrial viewpoint, are rare. One key example was inspired by a natural GH7 from Penicillium funiculosum ( Pf Cel7A) that shows enhanced activity on pretreated corn stover (when using Pf Cel7A, the time needed to reach 80% conversion of the cellulose was reduced by 60%). Based on Pf Cel7A, Taylor et al. (2018) removed a disulfide bond and shortened a substrate-enclosing loop near the tunnel entrance in Tr Cel7A, which yielded a more efficient variant of this enzyme, reducing the time needed to reach 80% conversion by approximately 30% ( Taylor et al. , 2018 ). Variation in substrate affinity may also be achieved by engineering existing CBMs or by deleting or appending CBMs. This is a popular engineering strategy, the value of which for industrial biomass processing remains somewhat unclear ( Cruys-Bagger et al. , 2013 , Várnai et al. , 2013 ), as discussed below. Importantly, the impact of CBMs on the potentially rate-limiting enzyme properties discussed above, i.e. substrate and product affinity as well as processive movement, is not very clear and may be limited ( Bommarius et al. , 2014 ).

Endoglucanases (EGs)

As alluded to above, endoglucanases occur in many GH families. The most studied endoglucanases belong to the large, functionally heterogeneous and widespread GH5 family, and to family GH7. Tr Cel7B, also known as Tr EGI, is a well-known endoglucanase ( Payne et al. , 2015 ) with a documented impact on the overall efficiency of cellulolytic enzyme cocktails from T. reesei ( Chylenski et al. , 2017 ). Aiming primarily at improved stability and activity at higher temperatures, Chokhawala et al. (2015) used site saturation mutagenesis at seven target sites to generate Tr Cel7B variants.
One of the selected variants (G230A/D113S/D115T) had 2-fold improved activity at 65°C on Avicel and its thermostability was also improved significantly. Using saturation mutagenesis of a single residue in a substrate-binding loop in a GH5 from Gloeophyllum trabeum , Zheng et al. (2018) were able to increase the activity of this enzyme on barley β-glucan by 1.3- to 1.5-fold. In another study, error-prone PCR-mediated directed evolution of an endo-β-1,4-glucanase from Streptomyces sp. G12 led to a 30% improvement in bioconversion yields for pretreated Arundo donax biomass ( Cecchini et al. , 2018 ). Chen et al. (2018) engineered a β-1,4-endoglucanase from Chaetomium thermophilum through site-directed mutagenesis of noncatalytic residues involved in substrate binding. Two single mutations, Y30F and Y173F, increased the enzyme’s specific activity toward carboxymethylcellulose sodium (CMC-Na) by 1.4- and 1.9-fold, respectively. Torktaz et al. (2018) engineered Cel5E from Clostridium thermocellum by rational mutagenesis and found that individual mutations N94W, N94F, E133F and N94A improved activity on carboxymethyl cellulose (CMC) and barley β-glucan by 1.1- to 1.9-fold. As a final example, Aich and Datta (2020) reported a 2-fold increase in catalytic activity on CMC by engineering conserved residues in the substrate-binding tunnel and on the surface of a thermostable GH7 endoglucanase from Bipolaris sorokiniana .

β-Glucosidases (BGLs)

While not true cellulases themselves, β-glucosidases deserve mention because they are needed to complete cellulose saccharification. The dual purpose of BGLs in total saccharification of cellulose is to produce free glucose and to alleviate end-product inhibition of CBHs and EGs during saccharification. Notably, BGLs themselves are also prone to end-product inhibition, by glucose.
Due to their important role and identified limitations, BGLs have been the subject of many engineering studies addressing catalytic activity, stability, pH-activity profile, and substrate and product inhibition ( Supplementary Table 1 ). Recent examples, all addressing fungal enzymes belonging to families GH1 or GH3, include the following: (i) site saturation mutagenesis of amino acids forming the catalytic pocket to increase activity toward cellobiose ( Baba et al. , 2016 ); (ii) rational site-directed mutagenesis of active site residues to improve glucose tolerance ( Santos et al. , 2019 ); and (iii) directed evolution (error-prone PCR + screening) to simultaneously increase the k cat / K M for cellobiose and reduce substrate inhibition ( Kao et al. , 2021 ). Changes in catalytic activity have been achieved by various types of mutations near the catalytic site, whereas changes in the affinity for cellobiose and glucose have been achieved by removing an aromatic residue in a sugar-binding subsite (F256M; Kao et al. , 2021 ) and by narrowing the entrance to the substrate-binding pocket (L167W + P172L; Santos et al. , 2019 ), respectively.

Engineering for enzyme stability

Thermal stability

Low thermal stability and rapid loss of catalytic performance of key cellulase components at industrially required, or desirable, temperatures is one of the main concerns in cellulase cocktail development. Hence, the literature is rich in studies showing the engineering of individual cellulases with increased thermal stability, using both rational and random approaches that are well known from studies on other enzymes ( Eijsink et al. , 2004 , Eijsink et al. , 2005 , Liu et al. , 2019 , Patel et al. , 2019 , Planas-Iglesias et al. , 2021 ).
Engineering stability is rather feasible because, although the structural features governing enzyme activity vary from enzyme to enzyme, the features governing enzyme stability are more universal, albeit far from fully so (see Eijsink et al. , 2004 , for further discussion). In addition, high-throughput screening of enzyme variants is relatively easy when assessing stability, since residual activity can be measured on chromogenic substrates, in contrast to screening for, for example, catalytic activity on pretreated lignocellulosic biomass. It is important to note that efficient industrial processing of lignocellulosic biomass requires a complex enzyme blend. Thus, although improving the stability of an individual member of the blend may have beneficial effects on process costs, this will not normally be sufficient to allow running the process at higher temperatures, as the stability of other enzymes will become limiting. One approach for cellulase stabilization has been to insert disulfide bridges, as shown by e.g. Voutilainen et al. (2010) and illustrated by a recent study of Bashirova et al. (2019) , who generated two enzyme variants of a GH5 EG from Penicillium verruculosum by introducing a disulfide bridge through mutations S127C-A165C or Y171C-L201C. Both variants displayed increased specific activity toward carboxymethylcellulose and barley β-glucan at 50°C as well as increased thermal stability. Introducing disulfide bridges is just one of several more or less ‘general’ strategies for rational engineering of enzyme stability. Importantly, free cysteines also need attention, since they may have a negative effect on thermal stability due to their sensitivity to oxidative damage and/or ability to form intermolecular disulfide bridges. For example, Wu et al. (2013) showed that removal of a free cysteine in Cel6A from H. jecorina resulted in increased thermal stability, whereas introduction of a free cysteine led to reduced stability. Similar trends were observed by Yamaguchi et al.
(2020) when engineering the thermal stability of Pc Cel6A from Phanerochaete chrysosporium . The SCHEMA technology developed by Frances Arnold and her team uses a computational algorithm to identify fragments of proteins that can be recombined without disturbing the integrity of the three-dimensional structure ( Voigt et al. , 2002 ). The use of this directed evolution technology for generating cellulases with increased stability has been highly successful. For example, Heinzelman et al. (2009) used this technology to recombine three fungal CBH IIs, yielding 6561 possible chimeric sequences. Screening of this rather limited library yielded multiple enzymes with drastically increased thermal stability and increased activity at elevated temperatures. For some mutants, the half-life at 63°C was increased by one to two orders of magnitude. As another example, Goedegebuur et al. (2017) obtained a 10.4°C increase in the apparent melting temperature and a 44-fold increase in half-life at 62°C upon directed evolution of Tr Cel7A from T. reesei ( Fig. 4 ). Yoav et al. (2019) obtained a thermostable mutant with a 6.4°C increase in inflection temperature (T i ) upon random mutagenesis of β-glucosidase from C. thermocellum . Zheng et al. (2019) also obtained highly thermostable GH5 variants through the production of chimeras of Egl5A from T. emersonii ( Te Egl5A) with Cel5 from S. opalus ( So Cel5). In another study, using an innovative directed evolution approach, Contreras et al. (2020b) obtained a 7.7°C increase in melting temperature of the endo-β-1,4-glucanase Pv Cel5A from P. verruculosum and a 5.5-fold greater half-life at 75°C. Of note, the stability of cellulases may be affected by the presence of additional domains (CBMs) and linkers, as well as by the degree of glycosylation of these domains and linkers ( Amore et al. , 2017 ). This provides additional targets for cellulase engineering.
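The size of a SCHEMA recombination library follows directly from the number of parents and crossover blocks. The 6561 chimeras mentioned above are consistent with three parents recombined over eight sequence blocks (3^8 = 6561; the block count is inferred here from the reported library size, not stated in this text). A minimal sketch:

```python
def schema_library_size(n_parents: int, n_blocks: int) -> int:
    """Number of possible chimeras when each of n_blocks sequence
    blocks can be taken from any of n_parents parent enzymes."""
    return n_parents ** n_blocks

# Three fungal CBH II parents; eight blocks inferred from the
# reported library size of 6561 chimeric sequences.
print(schema_library_size(3, 8))  # 6561
```

Even such "rather limited" libraries grow exponentially: adding one more block triples the sequence space in this three-parent case.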
Engineering pH-dependent activity and stability

The pH of enzymatic processes is normally adjusted to the pH optima of the enzymes. For instance, commercial saccharification of lignocellulosic biomass into fermentable sugars is typically carried out with a fungal cellulase cocktail at a mildly acidic pH, which is the optimal pH for fungal growth and indeed for most fungal cellulases. To avoid enzyme inactivation due to fluctuations in pH, fungal cellulases may be engineered for robustness, i.e. stability over a broader pH range. As an example, Xia et al. (2016) found that removing potential O-glycosylation sites by site-directed mutagenesis broadened the operational pH range of a GH3 BGL, Tl BGL3A from Talaromyces leycettanus , from 4.0–5.0 to 3.0–10.0. Cellulases with extreme pH optima for special applications are most often sought after and obtained through bioprospecting, capitalizing on the rapidly increasing number of extremophiles with sequenced genomes ( Ben Hmad and Gargouri, 2017 ).

Engineering cellulase modularity

Cellulolytic enzymes (CBHs, EGs and LPMOs) catalyze depolymerization of insoluble cellulose at the solid–liquid interface. The catalytic domains (CDs) of these enzymes often carry a CBM that improves substrate binding and may target the CD to specific areas of the substrate ( Boraston et al. , 2004 ). Some bacterial cellulases are truly multimodular and multifunctional, comprising multiple CBMs and two or more CDs with complementary activities, which degrade the substrate synergistically ( Brunecky et al. , 2017 ). Accordingly, options to improve cellulase characteristics include the generation of chimeric multi-domain proteins by fusing one or more CBMs to a single-domain cellulase, by replacing the CBM of multi-domain cellulases with alternate CBMs, or by fusing CDs with different activities. The problem with these potential approaches is that they are difficult to rationalize.
As just one example, CBM effects vary between cellulases and between substrates (e.g. Duan et al. , 2017 ). A further complication is that the impact of CBMs varies with substrate availability: at high substrate concentrations, e.g. in the beginning of an industrial batch-wise bioprocess, CBMs are not needed and may even slow down cellulases, whereas CBMs promote cellulase efficiency at lower substrate concentrations, e.g. in the later phases of the bioprocess ( Le Costaouëc et al. , 2013 , Várnai et al. , 2013 , Badino et al. , 2017 , Jensen et al. , 2018 , Christensen et al. , 2019 ). The most common cellulose-specific CBMs belong to family CBM1, exclusively found in fungi, and family CBM2, found in bacteria ( Sammond et al. , 2012 ). CBMs may be appended to either the C- or N-terminus of GHs, and their position may be characteristic for certain GH families. For example, CBM1s tend to be appended to the C-terminus in fungal GH7 CBHs and EGs and to the N-terminus in fungal GH6 CBHs and GH5 EGs. Several studies have shown that fusing an additional or alternate CBM to the CD of a cellulase may improve catalytic efficiency toward insoluble cellulosic substrates and/or thermostability ( Voutilainen et al. , 2014 , Oliveira et al. , 2015 , Sadat, 2017 , Christensen et al. , 2019 , Gilmore et al. , 2020 ). An early study led to the conclusion that the addition of a CBM is more beneficial for EGs than for CBHs ( Szijártó et al. , 2008 ). It must be emphasized that naturally occurring CBMs show different cellulose-binding features ( Carrard et al. , 2000 , Lehtiö et al. , 2003 ) and that such features may also be changed through protein engineering (see, e.g. Takashima et al. , 2007 , Chen et al. , 2014 ). In nature, CBMs and CDs are coupled by linkers of varying lengths and sequences.
For example, linkers in fungal GH6/GH7 enzymes are rich in serine and threonine, whereas bacterial cellulase linkers have a higher proline content ( Sammond et al. , 2012 , Payne et al. , 2015 ). This difference may relate to protection against proteolytic attack, which in fungal linkers is provided by glycosylation ( Fig. 2 ), whereas bacteria, with their limited protein glycosylation abilities, need to incorporate proline residues instead. There are indications that the position (N- vs. C-terminal) as well as the length and amino acid sequence of linkers have co-evolved with the domains that they connect for optimal function ( Sammond et al. , 2012 , Payne et al. , 2015 ). In general, the amino acid composition determines flexibility, while linker length affects the distance between the connected domains. Importantly, both computational ( Payne et al. , 2013 ) and experimental ( Chen et al. , 2014 ) work has shown that glycosylation of the linkers and CBMs of fungal cellulases has multiple effects on enzyme function. Using molecular dynamics (MD) simulations, Payne et al. (2013) showed that the glycosyl moieties of linkers in T. reesei GH6 ( Tr Cel6A) and GH7 CBHs ( Tr Cel7A) make considerable contributions to substrate affinity. Chen et al. (2014) investigated the effects of O-mannosylation on the CBM1 of T. reesei GH7 CBHs ( Tr Cel7A) and found that glycosylation promoted both enzyme stability and substrate affinity. Despite growing awareness of the impact of linker type and length on the catalytic activity of cellulases [e.g. on processivity ( Wang et al. , 2018 ) or substrate affinity ( Nakamura et al. , 2016 )], little is known about how this knowledge may be applied to design or engineer linkers for fusion proteins.
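The compositional bias described above (Ser/Thr-rich fungal linkers vs. proline-rich bacterial linkers) can be illustrated by simply counting residue fractions. The two example linker sequences below are invented for illustration only, loosely mimicking the reported bias; they are not taken from real enzymes:

```python
def linker_composition(seq: str) -> dict:
    """Fraction of Ser/Thr (potential O-glycosylation sites) and
    Pro residues in a linker sequence (one-letter amino acid codes)."""
    n = len(seq)
    return {
        "ser_thr": sum(seq.count(aa) for aa in "ST") / n,
        "pro": seq.count("P") / n,
    }

# Hypothetical linkers, illustrating the compositional bias:
fungal_like = "TTSSPSSTSTTSSGTT"     # Ser/Thr-rich, as in GH6/GH7 linkers
bacterial_like = "PTPTPNPEPTPTPGPA"  # Pro-rich, as in bacterial cellulases

print(linker_composition(fungal_like))
print(linker_composition(bacterial_like))
```

Such a crude composition count is, of course, only a first-pass descriptor; flexibility also depends on residue order and on the glycans decorating the linker.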
Engineering cellulase cocktails

Commercial cellulase products that are used for saccharification of lignocellulosic feedstocks are the result of decades of extensive strain development to generate production organisms that secrete an optimized blend of cellulases in large amounts, at low cost. In general, cellulase cocktails are fungal secretomes and can thus be improved indirectly, by supplementing with another enzyme cocktail or individual enzymes, or directly, by genetic modification of the production strain. Cellulase production strains are engineered to modulate transcription and secretion of cellulases, to remove undesirable background activities (e.g., proteases), to replace selected enzyme components of the cellulolytic blend with more potent, naturally occurring or engineered variants, or to insert (i.e. knock in) genes encoding additional enzymes. The composition of the resulting cellulase cocktails, i.e., the engineered fungal secretomes, may be optimized further by varying process conditions (including the carbon source) during cellulase expression. Approaches to improve cellulase cocktails have recently been reviewed in detail by Østby et al. (2020) . Perhaps the most remarkable (well-documented) example of strain engineering is the development of the RUT-C30 cellulase hyperproducer prototype strain, which is derived from strain QM6a and shows a 20-fold increase in extracellular protein levels, reaching about 30 g/L on a lactose-containing growth medium ( Bischof et al. , 2016 ). Recent advances such as RNAi-mediated gene silencing ( Sun et al. , 2022 ), inducer-free expression systems ( Arai et al. , 2022 ), CRISPR/Cas9-based methods ( Rantasalo et al. , 2019 , Fonseca et al. , 2020 ) and synthetic expression systems with engineered transcription factors and promoters ( Rantasalo et al. , 2018 ) provide new tools for engineering cellulase production strains ( Mojzita et al. , 2019 ). For example, Arai et al.
(2022) reported an inducer-free expression system for T. reesei that enables protein overproduction in glucose-containing media without inducers like cellulose, lactose and sophorose. In another study, Fonseca et al. (2020) used CRISPR/Cas9-based rational engineering of the RUT-C30 strain to increase protein secretion and β-glucosidase (72-fold) and xylanase (42-fold) activities in the secretome. Implemented changes included the constitutive expression of a mutant of the cellulase master regulator XYR1, heterologous expression of the Te Cel3A BGL from T. emersonii , and deletion of genes encoding the cellulase repressor ACE1 and the proteases SLP1 and PEP1. By combining CRISPR/Cas9 technologies with a comprehensive genetic toolbox, Rantasalo et al. (2019) have developed versatile T. reesei -based expression systems that allow rapid construction of recombinant strains that produce high levels of secreted protein using glucose as a carbon source.

Concluding remarks

Rational protein engineering and directed evolution have contributed to our current understanding of enzymatic cellulose saccharification and to the efficiency of current commercial cellulolytic enzyme cocktails. As to the latter, it must be noted that the work described above and in other recent reviews ( Payne et al. , 2015 , Contreras et al. , 2020a , Zhang et al. , 2021 ) likely shows only the tip of the iceberg of cellulase engineering, since work done by commercial enzyme suppliers is largely unknown to the scientific community. When assessing publicly available data on cellulases and cellulase engineering, it is important to note that much of this work is based on the use of cellulose substrates that are relatively easy to degrade and of little relevance to the biorefinery, such as carboxymethylcellulose.
It needs to be emphasized once more that, when working toward industrial applications, characterization of wild-type cellulases and of mutational effects needs to be done with relevant substrates using relevant (process) conditions. A particular challenge in cellulase engineering lies in the complex interplay between different enzyme traits, which complicates the design of enzymes that are ‘generally efficient’. For example, product inhibition in one of the key fungal cellulases, Tr Cel7A, is a result of strong interactions in the product binding site that, at the same time, are vital for the processive mechanism that is necessary for being able to degrade the more crystalline parts of cellulose ( Kuusk et al. , 2015 , Olsen et al. , 2020 ). It is possible to reduce product inhibition by reducing strong binding interactions in product sites through site-directed mutagenesis, but this comes at a cost of lower enzyme efficiency when working with crystalline substrates ( Atreya et al. , 2016 , Olsen et al. , 2020 ). To make things even more complicated, the need for processivity may vary as a saccharification reaction proceeds and the concentration and composition of the substrate change. It is also worth noting that LPMO action on the crystalline surfaces of cellulose may increase accessibility to cellulases that are less capable of acting on a crystalline material, but intrinsically faster ( Hamre et al. , 2019 ). Of course, different substrates, for example varying in cellulose crystallinity, will require different cellulase blends containing enzymes with different properties, and it is now well established that the development of enzyme technology for the future lignocellulosic biorefinery cannot be based on a one-size-fits-all approach ( Banerjee et al. , 2010 , Hu et al. , 2014 , Du et al. , 2020 , Østby et al. , 2020 ). 
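The trade-off just described, where weakening product-site binding relieves inhibition but costs efficiency on crystalline substrates, can be caricatured with a deliberately simplified two-step model. All rate expressions and numbers here are illustrative toys, not taken from the cited kinetic studies: stronger binding is assumed to speed up productive complex formation but to slow down dissociation, so overall turnover peaks at an intermediate binding strength, in line with the Sabatier principle discussed later in this review.

```python
import math

def turnover(dg_bind: float) -> float:
    """Toy steady-state turnover for an interfacial enzyme.

    dg_bind is a dimensionless binding free energy (more negative =
    stronger binding). Complexation is assumed to speed up, and
    dissociation to slow down, exponentially with binding strength;
    the overall rate is limited by the slower of the two steps
    (harmonic-mean combination of the two rate constants).
    """
    k_on = math.exp(-dg_bind)  # stronger binding -> faster complexation
    k_off = math.exp(dg_bind)  # stronger binding -> slower dissociation
    return 1.0 / (1.0 / k_on + 1.0 / k_off)

# Scan binding strengths: the optimum lies at intermediate affinity,
# not at the strongest possible binding.
grid = [x / 10.0 for x in range(-50, 51)]
best = max(grid, key=turnover)
print(best)
```

In this symmetric toy the optimum sits exactly in the middle of the scanned range; for a real cellulase the optimum shifts with substrate accessibility, which is precisely why mutational effects are substrate-dependent.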
As alluded to several times above, it is a long way from engineering a single cellulase with improved properties to producing improved enzyme blends for various common lignocellulosic substrates such as pretreated corn stover or steam-exploded wood chips. Saccharification of lignocellulosic biomass, and even of pure cellulose, requires multiple enzymes, which means that engineered, improved variants of single cellulases are not useful if their synergistic action with other enzymes is not maintained. The synergy between various cellulase types is a topic of great interest and debate (e.g. Igarashi et al. , 2011 , Jalak et al. , 2012 , Zajki-Zechmeister et al. , 2022 ) that we have not addressed above, but that needs ample attention when developing cellulase cocktails rather than individual enzymes. Despite major progress in the past decades, further improvements in cellulase efficiency still seem feasible, targeting the properties discussed above and exploiting the latest developments in computational tools for protein engineering. AlphaFold ( Jumper et al. , 2021 ) provides rapid access to structural evaluation of natural diversity, expanding the knowledge base for protein engineering. Machine learning tools enable rapid extraction of functionally crucial enzyme features from sequences and structures. For example, using machine-learning models trained only on the number of residues in the active-site loops, Gado et al. (2021) were able to predict the CBH or EG nature of GH7 family members, as well as the presence of a CBM, with high accuracy. Their models not only provide plausible explanations for the results of several engineering studies already in the literature (and reviewed here), but also point to novel engineering targets for GH7s.
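The flavor of such loop-based prediction can be conveyed with a minimal nearest-centroid classifier. The loop-length feature vectors below are invented for illustration only (they are not the Gado et al. training data), exploiting the tendency, discussed earlier in this review, of GH7 CBHs to carry longer substrate-enclosing loops than GH7 EGs:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical residue counts for three active-site loops; CBH-like
# entries get longer substrate-enclosing loops than EG-like ones.
train = {
    "CBH": [[12, 9, 11], [13, 10, 10], [11, 9, 12]],
    "EG": [[4, 3, 5], [5, 2, 4], [3, 3, 3]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

print(classify([12, 10, 11], centroids))  # CBH
print(classify([4, 3, 4], centroids))     # EG
```

Real models of this kind are trained on curated loop annotations across hundreds of GH7 sequences, but the underlying idea, that a handful of loop lengths already separates exo- from endo-acting enzymes, is the same.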
An important development for cellulase engineering comes from the work of Peter Westh and colleagues, who have developed a new framework for describing, interpreting and predicting the kinetics of heterogeneous enzyme catalysis, i.e. enzyme catalysis at a solid–liquid interface, as is the case for cellulases ( Kari et al. , 2018 , Schaller et al. , 2021 , Kari et al. , 2022 ). One key finding of this work, inspired by the Sabatier principle, is the strong relationship between substrate affinity and catalytic efficiency, which had been observed before (e.g. Horn et al. , 2006 ; see above), but which, thanks to the work of Westh and colleagues, can now be rationalized and quantified in a theoretical framework with predictive value. For example, Schaller et al. (2021) developed a method for predicting the binding and activation free energies of cellulases that was successfully used in virtual (computational) screening of GH7 cellulases, resulting in the discovery of novel natural GH7 variants with promising catalytic performance ( Schaller et al. , 2022 ). As another example, Kari et al. (2021) produced and kinetically characterized 83 cellulases, which allowed them to reveal a linear free energy relationship between the substrate binding strength and the activation barrier, thus underpinning the predictive power of calculated substrate binding strengths. From the perspective of industrial applications, it would be wise to base future efforts on a deeper understanding of the interplay between the various members of cellulolytic enzyme cocktails. In this respect, the interplay between cellulases and the relatively recently discovered LPMOs deserves special attention. LPMOs not only add an endo-type chain-cleaving activity to the cocktail, they also generate a new type of chain ends carrying oxidized groups. It has already been demonstrated that LPMO action changes the importance of processive action ( Hamre et al.
, 2019 ) and that the synergy between LPMOs and cellulases depends on both the substrate and the cellulase ( Tokin et al. , 2020 ). Engineering CBHs to better interact with such oxidized chain ends could be one way to improve synergistic effects between cellulases and LPMOs. Of note, the LPMOs themselves are of course also interesting targets for protein engineering, addressing, for example, their stability under turnover conditions ( Forsberg et al. , 2020 ). Simultaneous focus on a deeper understanding and better exploitation of LPMO and cellulase action may reveal new engineering targets and lead to further improvement of current cellulolytic enzyme cocktails.

Supplementary Material
Acknowledgments

We are grateful for helpful discussions with numerous colleagues and apologize to all who have done cellulase engineering work that could not be included in this short review.

Funding

This work was supported by the Research Council of Norway through SFI Industrial Biotechnology (SFI-IB) (grant no. 309558) and Centre for Environment-friendly Energy Research (FME)-Bio4Fuels (grant no. 257622).

Conflict of interest

The authors declare no conflict of interest.

Data availability statement

No new data were generated in this manuscript.

Edited by: Dr Eva Petersen
CC BY
no
2024-01-15 23:35:08
Protein Eng Des Sel. 2023 Mar 24; 36:gzad002
oa_package/70/c3/PMC10394125.tar.gz
PMC10404922
37545307
Introduction

Metacognition, the capacity to monitor one's own uncertainty, is important for adaptive behaviour [ 1 – 3 ]. In a noisy restaurant, if we feel unsure whether the waiter said ‘beans’ or ‘greens’, we may ask him to repeat it [ 4 – 6 ]. A wealth of research has shown that humans are able to meaningfully report their confidence about perceptual decisions [ 7 – 16 ]. They typically assign a higher confidence to their correct than incorrect decisions. Yet, research to date has almost exclusively focused on simple perceptual decisions based on single cues in one sensory modality, while more naturalistic scenarios such as a busy restaurant expose the brain to numerous signals that may come from common or independent sources. To communicate effectively with the waiter, the brain should integrate the waiter's speech signals selectively with his articulatory lip movements and segregate them from visual and auditory signals produced by other guests. Perception in complex audiovisual scenes thus relies inherently on solving the causal inference or binding problem [ 17 – 23 ]. Bayesian causal inference models (see also figure 1 ) address this computational challenge by explicitly modelling the potential causal structures that could have generated the sensory signals. In the case of common sources, signals are integrated, weighted by their relative reliabilities, into more precise estimates (i.e. fusion). In the case of independent sources, they are processed independently (i.e. segregation) [ 18 , 21 ]. As the brain does not know a priori whether signals come from common or independent sources, it needs to infer the causal structure from the noisy sensory signals themselves, using cues such as whether the signals occur at the same time or location.
To account for observers' uncertainty about the signal's causal structure, the brain is thought to compute a final estimate by combining the fusion and segregation estimates weighted by the posterior probabilities that signals come from common or independent sources. This decisional strategy is referred to as model averaging (for other decisional strategies see [ 24 ]). Accumulating psychophysics and neuroimaging research has shown that human observers combine sensory signals consistent with the principles of Bayesian causal inference by dynamically encoding segregation, fusion and the final Bayesian causal inference estimates along the cortical hierarchy [ 25 – 29 ]. Yet, little is known about how observers monitor their uncertainties in multisensory environments, in which signals can come from common or separate sources. These more realistic scenarios require the brain to monitor two distinct, but intimately related forms of uncertainty [ 30 ]: causal and perceptual uncertainty. Causal uncertainty refers to observers' uncertainty about the environment's causal structure, i.e. about whether sensory signals come from common or independent sources. Perceptual uncertainty refers to observers' uncertainty about their reported perceptual estimate, such as the syllable they perceive from auditory speech and/or visual facial movements. Causal and perceptual uncertainty interactively arise during perceptual inference and are corrupted by the same sensory noise [ 18 , 21 ]. At small inter-sensory discrepancies, when auditory and visual signals are likely to come from one source, signals are fused into a unified, more precise audiovisual percept associated with low causal and perceptual uncertainty. By contrast, at intermediate levels of audiovisual discrepancy, observers will be more uncertain about whether signals come from common or independent sources.
According to Bayesian causal inference models, the brain would average the fusion and the segregation distributions approximately with equal weight leading to a broader posterior distribution and hence lower perceptual confidence (see figures 1 and 8 for illustration). The relationship between causal and perceptual confidence can be studied by explicitly manipulating the discrepancy of the physical stimuli. Yet, even physically identical auditory and visual stimuli may elicit different perceptual and causal decisions because of internal and external noise. This inter-trial variability enables us to characterize the relationship between observers’ causal and perceptual confidence over trials using dual tasks that combine causal and perceptual confidence reports. This psychophysics study characterized the relationship between perceptual and causal confidence in audiovisual syllable categorization. We presented human observers with spoken syllables (i.e. auditory phonemes), their corresponding articulatory facial movements (i.e. visemes), and their congruent (e.g. visual Ba and auditory Ba) and incongruent (e.g. visual Ga and auditory Ba) combinations. The incongruent phoneme-viseme pairs were designed to elicit veridical ‘B/P’ or illusory ‘D/T’ or ‘G/K’ auditory percepts (i.e. McGurk-MacDonald illusion) [ 31 ]. We refer to both ‘D/T’ and ‘G/K’ auditory percepts as illusory because the veridical auditory percept is a ‘B/P’ percept. On each audiovisual trial, observers reported their perceived auditory phoneme, the signals' causal structure and their perceptual and causal confidence. First, we assessed whether observers integrate audiovisual congruent and McGurk signals into percepts that are associated with greater confidence than their unisensory counterparts. Second, we characterized the complex relationship between causal and perceptual estimates and their associated confidence. 
Third, we selected audiovisual congruent and McGurk trials on which the sensory signals evoked causal and perceptual metamers, i.e. trials on which observers reached the same causal and perceptual decisional outcome. For instance, congruent (i.e. Da-Da) and McGurk (Ga-Ba) trials on which observers report a ‘Da’ percept and a common source are perceptual and causal metamers [ 30 , 32 ]. We assessed whether, despite the same perceptual and causal outcomes, observers were still able to discriminate between them and assign greater levels of causal and perceptual confidence to the congruent than to the McGurk trials.
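The model-averaging computation outlined above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it follows the standard Gaussian formulation of Bayesian causal inference for two noisy cues on a shared decision dimension (see [ 18 ] for the full model), with arbitrary parameter values and hypothetical function names:

```python
import math

def bci_model_average(x_a, x_v, sigma_a, sigma_v,
                      sigma_p=10.0, mu_p=0.0, p_common=0.5):
    """Bayesian causal inference with model averaging for two Gaussian cues.

    x_a, x_v: noisy auditory and visual signals on a shared decision axis.
    sigma_a, sigma_v: sensory noise s.d.; sigma_p, mu_p: Gaussian prior;
    p_common: prior probability of a common cause, P(C = 1).
    Returns the model-averaged (auditory) estimate and P(C = 1 | x_a, x_v).
    """
    v_a, v_v, v_p = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of both signals under one common cause (C = 1)
    d1 = v_a * v_v + v_a * v_p + v_v * v_p
    like1 = math.exp(-0.5 * ((x_a - x_v)**2 * v_p
                             + (x_a - mu_p)**2 * v_v
                             + (x_v - mu_p)**2 * v_a) / d1) \
        / (2 * math.pi * math.sqrt(d1))
    # Likelihood under two independent causes (C = 2)
    like2 = math.exp(-0.5 * ((x_a - mu_p)**2 / (v_a + v_p)
                             + (x_v - mu_p)**2 / (v_v + v_p))) \
        / (2 * math.pi * math.sqrt((v_a + v_p) * (v_v + v_p)))
    # Posterior probability of a common cause
    p_c1 = like1 * p_common / (like1 * p_common + like2 * (1 - p_common))
    # Reliability-weighted fusion estimate (C = 1)
    s_fused = (x_a / v_a + x_v / v_v + mu_p / v_p) / (1 / v_a + 1 / v_v + 1 / v_p)
    # Segregated auditory estimate (C = 2)
    s_seg = (x_a / v_a + mu_p / v_p) / (1 / v_a + 1 / v_p)
    # Model averaging: combine both estimates weighted by the causal posterior
    return p_c1 * s_fused + (1 - p_c1) * s_seg, p_c1
```

With nearly identical signals the causal posterior favours a common cause and the estimate shifts towards the reliability-weighted fusion; with discrepant signals it falls back towards the segregated auditory estimate.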
Methods Participants Fifteen participants were initially recruited, but one did not finish the study. Therefore, 14 participants were included in the analysis (two males, two left handers, mean age: 19.5, range: 18–22). The number of participants was a convenience sample. All participants gave written informed consent to participate in this psychophysics study, and they were compensated by means of study credits. Participants reported no history of psychiatric or neurological disorders, and no current use of any psychoactive medications. All had normal or corrected to normal vision and reported normal hearing. The study was approved by the research ethics committee of the University of Birmingham (ERN_11-0470AP4). Stimuli Stimulus material was taken from close-up audiovisual recordings of a female actress' face on a dark background looking straight into the camera ( figure 2 a ) and uttering the following 18 syllables: ba, be, bi, da, de, di, ga, ge, gi, pa, pe, pi, ta, te, ti, ka, ke and ki. In short, the recordings factorially combined six consonants (B, G, D, P, K, T) with three vowels (a, e, i). The six consonants can be organized into a two-dimensional space spanned by the dimensions of (i) place of articulation (i.e. production place along the vocal tract): B/P (labial), D/T (dental) and G/K (guttural); and (ii) voicing: unvoiced (P, T, K) and voiced (B, D, G). Audio and video were recorded with a camcorder (HVX 200 P; Panasonic). The video was acquired at 25 frames per second phase alternating line (PAL; 768 × 576 pixels); audio was acquired at 44.1 kHz (two channels). The recorded videos were edited (using PiTiVi 0.15.2) into 2000 ms long segments (50 frames) with the first articulatory movement starting at t = 1 second after the beginning of the movie (for further details see [ 33 ]). Each video started and finished with a neutral closed-lip view of the speaker's face. We used the movies of 18 different syllables as congruent stimuli. 
We generated six McGurk stimuli by cross-dubbing the video and the audio-track from the B-vowel (auditory) + G-vowel (visual) and the P-vowel (auditory) + K-vowel (visual) stimuli for the three vowels (a, e, i). Further, we presented the corresponding 18 auditory and visual components as unisensory stimuli (see also figure 2 b ). Finally, we added a considerable amount of white noise to all auditory recordings over the full 2 s epoch length in order to increase the expected proportion of illusory percepts on McGurk trials [ 34 , 35 ]. The noise signal was sampled at random from a zero-centred normal distribution with a s.d. that was equal to 1/3 of the maximum amplitude of the speech signal. This resulted in a signal to noise ratio (SNR; computed with a 30 ms sliding window) that increased rapidly from −70 dB to −13 dB in the first 50 ms after syllable onset (±4 dB s.d. across the 18 auditory stimuli). Thereafter, the SNR gradually increased further to a maximum of 1.5 dB (±2.5 dB s.d.) at 150 ms after syllable onset. Experimental design and trial The experiment included audiovisual (AV) congruent, AV McGurk trials, unisensory auditory (A) and visual (V) trials. On AV trials, observers reported the first-letter consonant that they heard and their perceived causal structure (common versus independent sources) together with their perceptual and causal confidence, respectively. On unisensory trials, observers reported their perceived first-letter consonant together with their perceptual confidence. Each trial started with the presentation of a central fixation cross for 500 ms on a black background. Subsequently, the 2 s stimulus (A/V/AV) was presented. The woman's upper lip's screen position approximately matched the location of the fixation cross. On A-only trials, the fixation cross remained on screen during stimulus playback. After stimulus presentation, the six possible first-letter consonants were presented in a circle ( figure 2 a ). 
Observers were instructed to report the syllable they heard on A and AV trials and the syllable that they saw on the V trials. Participants indicated their perceived first-letter consonant by moving their mouse over their preferred response option, after which a layered arc automatically appeared from which participants could select their confidence level on a 4-point scale (inner layer = low confidence, outer layer = high confidence). They indicated a response with a left-mouse click. Participants could still change their mind by moving their mouse to another letter until they had clicked. This procedure ensured that they provided their perceptual report and associated confidence simultaneously. On AV trials only, participants were then prompted to indicate whether the two auditory and visual stimulus components came from the ‘same’ or from two ‘different’ recordings by moving their mouse to a left or right bar. The vertical mouse location indicated their causal confidence on a continuous scale (bottom = uncertain, top = certain). Again, participants were allowed to change their minds until they responded by left-mouse button click. Participants were instructed to focus on response accuracy and precision rather than speed. Furthermore, participants were encouraged to scale their confidence responses across trials such that they made use of all four levels for perceptual confidence and the entire certainty bar (i.e. from bottom to top) for causal confidence. The experiment consisted of three 2 h sessions. It began with a short familiarization block that included 54 trials: 18 AV congruent trials, followed by 18 A-only trials and finally 18 V-only trials (each syllable appeared once). Before the beginning of each mini-block of 18 trials, the instructions appeared on the screen. For AV and A trials, these read ‘report what you hear’, whereas for V trials participants were instructed to ‘report what you see’. The main task started after the familiarization block. 
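The auditory noise manipulation described above (noise s.d. equal to one third of the maximum signal amplitude, SNR evaluated in 30 ms windows) can be sketched as follows. This is an illustrative Python reconstruction, not the original Matlab code; for brevity it uses consecutive rather than overlapping windows, and the function names are hypothetical:

```python
import math
import random

def add_speech_noise(signal, sd_fraction=1 / 3, seed=0):
    """Add zero-mean Gaussian noise whose s.d. equals sd_fraction times
    the maximum absolute amplitude of the speech signal."""
    rng = random.Random(seed)
    sd = sd_fraction * max(abs(s) for s in signal)
    noise = [rng.gauss(0.0, sd) for _ in signal]
    noisy = [s + n for s, n in zip(signal, noise)]
    return noisy, noise

def windowed_snr_db(signal, noise, fs=44100, win_ms=30):
    """Signal-to-noise ratio in dB, computed over consecutive windows."""
    w = int(fs * win_ms / 1000)
    snrs = []
    for start in range(0, len(signal) - w + 1, w):
        p_sig = sum(x * x for x in signal[start:start + w]) / w
        p_noise = sum(x * x for x in noise[start:start + w]) / w
        snrs.append(10 * math.log10((p_sig + 1e-12) / (p_noise + 1e-12)))
    return snrs
```

For a unit-amplitude sine, for example, the expected SNR is roughly 10·log10(0.5 / (1/3)²) ≈ 6.5 dB per window.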
Participants completed 16 blocks of 144 trials. Each block contained 36 V-only, 36 A-only and 36 AV-congruent trials (two repetitions of each of the 18 syllables), as well as 36 McGurk trials (six repetitions of each of the six McGurk stimuli). The AV-congruent and McGurk stimuli were randomly interleaved, whereas the unisensory trials were presented in separate mini-blocks. Experimental procedure Participants were seated in a small dark room. They placed their chin on a chinrest that was positioned at a distance of 55 cm from a computer monitor (53 by 30 cm). Visual stimuli were shown at a frame-rate of 25 Hz in a central square of 20 by 20 cm, surrounded by a black background. Auditory stimuli were presented at 44.1 kHz by means of headphones (Sennheiser HD 280 Pro) at a comfortable listening volume (65 dB). A Tobii EyeX eye tracker was used during the experiment to monitor whether participants focused their eye gaze on the centre of the screen (±5 degrees) during stimulus presentation. The central area of focus corresponded approximately to the woman's upper lip. Participants who did not follow task instructions received corrective feedback. The eye-tracking data were not further analysed. The experiment consisted of 2 h sessions that were performed on separate days, with at most one week between successive sessions. Participants had to finish the full experiment within two weeks. Participants were encouraged to take self-paced breaks between blocks within a session. The experiments were run in Matlab 2014b using the Psychophysics Toolbox 3 [ 36 , 37 ] and the Tobii EyeX toolkit [ 38 ]. Confidence normalization, statistical analyses and simulations Confidence normalization The perceptual and causal confidence distributions varied substantially in mean and spread across participants. We therefore normalized the confidence distributions with the help of the cumulative distribution function.
More specifically, each raw confidence value x was mapped onto a normalized confidence value equal to the corresponding value in the cumulative distribution. In cases with several identical raw confidence values, we assigned to all of them the average across their normalized confidence values. This transformation ensured that the mean over confidence values in all participants was equal to 0.5 and that the confidence values were spread across the entire range. For instance, if a participant indicated maximal causal confidence (top of the bar) in 30% of the trials, the normalized causal confidence values for these trials would all be equal to the mean of 0.7 and 1, i.e. 0.85 (see the electronic supplementary material, figure S1). Statistical analysis We employed generalized linear mixed models (GLMMs) that allowed us to incorporate fixed effects of interest as predictors and to account for structure induced by subjects and stimuli by including those as random effects [ 39 ]. We started with the most comprehensive model and stepwise reduced the number of random effects until model estimation converged. Typically, our models incorporated subjects as random intercept and slope, and stimulus as random intercept. For the detailed specification of each model, i.e. its fixed effects predictors and random effects, please see the formulae in Wilkinson notation stated in each of the statistical results tables (electronic supplementary material). We used GLMMs with logit as link function and with a binomial distribution for binary outcomes such as ‘correct versus incorrect’ (i.e. accuracy) or C = 1 versus C = 2 (i.e. causal inference judgements), a multinomial distribution for categorical outcomes such as B/P, D/T versus G/K percepts and beta distributions for normalized confidence ratings (i.e. bounded to a range between 0 and 1). The statistical analyses were performed in R v4.3.0 [ 40 ], using the glmmTMB and mclogit packages [ 41 , 42 ].
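Returning to the confidence normalization described at the start of this subsection, one way to implement the cumulative-distribution mapping with tie averaging is sketched below (an illustrative Python version, not the authors' code; the function name is hypothetical):

```python
def normalize_confidence(raw):
    """Map raw confidence ratings onto (0, 1] via the empirical cumulative
    distribution (rank / n); tied raw values all receive the average of
    their normalized values."""
    n = len(raw)
    order = sorted(range(n), key=lambda i: raw[i])
    normalized = [0.0] * n
    i = 0
    while i < n:
        # find the run of trials tied with the value at sorted position i
        j = i
        while j < n and raw[order[j]] == raw[order[i]]:
            j += 1
        # average the empirical-CDF values (rank / n) over the tied run
        avg = sum((k + 1) / n for k in range(i, j)) / (j - i)
        for k in range(i, j):
            normalized[order[k]] = avg
        i = j
    return normalized
```

With many trials, a group of tied maximal ratings covering the top 30% of trials receives a normalized value approaching the mean of 0.7 and 1, i.e. 0.85, matching the example above.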
Based on the fitted models, we generated model predictions by computing marginal means of the response variables for each of the conditions, i.e. factor level combinations [ 43 , 44 ]. Details and results of each analysis can be found in the electronic supplementary material, tables (main). Additional alternative models such as ordinal and ordered beta regression were used to analyse perceptual and causal confidence without the prior normalization described above [ 45 , 46 ]. These alternative analyses confirmed our main statistical results and are therefore not further discussed. Their details and results can be found in the electronic supplementary material, tables (alternative). Simulations To illustrate the relationship between perceptual and causal decisions as well as their corresponding confidence levels, we performed simulations based on the Bayesian causal inference model (for details see [ 18 ]). Originally, the Bayesian causal inference model has been developed to model spatial categorization responses in which continuous spatial estimates are mapped onto discrete spatial choices. Likewise, our simulations made the assumption that Ba, Da and Ga stimuli lie on a continuous abstract ‘place of articulation’ dimension that goes from labial (i.e. lip) ‘Ba’ to dental (i.e. ‘teeth’) ‘Da’ and finally to guttural (i.e. throat) ‘Ga’. Further, we assumed that this dimension is shared across the visual and auditory senses. These continuous estimates are mapped onto ‘Ba’, ‘Da’ and ‘Ga’ categories via categorical perception. Auditory and visual senses provide information about the place of articulation via formants for different phonemes and articulatory movements (i.e. viseme). The second formant in the time-frequency spectrograms discriminates between Ba, Da and Ga ( figure 2 c ). Likewise, the articulatory lip movements inform about the place of articulation. 
Modelling audiovisual speech integration with a single shared audiovisual dimension (see also [ 47 ]) ignores complexities that arise from more structured inputs of phonemes and visemes that naturally live in a multidimensional space. For instance, our modelling approach ignores the dimension of voicing. Moreover, it ignores nonlinearities that may affect auditory and visual stimulus dimensions differently. An alternative way is to model audiovisual integration of phoneme-viseme pairs in a two-dimensional space in which the auditory stimulus only weakly activates visual information and vice versa [ 22 ]. Yet, such a two-dimensional space is agnostic about what the visual or auditory features refer to in physical space and what these cross-modal co-activations account for. For the example shown in figure 1 and each of the four examples in figure 8 , we sampled one auditory signal x_A and one visual signal x_V from N(s_A, σ_A²) and N(s_V, σ_V²), and we computed the likelihoods p(x_A | s) and p(x_V | s) (pink and brown dashed lines). Assuming a flat (i.e. uninformative) prior over the place of articulation dimension and a causal prior P(C = 1) = p_common, we computed the posterior distribution p(s | x_A, x_V) = P(C = 1 | x_A, x_V) p(s | x_A, x_V, C = 1) + P(C = 2 | x_A, x_V) p(s | x_A, C = 2). This posterior distribution (solid black line) is a mixture of the full segregation (pink solid) and the fusion (blue solid) distributions weighted by the posterior probabilities over common and independent causes (green bar plots). To obtain the discrete posterior probabilities over syllable categories ‘B/P’, ‘D/T’ and ‘G/K’ (orange bar plots), we integrated the continuous posterior probability distribution limited by the category boundaries that separate the three response choices (i.e. the category boundary ‘Ba’ versus ‘Da’ was set to −0.5 and ‘Da’ versus ‘Ga’ to 0.5). We present these simulation results to provide a qualitative explanation for the pattern of findings in our study.
We note that for more complex abstract dimensions such as ‘place of articulation’ several assumptions of the Bayesian model may not fully hold (e.g. Gaussian distributions, a common decision dimension shared between the auditory and visual senses, additional variability within categories), so we refrain from formal quantitative modelling; for related approaches see [ 48 , 49 ].
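The final step of the simulations, mapping the continuous model-averaged posterior onto discrete probabilities for the ‘B/P’, ‘D/T’ and ‘G/K’ categories, can be sketched as follows. This assumes Gaussian fusion and segregation posteriors and the category boundaries at −0.5 and 0.5 stated above; the function names and parameter values are illustrative, not the authors' code:

```python
import math

def norm_cdf(x, mu, sigma):
    """Gaussian cumulative distribution function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def syllable_category_posteriors(mu_fused, sd_fused, mu_seg, sd_seg,
                                 p_common, boundaries=(-0.5, 0.5)):
    """Integrate the model-averaged posterior (a mixture of the fusion and
    the auditory segregation Gaussians, weighted by the causal posterior)
    over the three regions separated by the category boundaries.
    Returns (P('B/P'), P('D/T'), P('G/K')), with 'B/P' at the labial
    (negative) end of the place-of-articulation axis."""
    def mixture_cdf(x):
        return (p_common * norm_cdf(x, mu_fused, sd_fused)
                + (1.0 - p_common) * norm_cdf(x, mu_seg, sd_seg))
    b_low, b_high = boundaries
    p_bp = mixture_cdf(b_low)
    p_dt = mixture_cdf(b_high) - mixture_cdf(b_low)
    p_gk = 1.0 - mixture_cdf(b_high)
    return p_bp, p_dt, p_gk
```

For example, with most posterior mass on the fusion component centred in the middle ‘D/T’ region, the discrete posterior concentrates on ‘D/T’, while the residual segregation component near the labial end contributes a small ‘B/P’ probability.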
Results In the following, we present the key results organized in line with the results figures and electronic supplementary material, tables (main statistical analyses). The complete statistical results are presented only in the tables (see the electronic supplementary material). Performance accuracy and perceptual confidence in auditory, visual and audiovisual congruent conditions ( figure 3 ) Consistent with Bayesian models of multisensory perception, observers integrated audiovisual signals into more precise estimates, as indicated by their greater perceptual accuracy. Moreover, because the variance of the audiovisual posterior distribution is smaller than or equal to that of either unisensory posterior distribution, we would expect participants to be more confident in audiovisual congruent than in unisensory conditions. Indeed, in line with these predictions, we observed an increase in perceptual accuracy and confidence for the audiovisual congruent relative to the unisensory conditions ( figure 3 a,b ; electronic supplementary material, table S1A-B). Performance accuracy increased significantly for audiovisual congruent relative to both auditory and visual conditions. Likewise, perceptual confidence increased significantly for audiovisual congruent relative to visual (but not auditory) conditions. The electronic supplementary material, figure S3 further shows that this response pattern for mean perceptual confidence is highly consistent across subjects. Observers were sensitive to their own accuracy as indicated by a significant increase in perceptual confidence for correct relative to incorrect responses ( figure 3 c ; electronic supplementary material, table S1B). This metacognitive sensitivity was significantly greater for congruent than for auditory and in particular than for visual conditions (i.e. significant interaction between correct/incorrect x sensory modality).
This effect can be explained by the differences in performance accuracy across sensory modalities. At first sight, it may be surprising that observers were 50% accurate and hence significantly better than the chance level of 16.67% (i.e. 1/6 possible response options) on the visual first-letter categorization task, but showed no metacognitive sensitivity. In other words, they were unable to discriminate between their correct and wrong visual decisions despite showing better than chance performance. This metacognitive insensitivity arises from the fact that the six response options are spanned by the place of articulation (e.g. B versus D versus G) and the voicing (i.e. B versus P) dimensions. While the visual modality is very informative about the place of articulation, it is uninformative about the voicing dimension. Hence, the vast majority of errors are ‘voicing errors’ such as misclassifying a Ba as a Pa stimulus. As shown in the electronic supplementary material, figure S2, observers were at chance (i.e. approximately 50% correct) when discriminating between voiced (e.g. Ba) and the corresponding unvoiced (e.g. Pa) visual stimuli. When observers perform a task at chance, it is impossible for them to metacognitively discriminate between correct and incorrect responses. So, observers' incapacity to discriminate between voiced and unvoiced syllables based on the visual input alone explains their metacognitive insensitivity for unisensory visual trials (see also the electronic supplementary material, figure S2). Perceptual decisions and confidence on unisensory and McGurk trials ( figure 4 ) The bar plots in figure 4 show how the brain combines auditory ‘B/P’ and visual ‘G/K’ information on McGurk conflict trials together with observers' associated confidence levels. The perceptual accuracy for place of articulation (n.b. pooled over voicing) is comparable for unisensory auditory B/P and visual G/K stimuli. 
We assessed this statistically using a multinomial mixed effects model in which we classified ‘B/P’ responses as ‘correct’ for auditory B/P stimuli and as ‘other’ for visual G/K stimuli. Conversely, we labelled the ‘G/K’ responses as ‘correct’ for visual G/K stimuli and as ‘other’ for auditory B/P stimuli. The ‘D/T’ responses were labelled as ‘D/T’ responses for both auditory and visual stimuli. This analysis revealed no significant differences in response probabilities for ‘correct’ versus ‘D/T’ responses or for ‘correct’ versus ‘other’ responses between auditory and visual stimuli (see the electronic supplementary material, table S2A). Critically, however, the (predicted) response probabilities for the ‘G/K’ category (i.e. ‘other’) on auditory B/P trials is four times higher than the response probability for 'B/P' category (i.e. ‘other’) on visual G/K trials (electronic supplementary material, table S2A). On 8% of the trials an auditory B/P stimulus is perceived as ‘D/T’ and on 4% of the trials it is perceived as ‘G/K’. By contrast, a visual G/K stimulus is perceived as a ‘D/T’ on 12% of the trials, but it is almost never perceived as a ‘B/P’ syllable (1%). This difference between auditory and visual response probabilities explains that McGurk stimuli are mainly perceived as ‘D/T’ and ‘G/K’, i.e. perceptual categories that are consistent with perceptual interpretations of both the visual G/K and the auditory B/P stimulus (electronic supplementary material, table S2B; i.e. significant ‘intercept’ effects for ‘D/T’ and for ‘G/K’ responses relative to ‘B/P’ responses). The large fraction of 'G/K' perceptual responses on McGurk trials is consistent with previous studies of the McGurk illusion [ 34 , 35 ]. It can be explained by the additional noise that we added to the auditory component signal (see methods section) to decrease its reliability. 
As a result, the auditory signal received a lower weight when observers integrated auditory and visual signals during perception. However, it is important to note that observers did not simply go with the visual signal and ignore the sound on trials with a ‘G/K’ percept. Instead, they integrated audiovisual signals even on those trials with a visual dominant ‘G/K’ percept, as evidenced by their voicing classification accuracy. The substantial increase in accuracy on the voicing dimension for McGurk relative to visual-only trials demonstrates that observers relied on both visual and auditory information even on trials with a visual dominant ‘G/K’ percept (see the electronic supplementary material, figure S2). Overall, perceptual confidence is significantly higher for McGurk than for unisensory auditory or visual stimuli. Observers were thus more confident on conflicting McGurk trials than on unisensory trials. Moreover, this increase in perceptual confidence for McGurk relative to unisensory trials was particularly pronounced for ‘D/T’ and ‘G/K’ relative to ‘B/P’ perceptual outcomes (i.e. significant interactions between stimulus type and perceptual outcome; electronic supplementary material, table S2C). Similarly, observers' perceptual confidence was higher on McGurk trials with ‘G/K’ outcomes, i.e. perceptual interpretations that integrate audiovisual information, than with ‘B/P’ outcomes where audiovisual information was successfully segregated (electronic supplementary material, table S2D). Collectively, these results suggest that observers are more confident even on conflicting McGurk trials when they integrate audiovisual signals into a ‘G/K’ percept than on unisensory trials or on McGurk trials on which they perceive separate causes (i.e. ‘different’ sources). The relationship between perceptual and causal inference ( figure 5 ) Perceptual and causal decisions are intimately related in the inference process and susceptible to shared sensory noise.
These dependencies explain that correct common cause ‘C = 1’ responses on congruent trials go together with correct perceptual responses (i.e. significant effect of trial correctness on causal response fraction; figure 5 a ; electronic supplementary material, table S3A). Likewise, observers associated a greater causal confidence to correct common cause responses mainly when they categorized the first letter correctly (i.e. significant interaction in causal confidence for common ‘C = 1’ versus independent ‘C = 2’ cause responses x correct versus incorrect; electronic supplementary material, table S3C; figure 5 c ). The close relationship between causal and perceptual inference is also manifest in the McGurk trials ( figure 5 b,d ; electronic supplementary material, table S3B,D). The fraction of common cause responses is directly associated with observers' perceptual categorization responses, increasing from ‘B/P’ to ‘D/T’ and ‘G/K’ responses (i.e. significant main effect of perceptual category on causal response fractions; electronic supplementary material, table S3B; figure 5 b ). Likewise, the causal confidence for common cause ‘C = 1’ responses was greatest for ‘G/K’ percepts, while the causal confidence for independent cause ‘C = 2’ responses peaked for ‘B/P’ percepts (i.e. significant interactions between causal and perceptual outcomes on causal confidence; electronic supplementary material, table S3D; figure 5 d ). In other words, as expected from Bayesian causal inference ( figure 1 ), the integrated percept that is conditional on a common cause receives more weight (over the segregated percept) when a common cause is deemed more probable (as expressed by greater causal confidence with ‘C = 1’ responses), thereby increasing the ‘D/T’ and ‘G/K’ response probabilities (also see the electronic supplementary material, figure S4). 
Vice versa, larger causal confidence with ‘C = 2’ responses leads to higher weights for the segregated percept, thus increasing the probability of a ‘B/P’ percept. The relationship between perceptual and causal confidence ( figure 6 ) Our dual-task design enables us to characterize the relationship between causal and perceptual confidence based on inter-trial variability. To visualize this, we sorted observers' perceptual decisions and confidence according to their causal decision and confidence on those trials (median split per response category for each subject, although note that such a median split was unnecessary for the statistical analysis). Consistent with Bayesian causal inference models, common source decisions and high causal confidence were associated with high perceptual accuracy ( figure 6 a ) and perceptual confidence ( figure 6 c ). Statistically, this observation is supported by a significant interaction between causal decision and causal confidence on observers’ perceptual accuracy. Moreover, we observed a significant main effect of causal confidence (electronic supplementary material, tables S4A,C) as well as a significant interaction between perceptual accuracy and causal response on perceptual confidence. Observers' perceptual confidence increased with their causal confidence in particular for common cause responses, but less so for independent cause responses. Thus, when observers made a wrong response, the correlation between their perceptual and causal confidence was attenuated, most likely because of random guesses (see below). Likewise, on McGurk trials, observers’ causal inference outcome and causal confidence were closely related to their perceptual outcome and perceptual confidence. As predicted by the Bayesian causal inference model, illusory ‘D/T’ and ‘G/K’ percepts on McGurk trials were significantly more frequent for common cause responses with high relative to low causal confidence (i.e.
significant interaction between causal decision and causal confidence on perceptual response fractions ‘D/T’ versus ‘B/P’ and ‘G/K’ versus ‘B/P’; electronic supplementary material, table S4B; figure 6 b ). By contrast, ‘B/P’ percepts were observed mainly on trials with an independent cause decision and a high level of causal confidence (electronic supplementary material, table S4B; figure 6 b ). On McGurk trials, perceptual confidence was positively correlated with causal confidence (significant main effect of causal confidence on perceptual confidence; electronic supplementary material, table S4D; figure 6 d ). However, this main effect arose from a complex three-way interaction between perceptual outcome (e.g. ‘G/K’ versus ‘B/P’), causal outcome (‘C = 1’ versus ‘C = 2’) and causal confidence (electronic supplementary material, table S4D; figure 6 d ). As shown in figure 6 d , perceptual and causal confidence were closely related for all trials apart from those with ‘B/P’ percepts with ‘C = 1’, i.e. wrong, causal responses, most likely because those erroneous responses reflect random guesses during lapses of attention. In summary, both congruent and McGurk trials demonstrate that causal inference outcome and causal confidence implicitly affect observers' perceptual choices and confidence. Causal and perceptual metamers ( figure 7 ) The results presented so far show that on a large percentage of McGurk trials observers integrated the auditory B/P and the visual G/K stimulus components into an auditory ‘D/T’ or ‘G/K’ percept and reported a common cause (C = 1). On these trials, observers thus integrated audiovisual McGurk signals into perceptual and causal metamers of the corresponding D/T and G/K congruent trials with correct perceptual and causal responses.
This raises the critical question of whether, despite identical perceptual and causal decisions, observers were metacognitively aware that the ‘D/T’ and ‘G/K’ percepts on McGurk trials rely on incongruent audiovisual information and hence report lower perceptual and causal confidence relative to their congruent metamers. To address this question, we categorized congruent trials (with correct syllable response and ‘C = 1’ causal response) and McGurk trials (with correct voicing response and ‘C = 1’ causal response) according to their perceptual confidence levels separately for ‘B/P’ ( figure 7 a,d ), ‘D/T’ ( figure 7 b,e ) and ‘G/K’ ( figure 7 c,f ) percepts. For instance, figure 7 , left column, shows the response probabilities for the four different confidence levels as a fraction of all trials with a ‘B/P’ percept and ‘C = 1’ causal outcome in the top row and the associated causal confidence ratings in the bottom row (blue = AVc B/P stimulus; green = McGurk stimulus). We assessed statistically whether the mean perceptual confidence (e.g. averaged across all AVc B/P trials with correct perceptual and C = 1 responses) differed between AV congruent and McGurk stimuli. These statistical analyses were performed separately for each perceptual category, i.e. separately for ‘B/P’ ( figure 7 a ; electronic supplementary material, table S5A), ‘D/T’ ( figure 7 b ; electronic supplementary material, table S5B) and ‘G/K’ ( figure 7 c ; electronic supplementary material, table S5C) percepts. We observed significantly lower perceptual confidence on McGurk trials with ‘B/P’ and ‘D/T’ percepts compared to their corresponding AV congruent trials (electronic supplementary material, table S5A-B) suggesting that observers could metacognitively discriminate between AV congruent and McGurk trials despite identical perceptual outcome. 
Importantly, however, the perceptual confidence was not statistically different between McGurk trials with ‘G/K’ percepts and their congruent metamers (electronic supplementary material, table S5C). Likewise, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) did not provide consistent evidence for a difference in perceptual confidence between AV congruent and McGurk trials with a ‘G/K’ percept. Next, we assessed whether observers assigned lower causal confidence levels to McGurk trials than to congruent stimuli, when we account for differences in perceptual confidence levels by including perceptual confidence as a regressor in our GLMMs (electronic supplementary material, table S5D–F). Again, we observed a lower causal confidence for McGurk relative to congruent trials only for ‘B/P’ and ‘D/T’ percepts (electronic supplementary material, table S5D-E; n.b. expressed by the significant interaction between perceptual confidence and McGurk versus AV congruent trials). By contrast, neither the main effect of stimulus type (i.e. McGurk versus AV congruent) nor its interaction with perceptual confidence was significant for trials with ‘G/K’ percepts (electronic supplementary material, table S5F). These null results were further corroborated by formal model comparison. Both AIC and BIC jointly provided evidence that AV congruent and McGurk trials did not significantly differ in their causal confidence (i.e. the more parsimonious model without the predictor stimulus type was a better fit to the data; electronic supplementary material, table S5F). These results suggest that observers regularly integrate conflicting signals from McGurk stimuli into auditory ‘G/K’ percepts that are associated with comparable perceptual and causal confidence as their metamers from G/K congruent stimuli. On those subsets of McGurk trials observers are no longer metacognitively aware of the conflicting ‘B/P’ phoneme and ‘G/K’ viseme stimuli.
Discussion This study investigated how human observers form confidence judgements when presented with spoken syllables, articulatory lip movements or their congruent and McGurk combinations. In such multisensory information integration tasks, observers need to monitor two intimately related sorts of uncertainty: perceptual uncertainty about environmental properties (e.g. a syllable's first letter) and causal uncertainty about whether signals come from common or independent sources. Our results demonstrate that human observers form meaningful perceptual and causal confidence judgements that are qualitatively in line with the principles of Bayesian causal inference. A wealth of research has shown that human observers integrate audiovisual signals from common sources weighted by their relative reliabilities into more precise percepts [ 50 – 53 ]. Sensory integration reduces observers' uncertainty about the current state of the world. In our study, the auditory and visual senses provide both redundant and complementary information about syllables [ 54 ]. The auditory sense facilitates the discrimination between voiced and unvoiced consonants (e.g. ‘B/G/D’ versus ‘P/K/T’) that is left ambiguous by the visual sense alone. Further, speech signals and the articulatory lip movements together inform about the place of articulation (e.g. ‘B/P’ versus ‘D/T’ versus ‘G/K’). Unsurprisingly, observers benefit substantially from audiovisual integration. They show superior syllable categorization accuracy and higher perceptual confidence on audiovisual congruent relative to unisensory trials ( figure 3 ). Observers miscategorized the syllables on fewer than 10% of the audiovisual congruent trials. Demonstrating metacognitive sensitivity, they assigned lower levels of confidence to perceptual categorization errors relative to their correct responses [ 11 ]. 
Moreover, these perceptual categorization errors also revealed a close relationship between perceptual and causal inference. Observers' causal accuracy and confidence were greater when they categorized the syllable correctly ( figure 5 a,c ). Conversely, observers' perceptual accuracy and confidence were greater for trials with high than low causal confidence ( figure 6 a,c ). As we will discuss in greater detail below, this positive relationship between perceptual and causal accuracy (and, likewise, confidence) is consistent with Bayesian causal inference models, in which perceptual and causal inference arise interactively and are susceptible to shared sensory noise [ 18 , 21 ]. McGurk trials provide additional insights into the formation of confidence judgements by introducing a small inter-sensory conflict along the ‘place of articulation’ dimension, i.e. by combining an auditory B/P with a visual G/K signal. Unisensory auditory B/P stimuli were predominantly perceived as ‘B/P’, but in approximately 20% of the trials as ‘D/T’ or ‘G/K’. Unisensory visual G/K signals were mainly perceived as ‘G/K’ and in 20% of the trials as ‘D/T’, but nearly never as ‘B/P’. This inter-sensory difference in the distribution over perceptual categories explains why McGurk combinations are mainly integrated into ‘D/T’ and ‘G/K’ percepts, which are possible perceptual explanations for both auditory B/P and visual G/K signals ( figure 4 ). Moreover, the perceptual outcome on a McGurk trial is characteristically influenced by observers' causal inference on that trial. Consistent with Bayesian causal inference models, observers perceive the first letter of the auditory syllable as ‘D/T’ or ‘G/K’ particularly when they infer that auditory and visual signals come from common sources and hence integrate them (figures 5 and 8 ). 
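A minimal one-dimensional sketch of this causal inference computation is given below, in the style of Körding-type models. The single 'place of articulation' axis, all parameter values and the zero-mean spatial prior are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def causal_inference(x_a, x_v, sigma_a=1.0, sigma_v=1.0, sigma_p=2.0, p_common=0.5):
    """Bayesian causal inference on one trial along an abstract
    'place of articulation' axis (illustrative parameters).
    x_a, x_v: noisy internal auditory and visual samples."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of both samples under a common cause (C=1): one latent
    # location with prior N(0, sigma_p^2), marginalized out analytically.
    var_c1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / var_c1) / (2 * np.pi * np.sqrt(var_c1))
    # Likelihood under independent causes (C=2): one latent location each.
    like_c2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
              / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Reliability-weighted fused estimate (C=1) vs auditory-alone estimate (C=2).
    fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    aud_alone = (x_a / va) / (1 / va + 1 / vp)
    # Model averaging: the final auditory estimate mixes both causal hypotheses.
    return post_c1, post_c1 * fused + (1 - post_c1) * aud_alone

# Small audiovisual conflict -> common cause likely, estimate pulled toward vision.
p1, est1 = causal_inference(x_a=0.5, x_v=1.0)
# Large conflict -> independent causes likely, estimate stays near audition.
p2, est2 = causal_inference(x_a=0.5, x_v=6.0)
print(p1, est1)
print(p2, est2)
```

Run over many noisy samples of the same physical stimulus, this scheme reproduces the pattern described in the text: trials whose samples happen to land close together yield common-source judgements and visually biased percepts, while large sampled conflicts yield independent-source judgements and auditory-dominant percepts.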
The proportion of visually biased ‘G/K’ percepts and the perceptual confidence increase even further for common source judgements with high relative to low causal confidence ( figure 6 b,d ). Conversely, observers reported a veridical ‘B/P’ percept, i.e. a percept unbiased by the conflicting visual G/K signal, when they inferred that auditory and visual signals come from independent sources. Again, this proportion of ‘B/P’ percepts increased for high relative to low causal confidence. Moreover, for both common and independent source responses, perceptual and causal confidence were positively related: a greater causal confidence was generally associated with a greater perceptual confidence. In short, McGurk trials replicated the tight relationship between perceptual and causal inference and confidence over trials that we already observed for congruent trials. As shown in our simulations ( figure 8 ), this intimate relationship naturally arises in Bayesian causal inference models, because perceptual and causal inference are based on the same auditory and visual inputs, which vary across trials because of sensory noise. Thus, when noisy auditory and visual signals are close together along the abstract ‘place of articulation’ dimension, observers are likely to infer a common source and integrate the audiovisual signals into a visually biased ‘G/K’ phoneme ( figure 8 , bottom row). By contrast, an auditory-dominant ‘B/P’ percept arises only when the probability of independent sources is very high ( figure 8 , top row). While the Bayesian causal inference model can qualitatively explain the relationship between perceptual and causal decisions and confidence, our results cannot dissociate whether the brain forms Bayesian or approximate confidence estimates when exposed to multiple sensory signals under causal uncertainty. In these situations, the posterior probability distribution becomes bimodal. Perceptual confidence may be related to a variety of quantities [ 55 ]. 
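Several such candidate confidence read-outs can be made concrete for a discrete posterior over the three response categories. The posterior values below are illustrative numbers only, not fitted to any data.

```python
import numpy as np

# An illustrative discrete posterior over the response categories,
# e.g. ('B/P', 'D/T', 'G/K'):
posterior = np.array([0.10, 0.55, 0.35])

# (1) Bayesian confidence: posterior probability of the chosen (MAP) category.
conf_max = posterior.max()

# (2) Negative entropy of the full posterior: high (near 0) when the
# probability mass is concentrated on one category.
entropy = -np.sum(posterior * np.log(posterior))
conf_entropy = -entropy

# (3) Difference in posterior probability between the two most likely
# options, as in the three-alternative task cited in the text.
top2 = np.sort(posterior)[-2:]
conf_diff = top2[1] - top2[0]

print(conf_max, conf_entropy, conf_diff)
```

All three read-outs increase as the posterior concentrates on a single category, but they can dissociate for multimodal posteriors, which is why psychophysics combined with quantitative modelling is needed to tell them apart.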
For instance, observers' confidence judgements may reflect the posterior probability at the particular perceptual estimate or the entropy of the full bimodal posterior probability distribution [ 10 , 56 ]. Alternatively, because observers perceive auditory speech signals categorically as ‘B/P’, ‘D/T’ or ‘G/K’, their perceptual confidence may be related to the posterior probabilities over the discrete response options rather than a continuous posterior distribution over a hypothesized place of articulation dimension. Further, in the discrete case, it is unclear whether observers' perceptual confidence reflects Bayesian confidence (i.e. the posterior probability that a decision is correct) or some other quantity. For example, in a three-alternative visual categorization task, observers' perceptual confidence has recently been shown to reflect the difference in posterior probability between the two most likely options [ 57 ]. Because in our experimental paradigm observers made a choice among six options that were arranged in a two-dimensional ‘place of articulation’ × ‘manner of articulation’ space, it is likely that observers formed approximate or simple heuristic confidence judgements. Future research combining psychophysics with formal quantitative modelling is needed to dissociate between these different strategies for forming confidence judgements. McGurk trials provide critical insights into whether observers metacognitively monitor only the final integrated percept or whether they access the unisensory signals and underlying inference processes. An early intriguing study by Hillis et al . (2002) [ 32 ] suggested that observers lose access to individual cues after integrating them within but not between the senses. In an oddity judgement task, conflicting visual but not visuohaptic cues were fused into perceptual metamers that were indistinguishable from the standard percepts derived from congruent cues. 
Following this rationale, we selected McGurk trials on which observers integrated the conflicting audiovisual signals into illusory ‘D/T’ and ‘G/K’ percepts and perceived a common cause. We then compared those to their corresponding perceptual and causal metamers of congruent trials, i.e. congruent audiovisual signals that elicited veridical ‘D/T’ and ‘G/K’ percepts and were perceived as coming from a common source ( figure 7 ). We reasoned that if observers move beyond the integrated percept and retain access to its unisensory ingredients, they should assign lower confidence to the conflicting McGurk signals than to their congruent counterparts [ 30 ]. Contrary to this conjecture, we observed closely matched perceptual and causal confidence ratings between congruent and McGurk stimuli with ‘G/K’ percepts and common cause ‘C = 1’ responses ( figure 7 ; electronic supplementary material, table S5). Thus, observers reported comparable confidence levels for perceptual and causal metamers that were unaffected by the underlying true causal structure, at least for visually biased ‘G/K’ percepts with common source judgements. These results suggest that observers metacognitively monitor mainly the final posterior distribution to form confidence judgements. When they integrate audiovisual signals into ‘G/K’ responses and infer a common source, they do not seem to have access to the unisensory signals or the true inter-sensory conflict. Future research needs to assess whether these findings generalize to other sets of McGurk stimuli. For instance, a previous online study found a significant reduction in perceptual confidence for McGurk trials with ‘Da’ or ‘Na’ percepts relative to the corresponding congruent stimuli [ 58 ]. Potentially, by adding noise to the auditory component signals in both congruent and McGurk trials, our study may have made it more difficult for participants to discriminate metacognitively between congruent and McGurk stimuli. 
In conclusion, our results show that observers form meaningful causal and perceptual confidence estimates. Consistent with the principles of Bayesian causal inference, these two forms of uncertainty are closely related over trials, with higher causal confidence typically associated with higher perceptual confidence. Further, when a common source of the sensory signals is inferred, confidence is directly informed by the final integrated percept, with no or only very limited access to the unisensory signals and their true causal structure.
One contribution of 16 to a theme issue ‘Decision and control processes in multisensory perception’. Electronic supplementary material is available online at https://doi.org/10.6084/m9.figshare.c.6751342 . Almost all decisions in everyday life rely on multiple sensory inputs that can come from common or independent causes. These situations invoke perceptual uncertainty about environmental properties and uncertainty about the signals' causal structure. Using the audiovisual McGurk illusion, this study investigated how observers formed perceptual and causal confidence judgements in information integration tasks under causal uncertainty. Observers were presented with spoken syllables, their corresponding articulatory lip movements or their congruent and McGurk combinations (e.g. auditory B/P with visual G/K). Observers reported their perceived auditory syllable, the causal structure and their confidence for each judgement. Observers were more accurate and confident on congruent than unisensory trials. Their perceptual and causal confidence were tightly related over trials, as predicted by the interactive nature of perceptual and causal inference. Further, observers assigned comparable perceptual and causal confidence to veridical ‘G/K’ percepts on audiovisual congruent trials and their causal and perceptual metamers on McGurk trials (i.e. illusory ‘G/K’ percepts). Thus, observers metacognitively evaluate the integrated audiovisual percept with limited access to the conflicting unisensory stimulus components on McGurk trials. Collectively, our results suggest that observers form meaningful perceptual and causal confidence judgements about multisensory scenes that are qualitatively consistent with the principles of Bayesian causal inference.
Acknowledgements We thank Sonal Patel for help with data acquisition. Ethics The study was approved by the research ethics committee of the University of Birmingham (ERN_11-0470AP4). All participants provided written informed consent to participate in the study. Data accessibility Unfortunately, we cannot make the raw data available because participants did not indicate explicitly in the ethics consent form that they approve of their data to be shared. We have updated our consent forms for future data collections. Summarized data for all conditions are provided in the electronic supplementary materials (see tables of statistical analyses) [ 59 ]. Authors' contributions D.M.: conceptualization, data curation, formal analysis, investigation, methodology, software, visualization, writing—original draft, writing—review and editing; U.N.: conceptualization, funding acquisition, investigation, methodology, project administration, resources, supervision, validation, writing—original draft, writing—review and editing. Both authors gave final approval for publication and agreed to be held accountable for the work performed therein. Conflict of interest declaration We declare we have no competing interests. Funding This research was funded by the ERC starting grant ('multsens'). D.M. is currently supported by the Austrian Science Fund (FWF, ZK-66, ‘Dynamates’).
CC BY
no
2024-01-15 23:43:51
Philos Trans R Soc Lond B Biol Sci.; 378(1886):20220348
oa_package/d4/d7/PMC10404922.tar.gz
PMC10404924
37545312
Introduction Metacognition has previously been defined as the capacity for ‘thinking about thinking’ [ 1 ] and perceptual metacognition can be defined as the capacity to monitor the quality and fidelity of one's own perceptions. Studies now provide various behavioural and computational tools to measure perceptual metacognition [ 2 – 5 ], reveal the neural correlates that support this ability [ 6 – 11 ], and demonstrate how perceptual confidence and perceptual accuracy dissociate in specific situations [ 12 – 16 ]. However, the vast majority of research on perceptual metacognition focuses on the visual modality alone, and specifically, visual confidence judgements [ 17 ]. As others have noted, little is currently known about how our sense of perceptual metacognition extends to multisensory paradigms, with sensory stimulation in two or more sensory modalities [ 18 ]. Thus, to better understand what metacognition is, how it functions, and what adaptive purposes it may serve, it is necessary to further explore the role that it plays in monitoring multisensory representations of the external world. Recent theoretical accounts of metacognition posit that it may play a role in distinguishing between real and imagined stimuli [ 19 ], and help facilitate ‘perceptual reality monitoring’ [ 20 ] to make accurate inferences about which sources give rise to which sensory stimuli [ 21 ]. Interestingly, the process of inferring which external sources in the world give rise to specific sensations is thought to be central to causal inference in multisensory perception [ 22 , 23 ], as the brain must determine if a single source in the environment is producing stimulation in two or more modalities, or if separate sources in the environment are giving rise to multiple sensory signals. If metacognition facilitates our capacity to distinguish between what is real and what is not, might it also help us distinguish between different types of multisensory information in the world? 
Potentially, different types of multisensory experiences come in (at least) three different forms. The first type of multisensory experience is that of a congruent multisensory signal. Congruent multisensory signals can be defined by a single source in the environment giving rise to sensations in two or more modalities at the same time. For instance, when you talk to another person, you see their lips move and hear the sound of their voice, and this information arises from one source. Contrasting with congruent multisensory signals are integrated multisensory signals. Integrated signals occur when distinct sources produce conflicting sensory information (e.g. visual and auditory information), but the brain infers that these signals originated from a single source, and combines them into a unique percept. Examples of this include multisensory illusions such as spatial ventriloquism [ 24 , 25 ], temporal ventriloquism [ 26 , 27 ] and the McGurk effect [ 28 ], among others. Lastly, segregated multisensory signals occur when stimulation occurs in two or more sensory modalities, and the brain infers that separate sources give rise to each signal. Considering these different types of multisensory experiences, one can ask: can metacognition help us distinguish between congruent and integrated (illusory) multisensory experiences? And can it do so when our perceptual reports about what we experience are identical across two or more experimental conditions [ 18 ]? It is interesting to consider what a preliminary hypothesis should be when comparing confidence in congruent multisensory perception with confidence in integrated multisensory perception. Over the last 40 years, a tremendous amount of research has emphasized the benefits of multisensory integration. 
One primary benefit comes in reducing and resolving perceptual ambiguity [ 29 ], as many studies attest to the finding that when stimuli are integrated from discrepant sources, the resulting representation is more precise than the pre-existing unisensory representations [ 30 – 32 ]. Further, past research has provided evidence of ‘superadditivity’ in brain responses to integrated multisensory stimuli, showing that neural responses to multisensory stimuli that are somewhat coincident in either space or time are often larger than the sum of unimodal responses, especially for weak stimuli [ 33 – 36 ]. But superadditivity may not be a hallmark of all multisensory interactions [ 37 ], and while it remains possible that the process of integrating stimuli could contribute a unique signal that leads to stronger metacognition for integrated stimuli over congruent stimuli, this seems unlikely. Perhaps confidence judgements for integrated and congruent multisensory stimuli are similar? If it is difficult for observers to tell integrated and congruent multisensory signals apart, this seems possible. However, research demonstrates that enhanced brain responses can occur for congruent multisensory information [ 38 ], which could lead to higher confidence compared with integrated signals. Importantly, many forms of integrated multisensory stimulation move estimates away from the true source of information. For example, the spatial ventriloquist illusion [ 24 , 25 ], where estimates of auditory stimuli are biased by simultaneous visual stimulation, shows how multisensory integration makes perception (in an absolute sense) less veridical than if separate representations were maintained for each sensory modality alone. Thus, while multisensory integration has its benefits, it would seem optimal for observers to be more confident in congruent multisensory information compared with integrated multisensory information. 
However, to date, few data exist that speak to the behavioural profile of multisensory confidence judgements [ 39 – 41 ]. In this investigation, we explore whether confidence differs for congruent and integrated multisensory stimulation, and if so, whether it also differs when reports are matched across congruent and integrated trials [ 18 ]. We do so by exploiting a well-known example of multisensory integration: the sound-induced flash illusion [ 42 , 43 ]. In the ‘fission’ version of this illusion, if observers are presented with two brief beeps and one visual flash, they often report seeing two visual flashes. In the ‘fusion’ version of this illusion, if observers are presented with one beep and two visual flashes, they sometimes report seeing one visual flash [ 44 , 45 ]. Interestingly, participants' reports of the number of visual flashes in these illusory cases may be equivalent to reports in conditions with congruent audiovisual stimulation, where the number of flashes and beeps is the same. These conditions of distinct-stimulation-but-identical-report in the sound-induced flash illusion allow us to test whether metacognitive confidence in judgements about the numbers of flashes is different between congruent and integrated stimulation, and to evaluate if confidence is different when the percept (i.e. the number of flashes) is matched across conditions. Previous research supports the hypothesis that phenomenological distinctions can be made between genuine flashes and illusory flashes [ 46 ]. Therefore, even if perceptual reports about the number of flashes are the same across conditions, it seems possible that metacognitive systems may be able to index differences by producing different levels of confidence. 
In our experiment, on each trial observers were presented with 0–2 flashes and 0–2 beeps and were asked to judge two things: (1) the number of flashes that were presented (or, on beep-only trials, the number of beeps), and (2) their confidence in their judgement about the number of flashes. To anticipate, our results showed that the profile of metacognition was marked by higher confidence for congruent stimulation and lower confidence for integrated stimulation, and that even when reports were matched across congruent and integrated trials, confidence was still higher for congruent stimuli. We discuss these results and their implications below.
Experiment—method Participants Forty-six undergraduate students at the University of Florida (33 women, 13 men, mean age = 19.02 years, s.d. = 3.05) volunteered to participate to earn course credit. Participants began the experimental session by completing an informed consent procedure (IRB no. 201902462, University of Florida). All experimental procedures were conducted in accordance with the Declaration of Helsinki. Stimuli and apparatus Participants were positioned approximately 50 cm away from a CRT monitor and were kept in this position for the entire experiment through the use of a chinrest. The computer volume on our Dell PC was set to 30% of system maximum, and the external speaker volume was set to 100%; this yielded an average of 70 dB when tested with consecutive stimulus presentations. Eight conditions were included in our experiment: four unisensory conditions (1 beep (1B), 2 beeps (2B), 1 flash (1F), and 2 flashes (2F)) and four bisensory conditions, including 1-beep/1-flash (1B1F), 2-beeps/1-flash (2B1F), 1-beep/2-flashes (1B2F) and 2-beeps/2-flashes (2B2F). All flashes were presented for 10 ms; all beeps were also 10 ms in duration. In the 1B1F condition, the beep and flash were presented simultaneously. In the 2B2F condition, the beeps and flashes were presented simultaneously, with a 50 ms gap between the initial beep–flash presentation and the second. In the 1B2F condition, there were 50 ms between flashes, and the beep was presented with the first flash. In the 2B1F condition, there was 50 ms between beeps, and the flash was presented with the first beep. Procedure Participants began our task by reading our consent form and signing to provide written consent. Next, participants reported their sex and age for our records. Then they were asked to adjust the chinrest to a comfortable height. Lastly, participants were provided instructions on how to complete the beep–flash illusion task and began a set of eight practice trials. 
The practice trials consisted of two trials demonstrating the beep sound that would be used, two trials demonstrating what the flash stimulus on the screen looked like, and four trials providing an example of bisensory trials, combining the beep and flash. For the beep-only practice trials, the participants had to report the number of beeps they heard and their confidence level in their decision. For the flash trials, the participants had to report the number of flashes they saw and their confidence level. For the bisensory practice trials, the participants had to report the number of flashes they perceived and their confidence level. Following the practice trials, the participants began the actual experiment consisting of 240 pseudorandomly ordered trials from all eight conditions, which were split up into six blocks of 40 trials. Unfortunately, despite using MATLAB's functions to randomize stimuli properly, we failed to randomize the starting seed in the program (using ‘rng shuffle’), and thus, 27 of our 46 participants received the same pseudorandomized order of trials. Participants were allowed to take a break in between each block. As with the practice task, the participants were presented with eight possible conditions, which were pseudorandomly ordered: 1B, 2B, 1F, 2F, 1B1F, 1B2F, 2B1F, 2B2F. Trials were structured so that each began with a white fixation cross in the middle of a black screen for 1000 ms, followed by the presentation of stimuli, and then by a prompt asking for the participant's responses. The white flash was centred on the screen, approximately 4° below fixation. After the stimulus presentation, in the 1F, 2F, 1B1F, 1B2F, 2B1F and 2B2F conditions, participants had to report the number of flashes they perceived and their confidence in that decision for each trial. Confidence was rated on a discrete 1–4 scale, with 1 = not at all confident, and 4 = extremely confident. 
In the 1B and 2B conditions, they reported the number of beeps they perceived, and their confidence in the decision for each trial. In total, the experiment lasted approximately 40 min on average.
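The condition structure described above can be summarized as a small timing table. This is a sketch of the design, not the authors' original MATLAB code, and it assumes the 50 ms figure is the gap between the 10 ms stimuli (i.e. 60 ms onset-to-onset asynchrony).

```python
# Stimulus onset times (ms) for the eight conditions described above.
# All stimuli last 10 ms; repeated stimuli are separated by a 50 ms gap
# (assumed here to be offset-to-onset, giving a 60 ms onset asynchrony).
conditions = {
    "1B":   {"beep_onsets": [0],     "flash_onsets": []},
    "2B":   {"beep_onsets": [0, 60], "flash_onsets": []},
    "1F":   {"beep_onsets": [],      "flash_onsets": [0]},
    "2F":   {"beep_onsets": [],      "flash_onsets": [0, 60]},
    "1B1F": {"beep_onsets": [0],     "flash_onsets": [0]},
    "2B1F": {"beep_onsets": [0, 60], "flash_onsets": [0]},    # 'fission' stimulus
    "1B2F": {"beep_onsets": [0],     "flash_onsets": [0, 60]},  # 'fusion' stimulus
    "2B2F": {"beep_onsets": [0, 60], "flash_onsets": [0, 60]},
}

# Each of the 240 trials presents one of these conditions; participants then
# report the number of flashes (or beeps, on beep-only trials) and confidence.
for name, cond in conditions.items():
    print(name, cond["beep_onsets"], cond["flash_onsets"])
```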
Results As shown in figure 1 a , we were able to successfully create stimulus conditions that frequently resulted in both ‘fission’ and ‘fusion’ illusions. For example, participants in the 1F2B condition (the ‘fission’ illusion condition) frequently reported two flashes (mean = 1.67, s.d. = 0.26), compared with the 1F1B condition, where they frequently reported one flash (mean = 1.06, s.d. = 0.12). Before testing whether the difference between these conditions was significant, we conducted a Shapiro–Wilk test, which suggested a deviation from normality ( W = 0.95, p = 0.049); we therefore used a Wilcoxon signed-rank test, which indicated that the average number of flashes reported in the two conditions was significantly different ( p < 0.001). Participants in the 2F1B condition (the ‘fusion’ illusion condition) frequently reported one flash, compared with the 2F2B condition, where they frequently reported two flashes. Here too, a Shapiro–Wilk test indicated a deviation from normality ( W = 0.93, p = 0.01), and a Wilcoxon signed-rank test indicated that the average number of flashes reported in the two conditions was significantly different ( p < 0.001). Next, we plotted the average confidence across our four stimulus conditions ( figure 1 b ). On average, confidence was highest in the 1F1B (3.38) and 2F2B conditions (3.22), and lower in the 1F2B (3.06) and 2F1B (3.15) conditions. However, our most important analysis in this project focused on trials where the type 1 report was matched between different conditions. Specifically, certain conditions frequently resulted in reports of one flash (1F1B; 2F1B) or two flashes (1F2B; 2F2B). We hypothesized that confidence judgements would be able to distinguish congruent multisensory sensations from illusory multisensory sensations, even when the type 1 report was the same. 
To answer this question, we first selected all of the trials that resulted in a report of one flash in the 1F1B and 2F1B conditions, and all of the trials that resulted in a report of two flashes in the 1F2B and 2F2B conditions. Then, we computed the average confidence for each subject for these trials ( figure 1 c ). As can be seen in the figure, confidence was highest for congruent multisensory trials, and lower for illusory multisensory trials; this was true not only when one flash was reported ( W = 771; p < 0.001), but also when two flashes were reported ( W = 105; p < 0.001). For unisensory trials, confidence was higher when judging the numbers of beeps compared with the numbers of flashes, which is in line with the general conception of the auditory modality being more precise in the temporal domain [ 47 ]. Specifically, confidence for the 1B (mean = 3.71, s.d. = 0.42) and 2B conditions (mean = 3.69, s.d. = 0.43) was higher than average confidence in the 1F (mean = 3.38, s.d. = 0.5) or 2F conditions (mean = 3.21, s.d. = 0.54). In addition to these analyses of averages within a condition, confidence can also be analysed in terms of correct and incorrect trials within each condition. Within unisensory conditions, confidence was much higher for correct compared with incorrect trials. When computing the average confidence across subjects for unisensory visual trials (after excluding subjects that did not have any incorrect trials), confidence for correct trials in the 1F condition was much higher (mean = 3.42, s.d. = 0.50) than confidence in incorrect trials (mean = 2.68, s.d. = 0.83). This general trend was also evident in the 2F condition, with confidence slightly higher in correct (mean = 3.16, s.d. = 0.68) compared with incorrect trials (mean = 3.02, s.d. = 0.68). These trends held for unisensory trials in the auditory domain, with confidence being much higher for correct compared with incorrect trials in the 1B (correct: mean = 3.73, s.d. 
= 0.38; incorrect: mean = 2.56, s.d. = 1.27) and 2B (correct: mean = 3.70, s.d. = 0.41; incorrect: mean = 2.85, s.d. = 1.06) conditions. With multisensory conditions, some interesting trends emerged. For the trials with congruent multisensory information, as in the unisensory conditions, correct trials exhibited higher confidence than incorrect trials. This was true for not only the 1F1B condition (correct: mean = 3.42, s.d. = 0.52; incorrect: mean = 2.17, s.d. = 0.95), but also the 2F2B condition (correct: mean = 3.27, s.d. = 0.58; incorrect: mean = 2.35, s.d. = 0.83). However, in the 1F2B condition, correct trials actually had slightly lower confidence than incorrect trials (correct: mean = 2.65, s.d. = 0.72; incorrect: mean = 3.06, s.d. = 0.66; W = 252.5, p = 0.01), and in the 2F1B condition, correct trials again had slightly lower confidence than incorrect trials (correct: mean = 2.84, s.d. = 0.61; incorrect: mean = 3.13, s.d. = 0.67; W = 177, p < 0.01). In other words, when subjects (incorrectly) integrated the stimuli, their confidence was slightly higher than when they (correctly) segregated the stimuli in these illusion conditions.
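The matched-report analysis above (select trials with the same flash report, average confidence per subject, then test nonparametrically) can be sketched as follows. The per-subject confidence means are simulated hypothetical numbers, and SciPy's `shapiro` and `wilcoxon` stand in for whatever statistics software the authors used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 46

# Hypothetical per-subject mean confidence (1-4 scale) for trials with a
# matched 'one flash' report: congruent 1F1B versus illusory (fusion) 2F1B.
conf_congruent = np.clip(rng.normal(3.4, 0.4, n_subjects), 1, 4)
conf_illusory = np.clip(conf_congruent - rng.normal(0.25, 0.2, n_subjects), 1, 4)

# The pipeline used in the text: check normality of the paired differences,
# then fall back to a Wilcoxon signed-rank test if normality is rejected.
diffs = conf_congruent - conf_illusory
w_stat, p_normal = stats.shapiro(diffs)
res = stats.wilcoxon(conf_congruent, conf_illusory)
print(f"Shapiro-Wilk W={w_stat:.2f}, p={p_normal:.3f}")
print(f"Wilcoxon signed-rank p={res.pvalue:.4g}")
```

With a consistent congruent-over-illusory confidence advantage across subjects, the signed-rank test comes out clearly significant, mirroring the matched-report comparisons reported above.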
Discussion In this investigation, we aimed to study whether metacognitive confidence judgements differed between congruent and integrated (illusory) multisensory stimuli, and whether confidence differed between these two conditions when the reported percept was the same. Using the sound-induced flash illusion, we were able to successfully induce both the fission and fusion illusions, facilitating comparison with congruent bisensory conditions. Our results showed that, overall, confidence judgements were highest for congruent conditions and lowest for incongruent, illusory conditions. Further exploration showed that under conditions with matched reports, confidence was again higher for congruent conditions, and lower for illusory conditions. Together, these results support the conclusion that metacognition can distinguish between congruent and illusory multisensory information. Finally, additional analyses showed that, in general, correct trials had higher confidence than incorrect trials in many conditions (including unisensory visual, unisensory auditory, and congruent bisensory conditions), but for multisensory conditions with mismatches between the number of beeps and flashes (the 1F2B and 2F1B conditions), confidence was actually lower for correct trials compared with incorrect trials, revealing that the incorrect (integrated) trials had higher average confidence than correct (segregated) trials. These findings demonstrate the need to tease apart metacognitive differences across three types of multisensory processes: congruent multisensory perception, integrated multisensory perception, and segregated multisensory perception, in which distinct multisensory signals are successfully kept separate from one another. Currently, it is unknown whether metacognition across these three processes shows similar profiles across different types of multisensory tasks. Our results provide one step towards better understanding this phenomenon. 
Further, our findings stress the importance of future multisensory research to distinguish between two different metacognitive measures: metacognitive bias, and metacognitive sensitivity. Technically, metacognitive bias is defined as having relatively high or low confidence at a given performance level, while metacognitive sensitivity is defined by how effectively confidence judgements can distinguish between correct and incorrect judgements [ 4 ]. Moving forward, measures such as type 2 receiver operating characteristic (ROC) can be employed to effectively evaluate metacognitive sensitivity in multisensory tasks across an array of paradigms and conditions. Recent work on perceptual reality monitoring has highlighted the important role that metacognition may play in distinguishing between different sources of information, such as being aware of the differences between perceived and imagined sources of information in the environment [ 19 ]. According to this work, higher-order cortical regions such as prefrontal cortex may play an important role in making these types of source attribution judgements [ 48 , 49 ], as metacognition and reality monitoring may rely upon shared neural mechanisms [ 20 , 50 ]. Interestingly, in the multisensory literature, inferences about the source(s) of sensory information have also recently been conceived as a hierarchical process [ 51 ], with early sensory regions associated with unisensory estimates, and higher-order cortical regions associated with encoding uncertainty about the causal structure (i.e. the sources that give rise to sensory information) of the world [ 52 ]. 
The authors of [ 51 , 52 ] noted that the prefrontal cortex has previously been implicated in computations related to the causal structure [ 53 , 54 ], which raises an interesting question: are there shared neural mechanisms that support source monitoring in general, whether it be due to distinguishing between perception and imagination, or distinguishing between different sources of multisensory information in the environment? While conjecture on this point is purely speculative (for now), the brain's capacity to distinguish between different sources of sensory information likely extends across domains and tasks. For example, our research group recently demonstrated that confidence is higher for congruent multisensory information compared with integrated multisensory information, even under conditions with matched reports. Kimmet et al . [ 55 ] used an audiovisual speech (McGurk) task and demonstrated that, even when the reported syllable was the same, average confidence values were higher for congruent McGurk stimuli compared with integrated McGurk stimuli, for an array of audiovisual syllable combinations. Thus, despite a wealth of multisensory literature referring to integrated multisensory experiences as ‘illusions’ [ 56 – 60 ], an interesting trend is emerging: participants often know when experiences are integrated (or illusory), and when they come from a single source in the environment. Thus, we can return to the question we raised in the introduction about the metacognitive profile for integrated information: while two decades of research on multisensory integration have emphasized that integration results in an increase in precision in the combined estimate of multisensory information [ 30 – 32 , 61 ], it appears that metacognitive confidence in integrated estimates of sensory properties is lower than for congruent multisensory information from a single source. 
One can also wonder whether these metacognitive differences raise any interesting questions about the phenomenology of these illusions. For audiovisual speech illusions like the McGurk illusion, integrated audiovisual speech seems to be a purely ‘perceptual’ effect; while conflict between integrated auditory and visual information may result in confidence being lower than for congruent stimulation, the effect seems profoundly perceptual in nature. However, for other types of multisensory illusions, there may be more reason for questioning and investigating the phenomenological nature of reported effects. For example, in the sound-induced flash illusion, research has shown that observers are able to distinguish between illusory flashes and real flashes [ 46 ]. Similarly, in other illusions such as the spatial ventriloquist illusion, it would be interesting to see if observers could distinguish between auditory stimulation at one specific location and integrated audiovisual information that results in the auditory localization occurring at that same location (i.e. could they accurately identify that the spatial position is different across the two scenarios?) [ 18 ]. Rich debates have permeated the multisensory literature in the last decade regarding whether multisensory judgements are best reflected by truly perceptual effects or (cognitive) response biases [ 62 – 64 ], which is a non-trivial issue that has extended to other perceptual phenomena [ 65 – 67 ]. While additional evidence can be found to support the notion of specific effects like the sound-induced flash illusion being truly perceptual (e.g. by exhibiting feedback resistance, as in [ 68 ], or showing correlates in early cortical areas [ 69 , 70 ]), further work may be needed to illuminate how metacognitive differences across conditions relate to phenomenology. 
Lurking beneath these issues is a particularly difficult issue to resolve: if multisensory perception is indeed Bayesian in nature [ 37 , 71 – 75 ], then multisensory perception is influenced by priors. How can we determine which influences on priors are cognitive in nature, versus perceptual in nature? Sensory experience can be instructive in many ways; for example, the light-from-above prior can be altered by sensory experience and influence perceptual judgements in later trials [ 76 ]. But sensory experience can also be informative with regard to stimulus frequencies or sensory rewards, which influence perceptual judgements via more ‘cognitive’ influences [ 77 ]. At present, there may not be a clear-cut way to determine which influences change phenomenology, and which simply alter perceptual decision making. While metacognitive or ‘type 2’ judgements may provide some insights into this question [ 78 ], more work is needed to further parse these issues. Overall, we think that the next decade of multisensory research will be especially fruitful, and that the study of metacognition within multisensory paradigms will yield many insights into the nature of the neural basis of metacognition and the function(s) that it serves. One hypothesis regarding the purpose of metacognition relates to information-seeking [ 79 – 81 ], in that specific metacognitive signals may drive further exploratory or information-gathering behaviours. Specifically, metacognition may link to information-seeking via some type of inverted U-function, where extremely high or extremely low confidence is associated with little information-seeking (if you know what something is, or information comes from an extremely noisy source, it may not be worthwhile to pursue further information), but intermediate levels of confidence may be linked to greater information-seeking to resolve ambiguities in stimuli. 
In this sense, perhaps lower levels of confidence for integrated multisensory stimuli could drive further information-seeking to determine whether the integrated signals truly came from a single source, or whether further exploration could lead to a more accurate inference about multiple sources of information being present. Moving forward, research could pursue the metacognitive profile for multisensory judgements across an array of difficulty levels and in environments where participants can make choices about how long to sample information, to determine if these hypotheses are correct. We think that in order to fully understand the brain's capacity for metacognition, multisensory paradigms must be used, and that better understanding the profile of metacognition in well-known illusions in the field represents a solid foundation to build upon.
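The hypothesized inverted-U link between confidence and information-seeking can be made concrete with a toy function; the quadratic form and the [0, 1] confidence scale are purely illustrative assumptions, not a model proposed in the literature cited above.

```python
# Toy model of the hypothesized inverted-U between confidence and
# information-seeking: drive peaks at intermediate confidence and falls
# to zero at the extremes. The quadratic form is an illustrative assumption.
def info_seeking_drive(confidence):
    """Map confidence in [0, 1] to a normalized information-seeking drive."""
    return 4.0 * confidence * (1.0 - confidence)

for c in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"confidence {c:.2f} -> drive {info_seeking_drive(c):.2f}")
```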
One contribution of 16 to a theme issue ‘ Decision and control processes in multisensory perception ’. Hundreds (if not thousands) of multisensory studies provide evidence that the human brain can integrate temporally and spatially discrepant stimuli from distinct modalities into a singular event. This process of multisensory integration is usually portrayed in the scientific literature as contributing to our integrated, coherent perceptual reality. However, missing from this account is an answer to a simple question: how do confidence judgements compare between multisensory information that is integrated across multiple sources, and multisensory information that comes from a single, congruent source in the environment? In this paper, we use the sound-induced flash illusion to investigate whether confidence judgements are similar across multisensory conditions when the numbers of auditory and visual events are the same and when they are different. Results showed that congruent audiovisual stimuli produced higher confidence than incongruent audiovisual stimuli, even when the perceptual report was matched across the two conditions. Integrating these behavioural findings with recent neuroimaging and theoretical work, we discuss the role that prefrontal cortex may play in metacognition, multisensory causal inference and sensory source monitoring in general.
Ethics All research was conducted in accordance with IRB201902462 at the University of Florida. Data accessibility Our data are available from the Open Science Framework website: https://osf.io/p6f3k/ . If you have any further questions about the data or files used to run (or analyse) this experiment, please contact Brian Odegaard: [email protected] . Authors' contributions R.M.: data curation, formal analysis, resources, software, writing—original draft; R.F.: data curation; G.C.: data curation; C.E.M.: data curation, methodology, writing—original draft; S.R.: writing—original draft, writing—review and editing; J.S.: data curation; B.O.: conceptualization, data curation, formal analysis, methodology, project administration, supervision, validation, writing—original draft, writing—review and editing. All authors gave final approval for publication and agreed to be held accountable for the work performed herein. Conflict of interest declaration We declare we have no competing interests. Funding We received no funding for this study.
Philos Trans R Soc Lond B Biol Sci.; 378(1886):20220347
Introduction Human ABO antigens are located on the red cell surface, where they play an active role in the cells’ physiology and pathology. They are oligosaccharide structures found on leukocytes, platelets, and tissues, and are also present in soluble form in sweat, saliva, breast milk, and other body fluids [ 1 , 2 ]. The ABO ( ABO ), H ( FUT1 ), secretor ( FUT2 ) and Lewis ( FUT3 ) blood system genes control the expression of the carbohydrate repertoire present in areas occupied by microorganisms [ 3 ]. These genes play critical roles in the final ABO antigen structure of an individual’s body tissues and secretions [ 4 ]. The ABO, Lewis (Le), and Rhesus (Rh) blood group systems demonstrate how antigens can be classified into functional categories of structural proteins; enzymes; transporters and channels; adhesion molecules; and receptors for exogenous ligands, viruses, bacteria, and parasites [ 5 ]. Helicobacter pylori (H. pylori) is a helical, Gram-negative, microaerophilic bacterium known to colonize the mucous membrane of the human stomach [ 6 ]. H. pylori is a major risk factor for chronic gastritis, peptic ulcers, and gastric cancer [ 7 ]. Globally, H. pylori is a major gastric infection estimated to affect 50% of the population, in both developed and developing countries [ 8 ]; its prevalence is > 70% in developing countries [ 9 , 10 ]. In contrast, the prevalence of H. pylori infection in Yemen is not well defined, as various studies have reported a wide range of 10–92.8% [ 11 – 13 ]. Blood group antigens can serve as receptors for lectins carried on the surface of various pathogens, facilitating invasion and colonization by binding microbial toxins and inducing infection [ 14 ]. Clinical data on the association between H. pylori, gastric cancer, and ABO/Le are contradictory. A higher incidence of H. pylori infection in O individuals and a lower incidence in A individuals have been reported [ 15 ]. 
Strikingly, the risk of developing peptic ulcers was higher in O individuals in a large population-based study [ 16 ]. The link between the Le and Se phenotypes seems even more unclear. For instance, Se status and H. pylori infection have been shown to be independent risk factors for gastric disease, with a higher risk in non-secretor patients [ 17 ], although Leb (Se) seems to play a crucial role in H. pylori adhesion [ 14 ]. At the tissue level, H. pylori-infected patients with gastric ulcers show increased Lea and loss of H and Leb expression in the inflamed gastric mucosa [ 18 ]. Blood group antigen-binding adhesin A (BabA) and sialic acid-binding adhesin A (SabA) are important adhesins with carbohydrate-binding domains [ 19 – 22 ]. BabA binds to fucosylated glycoproteins carrying ABO/Le antigens, particularly H and Leb, which are expressed on gastric epithelial cells of secretors [ 23 ]. The tight adherence of BabA is believed to aid the delivery of multiple virulence factors, such as VacA and CagA, that impact the signaling pathways of the mucosal epithelium [ 24 ]. SabA interacts with glycoconjugates containing sialyl-Lea and sialyl-Lex antigens, which are elevated during inflammation [ 25 ]. Therefore, the ability of H. pylori to initiate and maintain infection may be influenced by the regulation of BabA and SabA [ 26 ]. Our study aimed to determine the prevalence of H. pylori infection, its association with the ABO and Lewis blood groups and secretor status, and the validity of two non-invasive diagnostic tests for H. pylori infection.
Methods The study was conducted from August to December 2019 at the Department of Endoscopy and/or the Clinical Gastroenterology outpatient service of the Educational Republican Hospital in Sana’a City. One hundred and three adult patients were included if they had symptoms of dyspepsia and upper gastrointestinal endoscopy was indicated. Occult blood testing was performed to exclude bleeding gastric and duodenal ulcers (One Step Fecal Occult Blood Test Device; Abon Biopharm, China). Patients who had taken H2-receptor antagonists, proton pump inhibitors, non-steroidal anti-inflammatory drugs, or antibiotics within the past 4 weeks were excluded. The sample size calculation was based on the established H. pylori prevalence of 92.8% reported in previous literature. With a margin of error (absolute precision) of ± 5% and a confidence level of 95%, the sample size was determined to be 103, using the formula N = Z²P(1 − P)/d², where N is the sample size, Z the Z statistic for a given level of confidence (1.96 for a 95% confidence level), P the expected prevalence or proportion, and d the margin of error or precision (d = 0.05). H. pylori stool antigen test H. pylori stool antigen was detected using the one-step H. pylori stool Ag test device (ABON BioPharma, Hangzhou, China), a chromatographic immunoassay for the qualitative detection of H. pylori antigen in human feces that provides results in 10 min with a sensitivity and specificity of > 99.9% [ 27 ]. Briefly, 50 mg (from formed stool) or two drops of liquid stool were transferred to a specimen collection tube containing extraction buffer. 
The tube was agitated vigorously by hand and left undisturbed for 2 min. Two drops of the extracted specimen were transferred to the specimen well on the test device, and the results were recorded after 10 min. According to the manufacturer’s instructions, the test is defined as positive if two distinct colored lines appear, and negative if one line appears. H. pylori serum antibody test Serum samples were obtained by centrifugation at 3,000 rpm for 10 min. For each serum sample, three drops were used to detect H. pylori antibodies using a rapid chromatographic immunoassay commercial kit (H. pylori One Step Test Device, DiaSpot H. pylori, Indonesia). This test qualitatively and selectively detects H. pylori antibodies in the serum or plasma by utilizing a combination of H. pylori antigen-coated particles and anti-human IgG. This test has a sensitivity > 95.9% and specificity of 75.9%, with an overall accuracy of 85.2%, compared to the culture/histology of endoscopic specimens for H. pylori [ 28 ]. The test was performed in accordance with the manufacturer’s instructions without any modifications. Briefly, three drops of the serum sample were applied directly to the sample well in the test device, and the results were read after 10 min. The appearance of one colored line in the control region indicated a negative result, whereas the appearance of two colored lines in the test region and control regions indicated a positive result. Expression of ABO and Rh antigens in blood ABO and Rh blood group antigens were determined using standardized hemagglutination tests according to the manufacturer’s instructions with anti-A, anti-B, anti-AB and anti-D monoclonal antibodies (Lorne Labs. UK) [ 29 ]. Expression of Lewis antigens in blood Lewis blood group antigens were determined using standardized hemagglutination tests according to the manufacturer’s instructions with anti-Le a , and anti-Le b monoclonal antibodies (Lorne Labs. UK) [ 29 ]. 
Expression of A, B, H, Le a and Le b antigens in saliva The ABH and Lewis antigens in saliva were tested by hemagglutination inhibition methods with anti-A, anti-B, anti-H, anti-Le a , and anti-Le b monoclonal antibodies (Lorne Labs, UK) [ 29 ]. Ethical approval This study was approved by the Research Ethics Committee of the Faculty of Medicine and Health Sciences, Sana’a University, Yemen. Written informed consent was obtained from all participants according to the Helsinki Declaration principles. Statistical analysis The generated data were coded, entered, validated, and analyzed using SPSS 23 (SPSS, Chicago, IL, USA). Associations between categorical variables were tested using the chi-square test, and the corresponding p-values are reported. Sample means were compared using Student’s t-test. Statistical significance was set at P < 0.05. Considering the stool Ag test as the gold standard, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the serum antibody test were calculated.
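The two calculations described in this section can be sketched numerically: the sample-size formula N = Z²P(1 − P)/d², and the diagnostic indices taken against the stool Ag test as the gold standard. The 2×2 counts below are an assumption reconstructed from the discordances reported in the Results (27 antigen-positive/antibody-negative and 9 antibody-positive-only samples), so small rounding differences from the published figures are possible.

```python
# Sketch: sample-size formula and diagnostic indices used in this study.
# The 2x2 counts (tp/fp/fn/tn) are reconstructed assumptions, not raw data.
import math

def sample_size(p, d=0.05, z=1.96):
    """Minimum n to estimate a proportion p with absolute precision d (95% CI)."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and overall agreement vs. gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "agreement": (tp + tn) / (tp + fp + fn + tn),
    }

print(sample_size(0.928))  # expected prevalence 92.8%, d = 0.05, z = 1.96
print(diagnostic_indices(tp=56, fp=9, fn=27, tn=11))
```

With these counts the sketch reproduces the reported specificity (55%), PPV (86%), NPV (29%) and 65.1% overall agreement; the sensitivity comes out near the published 68% up to rounding.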
Results Socio-demographic and risk factors of H. pylori infection A total of 103 patients were included in the study, 40 (38.8%) male and 63 (61.2%) female. The median age of H. pylori-infected patients was 25 years, with ages ranging from 19 to 57 years. The highest percentage of patients infected with H. pylori were under 30 years old, accounting for 70 (68%) of the total. The prevalence of H. pylori was higher in rural areas (57.3%) than in urban areas (42.7%). Interestingly, most patients obtained their water from non-refined sources (62; 60.2%), and the majority reported routinely eating their meals at home (82; 79.6%). The distribution of ABO groups among patients showed that group O was the most prevalent at 60.2%, followed by A at 33% and B at 6.8%. Additionally, 94 (91.3%) of patients were Rhesus positive, while 9 (8.7%) were Rh negative (Table 1 ). Prevalence of H. pylori infection The overall prevalence of H. pylori infection was 80.6% among symptomatic patients. Of the 103 patients examined using the H. pylori stool antigen assay, 83 (80.6%) were positive and 20 (19.4%) negative. The number of positive stool Ag samples was higher in females (51; 49.5%) than in males (32; 31.1%). In the assay for serum antibodies against H. pylori, there were 65 (63.1%) seropositive cases: 43 (41.8%) females and 22 (21.4%) males. The rate of H. pylori infection was not significantly higher in female than in male patients with either method (p = 0.906; p = 0.178) (Table 2 ; Fig. 1 ). The sensitivity of the serum Ab test in relation to the stool Ag test was 68%, with a specificity of 55%, a positive predictive value of 86%, and a negative predictive value (NPV) of 29%. 
The agreement between the two tests in the diagnosis of H. pylori infection was 65.1% (slight agreement) (Table 3 ). The antibody test failed to detect 27 samples that were H. pylori-positive by the antigen test, and nine samples were positive only by the antibody test; thus, results were concordant for 67 samples and discordant for 36 (Table 3 ). Association of ABO, Rh phenotypes and secretor status with gender in patients with H. pylori infection Of the 103 patients, 34 (33%) were blood group A, 7 (6.8%) blood group B, and 62 (60.2%) blood group O. With regard to gender, of the 40 male patients, 28 (27.2%) had group O, 8 (7.8%) group A, 4 (3.9%) group B and 0 (0%) group AB; of the 63 female patients, 34 (33.0%) had group O, 26 (25.2%) group A, 3 (2.9%) group B and 0 (0.0%) group AB. There was a statistically significant variation in ABO blood groups between males and females, and blood group O was more frequent among females (34) than males (28) (p = 0.047) (Table 4 ). A total of 94 (91.3%) patients were Rh positive and 9 (8.7%) Rh negative (Table 1 ): 36 (34.95%) males and 58 (56.3%) females were Rh positive, while 4 (3.88%) males and 5 (4.85%) females were Rh negative. Of the H. pylori-infected patients, 76 (92%) were Rh positive, as were 18 (90%) of the non-infected patients. There was no statistically significant difference in the distribution of Rh phenotypes between genders (p = 0.721) (Table 4 ). The frequency distribution of secretors and non-secretors by gender is presented in Table 4 . Of the 103 patients, 66 (64.1%) were secretors and 37 (35.9%) non-secretors. Among the 40 male patients, 24 (23.3%) were secretors and 16 (15.5%) were non-secretors. 
The 63 female patients included 42 (40.8%) secretors and 21 (20.4%) non-secretors. However, the association between gender and secretor status was not statistically significant (p = 0.497). Association of ABO, Lewis blood group phenotypes and secretor status in patients infected by H. pylori Regarding the ABO blood group, 42 (51%) of H. pylori-infected patients were blood group O, followed by blood group A in 34 (41%) and blood group B in 7 (8%), whereas all 20 (100%) non-infected patients were blood group O. There was a significant association between ABO blood group and infection status (p = 0.001). Of the 83 H. pylori-infected patients, 52 (63%) were secretors and 31 (37%) non-secretors; of the 20 non-infected patients, 14 (70%) were secretors and 6 (30%) non-secretors. There was no significant association between secretor status and H. pylori infection as determined by the noninvasive immunochromatographic stool Ag assay (p = 0.367) (Table 5 ). Secretor status corresponds to the Lewis Le (a-b+) and Le (a+b+) phenotypes, and non-secretor status to the Le (a-b-) and Le (a+b-) phenotypes. The most common Lewis phenotype was Le (a+b+) (44; 42.7%), followed by Le (a+b-) (34; 33%), Le (a-b+) (22; 21.4%) and Le (a-b-) (3; 2.9%). Among the 83 H. pylori-infected patients, 52 (63%) were secretors (Le (a-b+), 17 (20%); Le (a+b+), 35 (42%)) and 31 (37%) were non-secretors (Le (a+b-), 28 (34%); Le (a-b-), 3 (4%)). Among the 20 non-infected patients, 14 (70%) were secretors (Le (a-b+), 5 (25%); Le (a+b+), 9 (45%)) and 6 (30%) were non-secretors, all with the Le (a+b-) phenotype. 
(Table 5 ) With regard to secretor status in saliva and ABO blood groups in H. pylori infection, 66 (64.1%) patients were secretors: 40 (60.6%) were blood group O, 22 (33.4%) group A and 4 (6.1%) group B. Of the 37 (35.9%) non-secretors, 22 (59.5%) were blood group O, 12 (32.4%) group A and 3 (8.1%) group B. There was no association between secretor status and ABO blood groups in patients infected by H. pylori (p = 0.924) (Table 6 ). The link between the Lewis blood group and infection status was not significant (p = 0.807), and there was no correlation between the Lewis blood group and ABO in H. pylori patients (p = 0.671).
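The reported association between ABO group and infection status (p = 0.001) can be sketched with a Pearson chi-square test on the counts given above; for a 3×2 table df = 2, where the chi-square survival function reduces to exp(−χ²/2). The exact published p-value may differ slightly depending on the test variant used, so this is a sketch rather than a reproduction.

```python
# Sketch: Pearson chi-square test of ABO blood group vs. H. pylori status,
# using the counts reported above (infected: O = 42, A = 34, B = 7;
# non-infected: O = 20, A = 0, B = 0). Stdlib-only; df = 2 for a 3x2 table.
import math

def chi_square_3x2(table):
    """table: rows of [infected, non_infected]; returns (chi2, p) for df = 2."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = sum(
        (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
        / (row_totals[i] * col_totals[j] / n)
        for i in range(len(table))
        for j in range(2)
    )
    p = math.exp(-chi2 / 2)  # chi-square survival function for df = 2
    return chi2, p

abo_counts = [[42, 20], [34, 0], [7, 0]]  # rows: O, A, B
chi2, p = chi_square_3x2(abo_counts)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

Note that two expected counts here fall below 5 (the B row), so in practice an exact test might be preferable to the chi-square approximation.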
Discussion The prevalence of H. pylori infection in Yemen was found to be high at 80.6%, similar to other countries in the Middle East and North Africa region [ 30 ]. However, a recent study from Yemen found a lower rate of infection at 19.3%, which could reflect factors such as socioeconomic status and living conditions [ 31 ]. No previous study has examined the ABO and Lewis group systems and secretor status in H. pylori-infected patients in Yemen. In this study, the rate of H. pylori infection in females was not significantly different from that in males. Some studies show higher rates in females, others in males, but overall there is no significant relationship between gender and H. pylori infection [ 31 – 33 ]. Although some studies suggest males may have a higher rate of H. pylori infection, in general there is no significant relationship between sex and infection rate, implying that hormonal differences between the sexes play no role in H. pylori infection [ 34 – 37 ]. Our study found that H. pylori infection is most prevalent among patients under 30 years old, with a significant association between infection rate and age group. This finding is supported by studies in Iraq and Iran [ 38 , 39 ]. However, other studies have shown high infection rates in the 43-50-year-old age group, although this difference was not significant [ 40 , 41 ]. Previous studies observed that H. pylori infection is more common in people under 50 years old and may be involved in the development of colorectal adenomatous polyps [ 42 , 43 ]. The age group under 30 is particularly vulnerable to infectious diseases due to their active lifestyle and potential lack of personal hygiene and healthcare. H. pylori colonization starts at a young age, and exposure to multiple sources of infection increases with age [ 43 – 47 ]. This study found that the prevalence of H. 
pylori, a bacterium that can cause stomach ulcers and cancer, was slightly higher in rural areas than in urban areas, but this difference was not statistically significant. This finding agrees with studies from Yemen and Iraq [ 31 , 44 ] but disagrees with studies from Tanzania and China [ 45 , 46 ]. The variation in prevalence could be due to factors such as poor water supply, inadequate sewage disposal, social habits, and low education levels among low-income populations [ 48 , 49 ]. Our data revealed that the serum antibody test displayed a sensitivity of 68% compared with the stool antigen test, with a specificity of 55%; it exhibited a PPV of 86%, while its NPV was 29%. A study conducted in Yemen revealed that the serum antibody test had a sensitivity of 50% and specificity of 65%, with a PPV of 65% and an NPV of 50% [ 47 ]. Abadi and colleagues discuss the benefits of stool antigen testing, including its accuracy, ease of use, and popularity. However, the test is limited by factors such as bleeding, antibiotic use, bowel movements, and proton pump inhibitors. The authors recommend using monoclonal antibodies to measure and eliminate H. pylori, as well as for initial screening in clinical settings [ 50 ]. The stool antigen test is a reliable diagnostic tool for H. pylori according to numerous studies, in which it has been compared to gold-standard methods such as the breath test and biopsy bacterial culture; for small laboratories lacking advanced equipment, it has been proposed as a useful alternative [ 50 – 54 ]. The present study observed a significant association between H. pylori infection and ABO blood group, with patients of blood group O being more susceptible to H. pylori infection. A meta-analysis suggests that O blood type may be a risk factor for H. pylori infection [ 55 ]. Additionally, the study confirms that O blood type increases the risk of H. 
pylori infection, while A/AB blood type is associated with a predisposition to gastric cancer [ 56 ]. The ABO and Lewis histo-blood group antigens may affect susceptibility to H. pylori infection [ 3 ]. In our investigation, most H. pylori patients were found to be O secretors, but there was no significant difference in secretor status between infected and non-infected patients. A recent study found that non-secretors were more prone to H. pylori infection [ 7 ], but another study found that secretors were at higher risk [ 57 ]. The bacterium may attach to the Lewis b antigen, which is expressed on the surface of the gastric mucosa and correlates with infectious disease risk [ 58 ], and non-secretors may be resistant to H. pylori [ 59 ]. H. pylori has a protein called BabA that binds to type 1 H antigens, which are commonly found in the stomach lining [ 60 ], and is able to attach to Leb, which is found at high levels in stomach cells associated with the O and secretor phenotypes [ 62 ]. This may explain why people with type O blood are more prone to gastrointestinal diseases such as gastritis and stomach ulcers. Different studies have found varying prevalences of Lewis blood group phenotypes [ 60 – 63 ]. Our study found Le (a+b+) to be the most common phenotype. A meta-analysis reported that secretors often have the Le (a-b+) phenotype, while non-secretors have Le (a-b-), and the prevalence of secretor status and its phenotypes varies across populations: the secretor phenotype is present in all populations but more prevalent in Caucasians, whereas the Le (a+b-) phenotype is found in over 20% of Caucasians and Blacks but is rare in Asians [ 7 ]. Two similar studies found Le (a-b+) to be most prevalent [ 62 , 63 ]. The Le (a-b+) phenotype is frequent in all populations, while the Le (a+b+) phenotype is more common in Asians and Polynesians; the Le (a-b-) phenotype is rare in Caucasians but common among Blacks [ 63 , 64 ]. 
The cause of these differences is unclear, but disease-causing microorganisms are thought to play a role in the process [ 15 ]. Data from this study showed a significant association between ABO blood groups and H. pylori infection, but not between the Lewis or secretor phenotypes and infection. Previous studies have shown an association between the antigens of these groups and susceptibility to H. pylori colonization [ 14 , 15 ]. Recent research suggests that these blood group systems can affect susceptibility to infection, disease progression, and immune response [ 26 ].
Conclusion Based on our findings, it is important to consider an individual's blood group when assessing their risk of H. pylori infection. Those with blood group O may be more susceptible to infection owing to their increased likelihood of being secretors. The presence of the Le(a+b+) phenotype may also increase an individual's risk of H. pylori infection. To screen effectively for H. pylori infection, we recommend the sensitive H. pylori stool antigen (Ag) test as a non-invasive screening method before resorting to invasive procedures such as endoscopy or biopsy. This approach could help identify infected individuals earlier and potentially prevent complications associated with untreated H. pylori infection.
Background The ABO and Lewis blood group antigens are potential factors in susceptibility to H. pylori infection. This research aimed to examine the prevalence of Helicobacter pylori (H. pylori) infection and its association with the ABO and Lewis blood group systems and secretor status in symptomatic Yemeni patients. Methods In a cross-sectional study, 103 patients referred for endoscopy because of dyspepsia were included. H. pylori infection was assessed using stool antigen and serum antibody rapid tests. The ABO and Lewis blood group systems were examined using a hemagglutination assay. Saliva samples were investigated to identify the secretor phenotype using a hemagglutination inhibition test. Results The prevalence of H. pylori infection was 80.6%, with a higher rate of infection in females than in males. The ABO blood groups differed significantly between males and females (p = 0.047). The O blood group was prevalent among H. pylori patients, especially secretors. There was a significant association between ABO blood groups and H. pylori infection (p = 0.001). The Le(a+b+) phenotype was the most common, followed by Le(a+b-), Le(a-b+), and Le(a-b-). The Lewis blood group systems and secretor status of symptomatic patients were not associated with H. pylori infection. The serum antibody test for H. pylori achieved a poor sensitivity of 68% and a specificity of 55%, with a positive predictive value (PPV) of 86%, a negative predictive value (NPV) of 29%, and an accuracy of 65.1%. Conclusion The prevalence of H. pylori infection was high in Yemeni patients. Infection was linked to the O and Le(a+b+) secretor phenotypes. The H. pylori stool Ag test is the most reliable non-invasive diagnostic method for detecting H. pylori infection.
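The reported diagnostic statistics for the serum antibody test can be reproduced from a 2 × 2 confusion matrix against the stool-antigen reference. The counts below are illustrative values chosen to be consistent with the reported n = 103, 80.6% prevalence, and the five performance figures; they are assumptions, not taken from the paper's tables:

```python
# Illustrative confusion-matrix counts (assumed, consistent with n = 103
# and the reported prevalence and test performance; not from the paper):
tp, fn = 56, 27   # stool-Ag positive patients: serum Ab positive / negative
fp, tn = 9, 11    # stool-Ag negative patients: serum Ab positive / negative

sensitivity = tp / (tp + fn)                 # 56/83  ≈ 0.675
specificity = tn / (tn + fp)                 # 11/20  = 0.55
ppv = tp / (tp + fp)                         # 56/65  ≈ 0.862
npv = tn / (tn + fn)                         # 11/38  ≈ 0.289
accuracy = (tp + tn) / (tp + fn + fp + tn)   # 67/103 ≈ 0.651

for name, v in [("sensitivity", sensitivity), ("specificity", specificity),
                ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
    print(f"{name}: {100 * v:.1f}%")
```

With these counts the derived percentages round to the values reported in the Results, illustrating how the low NPV follows from the high prevalence in this cohort.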
Acknowledgements We want to thank our patients, without whom this study would not have been possible. Authors’ contributions Mohammed AW. Almorish conceived and designed the research. Mohammed AW. Almorish and Boshra Al-absi performed the experiments and collected the data. Ahmed M. E. Elkhalifa and Elham Elamin analyzed the data. Abozer Y. Eldedery and Abdulaziz H wrote the manuscript. All authors reviewed the manuscript. Funding The authors did not receive any funding. Data Availability The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. Declarations Ethics approval and consent to participate This study was approved by the Research Ethics Committee of the Faculty of Medicine and Health Sciences, Sana’a University, Yemen. The study was conducted in accordance with the principles of the Declaration of Helsinki. Written informed consent was obtained from all participants. Consent for publication Not applicable. Competing interests The authors declare that they have no competing interests.
BMC Infect Dis. 2023 Aug 8; 23:520
Introduction In limbless animals, lateral undulatory locomotion is the most common paradigm, in which the body bends laterally in a sinusoidal shape. This type of locomotion has long attracted the interest of scientists from the perspectives of evolutionary biology [ 1 , 2 ], physiology [ 3 – 5 ], morphology [ 6 – 8 ] and mechanics [ 9 , 10 ]. Several physical models of undulatory robots have been developed, inspired by snakes [ 11 – 14 ], salamanders [ 15 ], centipedes [ 16 ] and Caenorhabditis elegans [ 17 ], in order to demonstrate and understand different concepts involved in undulatory locomotion. In the complex body of animals, active and passive mechanics play a critical role. In undulatory locomotion, the role of passive dynamics has been little studied compared to active dynamics. Incorporating passive dynamics through materials and morphology can lead to energy-efficient, sustainable, robust, easy-to-control, self-adaptive and safe systems [ 18 ]. In [ 19 ], passive properties of the body are shown to help snakes manoeuvre through heterogeneous environments with minimal sensing. Furthermore, the passive stiffness of the lamprey tail has been investigated, showing that different stiffnesses generate different wake structures [ 20 ]. While the interaction of an animal body with its environment (exogenous effects) has been extensively studied in fluidic environments [ 10 , 20 – 23 ], little is known about the passive adaptability of undulatory locomotion on land. In [ 9 ], the endogenous effects of undulatory locomotion, characterized by body properties and kinematics, and its exogenous effects in a dry frictional environment are investigated. The authors modelled undulatory locomotion in an isotropic frictional environment and suggested that endogenous parameters do not play a significant role in gait modulation. However, these results have not been validated on a physical system.
Wang & Alben simulated the sinusoidal heaving of a thin, flexible foil at one end in an anisotropic dry frictional environment [ 24 ]. Findings give important insights into the role of resonance on the input power and speed of undulatory locomotion. Power increases and speed decreases at resonance in a low-frictional anisotropic environment; however, both speed and power vary smoothly in a high-frictional anisotropic environment. Some studies have modelled undulatory locomotion in granular media [ 25 , 26 ]. In [ 25 ], simulations are accompanied by a physical validation. The authors compared swimming speeds and forces obtained from simulated and physical models, demonstrating the importance of head drag on swimming speed and energy consumption. However, further investigation of the interconnection between endogenous and exogenous parameters and their effects on the dynamics of lateral undulatory locomotion is required. Therefore, in the present paper we endeavoured to correlate passive endogenous and exogenous effects of lateral undulatory locomotion for speed optimization. We will explore how exogenous effects, generated during body–environment interaction, and endogenous effects, generated by the inherent body stiffness and internal losses, influence the trajectory and the system speed. The results obtained from the mathematical model and physical system will be compared. Furthermore, we will suggest how to define the optimal body stiffness distribution to maximize the locomotion speed in relationship with specific environments. It should be noted that the objective of our study is not to replicate the snake-like lateral undulatory locomotion but rather to investigate and analyse the functional aspects of lateral undulatory locomotion involved in passive compliance using mathematical and physical models. 
Therefore, the physical model is designed accordingly, and functional aspects of endogenous and exogenous parameters are discussed in correspondence to the locomotion of animals.
Material and methods Mathematical modelling In our model, we assumed the animal body to be one-dimensional and discretized it into N links. The links are joined by viscoelastic springs to represent the endogenous parameters of the body. The bending stiffness of the springs is represented by $k_i$, and the internal damping constant by $b_i$, where $i$ represents the joint number such that $i \in [1, N-1]$. For the present paper, figure 1 shows the schematic of the body divided into five links ($N = 5$) and actuated at one end. The exogenous parameters are modelled by an anisotropic dry frictional model, constituted by frictional forces in the tangential (equation (2.1)) and the normal (equation (2.2)) directions: $$\mathbf{F}_{t,i} = -\mu_t(v_{t,i})\, m_i g\, \mathrm{sgn}(v_{t,i})\, \hat{\mathbf{t}}_i \quad (2.1)$$ and $$\mathbf{F}_{n,i} = -\mu_n(v_{n,i})\, m_i g\, \mathrm{sgn}(v_{n,i})\, \hat{\mathbf{n}}_i. \quad (2.2)$$ Here $g$ is the gravitational acceleration constant, $m_i$ is the mass of the $i$th link, $\mathrm{sgn}(\cdot)$ is the sign function that gives the sign of the normal or tangential velocity, and $\hat{\mathbf{t}}_i$ and $\hat{\mathbf{n}}_i$ are unit vectors in the tangential and normal directions (figure 1); $\mu_t$ is the tangential frictional coefficient as a function of the tangential velocity, and $\mu_n$ is the normal frictional coefficient as a function of the normal velocity. We consider the frictional coefficients as functions of speed because of the viscoelastic contact between our robo-physical model and the substrate (see §2.2); the trends of the frictional coefficients are therefore found experimentally on different substrates and then approximated by regression analysis (see §3.3 for more details). According to resistive force theory, to produce forward thrust for lateral undulatory locomotion, the normal frictional coefficient should be higher than the tangential frictional coefficient [ 27 , 28 ]. The $\mathrm{sgn}(\cdot)$ function in (2.1) and (2.2) is approximated as $$\mathrm{sgn}(f) \approx \tanh(f/\varepsilon), \quad (2.3)$$ where $f$ can be any function whose sign we want to determine, and the parameter $\varepsilon$ controls the sharpness of the square wave that represents the sign function. The accuracy of the approximation of $\mathrm{sgn}(\cdot)$ increases as ε decreases.
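A minimal sketch of this anisotropic friction model follows. The tanh smoothing of the sign function and all numerical values (coefficient trends, speeds) are illustrative assumptions, not the authors' exact implementation:

```python
import math

def smooth_sign(x, eps=1e-3):
    # Smooth approximation of sign(x); smaller eps gives a sharper transition.
    # (tanh is one common choice of smoothed square wave; assumed here.)
    return math.tanh(x / eps)

def friction_force(mu, v, m, g=9.81, eps=1e-3):
    # Signed dry-friction force on one link along one direction:
    # F = -mu(v) * m * g * sgn(v), with the smoothed sign function.
    return -mu(v) * m * g * smooth_sign(v, eps)

# Invented speed-dependent coefficient trends: exponential plateau for the
# normal direction (as observed experimentally), constant tangential.
mu_n = lambda v: 0.9 - 0.4 * math.exp(-abs(v) / 0.05)
mu_t = lambda v: 0.3

m_link = 0.132 / 5                          # one of five links (132 g total), kg
F_n = friction_force(mu_n, 0.1, m_link)     # normal component at v = 0.1 m/s
F_t = friction_force(mu_t, 0.1, m_link)     # tangential component
print(F_n, F_t)  # |F_n| > |F_t|: the anisotropy needed for forward thrust
```

Both forces oppose the velocity (negative sign), and the larger normal coefficient provides the frictional anisotropy required by resistive force theory.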
We found that to closely match the trend of the experimental speed, ε should on average be set to 10 −3 , especially on relatively rough substrates, i.e. cloth and cardboard (electronic supplementary material, S1). Physically, the smaller the ε, the sharper the shift in the direction of the frictional force. Since we used viscoelastic material around the wheels (see §2.2), during locomotion there is a cyclic transfer of perturbations to the surfaces of the wheels. Because of these disturbances on the viscoelastic material, another friction component arises: the hysteresis component [ 29 ]. Furthermore, at the peak of the cycle, when the direction of the frictional force changes, the suddenness of the change is limited by the time the rubber material needs to reach its relaxed state before the next cycle. In our model, ε describes this phenomenon, and its value depends on the type of material used and the magnitude of the cyclic disturbances around the wheels; in our case, ε was observed to change on relatively smoother substrates depending on the cyclic disturbances due to stick and slip. However, for the sake of simplicity and consistency, we set the value of ε to 10 −3 . Equations of motion are formulated using the Lagrangian function as follows: $$\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{\mathbf{q}}}\right) - \frac{\partial L}{\partial \mathbf{q}} + \frac{\partial R}{\partial \dot{\mathbf{q}}} = \mathbf{Q}_f.$$ Here, L is the Lagrangian function, $\mathbf{q}$ is the vector of generalized coordinates, $\mathbf{Q}_f$ is the generalized force of friction, and R is the viscous dissipation energy. Further details of the model can be found in [ 30 ]. Physical model We implemented a physical system to verify the behaviours achieved in simulation and to iterate on the simulation for behavioural predictions. The system consists of five links. Theoretically, a minimum of three links is required to model friction-driven undulatory locomotion [ 27 , 31 – 33 ], even though there are no specific criteria for determining the number of links.
In our case, it was practical to use five links so that there are at least three passive joints with which to vary the stiffness distribution. Furthermore, increasing the number of links increases power utilization [ 34 ]. The prototype is fabricated using three-dimensional printed polylactic acid (PLA) modules. The three-dimensional model of the prototype is shown in figure 2 a . To provide frictional anisotropy, we used two wheels on each module. The wheels consist of a bearing (7 mm external diameter, 4 mm internal diameter and 2.5 mm width) enveloped by a skin of Dragon Skin TM 30 (Smooth-On Inc.). A servo motor (28952, Amewi) is fixed on the head, and custom electronics and a battery supply provide oscillations at the desired amplitude and frequency. The angular amplitude and angular frequency of the servo motor, used as the actuator, are kept constant throughout the experiments and are set to (11/72)π rad (27.5°) and 15.7 rad s −1 (899.5° s −1 ), respectively. The waveform of the angular frequency is trapezoidal, as measured by recording and tracking the position of the servo motor horn using a camera and Kinovea software. The waveform is then approximated as defined in electronic supplementary material, S1, and shown in figure 3 . The total length and mass of the physical model are 287 mm and 132 g, respectively. Masses and lengths of individual links are listed in electronic supplementary material, S1. Joints are prototyped with five materials of different stiffness: Dragon Skin TM 10, Dragon Skin TM 30 (both from Smooth-On Inc.), polydimethylsiloxane (PDMS, Sylgard 184, Dow), Elastic 50A and Flexible 80A (both from Formlabs). Joints made of Dragon Skin TM 10, Dragon Skin TM 30 and PDMS materials are cast from their constituent parts at room temperature. Joints with Elastic 50A and Flexible 80A materials are fabricated by stereolithography (SLA) three-dimensional printing with a layer thickness of 0.1 mm.
Experimental characterization The equivalent bending stiffness of the joints is calculated experimentally by measuring their deflections and force responses (using a Z005 Universal Testing Machine, ZwickRoell) in a cantilever configuration, as shown in figure 2 b . Similarly, damping constants are calculated using the logarithmic decrement method [ 35 ], with tests performed in a cantilever beam configuration. The schematic is shown in figure 2 c . Videos of the cantilever beam tests are recorded with a high-speed camera (Phantom Micro C110) at a frame rate of 2300 f s −1 to determine the amplitude dissipation over time. To emulate different environments, we used four substrates: polyoxymethylene copolymer (POMC), plastic panel, cardboard, and cloth. As shown in figure 2 d , e , longitudinal and lateral frictional coefficients are found on these substrates. In the friction tests, we utilized two modules of the physical model connected by a joint. The modules are attached by a nearly inextensible nylon thread to a load cell (Nano17 SI-12-0.12, ATI Industrial Automation). The substrate is then moved at constant speeds by motorized micro-translation stages (M-414.2PD, Physik Instrumente GmbH). Finally, the behaviour of the physical model on different substrates with different joint stiffnesses is captured by tracking markers on the modules, as shown in figure 2 f . The videos are captured by a Nikon D7500 camera and post-processed using the free, open-source software Kinovea.
Results Equivalent bending stiffness of joints The equivalent bending stiffness, k , is calculated using equation (3.1); the proof is provided in electronic supplementary material, S1: $$k = \frac{F l^2}{d}. \quad (3.1)$$ Here F is the force applied and recorded by the load cell, l is the length of the specimen and d is the deflection recorded by the universal testing machine under the action of the force at a distance l from the fixed end. The calculated equivalent bending stiffness of the different materials is shown in figure 4 a . Damping constants of joints Damping constants are calculated using the logarithmic decrement method according to (3.2) [ 35 ]; a derivation is provided in electronic supplementary material, S1: $$b = 2\zeta\sqrt{kI}. \quad (3.2)$$ In equation (3.2), I is the moment of inertia of the beam, calculated as ml 2 /3; here, m is the mass of the specimen. ζ is the damping ratio calculated by the logarithmic decrement method as follows: $$\zeta = \frac{\delta}{\sqrt{4\pi^2 + \delta^2}}, \quad (3.3)$$ where δ is measured experimentally according to the following equation: $$\delta = \frac{1}{n}\ln\left(\frac{a_1}{a_{n+1}}\right). \quad (3.4)$$ Equations (3.3) and (3.4) are taken from [ 35 ], where a 1 is the peak-to-peak distance of the oscillations at time t , and a n + 1 is the peak-to-peak distance after n oscillations. Calculated damping constants are shown in figure 4 b . Friction tests In our case, the classical model of Coulomb friction posed some limitations owing to non-uniform sliding of the system, also dictated by a velocity dependence of the friction coefficient. This phenomenon can occur because of the viscoelastic material used around the wheels. The issue is addressed by finding the trend of the average frictional coefficients at various speeds. The average dynamic normal and tangential frictional coefficients are measured according to the formula μ = F p / W at various speeds to determine their speed-dependent trends. F p is the pulling force measured by the load cell, and W is the weight of the specimen.
Figure 5 displays the increasing effect of stick and slip with speed by comparing the raw measurement of the pulling force for both the cardboard and panel substrates used in the tests to calculate the normal frictional coefficients. Similar behaviour is also observed for the other substrates. The phenomenon of stick and slip is predominantly observed in the normal direction because of the constraint provided by the wheels in that direction. Further assumptions of the frictional model are that (1) there is no deformation or asymmetric normal pressure distribution in the wheels, (2) there is no temperature change during the motion at the contact area and its surroundings, and (3) friction is independent of the contact area between the body and the ground. The results of the normal frictional tests are shown in figure 6 , and the tangential frictional coefficients on the various substrates are shown in figure 7 . The trend of the normal frictional coefficient is estimated by exponential plateau curve equations, as shown in figure 6 . The average value of the normal frictional coefficient stabilizes after certain speeds. A frictional coefficient higher than 1 shows the dominance of the sticking phenomenon, while in cases where the frictional coefficient is lower than 1, slippage is the dominant phenomenon. On cardboard and cloth, the normal frictional coefficient did not reach 1 because these substrates are less smooth than the panel and POMC substrates; consequently, they offered less affinity for adhesion. Unlike the normal frictional coefficient, the tangential frictional coefficient reaches an equilibrium state only in the case of cloth. In the other cases, the tangential frictional coefficient increases with speed ( figure 7 ). The frictional coefficient ratio ( μ n / μ t ) on these substrates is given in electronic supplementary material, S1.
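The joint characterization above can be sketched in a few lines. The relation k = F l²/d (applied moment F l over the small-angle rotation d/l) is our reading of equation (3.1) and is stated as an assumption, as is the form b = 2ζ√(kI) for equation (3.2); the measurement values are invented for illustration:

```python
import math

def bending_stiffness(F, l, d):
    # Rotational bending stiffness of a joint loaded as a cantilever:
    # moment F*l over the small-angle rotation d/l, i.e. k = F*l**2/d.
    # (Assumed form of eq. (3.1).)
    return F * l**2 / d

def damping_constant(a1, an1, n, k, m, l):
    # Damping constant via the logarithmic decrement method.
    delta = math.log(a1 / an1) / n                       # eq. (3.4)
    zeta = delta / math.sqrt(4 * math.pi**2 + delta**2)  # eq. (3.3)
    I = m * l**2 / 3                                     # rod about its end
    return 2 * zeta * math.sqrt(k * I)                   # assumed form of (3.2)

# Invented illustrative measurements:
k = bending_stiffness(F=0.5, l=0.02, d=0.004)    # -> 0.05 N m rad^-1
b = damping_constant(a1=10.0, an1=2.0, n=5, k=k, m=0.005, l=0.02)
print(k, b)
```

The stiffness comes out in N m rad⁻¹, matching the units used for the joint stiffnesses reported later (e.g. 0.05 N m rad⁻¹).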
Comparison between mathematical and physical models After evaluating all required input parameters, we ran the simulations based on the mathematical model and compared the speeds at different joint stiffnesses and on different substrates. Overall, the physical and mathematical models showed significant agreement, as shown in figure 8 a–d . The mathematical model quantitatively captured the generalized trend within a factor of 2. A side-by-side comparison of the physical system along with trajectories is given in electronic supplementary material, video S1. Both physically and mathematically, it can be seen that the speed optimization depends on the environment and the joint stiffness. Experimentally, the maximum speed occurs when joints of Elastic 50A, PDMS, PDMS and Elastic 50A were used on cardboard, cloth, panel and POMC, respectively ( figure 8 a – d ). The comparison between experiments and simulations is shown in electronic supplementary material, video S1. It is observed that the amplitude of the tail increases as the stiffness of the joints increases, while the speed first increases with stiffness and then decreases as the stiffness increases further. For cardboard, the trend of the amplitudes of each joint for various joint stiffnesses is shown in figure 9 . A similar trend is observed for the other substrates (see electronic supplementary material, S1). When considering the origin of the disparity between the mathematical and physical models, it is noteworthy that hysteresis is one of the contributing factors. The hysteresis acting on the wheels arises from the cyclic loads and the material properties of the wheel coverings (Dragon Skin 30 TM ). Passive deformation of the wheel coverings due to the cyclic load induces a time lag in the change of friction direction.
In our mathematical model, the hysteresis should be captured by the ε parameter (equation (2.3)) because it controls the smoothness of the square wave, a sign function used to obtain the direction of the friction force. The smaller the value of ε, the more accurate the square wave and the sharper the change in the direction of friction; therefore, the smaller the hysteresis. Physically, the hysteresis component of the friction depends on the internal friction of the material under cyclic loads [ 29 ], and it has been observed that at higher speeds the transfer of cyclic disturbances to the wheels increases on smooth substrates, i.e. panel and POMC; consequently, the deformation of the wheel coverings, and therefore the hysteresis, increases. That is why the value of ε is observed to increase at higher speeds on the panel and POMC substrates to match the physical and mathematical results, as shown in electronic supplementary material, S1, figure S5. However, in many instances the value of ε was observed to be 10 −3 , which is why we used only one value in all simulations for simplicity and continuity. Another source of discrepancy is contact loss during locomotion, which is inherent in the physical implementation owing to manufacturing tolerances. Especially on the smoother substrates (panel and POMC) at higher stiffnesses ( figure 8 c , d ), we see larger standard deviations resulting from vibrations caused by the oscillations and random contact losses with the ground. At higher stiffnesses, contact losses increase because of the induced rigidity of the system, as can also be seen in the cardboard case at 0.05 Nm rad −1 stiffness ( figure 8 a ). By contrast, on a relatively rough substrate (cloth), the standard deviation remained small ( figure 8 b ). In addition, other possible sources of error include the simplifying assumptions of one-dimensional geometry, inextensible joints and neglect of the out-of-plane dynamics of locomotion.
Optimization based on stiffness distribution We additionally explored the relationship between speed and stiffness distribution among the joints. Five different stiffnesses are available and combined over three joints, resulting in 125 combinations for each substrate, shown in figure 10 a . To simplify the presentation of different stiffness combinations, we defined a convention of naming different joint materials according to their ascending order of stiffness from 1 to 5, as shown in figure 10 a . After running simulations for all stiffness combinations on various substrates, we found that qualitatively the highest speed is achieved when the stiffness of the middle joint is higher than that of the head joint, and subsequently decreases or remains constant towards the tail ( k 1 ≤ k 2 > k 3 ), as shown in figure 10 b–e . By following the combinations suggested by the simulations on the physical model, we obtained approximately 69%, 184%, 2% and 17% higher speeds on cardboard, cloth, panel and POMC substrates, respectively. The percentage increase is calculated by comparing the experimental speeds of the optimal cases obtained from figure 10 and figure 8 . Figure 11 shows snapshots of the robot's centreline for the optimal combinations of stiffness distributions across various substrates. These images provide a visual representation of the temporal evolution of undulatory locomotion. Furthermore, the trend of stiffness combination changes on different substrates ( figure 10 b–e ). It is inferred that the stiffness distribution could differ qualitatively and quantitatively in different environments. To consolidate the numerical results, we tested the false optimality condition, deduced from the simulations, on the panel substrate. The false condition defines that the bending stiffness at the joints must be k 1 > k 2 ≤ k 3 . Based on this, 40 different combinations are tested on the panel substrate. 
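The combination counts used in this optimization can be reproduced by direct enumeration. The sketch below ranks the five joint materials 1 to 5 by ascending stiffness, as in figure 10 a, and filters the 125 assignments by the stated conditions; it reproduces the 125, 40 and (after excluding the outliers k₃ = 1, 5) 26 combinations referred to in the text:

```python
from itertools import product

# 5 materials over 3 passive joints -> 5**3 = 125 stiffness assignments.
combos = list(product(range(1, 6), repeat=3))

optimal = [c for c in combos if c[0] <= c[1] > c[2]]      # k1 <= k2 > k3
false_cond = [c for c in combos if c[0] > c[1] <= c[2]]   # k1 > k2 <= k3
# Excluding the outliers k3 = 1 and k3 = 5 from the optimal set:
tested = [c for c in optimal if c[2] not in (1, 5)]

print(len(combos), len(optimal), len(false_cond), len(tested))  # 125 40 40 26
print((3, 5, 4) in optimal)   # best panel-substrate combination -> True
```

Note that the false condition k₁ > k₂ ≤ k₃ happens to also admit 40 combinations, matching the number tested on the panel substrate.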
The tested combinations and their results are listed in electronic supplementary material, S2. In all cases, the speed is less than 34.3 ± 1.88 mm s −1 , corresponding to the stiffness combination ( k 1 , k 2 , k 3 = 3,5,4) suggested by the simulations. In addition, after testing the validity of the optimality condition, we experimentally found the optimum stiffness combination using the qualitative result of the simulations, i.e. the criterion k 1 ≤ k 2 > k 3 . In our experiments, we eliminated the outliers k 3 = 1,5. The sliced planes of figure 10 b–e in electronic supplementary material, S1, reveal that at k 3 = 1, the stiffness is insufficient to achieve the maximum speed. Conversely, when k 3 = 5, no combination adheres to the inferred law of optimal stiffness, since 5 is the highest stiffness available and the condition k 2 > k 3 cannot be fulfilled. Hence, a total of 26 different combinations are tested on each substrate. The results of the optimum combinations found experimentally are listed in table 1 ; for the experiments on the optimum cases, see electronic supplementary material, video S2. Based on both the simulation and experimental results, it can be deduced that the environment strongly affects locomotion gaits and speeds. The hysteresis component, which is assumed not to vary across joints in our simulations, is likely causing the quantitative differences between simulation and experiment. Nevertheless, we can define the stiffness distribution law for each environment to maximize the speed of locomotion. The influence of input frequencies At resonance, when the input frequency matches the natural frequency, the speed of undulatory locomotion in a fluidic environment increases [ 20 , 36 ]. By contrast, in [ 24 ], for a dry frictional environment, it is found that at resonance the speed of undulatory locomotion decreases and the power increases.
More recently, in [ 37 ], for viscous friction, the resonance frequency is defined as the frequency at which the speed of a system undergoing undulatory locomotion is maximized with minimum actuation effort. In this section we study the correlation between the input frequency and the resulting speed. We then relate these findings to the optimization process based on stiffness, as presented in figure 8 and figure 10 . We limited our input frequencies to the range of 1–30 rad s −1 for uniform stiffness distribution and 1–50 rad s −1 for non-uniform stiffness distribution. These limits are set based on the findings of a previous study [ 36 ], which demonstrated that higher forcing frequencies induce deformation modes that hinder forward motion. Hence, lower deformation modes are typically observed in biological systems owing to their effectiveness in generating propulsion. Figure 12 a – d shows the speed peaks achieved at different actuation frequencies for different joint stiffness values on different substrates. When comparing the speed peaks across substrates, it can be seen that the effect of a different environment on defining the optimum input frequency is negligible. By contrast, the optimum input frequency depends on the stiffness of the body ( figure 12 a – d ). As elaborated in the detailed study of resonance in undulatory locomotion [ 37 ], the resonance frequency depends only on the stiffness and inertia of the body, and actuation at resonance enhances the speed. In our case, the body's inertia remained constant and only the stiffness of the body and the friction of the environment changed. Therefore, following [ 37 ], we can say that the speed peaks observed in figure 12 occur at resonance frequencies. Furthermore, our results show that the environment plays an important role in defining the height of the peak (the maximum locomotion speed), which makes certain stiffness values suitable for certain environments.
For example, on cardboard, a joint stiffness of 0.05 Nm rad −1 provides the highest locomotion speed. This is consistent with the results of figure 8 a , where we found a stiffness of 0.05 Nm rad −1 to be the best for cardboard both experimentally and analytically. In the case of the panel and POMC substrates, a joint stiffness of 0.017 Nm rad −1 has the highest speed, and in the case of cloth a joint stiffness of 0.05 Nm rad −1 has the highest speed in the input frequency range of 1–30 rad s −1 . We also investigated the response to the input frequency when the joint distribution is not uniform for the cardboard and panel substrates ( figure 13 a , b ). A stiffness distribution of (3,5,4) among joints 1, 2 and 3 is selected according to the law of optimum stiffness distribution ( k 1 ≤ k 2 > k 3 ) found in §3.5, and a stiffness distribution of (5,1,5) is selected contrary to the optimum stiffness distribution law. These figures highlight the two main advantages of applying the law found for stiffness distribution: firstly, it enhances the speed; secondly, it increases the range of effective input frequencies compared to figure 12 , where uniform stiffness is employed. Locomotion characterization based on Froude number The ratio of inertial forces to other relevant forces plays a pivotal role in characterizing gaits. In terrestrial locomotion, this ratio is called the Froude number [ 38 ]. Animals with equivalent Froude numbers walk and run in a dynamically similar manner [ 39 ]. In friction-dominant locomotion, the Froude number can be defined as F r = λ /( μ n,max τ 2 g ) [ 40 ]. Here λ is the stride length, which we take as the average wavelength of the trajectories traced by the physical model segments at their tracking points.
μ n,max is the maximum calculated normal frictional coefficient on each substrate, since the normal frictional coefficient is the component of the friction forces acting on the body responsible for the propulsion [ 41 ]. τ is the period of the oscillations, in our case equal to 0.4 s, and g is the gravitational acceleration constant. The lower Froude number indicates the dominance of frictional forces. Our calculated range of Froude number is approximately 0.004–0.01 (electronic supplementary material, S2), defining the friction-dominant nature of the locomotion.
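The Froude number calculation can be sketched directly from this definition. The stride length and maximum normal coefficient below are assumed illustrative values, chosen only so that the result falls inside the reported 0.004–0.01 range:

```python
def froude_number(wavelength, mu_n_max, tau=0.4, g=9.81):
    # Friction-dominant Froude number: Fr = lambda / (mu_n,max * tau**2 * g),
    # with tau the oscillation period (0.4 s in the paper) and g gravity.
    return wavelength / (mu_n_max * tau**2 * g)

# Assumed inputs: 10 mm stride wavelength, peak normal coefficient of 1.0.
fr = froude_number(wavelength=0.010, mu_n_max=1.0)
print(f"Fr = {fr:.4f}")   # Fr << 1: friction dominates inertia
```

Because τ² g in the denominator is of order 1 m while λ is millimetric, Fr is necessarily small for this robot, consistent with the friction-dominant regime described in the text.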
Discussion We investigated the effects of passive stiffness and the environment on lateral undulatory locomotion by comparing mathematical and physical models to determine the performance optimization criteria. Our findings suggest a strong correlation between the resultant locomotion, the surrounding environment and the body properties. We observed that changing the stiffness affects locomotion within a given environment, and that the same stiffness produces different responses in different environments. These relationships are evident in some living organisms. For example, eels modulate their body stiffness to achieve different performances in the same environment [ 4 , 42 ]. They increase their speed by engaging more muscles and increasing their body stiffness. As also predicted for sunfish, stiffness can be doubled to increase speed [ 43 ]. A larval zebrafish-inspired robot also showed the significance of the right stiffness for locomotion [ 44 ]. Furthermore, it has been reported how changing environments influence locomotion: e.g. Caenorhabditis elegans [ 3 , 21 , 45 , 46 ] moves faster in low-viscosity fluids than in more viscous environments [ 10 , 22 ]. Experimental tests and simulations show that the tail amplitude increases as body stiffness increases, regardless of whether the speed increases or decreases. When animals change their gait from swimming to crawling, it is either because of passive interaction with the environment [ 3 , 47 ] or because of an active increase in speed [ 4 ], with a consequent change of wave kinematics. In swimming, the body wave amplitude increases from anterior to posterior, whereas in the crawling gait the body wave amplitude either remains the same or decreases from anterior to posterior. Our results elucidate the functional role of the environment and passive body stiffness in body wave kinematics. The distribution of body stiffness plays an essential role in the performance of limbless animals.
In sunfish, flexural stiffness first increases and then decreases from head to tail [ 43 ]; typically, the highest stiffness is three orders of magnitude greater than the lowest. Computational results on lamprey locomotion have shown the importance of tail flexibility [ 20 ]: in particular, as tail stiffness increases, the wake becomes less coherent and speed performance decreases. However, it was found in [ 44 ] that a non-uniform body stiffness does not lead to better performance than a uniform stiffness distribution. Our analysis reveals that the physical model performs better when the stiffness of the middle joint is higher than that of the head joint. Furthermore, our results suggest that when the environment changes, the quantitative requirement on the stiffness distribution changes, while the qualitative trend of the distribution is preserved. Resonance frequency is another determining factor in the dynamics of undulatory locomotion. Changing the stiffness of the joints, while keeping the other body parameters constant, sets the natural frequency of the body regardless of the environment; this leads to different stiffness preferences in different environments. Biological evidence for this phenomenon exists: in animals, central pattern generators regulate the rhythmic movements of the body at its natural frequency in coordination with feedback from the environment [ 48 ]. Furthermore, animals can modulate their body stiffness through different muscle engagements; for example, eels have been observed to recruit more muscles under certain circumstances to increase speed [ 4 , 42 ]. By manipulating the stiffness distribution along the body, our investigation reveals that the body's response can be optimized over a broad range of input frequencies, making it less sensitive to frequency and increasing its velocity.
The characterization of the locomotion based on the Froude number showed the dominance of frictional forces over inertial forces. Furthermore, a lower Froude number means a shorter time taken to reach the steady state. Our calculated range of the Froude numbers is consistent with those obtained for snake-slithering locomotion [ 40 , 41 ]. The discrepancy between physical and mathematical models can be attributed to modelling simplifications such as uniform mass distribution, rigid links, and reduced geometric dimensionality. It is found that accurate modelling of the nature of the interacting bodies plays an essential role in the resulting locomotion. In addition, out-of-plane motion due to random vibrations, instantaneous contact loss and manufacturing tolerances can also introduce discrepancies.
Conclusion We analysed and found a correlation between body stiffness and the environment for limbless undulatory locomotion in a dry friction environment. Our mathematical results are in agreement with physical experiments. The results suggest that the interdependence between passive body stiffness and the environment can be exploited to build efficient undulatory robots that need to operate in specific environments. Furthermore, the speed of undulatory locomotion can be improved by using a non-uniform stiffness distribution along the length of the body. The stiffness distribution can be arranged in either an ascending–descending or ascending–plateau order. Future work includes exploiting body material properties and patterns, instead of wheels, as a frictional interface with the environment, and employing learning algorithms to aid understanding of gait responses in cluttered environments.
Electronic supplementary material is available online at https://doi.org/10.6084/m9.figshare.c.6751782 . The current study investigates the body–environment interaction and exploits the passive viscoelastic properties of the body to perform undulatory locomotion. The investigations are carried out using a mathematical model based on a dry frictional environment, and the results are compared with the performance obtained using a physical model. The physical robot is a wheel-based modular system with flexible joints moving on different substrates. The influence of the spatial distribution of body stiffness on speed performance is also investigated. Our results suggest that the environment affects the performance of undulatory locomotion based on the distribution of body stiffness. While stiffness may vary with the environment, we have established a qualitative constitutive law that holds across environments. Specifically, we expect the stiffness distribution to exhibit either an ascending–descending or an ascending–plateau pattern along the length of the object, from head to tail. Furthermore, undulatory locomotion showed sensitivity to contact mechanics: solid–solid or solid–viscoelastic contact produced different locomotion kinematics. Our results elucidate how terrestrial limbless animals achieve undulatory locomotion performance by exploiting the passive properties of the environment and the body. Application of the results obtained may lead to better performing long-segmented robots that exploit the suitability of passive body dynamics and the properties of the environment in which they need to move.
Ethics This work did not require ethical approval from a human subject or animal welfare committee. Data accessibility The data are provided in the electronic supplementary material [ 49 ]. Authors' contributions B.Y.: data curation, formal analysis, investigation, methodology, validation, writing—original draft; E.D.D.: data curation, formal analysis, investigation, methodology, project administration, resources, supervision, validation, writing—review and editing; A.M.: investigation, methodology, project administration, supervision, validation, writing—review and editing; A.R.: conceptualization, investigation, methodology, supervision, writing—review and editing; B.M.: conceptualization, funding acquisition, supervision, writing—review and editing; N.M.P.: funding acquisition, investigation, resources, supervision, writing—review and editing. All authors gave final approval for publication and agreed to be held accountable for the work performed therein. Conflict of interest declaration We declare we have no competing interests. Funding N.M.P. is supported by the Italian Ministry of Education MIUR, Italy under the PRIN-20177TTP3S.
J R Soc Interface. 20(205):20230330
Introduction Psittacosis is caused by Chlamydia psittaci , an intracellular gram-negative bacterium. It is a zoonosis that commonly infects birds. Although not always present, exposure to birds is a major risk factor for infection [ 1 ]. In a meta-analysis, psittacosis was found to be responsible for 1.03% of all community-acquired pneumonias (CAPs) [ 2 ]. It also accounts for 2.3% of cases of severe CAP [ 3 ]. The clinical manifestations range from subclinical or brief illness to multi-organ failure, the latter less commonly reported as fulminant psittacosis [ 4 – 7 ]. Before the advent of antimicrobial agents, the mortality of pneumonia caused by C. psittaci was 15–20% [ 8 ]. Today, however, mortality is rare. Psittacosis can be diagnosed on the basis of clinical presentation and tests that detect the human pathogen C. psittaci , such as microimmunofluorescence, indirect fluorescence antibody, culture, or polymerase chain reaction (PCR) [ 1 ]. Metagenomic next-generation sequencing (mNGS) was recently developed for disease screening and diagnosis, and many recent reports have described its use for the diagnosis of psittacosis in patients with severe CAP [ 9 – 11 ]. However, psittacosis is not routinely included in pneumonia diagnostic panels in many countries; it may therefore be underestimated and unrecognized, especially in critically ill patients, for whom early diagnosis and treatment are essential. In this multicenter retrospective study, we evaluated the clinical characteristics and outcomes of patients with severe CAP and acute hypoxic respiratory failure (AHRF) caused by psittacosis admitted to intensive care units (ICUs).
Materials and methods Study design and patients This retrospective study included patients with severe CAP and AHRF caused by psittacosis who were admitted to 19 tertiary hospitals in China from April 2018 to May 2021. Patients were included if they fulfilled the following criteria: (1) severe CAP [ 12 ]; (2) AHRF (arterial partial pressure of oxygen [PaO 2 ] < 60 mmHg on room air, and arterial partial pressure of carbon dioxide [PaCO 2 ] < 45 mmHg), or need for > 6 L/min oxygen for respiratory support, and respiratory symptoms < 72 h; (3) C. psittaci detected in sputum or bronchoalveolar lavage fluid (BALF) using mNGS; (4) negative results of routine microbiological tests, including blood, sputum, and BALF cultures; (5) ICU admission; (6) a diagnosis of psittacosis pneumonia independently made by two physicians according to the clinical manifestations, microbiological test results and lung computed tomography (CT). This study was reviewed and approved by the Ethics Committee of Beijing Chao-Yang Hospital of China (2021-Ke-389). Because this was a retrospective study, consent was waived by the Ethics Committee of the Beijing Chao-Yang Hospital. All methods were carried out in accordance with relevant guidelines and regulations. Data collection Demographic and clinical data of the patients were entered into an electronic case report form. The collected data included the demographic characteristics, comorbidities, symptoms, signs, laboratory tests, microbiological findings, and radiologic images of the lung (chest X-ray and CT). The treatment process during ICU admission, including antimicrobial therapy, respiratory support, complications, and outcomes, was also recorded. In addition, a radiologist experienced in pulmonary imaging, blinded to the clinical data, reviewed and interpreted the CT images.
CT images were evaluated and defined according to the Fleischner Society glossary of terms for thoracic imaging [ 13 ]. The extent of disease on CT was evaluated as a CT score [ 14 ]. Microbiological tests Sputum, blood, serum, and BALF samples were collected at admission and during the ICU stay. Sputum and BALF samples were tested by bacterial and fungal smear and culture, and by real-time PCR for common respiratory viruses, including influenza virus, adenovirus, rhinovirus, respiratory syncytial virus, and cytomegalovirus, among others. Serum samples were tested for M. pneumoniae, C. pneumoniae , and L. pneumophila antibodies. BALF samples were processed using mNGS to screen for pathogenic microorganisms at Vision Medicals Co., Ltd. (Guangzhou, China). The BALF samples were subjected to nucleic acid extraction (Vision Medicals Cat# VM001, Guangzhou, China). DNA libraries were prepared and sequenced on an Illumina NextSeq sequencer for clinical metagenomic analysis. Sequence analysis was performed through Vision Medicals' IDseqTM commercial bioinformatic pipeline. Briefly, reads that mapped to the human genome and plasmids were removed, and the remaining reads were taxonomically classified by alignment to Vision Medicals' curated microbial database. Statistical analysis SPSS software (version 22.0; IBM Corp., Armonk, NY, USA) was used for the statistical analysis. Continuous variables are presented as mean ± standard deviation (SD) or median (interquartile range, IQR). Chi-squared tests were used to analyze the categorical variables, and the Mann–Whitney U test was used to analyze the continuous variables. Univariate analysis was used for the comparison of different treatments. A p -value < 0.05 was considered statistically significant.
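The two hypothesis tests named above can be sketched with SciPy (the study itself used SPSS 22.0). The arrays below are synthetic illustrations, not study data.

```python
# Hedged sketch of the reported tests using SciPy rather than SPSS.
# All data below are synthetic illustrations, not study data.
import numpy as np
from scipy import stats

# Continuous variable in two groups -> Mann-Whitney U test
group_a = np.array([119.8, 73.2, 183.6, 150.0, 95.0])
group_b = np.array([210.0, 180.5, 250.3, 199.0, 175.2])
u_stat, p_continuous = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Categorical variable (e.g., treatment x outcome) -> chi-squared test
contingency = np.array([[12, 6],
                        [18, 9]])
chi2, p_categorical, dof, expected = stats.chi2_contingency(contingency)

ALPHA = 0.05  # two-sided significance threshold used in the study
significant = p_continuous < ALPHA
```

Both calls return a p-value that is compared against the 0.05 threshold stated in the text.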
Results From April 2018 to May 2021, 45 patients with severe CAP and AHRF were diagnosed with psittacosis. C. psittaci was detected using mNGS on BALF samples. Droplet digital PCR validation was carried out while preparing this study; all samples were positive for C. psittaci . Psittacosis occurred throughout the year, with the highest incidence between September and April. The median PaO 2 /FiO 2 of the patients was 119.8 (IQR, 73.2 to 183.6) mmHg, and the time distribution did not vary with PaO 2 /FiO 2 (Fig. 1 ). The mean age was 60 ± 14 years, and 27 (60.0%) patients were male. A history of poultry exposure was found in 64.4% of the patients. The median duration from symptom onset to admission was 7 (IQR, 4 to 10) days (Table 1 ). Clinical characteristics and laboratory examination Almost all of the patients had high fever, with a median highest recorded temperature of 39.4 °C (IQR, 39.0 to 40.0). Patients commonly presented with cough, expectoration, and dyspnea. The median Acute Physiology and Chronic Health Evaluation (APACHE) II and sequential organ failure assessment (SOFA) scores were 11 (IQR, 8 to 17) and 5 (IQR, 3 to 7), respectively. More than half of the patients (53.3%) had hepatic injury, 10 patients had heart failure, and eight patients had acute renal failure during admission (Table 1 ). The median white blood cell count was 8.39 × 10 9 /L (IQR, 6.02 to 11.77), at the upper limit of the normal range (4–10 × 10 9 /L). The median lymphocyte count was 0.50 × 10 9 /L (IQR, 0.34 to 0.58), significantly lower than the lower limit of normal (0.8 × 10 9 /L; p < 0.001). Glutamic-oxaloacetic transaminase, alanine aminotransferase, and bilirubin were mildly elevated. The median serum procalcitonin (PCT) level was 1.64 ng/mL (IQR, 0.42 to 4.88), significantly higher than the normal upper limit (0.5 ng/mL) [ 15 ]. The other laboratory results are shown in Table 2 .
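The PaO2/FiO2 ratios reported above map onto standard hypoxemia severity bands. As a hedged sketch (the paper reports the ratios but does not itself apply this classification), the Berlin-definition cutoffs can be encoded as:

```python
# Hedged sketch: hypoxemia severity from PaO2/FiO2 using the Berlin
# definition cutoffs (<100 severe, 100-200 moderate, 200-300 mild).
# The paper reports the ratios but does not itself apply this scheme.
def berlin_severity(pf_mmhg):
    if pf_mmhg < 100:
        return "severe"
    if pf_mmhg < 200:
        return "moderate"
    if pf_mmhg <= 300:
        return "mild"
    return "none"

# Median and IQR bounds reported for the cohort: 119.8 (73.2 to 183.6) mmHg
median_severity = berlin_severity(119.8)
```

The cohort median of 119.8 mmHg falls in the moderate band, while the lower IQR bound (73.2 mmHg) falls in the severe band.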
Lung CT in most patients showed consolidation and infiltrates in multiple lobes and segments of both lungs; lesions were more common in the lower lobes (Fig. 2 ). Treatment and outcome Before the diagnosis was confirmed, 30 patients received a fluoroquinolone and 2 received azithromycin. None of the patients received tetracycline. The remaining 13 were empirically given β-lactam antibiotics or antivirals, including oseltamivir. After the diagnosis was confirmed, 12 patients who initially received fluoroquinolones were shifted to tetracycline, and the other 18 patients continued fluoroquinolones. Among the patients who had empirically received β-lactam antibiotics or antivirals, eight were given fluoroquinolones combined with tetracycline, two were given tetracycline, and one was given a fluoroquinolone alone (Table 2 ). The median duration from admission to the start of targeted therapy was 5 days (IQR, 3 to 10). Sixteen patients received non-invasive positive pressure ventilation (NIPPV), and four of them were intubated because of NIPPV failure. Twenty patients (44.4%) were intubated and received invasive mechanical ventilation (IMV), and two received veno-venous extracorporeal membrane oxygenation. Four of 45 patients (8.9%) died in the ICU, and the median ICU stay was 12 days (IQR, 8 to 21). The median hospitalization duration was 15 days (IQR, 12 to 27) (Table 3 ). Before discharge from the ICU, lung CT of the surviving patients most commonly showed absorption of the consolidation and infiltrates, with residual ground-glass opacities and reticular/fibrotic lesions (Fig. 2 ). The CT score at discharge was 4.3 ± 3.1, markedly decreased from 12.5 ± 5.6 at admission.
Comparison of treatment methods Because only a few patients received azithromycin or tetracycline alone, we compared the clinical characteristics and outcomes between patients who received a fluoroquinolone initially and were shifted to tetracycline after diagnosis, those who received a fluoroquinolone initially and continued it after diagnosis, and those who received a fluoroquinolone combined with tetracycline. There were no significant differences between the three groups in terms of the demographic data, duration from symptom onset to admission, and disease severity (Table 4 ). The white blood cell and neutrophil counts were significantly higher among patients who received a fluoroquinolone both initially and after diagnosis compared with the other patients ( p = 0.015). Among patients who received a fluoroquinolone both initially and after diagnosis, 11 received IMV and 6 underwent continuous renal replacement therapy, both more common than in the other two groups. There was no difference in in-hospital mortality or hospitalization duration among the three groups.
Discussion To the best of our knowledge, this is the largest cohort study of psittacosis accompanied by AHRF from mainland China. We found that psittacosis may present with varying AHRF severity and that the most common seasons for psittacosis were autumn and winter. Fluoroquinolones may have efficacy equivalent to tetracycline. Considering the limited diagnostic ability, standard empirical treatment should follow CAP guidelines [ 12 , 16 ], which recommend that treatment cover atypical pathogens, to improve the prognosis of patients with psittacosis and AHRF. C. psittaci has been reported to account for 2.3% of cases of severe CAP [ 3 ]. The mortality rate of patients with psittacosis admitted to the ICU was as high as 15% [ 17 ]. It is essential to recognize psittacosis based on the symptoms in patients with CAP and AHRF, and patients should receive early targeted therapy for a better prognosis. C. psittaci infection presents with an abrupt onset of fever, chills, headache, malaise, and myalgias. A non-productive cough is usually present and can be accompanied by breathing difficulty or chest tightness. Radiographic findings may include lobar or interstitial infiltrates [ 1 ]. About 70% of psittacosis patients had a known past exposure to poultry [ 18 ]. The incubation period for the illness is 5–14 days [ 19 ]. Age older than 65 years, male sex [ 20 ], and abnormal CK and BNP levels [ 21 ] are risk factors for severe cases. Similar to previous reports, the clinical manifestations of psittacosis in our study were atypical, which makes differentiating psittacosis from other infections difficult. Although psittacosis can occur at any time of the year, most patients presented in autumn and winter; the outbreak of psittacosis therefore coincided with that of influenza.
However, most psittacosis patients had a history of poultry exposure, and it is important to differentiate psittacosis from other infections transmitted from poultry, including H7N9, H5N1, and H5N6. The difficulty in differentiating between these pathogens partly explains why 30% of the patients in this study initially received antivirals. In addition, we found that the white blood cell count was in the normal range, the lymphocyte count was slightly reduced, and PCT was slightly elevated (mostly below 2 ng/mL). Most patients with psittacosis had impaired liver function. Lung CT showed consolidation and infiltrates in multiple lobes and segments of both lungs. These presentations may help to differentiate psittacosis from other similar infections that present as severe CAP. Rapid diagnosis is essential for a good prognosis in CAP patients. Psittacosis can be diagnosed on the basis of a suggestive clinical presentation and detection of C. psittaci in human specimens. The current confirmatory laboratory tests for psittacosis include PCR and serologic tests (complement binding reactions, enzyme-linked immunosorbent assay, immunofluorescence tests, and immuno-peroxidase tests) [ 22 ]. In recent years, PCR has become the most commonly used diagnostic method for psittacosis [ 2 , 23 ]. However, to ensure biosafety, this test can only be performed in specialized laboratories [ 24 ]. mNGS can theoretically detect all pathogens in a clinical sample and is especially suitable for rare, novel, and atypical etiologies of complicated infectious diseases [ 25 ]. The use of NGS for the diagnosis of a suspected outbreak of psittacosis-induced severe CAP and ARDS was first reported in 2014 [ 9 ]. With the recent increase in the use of mNGS, psittacosis is being screened for and diagnosed increasingly rapidly, and mNGS appears superior to traditional methods [ 10 , 11 ].
In particular, mNGS has great potential for rapid diagnosis during public health crises [ 26 , 27 ]. Rapid diagnosis is essential for critically ill patients with severe pneumonia and AHRF. In our previous work, we found that mNGS improved the sensitivity of pathogen detection in BALF and provided clinical guidance for management; additionally, dynamic changes in reads could indirectly reflect therapeutic effectiveness [ 28 ]. Owing to its sensitivity, speed, and cost-effectiveness, mNGS has the potential for routine diagnostic use, and may even partly replace the traditional paradigm of serial tests [ 25 ]. In this study, psittacosis patients with AHRF who were admitted to the ICU had a better outcome, with a mortality rate of 8.9%, lower than in previous reports [ 17 ]. In addition to appropriate organ support, standard empirical treatment for CAP covering atypical pathogens, including C. psittaci , might have played an important role. Tetracyclines are the drugs of choice for C. psittaci infection in humans and are prescribed for 10–14 days [ 1 , 12 , 16 ]. Most C. psittaci infections respond to antibiotics within 1–2 days. In this study, 30 patients received fluoroquinolones as the initial empirical antibiotics. After diagnosis, only 12 patients were shifted to tetracycline, and the other patients continued fluoroquinolones. Because few patients had received azithromycin or tetracycline alone, we only compared patients who received a fluoroquinolone alone with those who received a fluoroquinolone combined with tetracycline. Although the proportion of patients who required IMV and continuous renal replacement therapy was slightly greater in the fluoroquinolone group than in the other groups, the mortality rate and hospitalization duration were not significantly different between the groups.
Although tetracycline is the preferred antibiotic for chlamydial infections in non-pregnant adults, some studies have reported that fluoroquinolones are active against chlamydia in vitro or in veterinary medicine [ 29 , 30 ]. The results of this study provide further evidence for the use of fluoroquinolones in patients with psittacosis, especially those with severe CAP and AHRF. This study has some limitations. First, it was a retrospective study with a small sample size, and the patients were exclusively from mainland China; the results may therefore not be generalizable to all populations. However, considering the low incidence of psittacosis, the results still have clinical value. Second, we could not include all severe CAP patients treated at the study centers, so the actual incidence of psittacosis among severe CAP patients could not be calculated. Third, the microbiological diagnosis of psittacosis was based on mNGS, which is not included in the diagnostic criteria for psittacosis; however, droplet digital PCR validation (DDPCR) was carried out in this study, which strengthens the diagnostic accuracy. Fourth, because of insufficient data on respiratory support, we could not analyze respiratory mechanics or respiratory support parameters in patients with AHRF. Lastly, only four patients died in this study, so factors related to poor prognosis could not be determined.
Conclusion Severe CAP caused by psittacosis was not rare, especially in patients with a history of exposure to poultry or birds. It may present with varying AHRF severity. Novel microbiological technologies may improve diagnostic potential. Empirical treatment that covers atypical pathogens may benefit such patients, and fluoroquinolones might be considered as an alternative to tetracycline. These results need to be verified in large, well-designed, prospective randomized controlled studies to further evaluate the treatment and outcomes of psittacosis.
Introduction Psittacosis can cause severe community-acquired pneumonia (CAP). The clinical manifestations of psittacosis range from subclinical illness to fulminant psittacosis with multi-organ failure. It is essential to summarize the clinical characteristics of patients with severe psittacosis accompanied by acute hypoxic respiratory failure (AHRF). Methods This retrospective study included patients with severe CAP caused by psittacosis accompanied by AHRF from 19 tertiary hospitals in China. We recorded the clinical data, antimicrobial therapy, respiratory support, complications, and outcomes. Chlamydia psittaci was detected by metagenomic next-generation sequencing performed on bronchoalveolar lavage fluid samples. Patient outcomes were compared between the treatment methods. Results This study included 45 patients with severe CAP and AHRF caused by psittacosis from April 2018 to May 2021. The highest incidence of these infections was between September and April. There was a history of poultry contact in 64.4% of the patients. The median PaO 2 /FiO 2 of the patients was 119.8 (interquartile range, 73.2 to 183.6) mmHg. Four of 45 patients (8.9%) died in the ICU, and the median ICU stay was 12 days (interquartile range, 8 to 21). There were no significant differences between patients treated with a fluoroquinolone initially and continued after diagnosis, a fluoroquinolone initially followed by tetracycline, and a fluoroquinolone combined with tetracycline. Conclusion Severe CAP caused by psittacosis seems not rare, especially in patients with a history of exposure to poultry or birds. Empirical treatment that covers atypical pathogens may benefit such patients, and fluoroquinolones might be considered as an alternative.
Acknowledgements Not applicable. Authors’ contributions Z.H.T., Q.L., and B.S. conceived the idea, designed, and supervised the study. Z.H.Y., Q.L., and B.S. had full access to all of the data and took responsibility for the integrity of the data. X.T. and N.W. drafted the manuscript. N.W., G.L., H.T., A.M.L., Y.Q.G., M.Y.Y., N.W., H.D.J., Q.G.D., L.C., X.Y., and Y.Z. collected data. R.W., X.Y.L., and Y.L. analyzed data and performed statistical analysis. All authors reviewed and approved the final version of the manuscript. CONSORTIUM NAME Severe community-acquired pneumonia and acute respiratory failure study group 1Xiao Tang, Rui Wang, Xu-Yan Li, Ying Li, Xue Yuan, Yu Zhao, Zhao-Hui Tong, Bing Sun. Department of Respiratory and Critical Care Medicine, Beijing Chao-Yang Hospital, Capital Medical University, Beijing Institute of Respiratory Medicine, Beijing, China; 2 Na Wang. Department of Pulmonary and critical care medicine, Beijing Luhe Hospital, Capital Medical University, Beijing, China; 3 Gang Liu, Qi Li. Department Pulmonary and critical care medical center, Xinqiao hospital, Army Medical University, the Chinese People’s Liberation Army Respiratory Disease Institute, Chongqing, China; 4 Hai Tan. Department of Respiratory and Critical Care Medicine, General Hospital of Ningxia Medical University, Xi Ning, Ningxia Hui Autonomous Region, China; 5 Ai-Min Li. Respiratory and Critical Care Medicine, First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, China; 6 Yan-Qiu Gao. Respiratory Intensive Care Unit, Zhengzhou Central Hospital Affiliated to Zhengzhou University, Zhengzhou, Henan Province, China; 7 Meng-Ying Yao. Department of Respiratory Intensive Care Unit The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, China; 8 Na Wang. Department of Pulmonary, The first hospital of Fangshan district, Beijing, China; 9 Hui-Dan Jing. 
Department of Intensive Care Unit, Daping Hospital, Army Medical University, Chongqing, China; 10 Qing-Guo Di. Department of Pulmonary and Critical Care Medicine, Cangzhou Central Hospital, Cangzhou, Hebei Province, China; 11 Liang Chen. Department of Respiratory and Critical Care Medicine, Beijing Jingmei Group General Hospital, Beijing, China. 12 Ru-Fang Li, Department of Pulmonary and Critical Care Medicine, The First People’s Hospital of Yunnan Province, Kunming, Yunan Province, China; 13 Ling Zhang, Department of Pulmonary and Critical Care Medicine, Chengdu Second People’s Hospital, Chengdu, Sichuan Province, China; 14 Xiu-Zhen Jia, Respiratory intensive care unit, Inner Mongolia People’s Hospital, Hohhot, Inner Mongolia Autonomous Region, China; 15 Yong-Hui Zhang, Department of critical care medicine, The First Affiliated Hospital of Army Medical University, Chongqing, China; 16 Peng Chen, Department of Emergency, Enze Hospital, Taizhou Enze Medical Center (Group), Taizhou, Zhejiang Province, China; 17 Ying Tian, Medical intensive care unit, First hospital of Qinhuangdao, Qinhuangdao, Hebei Province, China. Funding This work was supported by the Clinical medicine development project of Beijing Hospital Authority (XMLX202105), clinical diagnosis and treatment technology and translational research project of Beijing (Z201100005520030), Excellent Talents Development Project of Public Health Technology (XUEKEDAITOUREN-01-19) and Reform and Development Program of the Beijing Institute of Respiratory Medicine (Ggyfz202332). Data availability Data is deposited in China National Microbiology Data Center (NMDC) with accession numbers NMDC10018302 ( https://nmdc.cn/resource/genomics/project/detail/NMDC10018302 ). Declarations Ethics approval and consent to participate This study was approved by the Ethics Committee of Beijing Chao-Yang Hospital of China (2021-Ke-389). 
Because this was a retrospective study, consent was waived by the Ethics Committee of the Beijing Chao-Yang Hospital. Consent for publication Not applicable. Competing interest The authors declare that they have no competing interests. List of abbreviations AHRF: Acute hypoxic respiratory failure; APACHE: Acute Physiology and Chronic Health Evaluation; CAP: Community-acquired pneumonia; CT: Computed tomography; DDPCR: Droplet digital PCR validation; ICU: Intensive care units; IMV: Invasive mechanical ventilation; IQR: Interquartile range; mNGS: Metagenomic next-generation sequencing; NIPPV: Non-invasive positive pressure ventilation; PaO2: Partial oxygen pressure; PCR: Polymerase chain reaction; PCT: Procalcitonin; SD: Standard deviation; SOFA: Sequential organ failure assessment
BMC Infect Dis. 2023 Aug 14; 23:532
Methods Participants The sample consisted of 140 six- to ten-year-old children (M age = 7.9 years; SD age = 1.1 years) and caregivers participating in a laboratory visit as part of a larger study on youth with and without ADHD and for whom microcoded parent–child interaction data were available. As 99% of caregivers were the child’s parents, we hereafter refer to caregivers as parents; 85.7% of parents identified as female. The sample was racially and ethnically diverse: 54% of children were White, 27.3% were multiracial, 8.6% were Black, 7.2% were Hispanic, and 1.4% were Asian. Sample characteristics are presented in Table 1 . Procedures Families were recruited from mental health centers, pediatric offices, and through flyers posted in local elementary schools and public locations. Inclusion criteria included English fluency, residing with at least one biological parent at least half of the time, and full-time school enrollment. Exclusionary criteria were an IQ below 70 or a neurological, pervasive developmental, or seizure disorder. Study eligibility was based on a telephone screening with the caregiver. Eligible families were invited for a laboratory-based assessment and were mailed rating scales to complete. To capture the full range of functioning in the sample, including differences secondary to the child’s medication status, parents were asked to report on their child’s unmedicated behavior, if possible (e.g., a child who takes stimulant medication on weekdays but not on weekends). Parents were also asked to have their child abstain from medication on the day of the assessment; however, this was not an exclusion criterion if suspending medication was otherwise undesirable or unsafe for the child. Approximately 85% of children were assessed in the lab on days when they had not taken any medication. After obtaining parental consent and child assent, parents and children participated in all activities. 
The lab visit lasted approximately four hours and included structured parent–child interaction tasks (Eyberg et al., 2005) that have previously demonstrated predictive validity and sensitivity to interventions (Thomas & Zimmer-Gembeck, 2007). Other tasks included neuropsychological and computerized assessments plus parent- and self-reports on children’s symptoms. Multiple breaks were offered to support children’s task engagement. Families were instructed to engage in three tasks in a fixed order: (a) 10 minutes of child-led play, (b) 10 minutes of parent-led play, and (c) a 5-minute parent-led clean-up. Families received $50 compensation. The University of California, Los Angeles IRB approved all study procedures prior to study participation.

Measures

Micro-Coded Behavior During Parent–Child Interactions

All parent–child interaction tasks were digitally recorded and coded using the Dyadic Parent–Child Interaction Coding System (DPICS; Eyberg et al., 2005), which has previously demonstrated moderate to high interrater and test–retest reliability (Chronis-Tuscano et al., 2008). Parent and child behaviors were coded in 10-s intervals, yielding 60 epochs during each 10-min play episode and 30 epochs during the 5-min clean-up. Research assistants completed intensive training on the DPICS until at least 70% agreement with training videos was attained. Weekly coding meetings prevented rater drift and resolved disagreements. To estimate reliability, 20% of the videos were randomly selected and coded by two independent coders. In this study, intraclass correlations (ICCs) for the composite categories indicated good reliability (ICC negativity = 0.75; ICC praise = 0.88; ICC noncompliance = 0.78). Because nearly all (> 97%) parent and child variables in each interval were either 0 or 1 (i.e., multiple instances of a behavior rarely occurred in a single 10-s interval), variables were dichotomized.
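The reliability figures above can be reproduced with a standard intraclass correlation. As an illustration, here is a one-way random-effects ICC (ICC(1)) computed with NumPy; the paper does not state which ICC form was used, and the double-coded scores below are hypothetical.

```python
import numpy as np

def icc1(ratings):
    """One-way random-effects ICC; ratings is (n_targets, k_raters)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    target_means = ratings.mean(axis=1)
    # Between-target and within-target mean squares from a one-way ANOVA
    msb = k * np.sum((target_means - grand) ** 2) / (n - 1)
    msw = np.sum((ratings - target_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical counts of praise coded by two independent coders for ten videos
scores = [[5, 6], [2, 2], [9, 8], [4, 4], [7, 6],
          [1, 1], [3, 4], [8, 8], [6, 5], [2, 3]]
print(round(icc1(scores), 2))  # → 0.95
```

Values near the reported 0.75–0.88 range would similarly indicate good agreement under conventional ICC benchmarks.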
Child Noncompliance

Child noncompliance was coded when a child failed to comply with a parental command or failed to respond to a parental request for information (i.e., 1 = noncompliance, 0 = compliance). If the parent did not give a command or ask a question that required a response during an interval, the child’s behavior was coded as missing.

Parent Behavior

Parent negative talk was coded when a parent made hostile or critical comments directed toward their child (e.g., “You’re doing that wrong”), negative commands (e.g., “Stop doing that!”), or sarcastic and condescending remarks (e.g., “You think you’re so clever, don’t you?”). Parent praise was coded when a parent positively appraised their child’s behavior, an attribute of the child, or a product created by the child (e.g., “You’re a good builder”).

Parent-Reported Child Externalizing Behavior Problems

Parents rated child behavior problems using the 113-item Child Behavior Checklist (CBCL; Achenbach & Rescorla, 2001). Items were rated on a 3-point scale (0 = Not True, 1 = Somewhat or Sometimes True, 2 = Very True or Often True) based on the last six months. The validity and reliability of the syndrome and DSM-oriented scales are well established (Achenbach & Rescorla, 2001; Achenbach et al., 2003). As suggested by the scale developers to maximize variance in key variables (Achenbach & Rescorla, 2001), all analyses employed total CBCL subscale scores. Child disruptive behavior problems were estimated from the 35-item CBCL broadband externalizing problems subscale, which includes aggressive and rule-breaking behaviors (α = 0.92). Child ADHD symptoms were estimated from the 7-item DSM-oriented Attention Deficit/Hyperactivity Problems clinical scale, which consists of the seven items most consistent with DSM inattention and hyperactivity-impulsivity (α = 0.88).
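The noncompliance coding rule above (noncompliance is defined only in epochs containing a parental command or question; otherwise the epoch is missing) can be sketched as a small helper. The epoch representation is a hypothetical simplification for illustration, not the DPICS coding software.

```python
def code_noncompliance(command_given, complied):
    """Return 1 (noncompliance), 0 (compliance), or None (no codable command)."""
    if not command_given:
        return None  # no parental command/question this epoch -> coded missing
    return 0 if complied else 1

# Three hypothetical 10-s epochs: (was a command given?, did the child comply?)
epochs = [(True, False), (False, False), (True, True)]
print([code_noncompliance(c, k) for c, k in epochs])  # → [1, None, 0]
```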
Results

Preliminary Analyses

We observed one outlier (> 3 SD from the mean) on child disruptive behavior problems. Because results did not change with or without its inclusion, the outlier was not excluded from the dataset, and results are based on all available data. Skewness and kurtosis of all continuous variables (child disruptive behavior problems, child ADHD symptoms, parent and child age) met assumptions of normality (Brown, 2006). Table 2 presents the overall prevalence of observed child noncompliance, parent negative talk, and parent praise, per task, and results of within-dyad tetrachoric correlations. Given the null correlations among the underlying processes giving rise to these behaviors, as well as the large sample size required to reliably estimate random-effects covariances (Schultzberg & Muthén, 2018), covariances between random effects were not included in primary analyses. Based on bivariate Pearson correlations, independent-samples t-tests, and one-way ANOVAs, children’s disruptive problems and ADHD symptoms were unrelated to the potential covariates: whether the child lived with siblings; parent and child sex, age, and race-ethnicity; and parent-reported family income (all p’s > 0.05). Because they were not related to values of, or missingness on, primary study variables, these potential between-level covariates were not included in primary analyses. Children’s disruptive problems and ADHD symptoms were correlated, r(137) = 0.718, p < 0.001, so child ADHD symptoms and disruptive behavior problems were evaluated in separate models due to multicollinearity concerns.

Primary Analyses

We first estimated the average within-dyad processes, in each task, with three DSEMs. These models allowed within-dyad dynamics to differ across families but did not include between-dyad predictors. Model-derived estimates of within-dyad intercepts and regression path intercepts, for each task, are shown in Table 3.
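The within-dyad tetrachoric correlations were presumably estimated with standard statistical software. As a rough illustration of the idea, the classic cosine-pi approximation below recovers a tetrachoric-like correlation from a 2×2 table of dichotomized epochs; the counts are hypothetical.

```python
import math

def tetrachoric_approx(a, b, c, d):
    """Cosine-pi approximation to the tetrachoric correlation
    for a 2x2 table [[a, b], [c, d]] of joint 0/1 counts."""
    if b == 0 or c == 0:   # degenerate table: perfect positive association
        return 1.0
    if a == 0 or d == 0:   # degenerate table: perfect negative association
        return -1.0
    odds = (a * d) / (b * c)
    return math.cos(math.pi / (1 + math.sqrt(odds)))

# Hypothetical cross-tab of parent negative talk (rows) by child
# noncompliance (columns) across 10-s epochs
print(round(tetrachoric_approx(40, 10, 10, 40), 3))  # → 0.809
```

Full maximum-likelihood tetrachoric estimation integrates a bivariate normal over the observed thresholds; the approximation above is only a quick heuristic.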
Supplementary Table 1 presents estimated within-dyad probabilities of child and parent behavior, per task. We then conducted two sets of DSEMs to evaluate, per task, between-dyad differences in within-dyad processes, based on (a) child disruptive behavior problems and (b) ADHD symptoms. Our conceptual model is shown in Fig. 1. Non-null between-dyad effects are described below. Estimates of between-dyad effects of child externalizing behavior problems and ADHD symptoms on within-dyad processes, for child- and parent-led play, are in Supplementary Tables 4 and 5; estimates of between-dyad effects for the clean-up task are in Table 4.

Aim 1. Average Within-Person Processes

Within-Child Carryover

Consistent with hypothesis 1a, there was non-null positive within-child carryover in noncompliance during child- and parent-led play, such that during these tasks, changes in child noncompliance were likely to persist from one epoch to the next (Table 3). During child-led play, the unconditional probability of child noncompliance (i.e., when the child was at their trait-like level of noncompliance at time t-1) was 11.48%, but this increased to 19.95% when the child was noncompliant in the prior epoch. During parent-led play, the unconditional probability of child noncompliance was 10.85% but increased to 13.78% if the child exhibited noncompliance in the prior epoch. Contrary to expectations, there was null carryover in child noncompliance during clean-up (Table 3).

Within-Dyad Cross-Lagged Processes: Parent Negative Talk and Child Noncompliance

Contrary to hypothesis 1b, during child-led play, parent negative talk negatively predicted their child’s subsequent noncompliance: if the parent exhibited negative talk in the prior epoch, the estimated probability of child noncompliance decreased from 11.48% to 9.22%. However, consistent with expectations, child noncompliance positively predicted subsequent parent negative talk.
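Because the model uses a probit link, the reported probabilities imply effects on the z-score scale: the unconditional probability is Φ(intercept), and the carryover effect shifts that z-score. The sketch below back-computes the implied shift from the child-led play figures; note that the actual DSEM coefficients apply to latent-centered lagged predictors, so this is only an illustrative translation, using a stdlib-only normal CDF and a bisection inverse.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def phi_inv(p, lo=-8.0, hi=8.0):
    """Inverse standard normal CDF by bisection."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_uncond, p_after = 0.1148, 0.1995        # child-led play carryover (from text)
z0 = phi_inv(p_uncond)                     # implied probit intercept
shift = phi_inv(p_after) - z0              # implied z-scale shift from prior noncompliance
print(round(z0, 2), round(shift, 2))       # intercept near -1.20, shift near 0.36
print(round(phi(z0 + shift), 4))           # → 0.1995
```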
During child-led play, the unconditional probability of parent negative talk was 1.94%, but this increased to 2.50% when the child was noncompliant in the prior epoch. Simply put, during child-led play, child noncompliance was less likely to occur following parent negative talk, but parent negative talk was more likely to occur following child noncompliance. Contrary to our hypotheses, during parent-led play and clean-up, parent negative talk did not predict subsequent child noncompliance, and child noncompliance did not predict parents’ subsequent negative talk (Table 3).

Within-Dyad Cross-Lagged Processes: Parent Praise and Child Noncompliance

Contrary to hypothesis 1c, on average, during child-led play and clean-up, parent praise did not predict subsequent child noncompliance, and child noncompliance did not predict subsequent parent praise (Table 3). However, consistent with hypotheses, during parent-led play, parent praise negatively predicted subsequent child noncompliance, and child noncompliance negatively predicted subsequent parent praise (Table 3). If the parent praised the child in the prior epoch, the estimated probability of child noncompliance decreased from 10.85% to 7.57%; if the child was noncompliant in the prior epoch, the estimated probability of parent praise decreased from 5.47% to 4.27%. In other words, during parent-led play, child noncompliance was less likely to occur following parent praise, and parent praise was less likely to occur following child noncompliance.

Aim 2. Between-Dyad Differences in Within-Dyad Processes

Overall, estimates of the within-person and within-dyad intercepts and regression paths from models that examined between-dyad differences based on child disruptive problems (Supplementary Table 2) or child ADHD symptoms (Supplementary Table 3) were consistent with the base models that did not contain between-level predictors (Table 3).
Results suggested that child disruptive behavior problems and ADHD symptoms accounted for differences in the trait-like component of parent negative talk during child-led play (Supplementary Table 4). During child-led play, parents of children with low disruptive behavior problems (1 SD below the mean) had a 1.66% unconditional probability of displaying negative talk, whereas parents of children with elevated disruptive behavior problems (1 SD above the mean) had a 3.48% unconditional probability. Similarly, parents of children with fewer ADHD symptoms (1 SD below the mean) had a 1.96% unconditional probability of displaying negative talk during child-led play, whereas parents of children with elevated ADHD symptoms (1 SD above the mean) had a 3.45% unconditional probability. Results also revealed mean differences by ADHD symptoms in the trait-like component of child noncompliance during clean-up (Table 4): children with fewer ADHD symptoms (1 SD below the mean) had a 7.64% unconditional probability of displaying noncompliance, whereas children with elevated ADHD symptoms (1 SD above the mean) had a 15.34% unconditional probability.

Between-Dyad Differences in Within-Child Carryover in Noncompliance

Child ADHD symptoms predicted less carryover in child noncompliance, during clean-up only (Table 4). Only children with below-average (1 SD below the mean) levels of ADHD symptoms showed carryover, or persistence, in noncompliance during clean-up, Est = 0.38, 95% credible interval: [0.11, 0.65]. These children had a 7.64% unconditional probability of being noncompliant; if they were noncompliant in the prior epoch, their probability of being noncompliant in the subsequent epoch increased to 12.10%.
Between-Dyad Differences in Within-Dyad Cross-Lagged Processes

During clean-up, child ADHD symptoms predicted within-dyad relations between parent negative talk and child noncompliance (Table 4). ADHD symptoms positively predicted the effect of parent negative talk on subsequent child noncompliance, and negatively predicted the effect of child noncompliance on subsequent parent negative talk (Table 4). Post-hoc probing at ±1 SD of mean ADHD symptoms revealed that the effect of parent negative talk on the child’s subsequent noncompliance was non-null only when children had fewer ADHD symptoms, Est = -0.46, 95% credible interval: [-0.95, -0.12]. That is, parent negative talk reduced the likelihood of subsequent child noncompliance only for children with fewer ADHD symptoms. Children with fewer ADHD symptoms (1 SD below the mean) had a 7.64% unconditional probability of noncompliance; if their parents displayed negative talk in the prior epoch, their probability of subsequent noncompliance decreased to 4.03%, a non-null difference (Supplementary Table 6). The effect of child noncompliance on parents’ negative talk was null at both low and high levels of ADHD symptoms.
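Probing at ±1 SD follows a simple pattern: the conditional within-dyad effect equals the average effect plus the moderation coefficient times the (mean-centered) moderator value. The coefficients below are hypothetical, chosen only so that the -1 SD case reproduces the Est = -0.46 reported above; they are not taken from Table 4.

```python
def conditional_effect(b_avg, b_mod, moderator_value):
    """Within-dyad effect conditional on a between-dyad moderator
    (moderator centered at its sample mean)."""
    return b_avg + b_mod * moderator_value

# Hypothetical average cross-lag effect and its moderation by ADHD symptoms
b_avg, b_mod, sd_adhd = -0.20, 0.26, 1.0
low = conditional_effect(b_avg, b_mod, -sd_adhd)   # 1 SD below the mean
high = conditional_effect(b_avg, b_mod, +sd_adhd)  # 1 SD above the mean
print(round(low, 2), round(high, 2))               # → -0.46 0.06
```

An effect is then judged non-null at a given moderator value when its 95% credible interval excludes zero, as in the text.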
Discussion

The present study aimed to improve understanding of the real-time antecedents and consequences of child noncompliance, a behavior problem theorized to arise from bidirectional relational processes (Granic & Patterson, 2006; Kalb & Loeber, 2003; Owen et al., 2012). Leveraging intensive longitudinal data on child noncompliance, collected via three unique task demands, and a novel statistical modeling approach, we evaluated theory-derived hypotheses regarding within-child carryover and within-dyad cross-lagged processes between parent behavior and child noncompliance during parent–child interactions. Given that parents’ socialization goals and the effects of parent socialization behaviors differ across conditions (e.g., child behavior, settings; Eisenberg et al., 1998; Kalb & Loeber, 2003), we evaluated within-dyad dynamics in three tasks: child-led play, parent-led play, and clean-up. Results offered mixed support for our hypotheses. School-aged children’s noncompliance both predicted and was predicted by parent behavior, but the specific antecedents and consequences of child noncompliance varied. Further, within-dyad processes, especially during demanding tasks, may differ between families: during the clean-up task, only children with fewer ADHD symptoms, relative to their counterparts with elevated ADHD symptoms, exhibited a predictable pattern of noncompliance influenced by prior noncompliance and parental negative talk.
Conclusion

To prevent recurring child externalizing behavior problems, we must elucidate both the antecedents and the consequences of child noncompliance as it occurs during parent–child interactions. The current study suggests that parents’ behavior precipitates the onset of child noncompliance. Yet the specific parental antecedents of child noncompliance differ depending on the context, highlighting ways parents can adjust how they give commands and respond to child behavior to promote children’s well-regulated, compliant behavior. Replication of the current results would suggest that, among children with elevated ADHD symptoms, changes in negative child and parent behaviors become untethered from each other. Future research across multiple timescales, including developmental time, is needed to uncover when and why these processes emerge, and their implications for developing trajectories of externalizing disorder.
Given that noncompliance is the most common externalizing problem during middle childhood and reliably predicts significant conduct problems, innovations in elucidating its etiology are sorely needed. Evaluation of in-the-moment antecedents and consequences of child noncompliance improves traction on this goal, given that multiple theories contend that child noncompliance and parent behavior mutually influence each other through negative reciprocation as well as contingent praise processes. Among a sample of 140 families (child age: 6–10 years; 32.1% female), the present study capitalized on intensive repeated measures of observed child noncompliance and parent negative talk and praise, objectively coded during three unique tasks. We employed dynamic structural equation modeling to evaluate within-dyad parent–child behavioral dynamics and between-dyad differences therein. Results provided mixed support for hypotheses and suggested that the antecedents and consequences of child noncompliance differed according to task demands and child ADHD symptoms. Contrary to models of coercive cycles, during child-led play, parent negative talk was more likely following prior child noncompliance, but child noncompliance was less likely following prior parent negative talk. As expected, during parent-led play, parent praise was less likely following prior child noncompliance, which in turn was less likely following prior parent praise. Relative to youth with fewer symptoms, for children with elevated ADHD symptoms during a challenging clean-up task, child noncompliance was less stable and less contingent on prior parent negative talk. Results are discussed in terms of the implications of real-time parent–child interactions for the typical and atypical development of externalizing problems.

Supplementary Information

The online version contains supplementary material available at 10.1007/s10802-023-01045-0.
Children’s noncompliance with parental requests represents the most common externalizing problem for which parents seek child mental health services (Kalb & Loeber, 2003; Owen et al., 2012). Temporally, parents’ immediate response to their child’s noncompliance, including escalating negativity or withdrawal of praise, may reinforce or momentarily resolve behavior problems, thus implicating unique within-family processes in the development of child externalizing disorders, including attention-deficit/hyperactivity disorder (ADHD) and disruptive behavior disorders (Granic & Patterson, 2006). To advance etiological theories of youth externalizing psychopathology, elucidating the temporal course of youth noncompliance during typical parent–child interactions is essential. Capitalizing on intensive repeated measures of observed child noncompliance and parent behavior across diverse task demands, the present study employed dynamic methods to rigorously characterize within-dyad parent–child behavioral dynamics, including the extent to which children’s noncompliance is both an antecedent of and a response to changes in their own parent’s behavior. Transactional models of development and dynamic systems theories underscore that parent–child interaction processes shape developmental trajectories of youth externalizing disorders (Granic & Patterson, 2006). Applied to coercion theory (Patterson, 2002), a dynamic systems lens underscores that moment-to-moment reciprocation and escalation of aversive behaviors between parent and child eventually culminate in parent capitulation to child noncompliance (Granic & Patterson, 2006). Over time, parent capitulation negatively reinforces child noncompliance, which may entrench a stable pattern of externalizing behavior problems (Granic & Patterson, 2006; Lunkenheimer et al., 2016; Patterson, 2002).
Although coercive processes and other negative parent–child interaction factors (e.g., inconsistent discipline) are featured more centrally in etiological models of externalizing disorders, positive parenting practices also uniquely predict youth externalizing problems (McFayden-Ketchum et al., 1996). For example, according to theories of emotion socialization, when parents contingently respond to children with warmth and support, children learn to correctly anticipate appropriate affective responses and effectively self-regulate, which may promote their persistence with undesirable tasks and resolve conflict, eliciting more positive parent behavior (Morris et al., 2007). Conversely, children’s negativity may be less likely to elicit supportive parenting, and parents’ positive affect or supportiveness may fail to dampen their children’s negativity, when children experience inconsistent caregiving (Granic & Lougheed, 2016; Lougheed et al., 2015).

Within-Family Processes and Child Noncompliance

Although theories of externalizing development have increasingly emphasized the role of reciprocal, within-dyad parent–child dynamics (e.g., Granic & Patterson, 2006), there is limited empirical evidence on the unique within-family processes that govern how child noncompliance unfolds during real-time interactions. Studies of parent–child interactions among youth at risk for externalizing problems typically employ observational methods that rate global parent–child characteristics (e.g., rates of child noncompliance or parent negativity across entire tasks; Li & Lee, 2012; Tung et al., 2015). However, global coding precludes strong tests of the within-dyad behavioral contingencies occurring during parent–child interactions that influence children’s externalizing problems (e.g., noncompliance) in the moment.
To advance research on the parent–child behavioral processes implicated in child externalizing problems, elucidation of moment-to-moment, within-dyad processes during salient tasks must be prioritized. Newer dynamic systems approaches have generated empirical evidence that temporally contingent changes in parent behavior in response to their child’s negativity/noncompliance are implicated in risky trajectories toward externalizing disorders (Granic & Lougheed, 2016; Granic & Patterson, 2006; Lougheed et al., 2015). Compared to typical parents, mothers who endorsed more hostility and reported elevated child externalizing problems were 35% more likely to change their behavior in response to their preschooler’s off-task behavior during a challenging puzzle task (Lunkenheimer et al., 2016). Parents who reported lower self-regulation were more likely to transition into negative parenting (i.e., negative directives or disengagement) specifically in response to toddler noncompliance during clean-up (Geeraerts et al., 2021). Dynamic models have also been applied to positive parent–child processes that may promote child compliance, even during challenging tasks, although the empirical evidence is mixed. Stronger contingencies between maternal autonomy support (e.g., guiding a child through tasks, proactively structuring engagement) and child compliance/persistence were related to better child behavioral regulation and fewer child behavior problems (Lunkenheimer et al., 2013, 2017). However, other studies have not observed contingencies between maternal autonomy support and preschool child compliance in negative and neutral contexts (Lobo & Lunkenheimer, 2020). The extant literature thus faces methodological limitations that leave unanswered questions about for whom, and in what contexts, the reciprocal dyadic processes governing changes in child noncompliance unfold.
The use of methods that preclude the identification of specific within-dyad antecedents and consequences of changes in child externalizing problems (e.g., noncompliance) limits our understanding of how child noncompliance is organized within the dyad. Despite the assertion that parent–child dynamics are “a function of reciprocal causality unfolding in real time” (Granic & Patterson, 2006, p. 106), most dynamic systems studies focus on modeling survival processes (e.g., time to event; Granic & Lougheed, 2016; Lougheed et al., 2015), rather than bidirectional relations between dynamic fluctuations in parent behavior and child noncompliance that unfold over the course of an interaction. Models that account for multiple parent behaviors (e.g., negativity or praise) are also needed to comprehensively elucidate dynamic processes involving momentary child externalizing behavior. Task demands, too, must be explicitly considered, given that the effects of dyadic affective contingencies may vary according to demands (Lobo & Lunkenheimer, 2020). Further, evaluations of parent–child interactions often aggregate across interactions; failure to disentangle between-dyad differences in interaction quality from within-dyad variability across tasks may bias results (Roesch et al., 2010). Last, previous dynamic systems work on externalizing behavior focuses almost exclusively on preschoolers; however, parent–child relationships in middle childhood predict child externalizing problems (Pinquart, 2017), which increase prior to adolescence (Loeber & Burke, 2011), making middle childhood a critical period in which to evaluate parent–child dynamic processes before they are consolidated and catalyze externalizing disorders.

Current Study

The current study sought to address key knowledge gaps concerning the within-dyad processes governing children’s externalizing behaviors as they unfold during naturalistic parent–child interactions.
We evaluated within-dyad, moment-to-moment fluctuations in observed child noncompliance and parent negativity and praise, derived from contiguous 10-s intervals across three discrete, salient, and ecologically valid tasks, among school-aged children with and without ADHD. Using a novel methodological approach, dynamic structural equation modeling (DSEM; Asparouhov et al., 2018), we rigorously examined bidirectional, within-dyad, moment-to-moment augmentation or blunting of one dyad member’s behavior by their partner’s prior behavior, while simultaneously accounting for the frequency of and carryover (or stability) in an individual’s behavior. Consistent with dynamic systems theory and prior evidence of carryover in child noncompliance (Williams & Forehand, 1984), we hypothesized that, on average, (1a) there would be positive carryover of child noncompliance from one moment to the next (10 s later). Drawing on prior theoretical and empirical work on coercion (Granic & Patterson, 2006; Patterson, 2002) and positive parenting (e.g., Lunkenheimer et al., 2017; Owen et al., 2012), we expected that, on average, there would be within-dyad cross-lagged processes, such that (1b) there would be amplifying effects between parent negative talk and child noncompliance, and (1c) there would be dampening effects between parent praise and child noncompliance from one moment to the next, even after adjusting for within-person frequency and carryover. Although child noncompliance influences and is influenced by parent behavior (e.g., negativity or praise), the magnitude of these influences may not be equivalent. Disambiguating the lead-lag structure of bidirectional cross-lagged processes allows us to separately identify the antecedents and consequences of dyadic processes involving child noncompliance. We evaluated these hypotheses separately for each task, as parent socialization goals and the effects of socialization behaviors vary based on situational demands.
For example, parents may seek to encourage appropriate regulatory responses in emotionally and cognitively demanding situations (e.g., shifting from playing with toys to cleaning up) while scaffolding problem-solving approaches to unexpected challenges in other situations (e.g., repairing a toy that breaks during play; Eisenberg et al., 1998). Given the lack of extant research in this area, we did not have a priori hypotheses about context-specific processes. Second, because within-family processes are concurrently and prospectively associated with youth externalizing disorders (Lunkenheimer et al., 2015, 2017), we tested between-dyad differences in these within-dyad processes among a sample of children with and without ADHD symptoms. Consistent with the vast literature demonstrating that the traits underlying major child psychiatric disorders are continuous in nature rather than qualitatively distinct shifts from typical functioning (Beauchaine et al., 2018), and with specific evidence that ADHD symptoms are associated with psychopathology symptoms across the general population and among individuals both with and without a diagnosis of ADHD (Orm et al., 2022), we adopted a dimensional approach to evaluating between-dyad differences in the within-dyad antecedents and consequences of child noncompliance. Given the elevated risk for noncompliance among children with elevated disruptive behavior problems and ADHD symptoms (Kalb & Loeber, 2003), we evaluated whether (2) within-child carryover in noncompliance and within-dyad cross-lagged processes involving child noncompliance differed according to child externalizing behavior problems or ADHD symptoms.

Data Analysis Plan

Three sets of dynamic structural equation models (DSEM; Asparouhov et al., 2018) evaluated within-dyad processes, which were allowed to vary between dyads, during the child-led play, parent-led play, and clean-up tasks.
Primary analyses were conducted using Mplus (v.8.4; Muthén & Muthén, 1998–2017), which uses Bayesian Markov chain Monte Carlo (MCMC) estimation with a Gibbs sampler. We used two unthinned chains, each running for a maximum of 100,000 iterations, to ensure that estimation was stable. We allowed the algorithm to terminate early if the potential scale reduction factor dropped below 1.05 (Gelman & Rubin, 1992). We used the default diffuse prior distributions in Mplus, which was reasonable given the sample size. Posterior distributions were summarized with the median. In Bayesian analyses, missing data are treated as unknown parameters, which implies that missing values are sampled from their conditional posterior; MCMC estimation yields consistent estimates when data are missing at random (Hamaker et al., 2018). Binary variables are accommodated in DSEM through the probit link function (Asparouhov & Muthén, 2019). Lagged variables of child noncompliance and parent negative talk and praise were created in Mplus. The continuous processes underlying the lagged (lag-1) binary predictors were latent-centered to yield pure within effects (Hamaker & Grasman, 2015). As a result, the intercept is an unconditional probability that refers to when a person is at their typical, trait-like value for the underlying process of the predictor at time t-1 (hereafter referred to as the behavior, or specifically, noncompliance, negative talk, or praise). At the within-dyad level, the behavior is mean-reverting, such that at any moment a person may exhibit a state-like fluctuation that is either higher or lower than their trait level of the behavior. State-like fluctuations in child and parent behavior were predicted by fluctuations in their own prior behavior and in each other’s prior behavior during the immediately preceding 10-s epoch. That is, all autoregressive and cross-lagged paths in the within-level model were estimated.
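The convergence criterion above, a potential scale reduction factor (PSRF) below 1.05, compares between-chain and within-chain variance (Gelman & Rubin, 1992). A minimal two-chain sketch is below, using simulated draws rather than the study's posterior samples.

```python
import random

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor for m chains of length n."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n * sum((mu - grand) ** 2 for mu in means) / (m - 1)     # between-chain variance
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m                  # within-chain variance
    var_hat = (n - 1) / n * w + b / n                             # pooled variance estimate
    return (var_hat / w) ** 0.5

# Two simulated chains drawn from the same distribution should converge
random.seed(0)
chains = [[random.gauss(0, 1) for _ in range(5000)] for _ in range(2)]
print(psrf(chains) < 1.05)  # converged by the study's criterion → True
```

Values near 1 indicate the chains are sampling the same distribution; values above 1.05 would warrant longer runs.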
Probit models relate the predictors to the outcome through the standard normal cumulative distribution function; thus, regression coefficients correspond to changes in the z-scores associated with the predicted probability. Random effects were placed on the intercepts and slopes of children’s and parents’ behavior at the within level, which allowed these effects to differ at the between level. To evaluate Aim 2, models included between-level predictors (grand-mean-centered child ADHD symptoms or disruptive behavior problems) of the within-level intercepts, autoregressive (i.e., carryover) effects, and cross-lagged effects between parents’ and their child’s behavior. No between-level predictors of cross-lagged effects between parents’ behaviors were specified. Unstandardized estimates are presented for all models. Similar to a frequentist framework, effects were considered non-null if the 95% credible intervals (CIs) excluded zero. Between-level child predictor effects on within-level relations were probed at the mean and at ±1 SD of the mean on the predictor using a multilevel moderation web utility (Preacher et al., 2006).

Average Within-Dyad Processes Involving Child Noncompliance

Germane to dynamic models of parent–child coercion is the initial negative reciprocation between child noncompliance and parent negativity (Granic & Patterson, 2006). Consistent with experimental evidence that aversive child behavior elicits adult negative behavior during lab-based interactions (e.g., Wymbs et al., 2015), and extending prior dynamic systems research on high-risk mothers of preschool-aged children (Geeraerts et al., 2021; Lunkenheimer et al., 2016), our results suggested that child noncompliance evoked subsequent parent negative talk during the child-led play task.
However, contrary to our theory-derived expectation of bidirectional amplification of negative dyadic behavior, parent negative talk was not an antecedent of child noncompliance, and in fact, predicted a lower likelihood of child noncompliance. Among typical families participating in a child-led play task, brief moments of parent negativity may quickly redirect children and return the dyad to fluent, harmonious play. Reprimands and negative nonverbal parent responses, especially when paired with a command, have been associated with child noncompliance in both naturalistic and experimental studies (Owen et al., 2012). During child-led play, parents often avoid giving direct commands (e.g., “Don’t get the blocks out yet”), and therefore may use negative talk to prime their commands or as a vague “beta” command (e.g., commands that lack clear directions regarding desired behavior change; “Stop being a pest”), which may help promote child compliance in this context. The antecedents of child noncompliance also varied according to task demands. Whereas parent negative talk arrested child noncompliance during child-led play, bidirectional relations where parent praise dampened child noncompliance, and vice versa, emerged during parent-led play. When parents lead playful interactions, they are often focused on directing their child’s play in ways that promote learning and intentionally socialize children’s decision-making and problem-solving, including modeling these behaviors (Zimmerman & Schunk, 2011). Following the child’s lead (e.g., monitoring, encouraging, or acknowledging the child’s effort) may be one way that parents scaffold children’s executive functioning and self-regulation (Obradović et al., 2021) while being less directive of children’s behaviors. Further, withdrawal of praise in response to noncompliance may potentiate the reinforcement value of parent praise in this context, underscoring the importance of contingent in-the-moment use of praise.
In contrast to the mixed empirical support for the benefits of maternal autonomy support during challenging tasks (Lobo & Lunkenheimer, 2020; Lunkenheimer et al., 2017; Owen et al., 2012), contingent praise may be one way for parents to support on-task child behavior during structured play.

Implications of Intensive Longitudinal Data

Antecedents and consequences of child noncompliance differed depending on task demands, and the correlates of within-family behavioral dynamics with respect to externalizing psychopathology were also task-specific. When parents and children were faced with more challenging demands during clean-up, between-dyad differences based on child ADHD symptoms in within-dyad processes emerged. Relative to child- and parent-led play, clean-up requires more effortful control and attentional resources, as children must inhibit their responses (e.g., desire to continue playing) to effectively transition from playing to cleaning up while following parent instructions. After accounting for ADHD-related differences in initial child noncompliance, ADHD symptoms affected within-person stability and within-dyad bidirectional relations between parent negative talk and child noncompliance. For children with few ADHD symptoms, there were clear intrapersonal and interpersonal antecedents of child noncompliance, such that prior child noncompliance predicted a greater likelihood of subsequent child noncompliance whereas prior parent negative talk predicted a lower likelihood of subsequent child noncompliance. Due to executive functioning deficits (Barkley, 1997) and low frustration tolerance (Seymour & Miller, 2017), children with ADHD may find clean-up tasks taxing for self-regulation, which may be reflected in unstable intrapersonal processes. In children with elevated ADHD symptoms, during clean-up, parent negative talk did not predict subsequent child noncompliance.
Children with elevated ADHD may struggle to understand what is asked of them, and thus may be less equipped to detect command primes or “beta” commands implied in parent negative talk (Kalb & Loeber, 2003). Also, children with elevated ADHD symptoms, who experience repeated negative interactions with parents (McKee et al., 2004; Podolski & Nigg, 2001), may find parent negative talk more aversive and be less willing to comply with “beta” commands or command primes. By middle childhood, externalizing symptoms may reflect the lack of coordination between real-time changes in parent and child behavior, rather than negative moment-to-moment dyadic processes. Across development, initial negative reciprocation among at-risk families may give rise to stable, crystallized dyadic patterns (Granic & Patterson, 2006), where both parent and child are pulled toward recurring states of noncompliance and negativity, but each individual’s behavior is no longer contingent on the other’s immediately prior behavior. By examining within-dyad processes, using novel methods for disaggregating within-dyad intrapersonal and interpersonal processes from trait-like differences in overall noncompliance, we uncovered alterations in within-child carryover and between-dyad negative dynamics associated with ADHD symptoms. Future developmentally-informed work with repeated assessments of parent–child interactions must evaluate how externalizing problems are influenced by changes in real-time dyadic behavioral processes.

Strengths, Limitations, and Future Directions

The present study benefited from strengths including intensive longitudinal data on child noncompliance and parent behaviors (negative talk and praise), collected in 10-s epochs, to elucidate within-dyad cross-lagged processes, accounting for the frequency and intrapersonal carryover in individual behavior. In contrast to most dynamic systems research that employed a unidirectional approach (see Lunkenheimer et al.
(2017) for a key exception) or focused on dyadic-level processes (Granic & Patterson, 2006), evaluating the lead-lag structure of these bidirectional dyadic relations rigorously illuminated the antecedents and consequences of child noncompliance during parent–child interactions. Our sample of socioeconomically and ethnically diverse school-aged children addressed a critical gap in the literature, which has typically focused on preschool youth, when defiance is common, whereas noncompliance later in development (e.g., during middle childhood) can frequently be impairing and necessitate mental health services (Kalb & Loeber, 2003; Owen et al., 2012). Evaluation of within-dyad processes in three contexts also uncovered task-specific interpersonal antecedents and consequences of child noncompliance, depending on the “leader” of playful interactions; when faced with greater task demands during clean-up, the intrapersonal and interpersonal antecedents of child noncompliance also differed between families depending on child ADHD symptoms. These results must also be understood in the context of study limitations and point to needed future directions. Results may not generalize to other interaction contexts, diverse caregivers or other authority figures (e.g., teachers), families with greater contextual disadvantage, different timescales (e.g., second-by-second, across development), or different developmental periods. Whereas child ADHD symptoms may be particularly salient to tasks with high cognitive demands, disruptive behavior problems may contribute to noncompliance in more relational contexts (e.g., navigating conflict; Garcia et al., 2019). Examination of co-occurring internalizing problems (e.g., anxiety) may also shed light on which children are most susceptible to coercive dynamics (Granic & Lougheed, 2016). Families with maltreatment histories may also exhibit unique behavioral patterns in structured observational tasks (Zumbach et al., 2021).
Our analyses examined correlates of within-dyad dynamics among children with varying ADHD symptom levels, and future analyses with larger sample sizes are needed to examine whether these relations differ based on clinical status. Further, results were specific to the timescale on which behavior was assessed in this study. Adjusting for intrapersonal carryover allowed us to account for possible continuity of behavior from one epoch to the next. However, our modeling approach did not allow us to pinpoint when children transitioned between compliant and noncompliant states, which may be achieved with shorter epoch lengths or modeling approaches such as multilevel survival analysis (Stoolmiller & Snyder, 2014). From a dynamic systems perspective (Granic & Patterson, 2006), future research is needed to evaluate whether dyadic processes have self-similar organization at different timescales (e.g., whether lack of contingency on moment-to-moment timescales leads to changes in parenting practices and worsening child behavior problems across development). A comprehensive examination of the antecedents and consequences of child noncompliance requires assessment of parent and child behaviors, which range in frequency and intensity, as well as of experiential aspects of parent–child processes. Although the present study focused on specific parent behaviors, the intensity of these behaviors may influence how within-dyad processes unfold in specific contexts. For example, reciprocated dyadic negativity may require more intense or harsh negative verbalizations (Owen et al., 2012) than observed in the present study. By focusing on child noncompliance (i.e., a response to a parental request), we were unable to examine child-led processes that militate against externalizing problems.
For example, children’s receptive and enthusiastic compliant or on-task behavior may beget positive parenting (Kochanska et al., 2015; Lunkenheimer et al., 2017) and disrupt processes that give rise to or amplify child externalizing problems, during real-time interactions or across development. Last, we employed a well-validated coding system of child and parent behavior, but parents and children may not have experienced their own and each other’s behavior as it was coded. One possibility for null effects of parent praise during child-led play and clean-up is that praise may not have been experienced as positive, but rather as controlling (Owen et al., 2012). Parents and children may have also failed to perceive behaviors noted by trained coders, as their attentional focus may be affected by task demands, setting, and the child’s clinical presentation. Studies capturing individual experience are needed to provide a nuanced understanding of dyadic processes involved in the natural ebb and flow of child noncompliance in daily life.

Supplementary Information

Below is the link to the electronic supplementary material.
Author Contributions

Conceptualization: Somers & Lee; Data Curation: Stiles & Shen; Formal Analysis: Somers; Funding Acquisition & Investigation: Lee; Writing – original draft: Somers, Stiles, MacNaughton, & Schiff; Writing – review & editing: Somers, Stiles, MacNaughton, Schiff, Shen, & Lee.

Funding

This work was partially supported by the Consortium of Neuropsychiatric Phenomics (CNP) (NIH Roadmap for Medical Research grant UL1-DE019580, RL1DA024853) and NIH Grant 1R03AA020186-01 to S.S. Lee. J.A. Somers was supported as a postdoctoral fellow on NIMH T321575.

Data and Code Availability

The data and code that support the findings of this study are available from the corresponding author upon reasonable request.

Compliance with Ethical Standards

Ethics Approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the University of California, Los Angeles Institutional Review Board and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed Consent: Informed consent was obtained from all individual participants included in the study. Competing Interests: The authors have no known competing interests to disclose.
CC BY
no
2024-01-15 23:42:01
Res Child Adolesc Psychopathol. 2024 Mar 14; 52(1):7-19
oa_package/33/74/PMC10542848.tar.gz
PMC10561706
37818117
INTRODUCTION

The massive consumption of fossil fuels all over the world has led to excessive CO2 emissions into the atmosphere, which has caused serious environmental issues and energy crises [1–3]. Electrochemical CO2 reduction (ECR) by renewable electric energy offers a promising strategy to convert CO2 into useful energy substances, such as CO, CH3OH, HCOOH, CH4 and C2H4, which simultaneously reduces CO2 and produces useful energy fuels [4–8]. In recent years, a variety of ECR catalysts have been developed for CO2 reduction, and high efficiencies have been achieved, showing great prospects for practical use. However, most studies have only focused on the ECR half reaction at the cathode when evaluating catalytic performance, while neglecting the relevant oxidation half reaction on the anode side, thus causing a great waste of energy [9, 10]. In most reported studies, the conventional way to treat the anode reaction has been to couple it with the water oxidation reaction (oxygen evolution reaction, OER) by using a carbon or platinum rod as the anode [11, 12]. Unfortunately, this OER process incurs a large overpotential and needs a high energy input due to the slow kinetics and unfavorable thermodynamics of the H2O oxidation reaction, thus leading to lower energy efficiency for the overall catalytic reaction. Besides, the O2 produced is relatively less value-added compared to many industrial chemicals [13, 14]. Therefore, there is an urgent need to develop an oxidation reaction with high energy efficiency to replace the OER process. Applying the anodic oxidation process to the oxidative synthesis of organic molecules, such as the methanol oxidation reaction (MOR) to produce HCOOH, can effectively improve energy efficiency due to its low theoretical overpotential, and is also in line with the demands of green chemistry [15–17].
However, it remains a challenge to enable these two electrocatalytic reactions to cooperate effectively. The main barrier in this field is the lack of highly active multifunctional electrocatalysts to fulfill both processes. Theoretically, electrocatalysts for ECR coupled with MOR should satisfy the following requirements: (1) highly active and accessible catalytic sites for the reduction or oxidation reaction [18]; (2) affinity for, and adsorption activation of, substrates such as CO2 or methanol [19]; (3) preferable electron and proton transfer ability [20]; (4) high stability during the electrochemical measurements [21]. Until now, many studies have explored the activity of single-functional homogeneous catalysts (such as metal complexes) for ECR or MOR separately, while the problems of recycling and stability remain difficult to solve [22, 23]. Constructing bifunctional heterogeneous catalysts for ECR coupled with MOR can effectively solve the above problems, yet this approach has rarely been studied. In this regard, well-defined models with precise structures are particularly important for studying the structure-function relationships and mechanisms of bifunctional heterogeneous catalysts. Covalent organic frameworks (COFs), with excellent structural designability and high stability, are promising platforms for catalytic reactions [24–27]. Some building blocks of COFs possess appropriate coordination sites, thus making them capable of introducing metal active sites for typical catalysis [28, 29]. Up to the present, COF-based catalysts have been successfully applied for ECR, OER and the oxygen reduction reaction (ORR), among others, which illustrates their great potential for electrocatalysis [30, 31]. However, the precise introduction of multiple active sites with different chemical environments into COFs is still in its infancy, much less their application in electrocatalysis.
Recently, metallophthalocyanine (MPc)- and metalloporphyrin (MPor)-based COFs have been studied for catalytic reactions [32, 33]. Nevertheless, most of these works only focused on the catalytic performance of a single functional component, while the integration of MPc and MPor together into crystalline COFs for bifunctional catalysts remained unexplored. Besides, as one of the most important classes of crystalline COFs, Pc-based COFs possess excellent conductivity, mechanical performance and redox-active properties [34, 35]. However, the traditional solvothermal synthesis of Pc-based COFs inevitably uses toxic organic solvents and catalysts [36–40]. Therefore, it is necessary to develop green and efficient methods to synthesize crystalline Pc-based COFs (Scheme 1). Unterlass et al. demonstrated that highly crystalline all-aromatic polyimides can be synthesized by hydrothermal polymerization using only H2O as solvent [41]. Besides, the alcohol-assisted hydrothermal synthesis developed by Lotsch et al. also confirmed that imide linkages can be obtained without a toxic solvent [42]. On the basis of the above results, we successfully obtained a series of crystalline Pc-based COFs. It is noted that our report is the first synthesis of highly crystalline Pc-based COFs by hydrothermal synthesis in pure water without using a catalyst or toxic solvents, which conforms to green synthetic chemistry. Using this H2O-phase synthesis method, we rationally prepared crystalline NiPc-2HPor COF by condensing the phthalic acid groups of NiPc and the aromatic amine groups of 2HPor through hydrothermal methods (Scheme 1), and further synthesized NiPc-NiPor COF by a post-synthesis coordination reaction. The resulting polyimide-linked COFs (PI-COFs) showed high chemical stability and activity for electrocatalytic MOR coupled with ECR.
The porous NiPc-NiPor COF structure not only plays the role of a metal site support, but also possesses high conductivity and regularity. Besides, the Ni centers in the pockets of MPc and MPor, with their different chemical environments, can act as synergistic active sites, thus greatly enhancing the ECR-MOR coupled catalytic performance. Above all, the synthesized NiPc-MPor COFs combine the features of crystallinity and conductivity, and also have multiple active sites with different chemical environments for ECR and MOR. Among them, the NiPc-NiPor COF shows excellent activity for cathodic ECR (FE CO = 98.12%, j CO = 6.14 mA cm−2) coupled with anodic MOR to HCOOH (FE HCOOH = 93.75%, j HCOOH = 5.81 mA cm−2) in an H-cell at a low cell voltage (2.1 V) and exhibits remarkable long-term stability, which is comparable to most reported ECR-MOR coupled catalysts. In-situ Fourier transform infrared spectroscopy (FT-IR) was used to identify the key intermediates of both ECR and MOR. Furthermore, density functional theory (DFT) calculations demonstrate that the ECR process mainly proceeds on the NiPc unit with the assistance of NiPor, while the MOR process preferentially proceeds on NiPor in concert with NiPc. The synergistic catalytic effect of the combined NiPc and NiPor units contributes to this high catalytic activity. This is the first report of bifunctional MPc-MPor-based COFs for electrocatalytic cathodic ECR coupled simultaneously with anodic MOR, which is also of great significance in the field of bifunctional electrocatalysts.
METHODS

Synthesis of PI-linked metallophthalocyanine-metalloporphyrin COFs

Synthesis of NiPc-2HPor COF: A Pyrex tube measuring 10 × 200 mm (o.d. × length) was charged with NiPc (23.1 mg), 2HPor (16.8 mg) and H2O (2.5 mL). After sonication for about 60 minutes, the tube was flash frozen at 77 K (liquid N2 bath), degassed by three freeze-pump-thaw cycles, refilled with N2 (99.999%) to 1 bar and then flame sealed. After warming to room temperature, the mixture was heated at 230°C and left undisturbed for 48 h. A black precipitate was isolated by filtration in a Buchner funnel and washed with THF and acetone until the filtrate was colorless. The wet sample was transferred to a Soxhlet extractor and washed with THF for 24 hours. Finally, the product was evacuated at 120°C under dynamic vacuum overnight to yield the activated sample (∼21 mg, 57% yield). Synthesis of NiPc-NiPor COF: The NiPc-NiPor COF was synthesized by a post-synthesis method. In detail, NiPc-2HPor COF (15 mg) and Ni(OAc)2·2H2O (50 mg) were added to ethanol (20 mL). After being purged with N2, the mixture was heated and refluxed for 12 h under a N2 atmosphere. Following that, the solution was cooled down to room temperature and filtered. The filter cake was washed thoroughly with water and ethanol to remove free metal ions. The final filter cake was dried at 120°C under dynamic vacuum overnight to give NiPc-NiPor COF (∼14 mg, 82% yield).

Electrochemical measurements

Electrocatalytic ECR coupling MOR experiments: All electrochemical tests were performed in an airtight H-cell (Tianjin Aida Hengsheng Technology, China) in which the cathodic and anodic chambers were separated by a Nafion 117 membrane. A standard two-electrode system was used, with catalyst-modified carbon fiber papers as both the anode and cathode working electrodes; the ECR coupling MOR tests were carried out on an electrochemical workstation (Bio-Logic VSP), with CO2-saturated 0.5 M KHCO3 and 1 M CH3OH in 1 M KOH used as the electrolytes.
A potential range of 1.8 to 2.4 V (cell voltage, step size = 0.1 V) was applied during the ECR coupling MOR test, and the Faradaic efficiency and current density were calculated. The yields of CO and H2 were quantified by gas chromatography (GC-7920, CEAulight, China). The HCOOH yield was quantified by ion chromatography (Ion Chromatography System, Thermo Fisher, China). The working electrode was prepared in the same way as for the ECR tests. The polarization curves were obtained by linear sweep voltammetry (LSV) at a scan rate of 5 mV s−1. Potentials were measured against an Ag/AgCl reference electrode and converted to the reversible hydrogen electrode (RHE) scale based on the RHE calibration. Electrochemical impedance spectroscopy (EIS) measurements were performed on the electrochemical analyzer in a frequency range from 100 kHz to 100 mHz by applying an AC voltage with 10 mV amplitude.
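The Ag/AgCl-to-RHE conversion mentioned above is commonly approximated with the Nernstian relation E(RHE) = E(Ag/AgCl) + E0(Ag/AgCl) + 0.0591 × pH. The authors used an experimental RHE calibration, so the sketch below, which assumes a 0.197 V offset for a saturated-KCl Ag/AgCl reference and an electrolyte pH of 7.2, is only indicative.

```python
def to_rhe(e_agcl_v, ph, e0_agcl_v=0.197):
    """Convert a potential vs. Ag/AgCl to the RHE scale (Nernstian form).

    e0_agcl_v is the offset of a saturated-KCl Ag/AgCl reference (~0.197 V);
    the study used an experimental RHE calibration, so treat this as a sketch.
    """
    return e_agcl_v + e0_agcl_v + 0.0591 * ph

# CO2-saturated 0.5 M KHCO3 sits near pH 7.2 (assumed value)
e_rhe = to_rhe(-1.4, ph=7.2)  # ≈ -0.78 V vs. RHE
```

Reporting on the RHE scale makes potentials comparable across the pH-neutral cathode compartment and the alkaline anode compartment.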
RESULTS AND DISCUSSION

Synthesis and structure of NiPc-MPor COF

As shown in Scheme 1 and Fig. 1, a [4 + 4] condensation reaction was applied to synthesize NiPc-2HPor COF. Specifically, NiPc-2HPor COF was synthesized by condensation between 2,3,9,10,16,17,23,24-octacarboxyphthalocyanine nickel (NiPc) and 5,10,15,20-tetrakis(para-aminophenyl)porphyrin (2HPor) via hydrothermal methods (Fig. 1a). The crystal structure of NiPc-2HPor COF was characterized by powder X-ray diffraction (PXRD) measurements combined with structural simulation. AA and AB stacking structural models were constructed based on reticular chemistry, but the theoretical PXRD patterns of the AA and AB models showed some deviations from the experimental curves (Fig. 1b). Interestingly, we found that the theoretical PXRD pattern of AA slipped stacking with the Pm (6) space group fitted well with the experimental one (for details, see the structural modeling section). Therefore, we then conducted Pawley refinement based on AA slipped stacking against the experimental PXRD pattern, which provided unit cell parameters of a = b = 25.7859 Å, c = 3.4637 Å, α = γ = 90°, β = 120°. The refined PXRD pattern fitted well with the experimental results, with residuals of Rp = 2.81% and Rwp = 3.63%, thus confirming the accuracy of the simulated structure. The peaks at 5.27° and 10.54° are assigned to the (110) and (220) planes, respectively. The porosity of NiPc-2HPor COF was then determined by N2 adsorption isotherms at 77 K, and the results showed that the pore size was distributed around 1.45 nm, consistent with the theoretical aperture (Fig. 1c–f). The BET surface area of NiPc-2HPor COF was calculated to be 258.608 m2 g−1. Based on the NiPc-2HPor COF, we further synthesized NiPc-NiPor COF by a post-synthesis coordination reaction (Scheme S2). The synthesized NiPc-NiPor COF also shows high crystallinity, as confirmed by the PXRD pattern (Fig. 2a).
Comparison of the PXRD patterns of NiPc-2HPor COF and NiPc-NiPor COF with those of 2HPor and NiPc shows that no precursor monomers remain, suggesting the completeness of the polymerization reaction (Fig. S1). Fourier transform infrared (FT-IR) spectroscopy was then conducted to characterize the chemical structure, which confirmed the imide formation reaction in NiPc-2HPor COF and NiPc-NiPor COF. As shown in Fig. 2b, the obvious peaks at 1762 and 1707 cm−1 correspond to the asymmetric and symmetric vibrations of the C=O of the five-membered imide rings, and the peaks at 1368 and 1324 cm−1 belong to the stretching vibration of the C-N-C bond of polyimide [43]. Furthermore, the peaks corresponding to the carboxylic acid of the precursor NiPc at 1696 cm−1 and the amide bond of the NiPor at 1674 cm−1 are not observed, which indicates full imidization yielding the desired PI-COFs (Fig. 2b). The thermostability of the COFs was studied by thermogravimetric analysis (TGA) under N2 and O2 atmospheres (Figs S2–S5), which showed no obvious change up to ∼300°C under both nitrogen and oxygen atmospheres. X-ray photoelectron spectroscopy (XPS) was conducted to confirm the elemental states in the COFs (Figs S6–S10), which showed C, N, O and Ni coexisting in NiPc-2HPor COF and NiPc-NiPor COF. Furthermore, the analysis results show the divalent state of the central Ni in the COFs. We then performed high-resolution XPS and deconvolution for C1s, N1s and O1s. In the high-resolution N1s spectrum of NiPc-2HPor COF (Fig. S10a), the binding energy peaks at 398.8, 399.2, 399.8 and 400.5 eV correspond to iminic N, C=N, pyrrolic N and C-N, respectively. The disappearance of the pyrrolic N peak in NiPc-NiPor COF (Fig. S10d and Table S1) shows the successful post-metalation of Ni [44]. All the above results illustrate the successful condensation of NiPc and 2HPor and the formation of PI-COFs. We then performed CO2 adsorption-desorption tests on these COFs. As shown in Fig.
2c, NiPc-NiPor COF has a strong CO2 adsorption capacity of about 31.19 cm3 g−1 at 273 K, which is higher than that of NiPc-2HPor COF and thus more beneficial for the ECR reaction. The crystal morphology of these COFs was observed by transmission electron microscopy (TEM) and scanning electron microscopy (SEM). TEM shows that NiPc-NiPor/2HPor COF displays lamellar crystals with sizes of ∼50–100 nm (Fig. 2d and Fig. S11). The SEM images of the COFs further confirm the microcrystal morphology (Figs S12 and S13). Furthermore, high-resolution TEM (HRTEM) of NiPc-NiPor COF displays clear lattice fringes of the (001) crystal face with a spacing of 0.343 nm, which fits well with the theoretical layer distance (0.346 nm), further confirming the accuracy of the simulated crystal structure (Figs 1f and 2e). Energy dispersive X-ray spectroscopy (EDX) mapping reveals the uniform distribution of the Ni, C, N and O elements in NiPc-NiPor COF, which illustrates the homogeneity of these materials (Fig. 2f).

Electrocatalytic ECR coupling MOR performance

Based on the above analysis and characterization of the structure and features of the NiPc-NiPor COFs, it can be concluded that the MPc and MPor monomers in the crystalline COFs form well-defined, isolated, and atomically uniform multiple single-metal active sites with different chemical environments, which is favorable for catalytic reactions. The electronic conductivities of NiPc-2HPor COF and NiPc-NiPor COF were measured by current (I)-voltage (V) measurements and electrochemical impedance spectroscopy (EIS) (Figs S14 and S15). We then calculated the conductivity values of all tested COFs from the I-V test results using a double-probe system. As a result, NiPc-NiPor COF exhibits a higher specific conductivity (7.28 × 10−8 S m−1) than NiPc-2HPor COF (2.5 × 10−8 S m−1).
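A two-probe specific conductivity of the kind quoted above is typically obtained from the slope of the linear I-V trace and the sample geometry, via σ = L / (R·A). The pellet dimensions and slope in the sketch below are hypothetical, chosen only to land in the reported 10−8 S m−1 range, and are not measurements from the study.

```python
def conductivity(di_dv_s, thickness_m, area_m2):
    """Specific conductivity from a two-probe I-V trace: sigma = L / (R * A).

    di_dv_s: slope of the linear I-V curve in A/V (i.e. 1/R);
    thickness_m, area_m2: sample geometry (hypothetical values below,
    not reported dimensions).
    """
    resistance_ohm = 1.0 / di_dv_s
    return thickness_m / (resistance_ohm * area_m2)

# Illustrative pellet: 0.5 mm thick, 0.5 cm^2 electrode contact area
sigma = conductivity(di_dv_s=1.2e-9, thickness_m=0.5e-3, area_m2=0.5e-4)  # 1.2e-8 S/m
```

Because the geometry enters the formula directly, reported conductivities are only comparable when the pellet thickness and contact area are controlled between samples.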
It can be concluded that NiPc-NiPor COF possesses a superior electron transfer rate, which is due to its highly conjugated π-electron structure. Accordingly, NiPc-NiPor COF should be a more promising platform for electrocatalysis. Bearing the above ideas in mind, we then studied the ECR coupling MOR performance of the NiPc-MPor COFs. Separate electrocatalysis tests were first conducted in a common H-cell reactor with a standard three-electrode system, and the coupling reaction was then performed in a two-electrode system on an electrochemical workstation. CO and HCOOH were detected as the main products of ECR and MOR, respectively, with H2 as a minor by-product; they were quantified by gas chromatography (GC) and ion chromatography (IC) using external standard methods (Figs S16–S18). First, we studied the NiPc and NiPor monomers as catalysts for MOR by conducting linear sweep voltammetry (LSV) tests in a three-electrode system in 1 M KOH electrolyte with or without methanol substrate. Interestingly, both NiPc and NiPor show effectively enhanced current densities for MOR in the methanol-containing electrolyte (Fig. 3a). Besides, the NiPor monomer exhibits the maximum current density in the methanol electrolyte, suggesting that NiPor may be the more effective active site for MOR. The LSV performance for ECR of the NiPc and NiPor monomers also shows that NiPc has a higher current density than NiPor under CO2 (Fig. 3b), indicating that NiPc may play the key role in ECR. Based on the above results, it is reasonable to assume that integrating the NiPc and NiPor monomers will greatly benefit the ECR coupled with MOR performance. With this in mind, we then conducted LSV tests for the NiPc-MPor COFs (Fig. 3). The results show that both NiPc-2HPor COF and NiPc-NiPor COF exhibit higher current densities in the methanol electrolyte than in pure KOH electrolyte as the applied voltage increases (Fig. 3c and Fig. S19).
Further, NiPc-NiPor COF shows a more enhanced current density than NiPc-2HPor COF, which also indicates that the NiPor in the COF might play a key role in contributing to the MOR activity. Besides, the Tafel slope of NiPc-NiPor COF at the anode in 1 M KOH with 1 M methanol is 123.84 mV dec−1, much lower than that in the pure KOH electrolyte (318.55 mV dec−1), suggesting more favorable reaction kinetics for MOR (Fig. S20). We then tested the ECR performance of the NiPc-MPor COFs as cathodes in Ar- and CO2-saturated solutions. Both NiPc-NiPor COF and NiPc-2HPor COF show enhanced current densities in the presence of CO2 compared to the Ar environment, which suggests that ECR takes priority over the HER process. Furthermore, the current density of NiPc-NiPor COF is almost the same as that of NiPc-2HPor COF in the CO2 environment (Fig. 3d), which further illustrates that the NiPc (rather than NiPor) in the COFs mainly contributes to the ECR activity. Encouraged by the above performance, we then explored, separately, the Faradaic efficiency (FE) and partial current density (j) of these samples for ECR or MOR in a three-electrode system in greater detail. We first examined the NiPc and NiPor monomers over wide potential ranges of −0.5 V to −1.1 V vs. RHE for cathodic ECR and 1.4 V to 1.7 V vs. RHE for anodic MOR, and calculated the corresponding FE values (Figs S21 and S22). The results show that, for MOR, both the NiPc and NiPor monomers deliver effective FE HCOOH, and the detailed comparison suggests that the NiPor monomer has superior selectivity to NiPc. At the same time, based on the ECR performance shown in Fig. S22, we can also conclude that NiPc shows superior selectivity for ECR. We then studied NiPc-2HPor COF and NiPc-NiPor COF as catalysts in separate ECR and MOR tests (Fig. 3e and f).
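A Tafel slope such as the 123.84 mV dec−1 value above is extracted as the slope of a linear fit of electrode potential against log10(current density). The sketch below illustrates the fit on synthetic data, not the reported curves.

```python
import numpy as np

def tafel_slope_mv_per_dec(potentials_v, current_densities):
    """Tafel slope from a linear fit of potential vs. log10(current density)."""
    log_j = np.log10(np.asarray(current_densities, dtype=float))
    slope_v_per_dec, _ = np.polyfit(log_j, np.asarray(potentials_v, dtype=float), 1)
    return slope_v_per_dec * 1000.0  # V/dec -> mV/dec

# Synthetic data constructed with a 120 mV/dec slope (not the reported data)
j = np.array([0.1, 1.0, 10.0])        # current density, mA cm^-2
e = 1.40 + 0.120 * np.log10(j)        # potential, V
slope = tafel_slope_mv_per_dec(e, j)  # ≈ 120 mV/dec
```

A smaller slope means less extra potential is needed per decade of current, which is why the drop from 318.55 to 123.84 mV dec−1 indicates faster MOR kinetics in the methanol-containing electrolyte.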
Compared with NiPc-2HPor COF, NiPc-NiPor COF exhibits superior MOR catalytic activity and selectivity with maximal FE HCOOH of up to 92.63% with a j HCOOH of 15.84 mA cm −2 at 1.55 V vs. RHE. On the other hand, the NiPc-NiPor COF also shows the better activity than NiPc-2HPor COF on ECR, with maximal FE CO of up to 96.57% and a partial current density (j CO ) of −4.39 mA cm −2 at −0.8 V vs. RHE. Based on the above results, ECR and MOR coupling reaction performances were carried out by using a two-electrode H-cell, in which the NiPc-MPor COFs act as both cathode and anode active catalyst. Specifically, the anode part with 1 mg/cm −2 NiPc-MPor COFs active layer was applied in 1 M KOH electrolyte containing 1 M methanol, and the cathode with the same active material was applied in 0.5 M KHCO 3 electrolyte (denoted as NiPc-MPor COFs || NiPc-MPor COFs). The LSV patterns for the paired MOR (1 M methanol) || ECR show that the NiPc-NiPor COF electrode only needs a cell voltage of 1.5 V to obtain a current density of 1.0 mA cm −2 , which is much lower than the paired OER || ECR without methanol (Fig. 4a ). We then tested the FE of two COFs under cell voltage ranging from 1.8 V to 2.4 V. When paired the MOR and ECR, the FE CO of NiPc-NiPor COF exhibited higher than 90% in a potential range from 2.0 to 2.2 V and the maximum FE CO can reach up to 98.12% with a partial current density (j CO ) of ∼6.14 mA cm −2 at 2.1 V (Fig. 4b , and Figs S23 and S25 ). Meanwhile, NiPc-2HPor COF shows a little less FE CO than the NiPc-NiPor COF for ECR, which can be concluded that the NiPc unit mainly contributed to ECR activity. On the other hand, NiPc-2HPor COF also shows much lower FE HCOOH than NiPc-NiPor COF for MOR, which illustrates that the NiPor may play a key role for MOR. Besides, the corresponding anodic MOR produced HCOOH with FE over 75% at all applied voltages and toward a maximum value of 93.75% with a partial current density (j HCOOH ) of ∼5.81 mA cm −2 at 2.1 V (Fig. 
4b and Fig. S24 ). The detailed structure-functional relationships will be discussed will in the following part. To further detect the liquid product of NiPc-NiPor COF during MOR process in MOR || ECR cell, the reaction mixture was analyzed by 1 H-NMR ( Fig. S26 ). The CH 3 OH oxidation products, that is, HCOOH is obviously found in 8.27 ppm. For ECR, the 1 H-NMR of reaction mixture shows no liquid products ( Fig. S27 ), and no other except for CO and H 2 is detected from GC and IC, which means CO and H 2 are the only products of ECR. To verify the source of carbon atoms in HCOOH, we carried out isotope labeling. The MOR test was carried out in 1 M KOH electrolyte containing 13 C labeled CH 3 OH, the 13 C-NMR showed an evident peak of H 13 COOH as shown in Fig. 4c . The isotope labeling experiments were also performed to ascertain the carbon sources of ECR products, i.e. CO, 13 CO ( m/z = 29) was finally detected by gas chromatograph-mass spectrometer (GC-MS) (Fig. 4d ). These results confirm that the produced HCOOH and CO originated from the reactant CH 3 OH and CO 2 , respectively, instead of decomposition of the catalyst. The durability of a catalyst is also one of the most important factors in further practice application. Therefore, we then evaluated the stability of NiPc-NiPor COF in the electrochemical conditions by chronoamperometric testing (Fig. 4e ). After long-time evaluation, no obvious decay in FE and current density was detected during 8.5 h (the FE of CO and HCOOH was analyzed every 0.5 h). Furthermore, the crystalline structure was preserved from the PXRD patterns of NiPc-NiPor COF after being immersed in 1 M KOH aqueous solution and 0.5 M KHCO 3 aqueous solution for 48 h, respectively ( Fig. S28 ). More importantly, the PXRD patterns show that NiPc-NiPor COF still maintains crystallinity after the electrocatalytic reaction ( Fig. S29 ). It is noticed that the PXRD peak intensity after electrocatalytic tests on anode showed reduced. 
This phenomenon maybe caused by the intrinsic instability of this COF under the combination of strong base and electric field conditions, the electrochemical corrosion by electrolysis, and mechanical force by stirring. All the above results confirm that these COFs are highly stable catalysts. Investigating structure-functional relationships We then performed the in-situ FT-IR investigation of the catalytic process to study the key intermediates for ECR and MOR. For the MOR process (Fig. 5a ), an increasing positive band centers at ∼1647 cm −1 which corresponds to the C=O of *COOH and is clearly observed in applied cell potential of 1.55 V vs. the RHE. Meanwhile, a small positive band centers at 1565, 1409, 1340 and 1241 cm −1 , which corresponds to the asymmetry and symmetry stretch of C-O and OH vibrations for *COOH also observed [ 45 ]. Besides, the increasing peak at 1029 cm −1 suggests that *CHO species exist [ 46 ]. The above results show that the *COOH and *CHO are the key intermediates for CH 3 OH oxidation to HCOOH. In addition, the bands at 2941 and 2839 cm −1 in the spectra are ascribed to surface CH 3 OH species. As for the ECR process (Fig. 5b ), the *COOH is also observed as a key intermediate for CO 2 reduction to CO, whose peaks appear at 1700–1200 cm −1 [ 47 ]. Guided by the in-situ FT-IR analysis and conclusions, we further investigated the ECR and MOR catalytic processes in detail based on DFT theoretical calculations (Fig. 5c ). For the ECR process, the electron transfer to the adsorbed CO 2 was then combined with a H proton to generate *COOH which is calculated to be the rate determining step (RDS) on NiPc-NiPor COF. Interestingly, the Gibbs free energy on NiPc for the RDS step is 0.99 eV, which is obviously smaller than the process on NiPor (Fig. 5d ). Therefore, based on the minimum energy principle, the ECR process is more likely to occur on the NiPc part. 
As for the MOR process, the RDS is determined to be the oxidation process of *CH 3 OH to *CH 2 OH. It is noted that the energy barrier for *CH 3 OH to *CH 2 OH on NiPor and NiPc have a small difference, this indicated that the MOR catalytic process can occur in both NiPor and NiPc. From the calculation results, we also found that the free energy for *CH 3 OH to *CH 2 OH process on NiPor is 0.34 eV, which is slightly weaker compared to NiPc (0.38 eV) (Fig. 5e ). Thus, we can conclude that the main active site for MOR contributed to the NiPor part and simultaneously conjugated with NiPc, these synergistic effects caused significant catalytic activity during the MOR reaction.
RESULTS AND DISCUSSION Synthesis and structure of NiPc-MPor COF As shown in Scheme 1 and Fig. 1, a [4 + 4] condensation reaction was applied to synthesize NiPc-2HPor COF. Specifically, NiPc-2HPor COF was synthesized by condensation between 2,3,9,10,16,17,23,24-octacarboxyphthalocyanine nickel (NiPc) and 5,10,15,20-tetrakis(para-aminophenyl)porphyrin (2HPor) via a hydrothermal method (Fig. 1a). The crystal structure of NiPc-2HPor COF was characterized by powder X-ray diffraction (PXRD) measurements combined with structural simulation. AA and AB stacking models were constructed based on reticular chemistry, but the theoretical PXRD patterns of both models deviated somewhat from the experimental curve (Fig. 1b). Interestingly, we found that the theoretical PXRD pattern of an AA slipped-stacking model with the Pm (6) space group fitted the experimental one well (for details, see the structural modeling section). We therefore conducted a Pawley refinement of the AA slipped-stacking model against the experimental PXRD pattern, which provided unit-cell parameters of a = b = 25.7859 Å, c = 3.4637 Å, α = γ = 90°, β = 120°. The refined pattern fitted the experimental one well, with residuals of Rp = 2.81% and Rwp = 3.63%, confirming the accuracy of the simulated structure. The peaks at 5.27° and 10.54° are assigned to the (110) and (220) planes, respectively. The porosity of NiPc-2HPor COF was then determined by N2 adsorption isotherms at 77 K; the pore-size distribution is centered at 1.45 nm, consistent with the theoretical aperture (Fig. 1c–f). The BET surface area of NiPc-2HPor COF was calculated to be 258.608 m2 g−1. Starting from NiPc-2HPor COF, we further synthesized NiPc-NiPor COF by a post-synthetic coordination reaction (Scheme S2). The synthesized NiPc-NiPor COF also shows high crystallinity, as confirmed by its PXRD pattern (Fig. 2a).
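As a quick consistency check on the indexing above, the reported reflection angles can be converted to d-spacings via Bragg's law. A minimal sketch follows; note that a Cu K-alpha source is an assumption, since the text does not state the X-ray wavelength used.

```python
import math

# Cu K-alpha wavelength in angstroms (an assumption; the source is not stated in the text).
WAVELENGTH = 1.5406

def d_spacing(two_theta_deg: float) -> float:
    """Interplanar spacing from Bragg's law: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH / (2 * math.sin(theta))

d110 = d_spacing(5.27)    # reported (110) reflection
d220 = d_spacing(10.54)   # reported (220) reflection

# A (220) reflection should sit at half the (110) spacing, so the ratio should be ~2.
print(round(d110, 2), round(d110 / d220, 2))
```

Under this assumed wavelength the (110) spacing comes out near 16.8 Å, and the d110/d220 ratio is essentially 2, consistent with the plane assignments.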
Comparison of the PXRD patterns of NiPc-2HPor COF and NiPc-NiPor COF with those of 2HPor and NiPc shows that no precursor monomers remain, indicating complete polymerization (Fig. S1). Fourier transform infrared (FT-IR) spectroscopy was then used to characterize the chemical structure, confirming imide formation in NiPc-2HPor COF and NiPc-NiPor COF. As shown in Fig. 2b, the distinct peaks at 1762 and 1707 cm−1 correspond to the asymmetric and symmetric vibrations of the C=O of the five-membered imide rings, and the peaks at 1368 and 1324 cm−1 belong to the stretching vibration of the C-N-C bond of the polyimide [43]. Furthermore, the peaks corresponding to the carboxylic acid of the NiPc precursor at 1696 cm−1 and the amide bond of NiPor at 1674 cm−1 are not observed, indicating full imidization to the desired PI-COFs (Fig. 2b). The thermal stability of the COFs was studied by thermogravimetric analysis (TGA) under N2 and O2 atmospheres (Figs S2–S5), which showed no obvious mass change up to ∼300°C in either atmosphere. X-ray photoelectron spectroscopy (XPS) was conducted to probe the elemental states of the COFs (Figs S6–S10), showing that C, N, O and Ni coexist in NiPc-2HPor COF and NiPc-NiPor COF and that the central Ni in the COFs is divalent. We then performed high-resolution XPS with deconvolution for the C1s, N1s and O1s regions. In the high-resolution N1s spectrum of NiPc-2HPor COF (Fig. S10a), the binding-energy peaks at 398.8, 399.2, 399.8 and 400.5 eV correspond to iminic N, C=N, pyrrolic N and C-N, respectively. The disappearance of the pyrrolic N peak in NiPc-NiPor COF (Fig. S10d and Table S1) therefore demonstrates the successful post-metalation with Ni [44]. All of the above results confirm the successful condensation of NiPc and 2HPor and the formation of PI-COFs. We then performed CO2 adsorption-desorption tests on these COFs. As shown in Fig.
2c, NiPc-NiPor COF has a CO2 adsorption capacity of about 31.19 cm3 g−1 at 273 K, higher than that of NiPc-2HPor COF and thus more beneficial for the ECR reaction. The crystal morphology of these COFs was observed by transmission electron microscopy (TEM) and scanning electron microscopy (SEM). TEM shows that NiPc-NiPor/2HPor COF forms lamellar crystals ∼50–100 nm in size (Fig. 2d and Fig. S11). The SEM images further confirm the microcrystalline morphology (Figs S12 and S13). Furthermore, high-resolution TEM (HRTEM) of NiPc-NiPor COF displays clear lattice fringes of the (001) crystal face with a spacing of 0.343 nm, which fits well with the theoretical layer distance (0.346 nm), further confirming the accuracy of the simulated crystal structure (Figs 1f and 2e). Energy-dispersive X-ray spectroscopy (EDX) mapping reveals a uniform distribution of the Ni, C, N and O elements in NiPc-NiPor COF, illustrating the homogeneity of these materials (Fig. 2f). Electrocatalytic ECR coupling MOR performance Based on the above analysis and characterization, it can be concluded that the MPc and MPor monomers in the crystalline COFs form well-defined, isolated and atomically uniform single-metal active sites in different chemical environments, which is favorable for catalysis. The electronic conductivities of NiPc-2HPor COF and NiPc-NiPor COF were measured by current-voltage (I-V) measurements and electrochemical impedance spectroscopy (EIS) (Figs S14 and S15). We calculated the conductivity of each COF from the I-V results obtained with a two-probe system. NiPc-NiPor COF exhibits a higher conductivity (7.28 × 10−8 S m−1) than NiPc-2HPor COF (2.5 × 10−8 S m−1).
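A two-probe conductivity of the kind quoted above follows from the ohmic I-V slope and the sample geometry via sigma = G·L/A. The sketch below uses a hypothetical pellet geometry, since the actual sample dimensions are not given in the text.

```python
def conductivity(iv_slope_S: float, thickness_m: float, area_m2: float) -> float:
    """Two-probe conductivity: sigma = G * L / A, where G = dI/dV is the ohmic I-V slope."""
    return iv_slope_S * thickness_m / area_m2

# Hypothetical geometry (not given in the text): a 1 mm thick pellet with 1 cm^2 contacts.
sigma = conductivity(iv_slope_S=7.28e-9, thickness_m=1e-3, area_m2=1e-4)
print(f"{sigma:.2e}")  # 7.28e-08 S/m, the value reported for NiPc-NiPor COF

# The two reported values imply NiPc-NiPor COF is ~2.9x more conductive than NiPc-2HPor COF.
ratio = round(7.28e-8 / 2.5e-8, 1)
print(ratio)
```

Any consistent slope/geometry pair reproducing the reported sigma would do; the point is only the sigma = G·L/A bookkeeping.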
It can be concluded that NiPc-NiPor COF possesses a superior electron transfer rate, owing to its highly conjugated π-electron structure. Accordingly, NiPc-NiPor COF should be the more promising platform for electrocatalysis. Bearing these ideas in mind, we then studied the ECR-coupled-MOR performance of the NiPc-MPor COFs. The separate electrocatalysis tests were first conducted in a common H-cell reactor with a standard three-electrode system, and the coupling reaction was then performed in a two-electrode system on an electrochemical workstation. CO and HCOOH were detected as the main products of ECR and MOR, respectively, with H2 as a minor by-product; they were quantified by gas chromatography (GC) and ion chromatography (IC) using external standard methods (Figs S16–S18). First, we studied the NiPc and NiPor monomers as catalysts for MOR by linear sweep voltammetry (LSV) in a three-electrode system in 1 M KOH electrolyte with or without methanol. Interestingly, both NiPc and NiPor show clearly enhanced current density for MOR in methanol-containing electrolyte (Fig. 3a), and the NiPor monomer exhibits the higher current density, suggesting that NiPor may be the more effective active site for MOR. The LSV results for ECR show that the NiPc monomer has a higher current density than NiPor under CO2 (Fig. 3b), indicating that NiPc may play the key role in ECR. Based on these results, it is reasonable to expect that integrating the NiPc and NiPor monomers will greatly benefit the coupled ECR-MOR performance. With this in mind, we then conducted LSV tests on the NiPc-MPor COFs (Fig. 3). Both NiPc-2HPor COF and NiPc-NiPor COF show higher current density in methanol-containing electrolyte than in pure KOH electrolyte as the applied voltage increases (Fig. 3c and Fig. S19).
Further, NiPc-NiPor COF shows a greater current enhancement than NiPc-2HPor COF, again indicating that the NiPor unit in the COF plays a key role in MOR activity. Moreover, the Tafel slope of NiPc-NiPor COF at the anode in 1 M KOH with 1 M methanol is 123.84 mV dec−1, much lower than that in pure KOH electrolyte (318.55 mV dec−1), suggesting more favorable reaction kinetics for MOR (Fig. S20). We then tested the ECR performance of the NiPc-MPor COFs as cathodes in Ar- and CO2-saturated solutions. Both NiPc-NiPor COF and NiPc-2HPor COF show enhanced current density in the presence of CO2 compared with Ar, suggesting that ECR takes priority over the hydrogen evolution reaction (HER). Furthermore, the current density of NiPc-NiPor COF is almost the same as that of NiPc-2HPor COF under CO2 (Fig. 3d), which further illustrates that the NiPc units (rather than NiPor) in the COFs mainly contribute to the ECR activity. Encouraged by these results, we then separately examined the Faradaic efficiency (FE) and partial current density (j) of these samples for ECR and MOR in a three-electrode system in greater detail. We first measured the NiPc and NiPor monomers over wide potential ranges, from −0.5 V to −1.1 V vs. RHE for cathodic ECR and from 1.4 V to 1.7 V vs. RHE for anodic MOR, and calculated the corresponding FE values (Figs S21 and S22). For MOR, both the NiPc and NiPor monomers deliver appreciable FE_HCOOH, and a detailed comparison shows that the NiPor monomer is more selective than NiPc. At the same time, from the ECR performance shown in Fig. S22, we conclude that NiPc shows superior selectivity for ECR. We then studied NiPc-2HPor COF and NiPc-NiPor COF as catalysts for the ECR and MOR tests separately (Fig. 3e and f).
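The Tafel slope quoted above is the slope b of the overpotential versus log current density, eta = a + b·log10(j). A minimal fitting sketch on synthetic data is shown below; the data points are illustrative values with the reported slope built in, not the measured LSV data.

```python
import numpy as np

# Synthetic Tafel data with a built-in 123.84 mV/dec slope, matching the value
# reported for NiPc-NiPor COF in methanol-containing KOH (illustrative, not measured).
log_j = np.linspace(0.0, 1.5, 12)     # log10(j / mA cm^-2)
eta_mV = 40.0 + 123.84 * log_j        # Tafel relation: eta = a + b * log10(j)

# A linear least-squares fit of eta vs. log10(j) recovers the Tafel slope b.
slope_mV_per_dec, intercept = np.polyfit(log_j, eta_mV, 1)
print(round(slope_mV_per_dec, 2))     # recovers 123.84 mV per decade
```

In practice the fit is restricted to the linear (kinetically controlled) region of the polarization curve.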
Compared with NiPc-2HPor COF, NiPc-NiPor COF exhibits superior MOR catalytic activity and selectivity, with a maximal FE_HCOOH of up to 92.63% and a j_HCOOH of 15.84 mA cm−2 at 1.55 V vs. RHE. NiPc-NiPor COF also shows better ECR activity than NiPc-2HPor COF, with a maximal FE_CO of up to 96.57% and a partial current density (j_CO) of −4.39 mA cm−2 at −0.8 V vs. RHE. Based on these results, the coupled ECR-MOR reaction was carried out in a two-electrode H-cell in which the NiPc-MPor COFs act as the active catalyst at both the cathode and the anode. Specifically, the anode with a 1 mg cm−2 NiPc-MPor COF active layer was operated in 1 M KOH electrolyte containing 1 M methanol, and the cathode with the same active material was operated in 0.5 M KHCO3 electrolyte (denoted NiPc-MPor COFs || NiPc-MPor COFs). The LSV curves for the paired MOR (1 M methanol) || ECR show that the NiPc-NiPor COF electrode needs a cell voltage of only 1.5 V to reach a current density of 1.0 mA cm−2, much lower than for the paired OER || ECR without methanol (Fig. 4a). We then tested the FE of the two COFs at cell voltages ranging from 1.8 V to 2.4 V. When MOR was paired with ECR, the FE_CO of NiPc-NiPor COF exceeded 90% over the range 2.0 to 2.2 V, with a maximum FE_CO of 98.12% and a partial current density (j_CO) of ∼6.14 mA cm−2 at 2.1 V (Fig. 4b, and Figs S23 and S25). Meanwhile, NiPc-2HPor COF shows a slightly lower FE_CO than NiPc-NiPor COF for ECR, again indicating that the NiPc unit mainly contributes to the ECR activity, and a much lower FE_HCOOH than NiPc-NiPor COF for MOR, illustrating that NiPor may play the key role in MOR. In addition, the corresponding anodic MOR produced HCOOH with an FE above 75% at all applied voltages, reaching a maximum of 93.75% with a partial current density (j_HCOOH) of ∼5.81 mA cm−2 at 2.1 V (Fig.
4b and Fig. S24). The detailed structure-function relationships are discussed in the following section. To further identify the liquid product of NiPc-NiPor COF during the MOR process in the MOR || ECR cell, the reaction mixture was analyzed by 1H-NMR (Fig. S26). The CH3OH oxidation product, HCOOH, is clearly observed at 8.27 ppm. For ECR, the 1H-NMR spectrum of the reaction mixture shows no liquid products (Fig. S27), and no species other than CO and H2 are detected by GC and IC, meaning that CO and H2 are the only ECR products. To verify the source of the carbon atoms in HCOOH, we carried out isotope labeling. The MOR test was performed in 1 M KOH electrolyte containing 13C-labeled CH3OH, and the 13C-NMR spectrum showed an evident peak of H13COOH, as shown in Fig. 4c. Isotope-labeling experiments were also performed to ascertain the carbon source of the ECR product CO: 13CO (m/z = 29) was detected by gas chromatography-mass spectrometry (GC-MS) (Fig. 4d). These results confirm that the produced HCOOH and CO originate from the reactants CH3OH and CO2, respectively, rather than from decomposition of the catalyst. Durability is also one of the most important factors for practical application of a catalyst, so we evaluated the stability of NiPc-NiPor COF under electrochemical conditions by chronoamperometry (Fig. 4e). No obvious decay in FE or current density was detected over 8.5 h (the FE of CO and HCOOH was analyzed every 0.5 h). Furthermore, PXRD patterns show that the crystalline structure of NiPc-NiPor COF was preserved after immersion in 1 M KOH and 0.5 M KHCO3 aqueous solutions for 48 h (Fig. S28). More importantly, the PXRD patterns show that NiPc-NiPor COF still maintains its crystallinity after the electrocatalytic reaction (Fig. S29), although the peak intensity is reduced after the electrocatalytic tests on the anode.
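The Faradaic efficiencies above are derived from the quantified products and the total charge passed, FE = z·n·F/Q. A minimal sketch of that calculation follows; the charge and product amount are hypothetical inputs chosen only to illustrate the formula, not measured values from the paper.

```python
F = 96485.0  # Faraday constant, C/mol

def faradaic_efficiency(product_mol: float, electrons_per_mol: int, charge_C: float) -> float:
    """FE = z * n * F / Q: the fraction of the passed charge stored in one product."""
    return electrons_per_mol * product_mol * F / charge_C

# Hypothetical inputs (for illustration only): 10 C passed at the cathode while
# GC detects 50.8 umol of CO; CO2 -> CO is a two-electron reduction.
fe_co = faradaic_efficiency(50.8e-6, 2, 10.0)
print(f"{fe_co:.1%}")  # ~98.0%, the same order as the reported maximum FE_CO
```

The MOR case is analogous: CH3OH -> HCOOH is a four-electron oxidation per mole of HCOOH in alkaline media, so the electron count z changes while the bookkeeping stays the same.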
This reduction may be caused by the intrinsic instability of the COF under the combination of strong base and electric field, electrochemical corrosion during electrolysis, and mechanical force from stirring. Overall, the above results confirm that these COFs are highly stable catalysts. Investigating structure-function relationships We then performed an in-situ FT-IR investigation of the catalytic process to identify the key intermediates of ECR and MOR. For the MOR process (Fig. 5a), an increasing positive band centered at ∼1647 cm−1, corresponding to the C=O of *COOH, is clearly observed at an applied potential of 1.55 V vs. RHE. Meanwhile, small positive bands at 1565, 1409, 1340 and 1241 cm−1, corresponding to the asymmetric and symmetric C-O stretches and O-H vibrations of *COOH, are also observed [45]. In addition, the growing peak at 1029 cm−1 indicates the presence of *CHO species [46]. These results show that *COOH and *CHO are the key intermediates in the oxidation of CH3OH to HCOOH. The bands at 2941 and 2839 cm−1 are ascribed to surface CH3OH species. For the ECR process (Fig. 5b), *COOH is likewise observed as the key intermediate for CO2 reduction to CO, with peaks appearing at 1700–1200 cm−1 [47]. Guided by the in-situ FT-IR analysis, we further investigated the ECR and MOR catalytic processes in detail using DFT calculations (Fig. 5c). For the ECR process, electron transfer to the adsorbed CO2 followed by combination with a proton to generate *COOH is calculated to be the rate-determining step (RDS) on NiPc-NiPor COF. Interestingly, the Gibbs free energy of the RDS on NiPc is 0.99 eV, clearly smaller than that on NiPor (Fig. 5d). Therefore, based on the minimum-energy principle, the ECR process is more likely to occur on the NiPc part.
For the MOR process, the RDS is determined to be the oxidation of *CH3OH to *CH2OH. The energy barriers for this step on NiPor and NiPc differ only slightly, indicating that the MOR process can occur on both units. From the calculations, the free energy of the *CH3OH-to-*CH2OH step on NiPor is 0.34 eV, slightly lower than that on NiPc (0.38 eV) (Fig. 5e). We therefore conclude that the main active site for MOR is the NiPor unit, acting in conjugation with NiPc; this synergy gives rise to the significant catalytic activity observed for the MOR reaction.
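The site-assignment logic above amounts to comparing RDS barriers across sites and picking the minimum. A tiny sketch using the quoted MOR values:

```python
# MOR rate-determining-step (*CH3OH -> *CH2OH) free-energy barriers in eV,
# taken from the DFT values quoted in the text.
mor_rds_eV = {"NiPor": 0.34, "NiPc": 0.38}

# Minimum-energy principle: the reaction preferentially runs over the site with
# the lowest RDS barrier, but a small gap means both sites remain catalytically viable.
preferred = min(mor_rds_eV, key=mor_rds_eV.get)
gap_eV = round(mor_rds_eV["NiPc"] - mor_rds_eV["NiPor"], 2)
print(preferred, gap_eV)  # NiPor 0.04
```

The same comparison for ECR favors NiPc (0.99 eV), though the text gives no numerical NiPor barrier to include here.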
CONCLUSION In conclusion, we rationally designed and synthesized two stable PI phthalocyanine-porphyrin bifunctional COFs in pure water by a hydrothermal method for electrocatalytic cathodic CO2 reduction coupled with anodic CH3OH oxidation. The dual Ni sites of NiPc-NiPor COF, residing in different chemical environments, are devoted mainly to different electrocatalytic reactions (MOR and ECR). Notably, NiPc-NiPor COF delivers superior FE and j for both MOR and ECR: FE_CO = 98.12% (j_CO = 6.14 mA cm−2) for ECR and FE_HCOOH = 93.75% (j_HCOOH = 5.81 mA cm−2) for MOR. According to exhaustive electrochemical measurements and comparisons, we demonstrate that the NiPc unit contributes mainly to ECR with the assistance of NiPor, while NiPor is mainly responsible for MOR in conjugation with NiPc. These synergistic effects give rise to the significant catalytic activity of the coupled ECR-MOR reaction. More importantly, an in-depth mechanistic study based on in-situ FT-IR and DFT simulations confirmed these conclusions. Our work provides new insight into the design and development of dual-functional COF-based catalysts for various catalytic reactions.
ABSTRACT Electrocatalytic CO2 reduction (ECR) coupled with organic oxidation is a promising strategy to produce high-value-added chemicals and improve energy efficiency. However, achieving an efficient redox coupling reaction remains challenging owing to the lack of suitable electrocatalysts. Herein, we designed two bifunctional polyimide-linked covalent organic frameworks (PI-COFs) by assembling phthalocyanine (Pc) and porphyrin (Por) units via a non-toxic hydrothermal method in pure water to realize this coupled catalysis. Owing to its high conductivity and well-defined active sites in different chemical environments, NiPc-NiPor COF performs efficient ECR coupled with the methanol oxidation reaction (MOR): a Faradaic efficiency for CO (FE_CO) of 98.12% and a partial current density for CO (j_CO) of 6.14 mA cm−2 for ECR, together with FE_HCOOH = 93.75% and j_HCOOH = 5.81 mA cm−2 for MOR, at a low cell voltage (2.1 V) and with remarkable long-term stability. Furthermore, experimental evidence and density functional theory (DFT) calculations demonstrate that the ECR process proceeds mainly on the NiPc unit with the assistance of NiPor, while the MOR prefers NiPor in conjugation with NiPc. The two units of NiPc-NiPor COF collaboratively promote the coupled oxidation-reduction reaction. For the first time, this work achieves the rational design of bifunctional COFs for coupled heterogeneous catalysis, opening a new area for crystalline-material catalysts. Bifunctional phthalocyanine-porphyrin COFs were synthesized by a hydrothermal method without catalysts or toxic solvents; they simultaneously possess high crystallinity, conductivity and multiple active sites, and exhibit high activity for CO2 electro-reduction coupled with methanol electro-oxidation.
Supplementary Material
FUNDING This work was supported by the National Natural Science Foundation of China (22225109, 22071109, 22105080 and 22201083), the China Postdoctoral Science Foundation (2020M682748 and 2021M701270), the Guangdong Basic and Applied Basic Research Foundation (2023A1515010779 and 2023A1515010928), the Guangzhou Basic and Applied Basic Research Fund Project (202102020209) and the China National Postdoctoral Program for Innovative Talents (BX20220115). AUTHOR CONTRIBUTIONS Y.-Q. L. and Mi. Z. conceived the idea. J.-P. L. and Mi. Z. designed the experiments and collected and analyzed the data. R.-H. L. and the other authors assisted with the experiments and characterizations. All authors have approved the final version of the manuscript. Conflict of interest statement: none declared.
Natl Sci Rev. 2023 Sep 2; 10(11):nwad226